A testing exercise: ParkCalc

Early the other day, I noticed some of the tweets of testers I followed had a #parkcalc hashtag. At first, I couldn’t quite get what they were saying, but I guessed that they must have been testing something collaboratively. At the time, I couldn’t stay online long enough to find out more about parkcalc. Luckily, when I got home that evening, I found a blog post on the topic. The author of the post suggested clicking the link to the parking calculator before reading any further, and so I did.

The first thing I tried to do was to click the Calculate button without changing any of the default values. A page with error messages (e.g., Warning: mktime() expects parameter 4 to be long, string given in /home/grrorg5/public_html/Includes/Calculator.inc on line 72) got displayed before the page refreshed to show a more user-friendly error message.

The next thing I noticed was that the layout seemed a bit off. I thought maybe it was just a Chrome thing. I tried loading the page and the pop-up calendar in Firefox, and I was able to confirm my hunch.

The next thing I did was play around with the inputs for the entry date and time, and the exit date and time. I tried the usual stuff — entering no values, entering invalid values, non-existent values (e.g., 13 for the month, non-existent dates like Feb 29 on a non-leap year), cases wherein the exit date/time came before the entry date/time, etc. The most interesting finds, I think, were triggering the “Not Acceptable” page and managing to get an estimated cost of $3,946,162,582,627,248.00 for 1.64423440943E+14 Days, 15 Hours, 21.6 Minutes of “Short-Term Parking”.
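
If you want to churn through more of those invalid date/time combinations without typing them one by one, a short script can generate candidates and tell you which ones a sane calendar should reject. Here’s a minimal Python sketch; the cases are my own picks rather than an exhaustive list, and the oracle is simply Python’s own datetime constructor:

```python
from datetime import datetime

def candidate_dates():
    """Yield (month, day, year, label, expected) tuples worth throwing at a
    date field: a mix of edge cases and values a sane calendar should reject."""
    cases = [
        (2, 29, 2011, "Feb 29 on a non-leap year"),
        (2, 29, 2012, "Feb 29 on a leap year"),
        (13, 1, 2010, "month out of range"),
        (0, 10, 2010, "zero month"),
        (4, 31, 2010, "day out of range for April"),
        (12, 31, 9999, "far-future date"),
    ]
    for month, day, year, label in cases:
        try:
            datetime(year, month, day)   # Python's own calendar as the oracle
            expected = "valid"
        except ValueError:
            expected = "invalid"
        yield month, day, year, label, expected

if __name__ == "__main__":
    for case in candidate_dates():
        print(case)
```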

I then checked out what the rest of the blog had to say as well as the recent tweets tagged with #parkcalc. And apparently, I’m not the only one who enjoyed tinkering with parkcalc. :) Finding the bugs on my own is in itself a good exercise, but more than that, it was pretty interesting to read about how other testers attacked the app and to read about their finds even if they make mine look lame. For instance, someone used exponential notation for the inputs, someone managed to trigger a cost value of $5,124,095,576,028,720.00, and my fave so far is this link that I got from one of the tweets: http://is.gd/bknIb

Security fail

I came across an interesting bug the other day as I was trying to think of a good example of URL hacking. I entered the URL of our company’s online time sheet (OTS), http://192.168.4.135:8080/ots/Index.jsp, into my favorite browser and then backspaced a bit. I hit Enter when the browser was pointed to http://192.168.4.135:8080/ots/ and ta-dah… a directory listing.

[Screenshot: security_fail]

Most interesting was that upon checking the contents of the folders, I came across a file with a .conf extension. That made me do a double-take. True enough, when I opened the file, it contained the DB server, username, and password for our OTS. There was also a very helpful readme.txt file which cited the .conf file and the supposedly confidential information. This has since been fixed, at least as far as access to the .conf and readme files goes. The directory listing can still be viewed. :P

Bug isolation

A friend shared with me a paper on bug isolation. It’s a 2004 paper by Danny Faught entitled How to Make Your Bugs Lonely: Tips on Bug Isolation. It starts off with the quotation: “A problem well-stated is half-solved.” The paper got me thinking about whether we cover or emphasize this strongly enough during tester training in our company. Sure, our testers indicate the expected behavior versus the actual behavior in their bug reports, and sometimes they provide the steps to replicate the problem. But giving the steps does not necessarily equate to giving the conditions that trigger the bug.

In bug isolation, one attempts to really zero in on the bug by reducing the factors that obscure the problem. For instance, you streamline the steps to get to the bug, or you narrow down the data needed to trigger it. Basically, you refrain from posting shoddy bug descriptions like “It doesn’t work”, and instead you try to find out the specific conditions that trigger the bug. You might think that bug isolation only benefits the developers, since it takes them less effort to replicate the bug. But I think this also helps the tester in thoroughly verifying bug fixes. How else would one know if a problem is solved if he doesn’t know the problem to begin with? Verifying the absence of the previously detected symptoms is not always a sure-fire guarantee.
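
Faught’s paper explains the idea far better than I can, but just to make “narrow down the data” concrete, here’s a rough Python sketch of greedy input reduction. This is my own simplification, not something lifted from the paper, and still_fails() is a hypothetical hook that re-runs the scenario and reports whether the bug still shows up:

```python
def isolate(data, still_fails):
    """Greedy input reduction: keep dropping chunks of `data` as long as the
    remainder still triggers the bug, so the final report contains only what
    is actually needed to reproduce the problem.

    `data` is a list of steps or input records; `still_fails(subset)` is a
    hypothetical hook that re-runs the scenario and returns True if the bug
    still shows up."""
    chunk = max(1, len(data) // 2)
    while chunk >= 1:
        i = 0
        while i < len(data):
            trial = data[:i] + data[i + chunk:]
            if trial and still_fails(trial):
                data = trial       # the dropped chunk wasn't needed
            else:
                i += chunk         # this chunk matters; keep it and move on
        chunk //= 2
    return data
```

Feed it the lines of a huge input file or the steps of a long scenario, and what comes back is a much smaller reproduction you can paste straight into the bug report.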


Media- and file-based attacks

It’s been pretty busy at work, so I was only able to finish How to Break Software last week, on one of the nights when we had to do overtime work for our demo project. Anyway, there was this section on media-based and file-based attacks. In previous web projects, we hadn’t considered these as extensively as we had done in JV. In JV, the client’s machine is the environment, so we had to be more conscious of the files and folders s/he uses.

Media-based attacks

  1. See if your software can handle a full storage medium. Fill up the hard drive and then force the software to perform file operations (by opening, moving, and saving files). (A setup sketch for this follows the list.)
  2. See if the software can gracefully deal with a busy file system. Some applications don’t have the proper timeout/waiting mechanisms, so they can fail when the file system is busy serving a request from another application. Force the software to perform file operations in conjunction with background applications that are also performing file operations.
  3. Try forcing the software through file operations with a damaged medium. Applications with weak fault-handling code will often fail in this scenario.
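
For attack 1, manually copying files around until the drive fills up gets old fast. Here’s a minimal Python sketch of a filler helper; fill_medium is my own hypothetical name for it, and it assumes you point it at a dedicated test partition or thumb drive, never your working disk:

```python
import errno
import os

def fill_medium(path, chunk_mb=64):
    """Append chunks to a filler file until the medium reports it is full
    (ENOSPC), then return so the application under test can be forced to
    perform file operations on the full drive. Delete filler.bin afterwards."""
    filler = os.path.join(path, "filler.bin")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    with open(filler, "wb", buffering=0) as f:   # unbuffered so errors surface immediately
        while True:
            try:
                f.write(chunk)
            except OSError as e:
                if e.errno == errno.ENOSPC:
                    break                        # the medium is now full
                raise
    return filler
```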

File-based attacks

  1. Try assigning invalid file names to the application’s data files, temporary files, and read-only files. Then force the software to use those files.
  2. Change the access permissions of the application’s data files. User permissions and read-write-execute-delete permissions are often ignored by the developers.
  3. See if the software can handle corrupt data in a file. Since most data corruption results in a failed cyclic redundancy check, Canned HEAT is the perfect mechanism to inject such faults. Otherwise, use a hex/text editor to change the contents of a file and then force the software to open the file or read from it. (A small scripted version of this follows the list.)
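
If Canned HEAT isn’t handy, the hex-editor route in attack 3 is easy to script. Here’s a small Python sketch (corrupt_copy is just my name for it) that flips a few bytes in a copy of the data file, so the original stays intact while the application under test gets the mangled version:

```python
import random
import shutil

def corrupt_copy(src, dst, flips=16, seed=0):
    """Copy `src` to `dst`, flip a few bytes at random positions, and return
    the path of the corrupted copy to feed to the application under test."""
    shutil.copyfile(src, dst)
    rng = random.Random(seed)              # fixed seed so the attack is repeatable
    with open(dst, "r+b") as f:
        f.seek(0, 2)
        size = f.tell()
        for _ in range(min(flips, size)):
            pos = rng.randrange(size)
            f.seek(pos)
            original = f.read(1)
            f.seek(pos)
            f.write(bytes([original[0] ^ 0xFF]))   # invert every bit of that byte
    return dst
```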

In addition, some other stuff I consider:

  • Try using a removable drive for storing some files. See what happens when the drive has been removed properly or removed in the middle of an operation.
  • Try using a remote location when specifying some files to be used as the program’s input. I encountered cases wherein the behavior didn’t turn out as expected when I specified \\machine-name\d$\filename.


Input and output attacks

I’ve recently started leafing through one of the testing books available on the 28th-floor bookshelf. The book is entitled How to Break Software by James Whittaker, and it conveniently has a summary of input and output attacks.

Summary of the Input/Output Attacks — A Checklist for Battle

  1. Make sure you see all error messages at least once by applying invalid input. Think of invalid inputs that the developers might have missed.
  2. Force the software to assign its default values for any internal variable that can be set through the user interface. First display and accept existing values. Then assign bogus values to force the software to calculate good ones.
  3. For every input field, enter values of the wrong type and values that represent strings that may be treated in a special way. Study the OS and programming language and make a list of possible problematic strings. Apply them all in every test entry field. (A starter list for these is sketched after the checklist.)
  4. In every input field enter the maximum number of characters allowed.
  5. Find input panels where a number of inputs are entered before pressing “OK”. Determine legal values for each individual field and try combinations that represent an illegal set of inputs.
  6. Find places where inputs are accepted and apply the same input or series of inputs over and over. Choose inputs that cause some underlying computation or data manipulation over inputs that are simply displayed on the screen.
  7. Pick an input, apply it to the software under test, and note the output. Think about other outputs that could occur when this input is applied in other situations. Apply the inputs in these other situations to ensure that each such output is observed during testing.
  8. Think of outputs that the software cannot or should not generate. Find a combination or sequence of inputs that will cause one of these illegal outputs to be generated.
  9. Apply an input that generates an output with some observable and changeable property, such as size. Force the property to change.
  10. Determine when the software under test is refreshing the screen. Create situations where the software refreshes the screen too often or in which it fails to refresh what it should.
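
For attacks 3 and 4, it helps to keep a reusable seed list of nasty strings instead of reinventing them for every project. Here’s a Python sketch of such a starter list; the entries are my own picks rather than the book’s, and the real list should grow as you study the particular OS and language:

```python
def problematic_strings(max_len=255):
    """A starter list of strings to paste into every input field: wrong-type
    values, characters that the OS or language may treat specially, and
    strings at and just past the maximum allowed length."""
    return [
        "",                              # empty input
        " " * 10,                        # whitespace only
        "0", "-1", "3.14159", "1e99",    # numeric-looking text
        "NULL", "None", "nil",           # keywords some layers treat specially
        "' OR '1'='1",                   # quote-heavy input
        "<script>alert(1)</script>",     # markup in a plain text field
        "%s%s%s%n",                      # format-string style input
        "CON", "PRN", "NUL",             # reserved device names on Windows
        "a" * max_len,                   # maximum-length input
        "a" * (max_len + 1),             # one character past the maximum
        "üñïçødé ☃",                     # non-ASCII text
    ]
```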


*.* gets all files

For some screens, we sometimes need to upload a file of a certain file type. Often, a filter is added to the file selection dialog box so that only the expected file types are retrieved. However, this filter shouldn’t give us a reason to be complacent. We can type “*.*” into the File name field, and this will then retrieve all files within the folder regardless of file type.

[Screenshots: *.* gets all files]

A positive look at the negative

One of our main references when testing is the specs. Be it called fspecs, rspecs, pspecs, or devnotes, it essentially describes how the program is supposed to behave. At the very least, it tells us the inputs that the program requires, the outputs that it generates, and the logic that happens in between.

Clearly, what is written is rather limited — the scope of what is unwritten far exceeds that of what is written. In effect, you must test more for what is unwritten as opposed to what is written. In the book “Black-Box Testing”, Boris Beizer wrote that “In mature test suites, dirty tests [this corresponds to negative test cases] typically outnumber clean tests [positive test cases] in a ratio of 4:1 or 5:1.”

For the way that the program is intended to be used, we can immediately derive test cases from the specs. But we mustn’t fall into the trap of confirmation bias, i.e., our test cases must not be limited to confirming the specs. We must also create negative test cases, which are geared towards testing the program in ways it is not intended to be used. By subjecting the program to both positive and negative test cases, we can establish greater confidence that the program works and is robust.
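
To make the positive/negative split concrete, here’s what it could look like for a single month field, with a hypothetical validate_month() standing in for the program under test. In a real suite the dirty cases would keep growing well past what a toy example can show:

```python
import unittest

def validate_month(value):
    """Hypothetical function under test: accepts the string form of 1 to 12."""
    month = int(value)                   # non-numeric text raises ValueError here
    if not 1 <= month <= 12:
        raise ValueError("month out of range")
    return month

class MonthFieldTests(unittest.TestCase):
    # Positive (clean) tests: confirm the behavior written in the specs.
    def test_accepts_january_and_december(self):
        self.assertEqual(validate_month("1"), 1)
        self.assertEqual(validate_month("12"), 12)

    # Negative (dirty) tests: ways the field is not intended to be used.
    def test_rejects_out_of_range_and_garbage(self):
        for bad in ["0", "13", "-1", "Feb", "", "1.5", "1e1"]:
            with self.assertRaises(ValueError, msg=bad):
                validate_month(bad)

if __name__ == "__main__":
    unittest.main()
```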

Stick to the plan… but not too much

Whether we document it or not, I would like to think that we all plan before testing. Somehow, we have an outline of what we need to do to test the given function. Typically, we could expect cases that can explicitly be derived from the specs to be on the list. Say, I’m testing an import function, so I’ll try importing files of the valid file type and, of course, invalid file types. I’ll also try the standard test cases applicable to the screen. So far so good; chances are we’ll capture the obvious bugs.

Once you’re actually testing, I hope you don’t stop there! There are still many cases that could potentially NOT be handled by your devs. You’ve got to think of how else to make the function fail. Say, what would happen if I specify a file on a thumb drive and then remove it during the import? What if I specify a remote location and get disconnected? For remote locations, will the screen accept this format: \\MACHINE-NAME\C$? What if I use an invalid file that has just been renamed so that it’ll look valid? What if there’s no more available disk space in the import location? Etc. Explore. Be persistent.
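
Some of those detours are easy to prep with a couple of lines of code. For the “invalid file renamed to look valid” case, for instance, a throwaway Python sketch like this will do (the file name and extension are only an example):

```python
import os

def fake_valid_file(path="bogus_import.xlsx"):
    """Create a file whose extension claims it is a spreadsheet but whose
    contents are plain text, then hand it to the import screen."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("this is not a spreadsheet\n")
    return os.path.abspath(path)
```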

And if you’re system testing, it’s good to follow the provided test specs. But try to make little detours once in a while since you might find bugs that have been missed. Between someone with a mindset that there are no more bugs and someone who thinks there are still bugs, the latter will have better chances of finding more bugs. Explore. Be persistent.

Buggier and buggier

Here’s some recommended reading material for you and your devs, just in case you notice that the function gets even buggier as changes to the code are introduced: Three Questions About Each Bug You Find. It stresses the importance of asking the following questions for each bug or comment that needs to be addressed:

  1. Is this mistake somewhere else also?
  2. What next bug is hidden behind this one? (What happens if I fix this bug here? Will it wreak havoc elsewhere?)
  3. What should I do to prevent bugs like this?

Its application is not limited to bug fixing; it holds for comments on other work products as well.