Whether we document it or not, I would like to think that we all plan before testing. Somehow or other, we end up with an outline of what we need to do to test a given function. Typically, you could expect the cases that can be derived explicitly from the specs to be on that list. Say I'm testing an import function: I'll try importing files of the valid file type and, of course, invalid file types. I'll also try the standard test cases applicable to the screen. So far so good; chances are we'll capture the obvious bugs.
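Those spec-derived cases translate naturally into a small table-driven test. Here's a minimal sketch; `import_file` is a hypothetical stand-in for the function under test, and the "only .csv is valid" rule is an assumption for illustration:

```python
import os
import tempfile

def import_file(path):
    # Hypothetical import function -- stand-in for the real one under test.
    # Assumed spec: only .csv files are valid, and the file must exist.
    if not path.lower().endswith(".csv"):
        raise ValueError("invalid file type: " + path)
    if not os.path.exists(path):
        raise FileNotFoundError(path)
    return "imported"

# Spec-derived cases: one valid file type, a few invalid ones.
with tempfile.TemporaryDirectory() as d:
    valid = os.path.join(d, "data.csv")
    open(valid, "w").close()
    assert import_file(valid) == "imported"

    for bad in ("data.exe", "data.txt", "data"):
        try:
            import_file(os.path.join(d, bad))
            raise AssertionError("should have rejected " + bad)
        except ValueError:
            pass  # rejected as expected
```

The point is not the stub itself but the shape: valid inputs and a list of invalid ones, so each new case from the specs is one more entry in the loop.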
Once you’re actually testing, I hope you don’t stop there! There are still many cases that your devs may NOT have handled. You’ve got to think of how else to make the function fail. Say, what would happen if I specify a file on a thumb drive and then remove the drive during import? What if I specify a remote location and get disconnected? For remote locations, will the screen accept the \\MACHINE-NAME\C$ format? What if I use an invalid file that has just been renamed to look valid? What if there’s no available disk space left in the import location? And so on. Explore. Be persistent.
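Take just one of those cases, the invalid file renamed to look valid: an extension check alone is fooled by it, so validation has to look at content too. A minimal sketch, again assuming a CSV import and a hypothetical `validate_import` helper (content sniffing via a UTF-8 decode is one simple heuristic, not the only one):

```python
import os
import tempfile

def validate_import(path):
    # Extension check alone is fooled by a renamed file.
    if not path.lower().endswith(".csv"):
        return False
    # Content check: a renamed binary will usually fail a text decode.
    try:
        with open(path, "rb") as f:
            f.read(1024).decode("utf-8")
    except UnicodeDecodeError:
        return False
    return True

d = tempfile.mkdtemp()
renamed = os.path.join(d, "evil.csv")  # binary junk renamed to look valid
with open(renamed, "wb") as f:
    f.write(b"\xff\xfe\x00\x9c binary junk")
assert not validate_import(renamed)
```

Whether the real product does anything like this is exactly what this kind of test case finds out; if it only checks the extension, you've got your bug.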
And if you’re system testing, it’s good to follow the provided test specs. But make little detours once in a while, since you might find bugs that have been missed. Between someone who believes there are no more bugs and someone who believes there still are, the latter has the better chance of finding them. Explore. Be persistent.