How our team “does Agile”

Super quick background: our project started in January 2015. To kick things off with Agile, our Scrum Master ran a brief training session (less than half a day) for the team, and we’ve been playing it by ear ever since.

Over the course of many sprints, retros, and releases, we’ve made adjustments to how we do Agile. I’m not sure if there are Agile purists who would frown and shake their heads at the variations we’ve made. But despite our possibly non-canon approach, what still matters most is the team working closely together to deliver working software.

This post might be TL;DR. But in case you’re still interested in having a peek at how our little team does Agile Scrum, go on and have a read…



Bugs happen — learn from them

Over the past couple of weeks, our team deployed updated versions of our app to production to address some interesting issues. Of course, while we were in the midst of trying to address them, they didn’t seem so interesting.

Issue 1: Login would fail *sometimes*

Apparently, we were using an old LDAP server that was on its way to being decommissioned. You’d think pointing our app to the updated LDAP server would be the needed fix. Technically, it was! But while we were deploying, a new version of Node.js was released in which a function our app was using got deprecated. This caused a problem in saving records that we hadn’t anticipated in our impact analysis. The lesson: don’t skip the smoke tests, even when the change seems straightforward.

Issue 2: We’ve deployed a new version but the browser keeps using the cached old version.

We typically find Chrome more reliable than IE, but this time around IE was the one behaving as designed / implemented / intended. Despite the initial setup telling browsers not to cache, Chrome kept using an older version of the app even though we had already deployed a newer one. It also didn’t help that we kept clearing our cache during testing, so we had always been getting the latest version. One lesson here is the value of having a staging environment that mirrors production; that way we’d simulate what prod users would encounter when the new version gets deployed. Another lesson is to test in an environment where we don’t keep clearing the cache, since prod users most likely won’t clear their browsers as often as we do during integration testing.
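For what it’s worth, here’s a minimal sketch of the kind of header setup involved, assuming an Express-style Node.js server (our actual setup may differ; names here are illustrative). The idea is that the app shell revalidates on every load, so a new deployment isn’t masked by a cached old copy:

// Sketch only: assumes an Express-style server.
const express = require('express');
const app = express();

// Serve static files, but make the browser revalidate index.html on
// every load so it always picks up references to newly deployed assets.
app.use(express.static('public', {
  setHeaders: (res, filePath) => {
    if (filePath.endsWith('index.html')) {
      res.setHeader('Cache-Control', 'no-cache'); // revalidate before reusing the cache
    }
  }
}));

app.listen(3000);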

Issue 3: Error on saving a particular profile record

One of the standard test cases I carried with me (most of the time) from where I previously worked is checking whether leading and trailing spaces are trimmed when saving form data. For our app, though, we had earlier faced a ship-or-delay decision and opted to go ahead with deployment with that bug still open; extra spaces in field values didn’t seem critical compared to not having the app at all. Little did we know that spaces entered into one particular field would cause a circular reference in the JSON formed to submit the data, breaking both saving and retrieval. Thankfully, the impact wasn’t so bad: only 1 of the roughly 300 records created or modified hit this issue. The lesson: don’t skip trimming leading and trailing spaces if you can help it, and test for the impact of spaces in your test data.
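A minimal sketch of the kind of trimming involved (not our app’s actual code; field names are hypothetical): trim string values before they go into the JSON payload, so stray leading/trailing spaces never reach the back-end in the first place.

// Sketch only: trim leading/trailing spaces off every string field
// before the form data is serialized and submitted.
function trimFields(form) {
  const trimmed = {};
  for (const [key, value] of Object.entries(form)) {
    trimmed[key] = typeof value === 'string' ? value.trim() : value;
  }
  return trimmed;
}

// trimFields({ name: '  Pamela ', yearsOfExperience: 5 })
// => { name: 'Pamela', yearsOfExperience: 5 }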

So there. Bugs happen; there’s no such thing as perfect software. There’s no sense in kicking yourself endlessly over bumps like these. What’s important is to learn from them and keep moving forward.

Are you interested in software testing?

So yesterday I shared a link to 30 Things Every New Software Tester Should Learn on another social network. Now I know it says “new” and I’m not exactly new anymore, but I don’t know everything, so I’m sure I’ll still pick up something new. Besides, whether you learn something or not depends on your willingness and openness to the possibility of learning.

Anyways, that post consists of a series of tasks, the first of which is an introspection. It asks this key question: Are you interested in software testing? I guess it’s pretty safe to say that I am. I’ve been in testing for a long time now and I do enjoy it. I tweet and blog about it. I like finding bugs, figuring things out, working with fellow testers and the devs, and essentially just helping make our product better (and maybe our project too).

Now this is something I wonder about: are fellow testers actually interested in software testing? I totally understand that for some it’s a 9-to-5 job, and that for some their interests lie in their personal pursuits (be it art, sports, pets, or other hobbies); after all, there is more to life than just work! I don’t hold it against anyone if they’re not in love with their work (very few people are, in general) or not so gung-ho with software testing pride (pumps fists up in the air). But interest is critical. It can mean the difference between just going through the motions and excelling or exceeding expectations, and the difference between drudgery and enjoyment. At the very least, I do hope people like their work, and not just because it pays the bills.

I know some folks fell into software testing by chance: it happened to be the opportunity that was available, or they had to shift from another part of software engineering into testing. Some got into testing because they took a programming course in college but aren’t too keen on coding; inversely, some got into testing hoping to shift into coding. But regardless of how you got here, and whether you’re still testing the waters to figure out if testing is really for you, please exercise diligence. Testing might turn out to be something you can excel in, so give it a fair chance.

And maybe to be interested in software testing, the first step is to make a conscious decision to be interested in it.

“The very first step towards success in any occupation is to become interested in it.” – William Osler

Playing around with data entry

Back to work this new year, and I’m catching up on what I missed while I was on holiday. I found some notes I made when I checked out an internal site that got deployed. This one is about how data gets handled (or mishandled) in one of the forms on that site.

Usually in testing web applications, I try various inputs like

  • “&hearts;” – sometimes this gets displayed as ♥
  • “<script>alert('hello');</script>” – sometimes the alert / pop-up shows up on screen
  • “<b>hello</b>” – sometimes this gets displayed as hello in bold

In the above cases, what I entered isn’t the same as what gets stored or retrieved. Cases such as these — wherein our actual data input doesn’t get preserved — are things I try to watch out for and bring to the team’s attention.
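When we do find such an issue, the usual fix on the dev side is to escape user input on output rather than interpret it. A minimal sketch of what that looks like (not the site’s actual code):

// Sketch only: escape HTML special characters before rendering
// user-entered text, so "<b>hello</b>" displays literally instead of
// in bold, and script snippets can't execute.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// escapeHtml('<b>hello</b>') => '&lt;b&gt;hello&lt;/b&gt;'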


Searching with LIKE

Having a clue about how your search function performs its search can come in very handy. For instance, our search application makes use of SQL’s LIKE operator in the WHERE condition. So if I were to enter the search term “hello world”, the search in the database would be something like:

select * from TABLETOSEARCH 
where searchableField like '%hello world%';

There are certain characters that work differently with LIKE, so knowing them can help expose bugs where the search function behaves differently from what’s expected. The table below illustrates some examples, though the behavior could possibly vary depending on the database being used.

| Character | LIKE behavior of the character | Search term | Expected | Actual result |
| --- | --- | --- | --- | --- |
| ‘ (apostrophe) | Not a wildcard, but an unescaped apostrophe terminates the string literal in the SQL query | I don’t like patatas | Return Paul | BUG – “Sorry, an error has occurred.” |
| % (percent) | Allows you to match any string of any length (including zero length) | pam%ela | Shouldn’t return Pamela Lesley since her data doesn’t actually contain “pam%ela” | BUG – returned Pam |
| _ (underscore) | Allows you to match on a single character | pam_la le_le_ | Shouldn’t return Pamela Lesley | BUG – returned Pam |
| [ ] (with brackets) | Allows you to match on any character in the [ ] brackets (for example, [abc] would match on a, b, or c) | [pb]amela | Shouldn’t return Pamela Lesley | BUG – returned Pam |
| [^] (with caret in brackets) | Allows you to match on any character not in the [^] brackets (for example, [^abc] would match on any character that is not a, b, or c) | [^b]amela | Shouldn’t return Pamela Lesley | BUG – returned Pam |
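If the requirement is for the search to treat these characters literally, one common fix is to escape the wildcards and parameterize the query (parameterizing also takes care of the apostrophe error). A minimal sketch, assuming SQL Server-style LIKE wildcards and a node-style parameterized query API; the exact driver call and @term placeholder syntax are assumptions:

// Sketch only: escape LIKE wildcards so user input is matched literally.
function escapeLikeTerm(term) {
  return term.replace(/[\\%_\[]/g, ch => '\\' + ch); // escape \, %, _, [
}

function buildSearch(term) {
  return {
    sql: "select * from TABLETOSEARCH " +
         "where searchableField like @term escape '\\'",
    params: { term: '%' + escapeLikeTerm(term) + '%' }
  };
}

// buildSearch('pam%ela').params.term => '%pam\%ela%'
// The % is now literal, so the term no longer matches Pamela Lesley.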


Simulating Scenarios

When testing applications that are still being implemented, you may have a function or screen that’s already available for testing while some of its dependencies are not. As a tester, you have to weigh whether the function is really untestable or whether there’s a way to work around the current limitations. More often than not, it’s the latter.

To simulate cases realistically, you have to analyze what conditions need to be fulfilled. Otherwise, you might end up simulating an invalid case that shouldn’t happen IRL, and worse, trigger bugs that normally wouldn’t occur and wouldn’t really have to be handled.

CASE 1

For instance, in the application we’re working on, we need cases where the user is logged out, but we have no log-out function yet; or cases where the user doesn’t have a profile yet, but we have no delete function to reset the user to having no profile.

CASE 2

We found records appearing more than once in the Profiles List. There’s just 1 record per profile in the PROFILES table; what differs is that these profiles have more than 1 value for their Availability Status (there should be just 1 per profile).

Our hunch is this is what happened:

  1. The migration tool pulled the initial set of data from the existing system. Say profile KC has availability status GREEN.
  2. The availability status got updated in the existing system and in the new system under test. Say profile KC now has a RED availability status in the existing system and YELLOW in the new one.
  3. The migration tool was re-executed. This resulted in profile KC having 2 availability statuses, RED and YELLOW, in the new system.

The catch is that the 3rd step isn’t part of the migration tool’s intended use cases; it was intended to run on a clean environment. So the case wherein profiles appear multiple times was caused by an invalid scenario, or invalid data. As to whether the migration tool’s use case should be extended to support multiple reruns, that’s a different story.
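Incidentally, invalid data like this is easy to sweep for. A sketch of the kind of check we could run (table and column names are hypothetical):

// Sketch only: flag profiles that ended up with more than one
// availability status, which per the design should never happen.
const findDuplicateStatuses =
  "select profileId, count(*) as statusCount " +
  "from AVAILABILITY_STATUS " +
  "group by profileId " +
  "having count(*) > 1";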

Capturing timestamps

An often-unconsidered scenario when testing the saving of records is trying to save a record that has already been modified by another user. Or trying to view a record that is no longer available. Folks are often focused on the CRUD of the screens they’re testing and forget that it’s possible for multiple users to update the same record (or for a single user to update the same record using multiple browsers).

Most of the web systems I’ve tested handle such cases through the use of timestamps: fields capturing who created the record and when, and, more importantly, who last updated the record and when.

Just to illustrate: when you load the record on screen, the last-update information is retrieved along with it (say, 10/13/2016 3:40 PM). If the system or another user then updates the same record, that update changes the last-update information; say it becomes 10/13/2016 3:45 PM. That means you’re no longer viewing the latest version, because what you’ve got is the 3:40 PM version. So when you save, the app should throw an error saying the record has been modified by another user and that you should refresh.

So aside from testing cases where multiple users view and update the same record, you also need to check whether the timestamp fields are indeed updated accordingly.
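A minimal sketch of the timestamp check described above (the field names and the db.query / rowsAffected API are assumptions, not any particular driver):

// Sketch only: timestamp-based optimistic locking. The record carries the
// lastUpdated value it was loaded with; the update matches zero rows if
// someone else has saved in the meantime.
async function saveProfile(db, record) {
  const result = await db.query(
    "update PROFILES set data = @data, lastUpdated = @now " +
    "where id = @id and lastUpdated = @loadedAt",
    { data: record.data, now: new Date(), id: record.id,
      loadedAt: record.lastUpdated } // e.g. the 3:40 PM version we loaded
  );
  if (result.rowsAffected === 0) {
    // Another save (say, at 3:45 PM) beat us to it.
    throw new Error('Record was modified by another user; please refresh.');
  }
}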

READ MORE:

  • You can try googling for timestamping. A related, oft-missed item is locking.

Back-end and Front-end validations

When it comes to validations, it’s safest to handle them in both the front-end AND the back-end. You can’t rely on front-end validation alone because the request that gets sent to the server can be tampered with or modified. Typically, the same set of validations is implemented in both. Since we typically test via the app’s user interface only, we don’t get to directly exercise the back-end validations: most of the time, the front-end has already filtered and validated our inputs, so by the time the request is sent to the back-end, the data more or less passes back-end validation too.

Case 1

The case below shows how back-end validation caught the comparison between the start and end dates. There shouldn’t have been such a comparison to begin with, because the end date field was supposed to be hidden, but the front-end had passed on the incorrect information.

Just an elaboration of the case below:

What we see on screen:

  1. User enters the following for an experience entry:
    • start date: Apr 2016
    • end date: Feb 2016
  2. User checks the box to indicate he’s currently working there, which hides the end date.
  3. User proceeds with saving the profile.

What we expect: Saving should be successful, with start date = Apr 2016 and isCurrent = 1 (yes).

What actually happened: An error occurs on saving. The information on the end date is still sent, and back-end validation fails because Apr 2016 > Feb 2016.
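A minimal sketch of how the back-end check could mirror the screen (field names are hypothetical): when isCurrent is set, the end date is discarded instead of compared.

// Sketch only: when the "currently working here" box is checked, the end
// date is hidden on screen, so the back-end discards it instead of
// comparing it against the start date.
function validateExperience(exp) {
  if (exp.isCurrent) {
    return { valid: true, cleaned: { ...exp, endDate: null } };
  }
  if (exp.startDate > exp.endDate) {
    return { valid: false, error: 'Start date must not be after end date.' };
  }
  return { valid: true, cleaned: exp };
}

// validateExperience({ startDate: '2016-04', endDate: '2016-02', isCurrent: true })
// => valid: true (no spurious "Apr 2016 > Feb 2016" error)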

Case 2 [10/28/2016]

Here’s another case where back-end validation got triggered because the front-end validations never ran. Our application works with data migrated from an existing system, so what we tried was opening a migrated record and saving it using our edit form. Normally, front-end validation would have prevented a particular field from being null, but since the record wasn’t created or edited through the front-end UI, that field ended up null. On save, the error was still caught, thanks to back-end validation.
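The corresponding back-end check is the boring-but-vital kind (the field name here is hypothetical):

// Sketch only: the back-end repeats the required-field check so records
// that never passed through the UI (e.g. migrated data) still get caught.
function validateRequired(profile) {
  if (profile.someRequiredField == null ||
      String(profile.someRequiredField).trim() === '') {
    return { valid: false, error: 'someRequiredField is required.' };
  }
  return { valid: true };
}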


Some basic stuff

I noticed that last month I wrote one note a week about some issue encountered or a test case that needed considering. They seem a bit basic, but then I realized nobody really taught us this in school, and I don’t exactly recall it being in the onboarding at work either. So basic as they may seem, I’ll go right ahead and post them. I don’t have (m)any readers anyway, so I doubt anyone will complain. I’ll tag them as #fundamentals.

July 2016 Meetup of Software Testing Philippines

I attended a local software testing meetup yesterday. Typically, I’d only join if the venue was near work, but I had the day off, so stressing out over getting to the venue wasn’t an issue. What also lured me in was that there would be a talk on chartered exploratory testing and mind mapping. It’s not the first time I’ve heard of those things, but hearing about them in a local context got me interested.

Actually, hearing things in a local context is why I generally bother to attend these meetups. You can google the concepts, you can google for tools, but you can’t google how other peers are doing software testing locally.

So the meetup had 2 parts — 3 if you count the part where we got fed free pizza and soft drinks. There was the talk on Collaborative Test Design by Ian Pestelos (I’ll link to the slides if the deck gets shared), and there was an Open Space discussion. Since “Open Space” was in title-case, I went ahead and googled it and found this (from Martin Fowler’s site) and this (how to run an Open Space event). An interesting take-away from googling about Open Space is The Law of Two Feet:

If, during the course of the gathering, any person finds themselves in a situation where they are neither learning nor contributing, they must go to some more productive place.

Overall, I enjoyed Ian’s talk, and I wouldn’t have second thoughts about recommending it to peers. The Open Space discussion, not my favorite thing. Would I go to one of these meetups again? Sure, if there are talks on topics I find interesting and if my schedule permits.
