How our team “does Agile”

Super quick background: our project started in January 2015. To kick things off with the Agile methodology, our Scrum master gave the team a brief training session (less than half a day), and we’ve been playing it by ear ever since.

Over the course of many sprints, retros, and releases, we’ve made adjustments to how we do Agile. I’m not sure if there are Agile purists who would frown and shake their heads at the variations we’ve made. But despite our possibly non-canon approach, what still matters most is the team working closely together to deliver working software.

This post might get a bit TL;DR. But in case you’re still interested in having a peek at how our little team does Agile Scrum, go on and have a read…


Read: The Dance of the Possible

I’ve just finished Scott Berkun’s newest book, The Dance of the Possible. As promised by the author, it’s a short book, intended so that we can get what we can out of it, get it out of the way, and dive into actually creating something. It’s divided into three parts, each of which I read in one sitting of around an hour. You can breeze through it in less, but I liked reflecting on points the author raised and recalling experiences I could relate them to (or could have related them to).

If I were to describe the three parts of the book, I’d say part one is about generating ideas. Part two is about developing your ideas. And part three is for when it’s getting extra challenging to keep going and you need that extra boost.

He captures in writing some of the things I personally go through in my own creative process, which made me virtually nod in agreement and think, “Oh yeah, that was what I was doing!” And in making me aware of them, I can be more intentional in applying them, and more accepting when I feel like I’ve hit some sort of slump (that I will get over, of course).

I’ve long been intending to read a book on creativity (among many other things), and having Scott’s book come along with an invitation to do a book review really pushed me. When he described one of the seven sources of fuel for why people create the things they do (i.e., “Deliberately put yourself in situations where you have no way out but through.”), I couldn’t help being amused and thinking, “Yeah, that happened!” I think even without the book-review aspect, I’d have enjoyed reading his book as I’ve enjoyed some of his other writings. The review just adds another dimension, and it feels like things have come full circle because the book on creativity actually prompted me to create!

[Edit: Same content is posted as an Amazon book review over here.]

Honest KC needs to put some brakes on it

I gave some rather tactless feedback during a meeting earlier today. Somehow, we stumbled onto the topic of the weekly tidbit email. I might have implied, or maybe even explicitly said, that it was rather useless; either way, that cat’s out of the bag. I was wrong; my feedback could have been more constructive. What I could have said is what I had emailed months ago: that their weekly tidbits would be more helpful if they provided some content and/or context.

I mean, the one I received and gave feedback to was a tidbit with an image containing the text:

Test Managers, level up your basic excel skills! ☺

That’s it. No link, no content, no context. It doesn’t give me any background on why, what good it will bring, what it’s for, where it’s coming from, etc. And even if I had been interested in that tip, it would have been helpful to be given a lead to a recommended reference that had been helpful to them in the past; that way I wouldn’t need to scour the internet for a really good and quick reference. That aside, I also think there are more testing-y skills to level up on than Excel skills.

Anyway, should I have even bothered? Maybe I could have just ignored it, but it comes into my mailbox once a week with no option to unsubscribe. I do recognize that the intent is good. But you know what they say about good intentions: aside from paving the way to hell, good intentions aren’t enough.

Credibility, and being leaders people would want to follow

A Facebook friend shared a video of a TEDx talk on leadership. It starts off by posing the question: why would anybody follow you? I think this is something leaders should ask themselves. And if the answer is just “I’m the boss (mic drop, gangsta pose),” then that’s not leadership to begin with.

Leaders should ask themselves this question: why would anybody follow you? It’s not about being insecure. It’s about empathy and understanding that most people are driven by purpose. It’s about finding out how you can be someone people would willingly follow.

Below’s a link to the video and some excerpts:

“…If you want to lead others, they’ve got to believe that you’re credible. They’ve got to believe that you’re honest, competent, forward-looking and inspiring. You can ask yourself how would people describe me. And you can ask yourself, in my behaviour and in my actions: Do I demonstrate to people my honesty? Do I demonstrate to them my competency? Do I demonstrate to them my enthusiasm, my passion? Do I demonstrate for them what it is that I care about?

We need to be able to tell the truth. We need to be clear on what’s important and why it’s important. And we need to be able to make sure that we act in ways that we say: we say this is important, we follow through with that. We need to continue to develop ourselves, our competency.  Our competency is an asset that appreciates over time. You’ve got to keep filling it up. And leaders are great learners. They’re always open to wonderment and always open to trying to learn more things that they can get better. Good enough never is. You’ve got to be willing to show your enthusiasm, to show your passion. … Show your enthusiasm. Be willing to say: I’m excited about this, this is important, this is significant. And you’ve got to be willing to take a stand. You need to be able to express an opinion. …You need to believe in such a way that other people will believe that you believe and will in fact be infected by your enthusiasm. … The simple truth is this: People will not believe the message if they don’t believe in the messenger.”

Bugs happen — learn from them

So for the past couple of weeks, our team deployed updated versions of our app into production to address some interesting issues. Of course, when we were in the midst of trying to address them, they didn’t seem so interesting.

Issue 1: Login would fail *sometimes*

Apparently, we were using an old LDAP server that was on its way to being decommissioned. You’d think pointing our app to the updated LDAP server would be the needed fix. Technically, it was! But in the course of deploying, a new version of nodeJS got released in which a function our app was using got deprecated. This caused a problem saving records which we hadn’t anticipated when we did our impact analysis. The lesson: don’t skip the smoke tests even when the change seems quite straightforward.

Issue 2: We’ve deployed a new version but the browser keeps using the cached old version.

We typically find Chrome more reliable than IE. But this time around, IE was the one behaving as designed / implemented / intended. Despite the initial setup not to cache, Chrome was still using an older version of the app even though we had already deployed a newer one. It also didn’t help that we kept clearing our cache during testing, so we had always been getting the latest version. The lesson here highlights the value of having a staging environment that mirrors production; that way we’d see what prod users would encounter when the new version gets deployed. Another lesson is to also test in an environment where we don’t keep clearing the cache, since prod users most likely won’t be clearing their browsers as often as we do during integration testing.
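One common mitigation for this class of problem is to “cache-bust” asset URLs by appending the deployed version, so every release produces a URL the browser has never cached. A minimal sketch in nodeJS-flavored JavaScript (the function name and version value are illustrative, not our actual deployment setup):

```javascript
// Append the deployed version to an asset URL so browsers treat each
// release as a brand-new resource instead of serving a cached old one.
function cacheBustedUrl(assetPath, version) {
  const sep = assetPath.includes('?') ? '&' : '?';
  return `${assetPath}${sep}v=${encodeURIComponent(version)}`;
}

console.log(cacheBustedUrl('/js/app.bundle.js', '2.3.1'));
// -> /js/app.bundle.js?v=2.3.1
```

Proper Cache-Control response headers are the other half of the fix, but those depend on the particular server setup.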

Issue 3: Error on saving a particular profile record

One of the standard test cases from where I previously worked, which I somehow carried with me (most of the time), is to check whether leading and trailing spaces are trimmed when saving data in forms. For our app, though, we previously had to make a ship-or-delay decision, and opted to go ahead with deployment with that bug still open. Extra spaces in field values didn’t seem critical compared to not having the app at all. Little did we know that spaces entered into one particular field would somehow cause a circular reference in the JSON formed to submit the data, and cause an error in saving and retrieving it. Thankfully, the impact wasn’t so bad, considering we only had one instance of this issue out of around 300 records that had been created or modified. The lesson learned here: don’t skip trimming leading and trailing spaces if you can help it, and test for the impact of spaces in your test data.
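A minimal sketch of the kind of trimming that would have avoided this, in JavaScript (the field names are hypothetical, not our actual form):

```javascript
// Trim leading/trailing spaces from every string field of a form payload
// before it gets serialized to JSON and submitted.
function trimFields(formData) {
  const trimmed = {};
  for (const [key, value] of Object.entries(formData)) {
    trimmed[key] = typeof value === 'string' ? value.trim() : value;
  }
  return trimmed;
}

console.log(trimFields({ name: '  KC ', role: 'tester ', yearsInTesting: 10 }));
// -> { name: 'KC', role: 'tester', yearsInTesting: 10 }
```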

So there. Bugs happen. There’s no such thing as perfect software, and there’s no sense in kicking yourself endlessly over bumps like these. What’s important is to learn from them and keep moving forward.

Are you interested in software testing?

So yesterday I shared a link to 30 Things Every New Software Tester Should Learn on another social network. Now I know it says “new,” and I’m not exactly new anymore. But I don’t know everything, so I’m sure I’ll still pick up something. Besides, whether you learn something or not depends on your willingness and openness to the possibility of learning.

Anyway, that post consists of a series of tasks, the first of which is to do some introspection. It asks this key question: are you interested in software testing? I guess it’s pretty safe to say that I am. I’ve been in testing for a long time now and I do enjoy it. I tweet and blog about it. I like finding bugs, figuring things out, working with fellow testers and the devs, and essentially just helping make our product better (and maybe our project too).

This is something I also wonder about fellow testers: are they actually interested in software testing? I totally understand that for some it’s a 9-to-5 job, and for some their interests lie in personal pursuits (be it art, sports, pets, or other hobbies); after all, there is more to life than just work! I don’t take it against anyone if they’re not in love with their work (very few people are, in general) or so gung-ho with software-testing pride (pumps fists up in the air). But interest is critical. It can mean the difference between just going through the motions and excelling or exceeding expectations. And it can mean the difference between drudgery and enjoyment. At the very least, I hope people like their work, and not just because it pays the bills.

I know there are some folks who fell into software testing by chance — it happened to be an opportunity that was available, or they had to shift from another part of software engineering to testing. Some folks got into testing because they took a programming course in college but aren’t too keen on coding. And conversely, there are some who got into testing with hopes of shifting into coding. But regardless of how you got here, and whether you’re still testing the waters to figure out if testing is really for you, please exercise diligence. Testing might turn out to be something you can excel in, so give it a fair chance.

And maybe to be interested in software testing, the first step is to make a conscious decision to be interested in it.

“The very first step towards success in any occupation is to become interested in it.” – William Osler

Playing around with data entry

Back to work this new year, and I’m catching up on what I missed while on holiday. I found some notes I made when I checked out an internal site that got deployed. This one is about how data gets handled (or mishandled) in one of the forms on that site.

Usually in testing web applications, I try various inputs like

  • “&hearts;” – sometimes this gets displayed as ♥
  • “<script>alert(‘hello’);</script>” – sometimes the alert / pop-up shows up on screen
  • “<b>hello</b>” – sometimes this gets displayed as a bold hello

In the above cases, what I entered isn’t the same as what gets stored or retrieved. Cases such as these — wherein our actual data input doesn’t get preserved — are things I try to watch out for and bring to the team’s attention.
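When the team decides inputs like these should be stored verbatim and only neutralized on display, the usual fix is HTML-escaping at render time. A minimal sketch (the function name is mine; template engines and frameworks normally provide this for you):

```javascript
// Escape the characters that HTML treats specially, so stored input like
// "<b>hello</b>" is displayed as typed instead of interpreted as markup.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<b>hello</b>'));
// -> &lt;b&gt;hello&lt;/b&gt;
```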


Searching with LIKE

Having a clue about how your search function performs its search can come in very handy. For instance, our search application uses SQL’s LIKE operator in the WHERE clause. So if I were to enter the search term “hello world”, the query against the database would be something like:

select * from TABLETOSEARCH 
where searchableField like '%hello world%';

There are certain characters that work differently with LIKE, so knowing them can help expose bugs that cause the search function to behave differently from what’s expected. The table below illustrates some examples, though the behavior could possibly vary depending on the database being used.

| Character | LIKE behavior of the character | Search term | Expected | Actual result |
| --- | --- | --- | --- | --- |
| ‘ (apostrophe) | Ends the string literal in SQL, so an unescaped apostrophe breaks the query | I don’t like patatas | Return Paul | BUG – “Sorry, an error has occurred.” |
| % (percent) | Matches any string of any length (including zero length) | pam%ela | Shouldn’t return Pamela Lesley since her data doesn’t actually contain “pam%ela” | BUG – returned Pam |
| _ (underscore) | Matches any single character | pam_la le_le_ | Shouldn’t return Pamela Lesley | BUG – returned Pam |
| [ ] (brackets) | Matches any character inside the brackets (for example, [abc] matches a, b, or c) | [pb]amela | Shouldn’t return Pamela Lesley | BUG – returned Pam |
| [^ ] (caret in brackets) | Matches any character not inside the brackets (for example, [^abc] matches any character that is not a, b, or c) | [^b]amela | Shouldn’t return Pamela Lesley | BUG – returned Pam |
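These bugs all come from passing the raw search term straight into the LIKE pattern. A sketch of the escaping that addresses the wildcard cases, in JavaScript (assuming SQL Server-style bracket wildcards; the function name is illustrative). The query would then also need a matching ESCAPE '\' clause, and the apostrophe case is best handled with parameterized queries rather than string concatenation:

```javascript
// Escape the characters LIKE treats specially (%, _, [, ], and the escape
// character itself) so they match literally in the search pattern.
function escapeLikePattern(term) {
  return term.replace(/[\\%_[\]]/g, (ch) => '\\' + ch);
}

console.log(escapeLikePattern('pam%ela'));    // -> pam\%ela
console.log(escapeLikePattern('[pb]amela'));  // -> \[pb\]amela
```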


Simulating Scenarios

When testing applications that are still being implemented, it’s possible to have a function or screen that’s already available for testing while some of its dependencies are not yet available. As a tester, you have to weigh whether the function is really not testable or whether there’s a way to work around the current limitations. More often than not, it’s the latter.

To simulate the cases realistically, you have to analyze and find out what conditions need to be fulfilled. Otherwise, you might end up simulating an invalid case that shouldn’t happen IRL and, worse, trigger bugs that normally wouldn’t occur and wouldn’t really have to be handled.

CASE 1

For instance, in the application we’re working on, we have to test cases where the user is logged out, but there’s no log-out function yet; and cases where the user doesn’t have a profile yet, but there’s no delete function to return a user to the no-profile state.

CASE 2

We found records appearing more than once in the Profiles List. There’s just one record each in the PROFILES table; what differs is that they have more than one value for their Availability Status (there should be just one per profile).

Our hunch is this is what happened:

  1. The migration tool pulled the initial set of data from the existing system. Say, profile KC has availability status GREEN.
  2. The availability status got updated in the existing or new system under test. Say, profile KC now has RED availability status in the existing, and YELLOW in the new system.
  3. The migration tool is re-executed. This resulted in profile KC having two availability statuses, RED and YELLOW, in the new system.

The catch is that the third step isn’t part of the migration tool’s intended use cases; it was intended to run on a clean environment. So the profiles appearing multiple times were caused by an invalid scenario, or invalid data. Whether the migration tool’s use case should be extended to support multiple reruns is a different story.

Capturing timestamps

An often-overlooked scenario when testing the saving of records is the case where you try to save a record that has already been modified by another user, or try to view a record that is no longer available. Folks are often focused on the CRUD of the screens they’re testing and forget that it’s possible for multiple users to update the same record (or for a single user to update the same record from multiple browsers).

Most of the web systems I’ve tested handle such cases through the use of timestamps. There are fields capturing info like who created the record and when it was created, and, more importantly, who last updated the record and when it was last updated.

Just to illustrate: when you load a record on screen, the last-update information is also retrieved (let’s say 10/13/2016 3:40 PM). If the system or another user then updates the same record, that update changes the last-update information; let’s say it becomes 10/13/2016 3:45 PM. That means you’re no longer viewing the latest version, because what you’ve got is the 3:40 PM version. So when you save, the system should throw an error saying the record has been modified by another user and that you should refresh.

So aside from testing the cases wherein you consider multiple users viewing and updating the same record, you also need to check whether the timestamp fields are indeed updated accordingly.
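A minimal sketch of that check in JavaScript, with an in-memory object standing in for the database (record shape and timestamp values are illustrative, not any actual schema):

```javascript
// Timestamp-based optimistic locking: a save only succeeds if the
// lastUpdated value the client loaded still matches what's stored.
const store = { 42: { name: 'KC', lastUpdated: '10/13/2016 3:40 PM' } };

function save(id, changes, loadedLastUpdated, now) {
  const current = store[id];
  if (current.lastUpdated !== loadedLastUpdated) {
    throw new Error('Record has been modified by another user. Please refresh.');
  }
  store[id] = { ...current, ...changes, lastUpdated: now };
  return store[id];
}

// User A loaded the 3:40 PM version and saves first: succeeds.
save(42, { name: 'KC M.' }, '10/13/2016 3:40 PM', '10/13/2016 3:45 PM');

// User B also loaded the 3:40 PM version; their save is now rejected.
try {
  save(42, { name: 'Casey' }, '10/13/2016 3:40 PM', '10/13/2016 3:50 PM');
} catch (e) {
  console.log(e.message); // Record has been modified by another user. Please refresh.
}
```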

READ MORE:

  • You can try googling for timestamping. A related (and oft-missed) topic is locking.