So for the past couple of weeks, our team deployed updated versions of our app into production to address some interesting issues. But of course, when we were in the midst of trying to address them, they didn’t seem so interesting then.
Issue 1: Login would fail *sometimes*
Apparently, we were using an old LDAP server that was on its way to being decommissioned. You’d think pointing our app to the updated LDAP server would be the needed fix. Well, technically, it was! But in the course of deploying, a new version of Node.js got released in which a function our app was using got deprecated. This then caused problems in saving records, which we hadn’t anticipated when we did our impact analysis. The lesson is not to skip the smoke tests even when the change seems quite straightforward.
Issue 2: We’ve deployed a new version but the browser keeps using the cached old version.
We typically find Chrome more reliable than IE. But this time around, IE was the one behaving as designed / implemented / intended. Despite being configured not to cache, Chrome kept using an older version of the app even though we had already deployed a newer one. It also didn’t help that we kept clearing our cache during testing, so we had always been getting the latest version. The lesson here highlights the value of having a staging environment that mirrors production: this way we’d simulate what prod users would encounter when the new version gets deployed. Another lesson is to test in an environment where we don’t keep clearing the cache, since prod users most likely won’t be clearing their browsers as often as we do during integration testing.
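One common defense against stale cached assets (besides cache headers, which clearly didn’t save us here) is to “cache-bust” asset URLs with a build version, so each deploy produces a URL the browser has never seen and therefore can’t serve from cache. A minimal sketch — `bustCache` and `BUILD_VERSION` are hypothetical names, not our app’s actual code:

```javascript
// Cache-busting sketch: append the build version as a query parameter so a
// new deploy changes every asset URL, and old cached copies are never reused.
const BUILD_VERSION = '2.1.0'; // illustrative; typically injected at build time

function bustCache(assetUrl) {
  // Use '&' if the URL already has a query string, '?' otherwise.
  const sep = assetUrl.includes('?') ? '&' : '?';
  return assetUrl + sep + 'v=' + BUILD_VERSION;
}

console.log(bustCache('/js/app.js')); // /js/app.js?v=2.1.0
```

Modern build tools do a stronger variant of this (content hashes in filenames), but even a simple version query parameter like the above would have forced Chrome to fetch the new deployment.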
Issue 3: Error on saving a particular profile record
One of the standard test cases from where I previously worked, which I somehow carried with me (most of the time), is to check whether leading and trailing spaces are trimmed when saving data in forms. For our app though, we previously had to make a ship-or-delay decision, and opted to go ahead with deployment with that bug still open. Extra spaces in field values didn’t seem critical compared to not having the app at all. Little did we know that spaces entered into a particular field would somehow cause a circular reference in the JSON formed to submit the data, causing an error in both saving and retrieving the record. Thankfully, the impact wasn’t so bad, considering we only had one instance of this issue out of around 300 records that had been created or modified. The lesson learned here is, well, not to skip trimming leading and trailing spaces if you can help it, and to test for the impact of spaces in your test data.
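The fix itself is cheap, which makes deferring it feel all the sillier in hindsight. A sketch of the sort of helper that would have sidestepped the whole issue — `trimFields` is a hypothetical name, not what our app actually uses — applied to form values before building the JSON payload:

```javascript
// Trim leading/trailing spaces from every string field of a flat form object
// before serializing it for submission.
function trimFields(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = typeof value === 'string' ? value.trim() : value;
  }
  return out;
}

const form = { name: '  Jim Halpert ', title: 'Sales  ' };
console.log(JSON.stringify(trimFields(form)));
// {"name":"Jim Halpert","title":"Sales"}
```

For nested form structures you’d want a recursive version, but even this flat pass covers the typical profile-form case that bit us.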
So there. Bugs happen. There’s no such thing as perfect software. There’s no sense in kicking yourself endlessly over bumps like these. What’s important is to get some learning out of instances like these and to keep on moving forward.
We had our regular team meeting yesterday and Dwight shared a couple of videos. They’re both from TED talks. The first video is a talk by Derek Sivers where he shows a dancing guy whose top looks like he’ll have a pretty mean sunburn right afterwards. He starts dancing like no one’s watching, first by himself. Then one other guy joins in, and soon there’s a big crowd dancing. The video and a transcript are available here: http://sivers.org/ff. The key lesson, I suppose, is that although leadership is important, being a courageous follower is just as important.
The next video is by David Damberger, who is the founder of Engineers Without Borders. He discusses how projects that sought to help the needy through building physical structures like schools, wells, and such would often fail due to lack of maintenance. Then similar projects aiming at the same cause would come along, only to end up with the same problems. In the end, these projects don’t help as much as they should have. Here’s a link to the TEDx vid: http://tedxtalks.ted.com/video/TEDxYYC-David-Damberger-Learnin. This reminds me of the value of lessons learned. And being in the software industry, it reminded me of the Classic Mistakes that I read about in Steve McConnell’s Rapid Development: Taming Wild Software Schedules. I previously blogged about it, so I got the notes from there and shared them with the team. In my post, I said that this list is not about rubbing salt into the wound or adding insult to injury. It’s about knowing what most likely could go wrong (based on what had gone wrong a LOT), and taking measures to avoid it.
I suppose one could expect that having a bug list or a defect tracking system should already be pretty standard in a software project. But it’s funny how it’s the basics that get forgotten or foregone sometimes.
Well, we did have Quality Center set up, but one of the concerns was that the devs weren’t paying attention to it and testers weren’t logging into it. It was a chicken-and-egg thing. Email got inundated with issues, follow-ups and such, which kinda sucked since I always got copied on issue email. Eventually, our PM (he got cc’d too) put his foot down and we made the shift back to QC, with me goading the testers to use our defect tracker as intended, and with someone from the dev team monitoring the defects with respect to dev assignments. Thankfully, the team has been quite cooperative.
This has also pushed me into tinkering a bit with the reporting capabilities of QC. There are built-in reports that I found to be of use, and I’ve also created my own queries for generating my own reports on defects and test case status.
So far, it’s been working out. With the shift to QC, our email is no longer as abused or misused for defect tracking. We can now misuse it for something else (jk). One major advantage is that we can now, if needed and as needed, easily extract defect data. Instead of having to dig through old email, getting the list of open issues across the many applications that we’re handling at a time is now quite easy. Having all bugs logged in QC also makes red flags easier to spot: if several testers report similar issues at the same time, there may already be a global issue; or if devs are deferring a significant number of defects as non-issues, that could point to poor defect reporting, or to valid issues getting dismissed.
The lesson learned is simple: use things as intended. Email for comms and the defect tracker for defect tracking. They’re there specifically for those purposes, so use them accordingly. And this is a team thing: even if the testers are disciplined in logging defects, it won’t be as effective if the devs aren’t using the tool as well. Work toward alignment within your team so that the available tools are used to full advantage.
Over the weekend, Pam and I were at the supermarket buying some groceries. When it was our turn to pay at the counter, we were still quite engaged in conversation. When the cashier gave us the total bill, I handed over my credit card to her. In turn, she handed the credit card slip to Pam instead of to me. Pam absentmindedly signed the slip (and I absentmindedly let her; we were still talking :p). The mistake was eventually realized when the cashier handed back the card and our copy of the slip to Pam, and it was actually Pam who pointed it out.
What was odd though was that the cashier actually flipped my card over to see the back side and she did look at Pam’s signature on the slip. She went through the motions but failed to see that the signatures and even the names did not match.
Some lessons learned (yeah, over buying groceries):
- It’s not only inattentional blindness that we should be wary of. Apparently, there’s also attentional blindness wherein we’re looking but we’re not seeing.
- Just because the test steps were performed doesn’t mean the test’s objectives were met. Try to align what you do with your purpose.
- Monotony dulls the senses. Once in a while, it might be a good idea to defocus then refocus. After all, fresh eyes find failure.
I’ve mentioned practice in my previous post. I had been meaning to post an addendum (current project makes use of that word a lot) but with our hectic schedule, I only got the chance to do that now. For the past few days, our team has been going on overtime and today was actually the first time I got home before 9PM! Wait, I’m digressing.
Lesson 214 is: If you really want to know what’s going on, test with your staff.
Advantages: (1) It’ll keep the saw sharp. (2) You’ll get to see what the testers in the team have to deal with — unnecessary steps in the procedures, difficult developers, problematic tools, etc. And you’ll be in a better position to offer and discuss suggestions, and evaluate solutions. (3) It’ll give you a better idea of your product’s quality, the strengths and weaknesses of your teammates, and of the team dynamics. (4) Your teammates would actually be able to talk to you about your product!
One of the testers at work uses an excerpt from Lesson 47 of Lessons Learned in Software Testing as an email signature. The lesson’s heading reads: You can’t master testing unless you reinvent it. I reread the entry in the book and the bit that struck me the most reads:
If you want to be good at this, you have to practice.
(I hear this in my head as if it’s spoken by someone with a shifu-like voice… must be because of hearing one of the character voices in Red Alert 3 in the background.) Although this may not be the main point of the lesson (or it could be), I want to reiterate the point that a prerequisite of mastery is practice. Before you can go about reinventing testing, you’d actually have to be proficient at it. Before you can “be the author of your own wisdom”, I think you need experience (lots and lots of it) from which you’ll draw that wisdom.
I’d place a nice segue here but I can’t think of any at the moment (it’s way past my bedtime). I just want to post a couple of links here. The first link is to an article giving insights on how to practice software testing; the second is to a blog post by a test manager who values keeping testing in practice.