Do we size bugs with story points?

I found a question posted in one of the social channels at work: How should one assign story points to bugs/defects that one doesn’t yet know how to fix and that require investigation? The original question asks how, but even before we go there, I think it would be nice to know whether we need to in the first place. I plan to write about this in two parts: (1) how I might go about with it — no explanations, just based on past experiences as a member of Agile Scrum teams and what I’ve read on the topic, and (2) links and quotes galore.

How I might go about with it

  • If it’s a bug found during testing of a user story we’re working on in the sprint AND it’s small enough (implicitly sized) to be fixed within the same sprint: It goes into the sprint backlog. No need to size it. Just prioritize it accordingly.
  • If it’s a bug unrelated to the user stories we’re testing this sprint (say, from an older feature) OR it’s too big or complex a bug (again, implicitly sized) to be fixed within the sprint: It goes to the product backlog. It’ll be groomed as you would groom other user stories, to give it enough detail for the team to work with. And if it makes its way into Sprint Planning, then size the bug.
  • Now what if a bug that goes into the product backlog requires more investigation than usual? (All bugs require some investigation, but in some cases the devs already have an idea of how to fix it, while in others they have no idea at all, hence more investigation is needed.) Tag it as a spike (not a term in the Scrum Guide, FYI). If it goes into the Sprint Backlog, meaning the team agrees to invest time investigating that bug within the Sprint, there’s no need to size it.
    • For that spike in the Sprint, it’ll just mean there’ll be a time-box (1-3 days of effort) for investigating that bug. At the end of the time-box, whoever works on it reports their findings and the team can discuss the next steps.
    • Assuming the team agrees on a resolution, duplicate the spike-tagged bug and close the original. In the duplicate, remove the spike label. If it’s to remain in the Sprint Backlog, meaning the team will fix it within the Sprint, then size the bug. Otherwise, the new bug (the duplicate) goes to the Product Backlog and there’s no need to size it yet.
    • But what if there’s still no resolution or identified workaround? The team can opt to extend the time-box, but at some point you can’t just keep extending it forever. Once a threshold is met (is 3 months too long/short?): Tag it with a label your team agrees to use on such items, and then archive it.
  • At the end of the Sprint, the Scrum Master will be able to gather the following data in case they want to use it for forecasting (a rough sketch of pulling these numbers together follows this list):
    • User Stories – total story points, bugs per user story
    • Bugs – total story points, total number of bugs
    • Spikes – total number of Spikes worked on, total number of Spikes closed, total points from Spikes that were converted to new bugs
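
To make that forecasting bit concrete, here’s a rough Python sketch of how those numbers could be pulled together from an exported list of sprint items. Everything here (the field names, the item IDs) is made up, since the actual shape depends on your tool:

```python
# Hypothetical sprint export: each item has a type, points, and a few extra fields.
# Field names are assumptions, not from any specific tool.
sprint_items = [
    {"id": "US-101", "type": "story", "points": 5, "bugs_found": 2},
    {"id": "BUG-7",  "type": "bug",   "points": 3},
    {"id": "BUG-9",  "type": "bug",   "points": None},  # unsized bug fixed in-sprint
    {"id": "SPK-2",  "type": "spike", "closed": True, "converted_points": 3},
]

stories = [i for i in sprint_items if i["type"] == "story"]
bugs    = [i for i in sprint_items if i["type"] == "bug"]
spikes  = [i for i in sprint_items if i["type"] == "spike"]

summary = {
    "story_points_total": sum(s["points"] for s in stories),
    "bugs_per_story":     {s["id"]: s.get("bugs_found", 0) for s in stories},
    "bug_points_total":   sum(b["points"] or 0 for b in bugs),
    "bug_count":          len(bugs),
    "spikes_worked_on":   len(spikes),
    "spikes_closed":      sum(1 for s in spikes if s.get("closed")),
    "points_from_converted_spikes": sum(s.get("converted_points", 0) for s in spikes),
}
print(summary)
```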

That turned out longer than I expected. The next part is some links on the topic, which could give you opposing views to help you come up with your own answer.

Links and quotes galore

12 common mistakes made when using Story Points – This has a lot of other interesting points, not just on whether you size bugs or not.

  • “Story Points represent the effort required to put a PBI (Product Backlog Item) live.” So story points are not limited to user stories.
  • “Story Points are about effort. Complexity, uncertainty and risk [are] factors that influence effort but each alone is not enough to determine effort.”
  • [Common mistake #5: Never Story Pointing Bugs] “A bug which is unrelated to the current sprint should just be story pointed. The bug represents work the team needs to complete. This does not apply if the team reserves a fixed percentage of time for working on bugs during the sprint. A bug related to an issue in the sprint should not be story pointed as this is part of the original estimation.”

Should Story Points Be Assigned to a Bug Fixing Story?

  • [I think this is with respect to legacy bugs or when the team is dealing with a large database of agile defects] “My usual recommendation is to assign points to bug fixing the agile defects. This really achieves the best of both worlds. We are able to see how much work the team is really able to accomplish, but also able to look at the historical data and see how much went into the bug-fixing story each sprint.”

Should you ‘Story Point’ everything? – This is a thread in the Scrum.org forum.

  • (No points for bugs) ‘They are called story points for a reason. They are not call[ed] “Item Points”. Ideally you should only have stories in your backlog and the technical tasks should be inside…’
  • (Yes or no points for bugs) “It is critical as a Scrum Master to ensure that story points are being used properly within an organization. They serve two purposes only: to help the Development Team and Product Owner plan future sprints, and to be accumulated for done items at the end of a sprint for velocity calculation purposes. They are not a proxy for value delivery. … That said, it seems there are a number of different items (bugs, technical tasks, spikes) that have a capacity impact on the Development Team each sprint. For planning purposes, if the team prefers to not point these items, a mechanism to determine the capacity impact is still desired….”
  • (No points altogether) ‘I have found, and this may depend on your team, that removing story points entirely helps the team and stakeholders focus on the sprint goal instead of “How many points”….’

What’s a spike, who should enter it, and how to word it? Since I mentioned “spikes”, I’ve put in this other link about them.

  • “A spike is an investment to make the story estimable or schedule-able.”
  • “Teams should agree that every spike is, say, never more than 1 day of research. (For some teams this might be, say, 3 days, if that’s the common situation.) At the end of the time-box, you have to report out your findings. That might result in another spike, but time-box the experiments. If you weren’t able to answer the question before time runs out, you must still report the results to the team and decide what to do next. What to do next might be to define another spike.”
  • “It’s also best if a spike has one very clear question to answer. Not a bunch of questions or an ambiguous statement of stuff you need to look into. Therefore, split your spikes just as you would large user stories.”

Let me know if you find anything more conclusive or helpful.

Retrospection and learning time

Just recently, my engagement in the project I’ve been on since Nov of last year has ramped down. So the past couple of weeks have been a time of transition for me into my upcoming project, and also a handover from me to the new PO of my previous project. This allowed an opportunity for retrospection, and also a chance to pick up new stuff.

While looking up the available knowledge sharing platforms within the company, I came across the option to host stuff in our enterprise GitHub instance. One link led to another, and I came across…

  • MkDocs – This is for project documentation; it lets you write content in Markdown and then generate a static site from it.
  • Documentation as Code (docs-as-code) – While I’m not a programmer, getting familiar with the concept wouldn’t hurt. And as I read more, it’s not really exclusive to programmers.
  • Diagrams as Code with Mermaid – Part of the family of stuff-as-code, this doesn’t trail far behind. What I find promising about it (apart from being free) is that it’s going to make comparing versions easier, since you’re comparing text files (a tiny sketch of that below).
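
Here’s that version-comparison point as a tiny Python sketch. The Mermaid flowchart is a made-up example; the only thing it’s meant to show is that a diagram stored as text diffs like any other file:

```python
import difflib

# Two versions of the same (made-up) Mermaid flowchart, kept as plain text.
v1 = """graph TD
    A[Backlog] --> B[Sprint Backlog]
    B --> C[Done]
""".splitlines(keepends=True)

v2 = """graph TD
    A[Backlog] --> B[Sprint Backlog]
    B --> R[Review]
    R --> C[Done]
""".splitlines(keepends=True)

# A readable, line-by-line diff -- exactly what you'd see in a code review.
print("".join(difflib.unified_diff(v1, v2, fromfile="diagram_v1.mmd", tofile="diagram_v2.mmd")))
```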

As mentioned, I did some retrospection. I collated some of my personal lessons learned and posted them on our project’s Confluence page. I also revisited Software Development’s Classic Mistakes, and tried rereading The Scrum Guide and some material on anti-patterns to see where we’re somewhat deviating (for guidance on whether it’s something we should continue or whether we should “realign”). Then I pulled out project-agnostic stuff that could help me start a new Agile Scrum project and collated my notes.

With the notes in hand, I’m starting to use them as a reference for the new project, and I plan to tweak them as I find other useful stuff to add. At this stage, there’s already a team working on the prototypes, and in theory they’re prepping the solution or the design that will be handed over to the implementation team. So I’ll be keen on learning a lot more and looking for process improvements in the handover from Design Thinking to prototyping to implementation. Exciting stuff! 🙂

Notes from webinar: Write Better User Stories…

Last week, I attended a free webinar by Mike Cohn of Better User Stories on the topic of “Write Better User Stories in Less Time With Less Aggravation”. Right after, I shared the replay link to some colleagues along with a few bullet points of pros and cons.

(+) interesting, well-organized explanation
(+) the Q&A was good too
(+) insightful overall, gives you something to think about, stuff to google further
(-) promotional; the full course is expensive at $395

Posting my notes here since the learnings from the webinar are worth revisiting.

3 Techniques

  1. Conduct a quarterly story-writing workshop
  2. Split stories to demonstrate progress even if the result is not truly shippable
  3. Strive to add just enough detail, just in time

Technique #1: Conduct a quarterly story-writing workshop

  • Deliberate, focused meeting
  • Brainstorm the stories needed to achieve the period’s most important goal
  • Short-term focus causes teams to give in to what’s urgent over what’s important
  • Team able to step away from day to day crisis… Without that big goal, the crisis always wins

Focus on a single objective

  • “What shall we build?” — Wrong question, too broad, anything is fair game
  • PO selects the significant objective (SO)
  • SO typically achievable in about 3 months
  • MVP – sometimes overused; seems it can only be used once
  • MMF = Minimum Marketable Feature = subset of overall feature that delivers value when released independently, smaller than MVP

Involve the whole team in writing stories

  • Time investment, pays back on time savings when team works on the user stories
  • They’ll have fewer questions later, they’ll have more context
  • Fewer interruptions to devs’ day
  • Devs may come up with better implementation, increased creativity

Visualize user stories with a story map

  • Story maps invented by Jeff Patton
  • Each card = 1 user story (1 thing the user needs to do)
  • Horizontally = sequence of activities (don’t obsess over the sequence at this point; some steps may be optional)
  • Vertically = alternatives, with the most important on top (see the sketch after this list)
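
In case it helps, here’s how I picture that structure as plain data. This is just a hypothetical Python sketch (the activities and stories are made up), with the horizontal backbone as keys and the vertical alternatives as lists, most important first:

```python
# A made-up story map for an online store: keys are the horizontal backbone
# (sequence of user activities), values are vertical alternatives, most important first.
story_map = {
    "find a product":  ["search by keyword", "browse by category", "filter by price"],
    "review the cart": ["view cart", "change quantities", "save cart for later"],
    "pay":             ["pay by credit card", "pay via e-wallet", "pay on delivery"],
}

# Walking the top row left to right gives a thin, end-to-end slice (a candidate first release).
first_slice = [stories[0] for stories in story_map.values()]
print(first_slice)  # ['search by keyword', 'view cart', 'pay by credit card']
```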

Technique #2: Split stories to demonstrate progress even if the result is not truly shippable

  • 90% joke – Ask a dev how done they are and they reply 90%. Come back after a week, and the answer is still 90%.
  • Devs are not evil or liars; estimating how done we are with something is notoriously difficult.
  • In Agile, it’s easier, no need to estimate. Just 2 states = Not Started or Done
  • 5 techniques for splitting stories (look up SPIDR); the ones shared in the webinar were splitting by Interface and by Rules
  • When you split stories, remember the goal is to be potentially shippable — (1) high quality, (2) tested, (3) what it does, it does well

Technique #3: Strive to add just enough detail, just in time

  • Too much detail, too early vs Too little detail, too late
  • Bad habit – wanting to know everything before starting — when teams do that, they’re not doing overlapping work (analysis first, before coding, testing…). Overlapping work is a central tenet in most Agile processes (that’s why we don’t have phases in Agile). Time to market gets stretched.
  • Err on the side of too little, too late — you can improve by adding more detail next time
  • Question 1 (during refinement or other discussions on a user story): Do you need the answer before you start on that story? Sometimes you need it before you finish work on that story, not before you start.
  • Question 2 (during retro): Did we get answers just in time in just enough detail?


Keep the Backlog clean

I just mariekondo’d our backlog. So far, so good: I’ve removed 85 items from the backlog — 53 of which were over 200 days old! My thinking is, if we won’t be touching them anytime soon or at all, I want them out of the backlog.
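For the curious, flagging those stale items doesn’t need anything fancy. A minimal Python sketch, assuming you can export backlog items with their creation dates (the field names and data here are made up):

```python
from datetime import date, timedelta

# Hypothetical backlog export: id, title, and the date the item was created.
backlog = [
    {"id": "US-12", "title": "Someday: redesign everything", "created": date(2018, 1, 15)},
    {"id": "US-88", "title": "Fix typo on login page",       "created": date(2019, 3, 2)},
]

threshold = timedelta(days=200)
today = date(2019, 4, 1)  # or date.today()

# Anything older than the threshold becomes a candidate for archiving (not deleting).
archive_candidates = [item for item in backlog if today - item["created"] > threshold]
for item in archive_candidates:
    age = (today - item["created"]).days
    print(f"{item['id']} ({age} days old): {item['title']}")
```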

Idk but some lessons to share or possibly reminders to my future self here…

  • Get to know your tool – Find out how you can “archive” user stories that you want shelved, and also how you can access the shelved items in the future just in case you need them. Until recently, my options in the tool were limited to either deleting items or marking them as done (which I didn’t want to do for items we won’t actually work on). We then found out that we can do project customization in the tool, contrary to what we’d initially been told, so I’ve tweaked the workflow to also handle user stories that I want shelved.
  • Housekeeping keeps the backlog tool more usable – At some point, it was hard to move things around the backlog because of too many useless items that cluttered the list. Having a lean backlog also makes the items we actually need to work on more visible.
  • Maybe it shouldn’t be a list of wishful thinking, or a place for idea dumps – And TIL, using the backlog as storage for ideas is a Product Backlog Anti-pattern.
  • Keep it aligned with the roadmap – Again, another anti-pattern (“The product backlog is not reflecting the roadmap.”). In conjunction with the previous item, a lot of the user stories I cleaned up were raw ideas that they had wanted to build “someday”. Keep it real by keeping the backlog a list of things the team will actually work on.
  • Avoid / minimize duplication – If, for some reason, a user story has to be kept duplicated, ensure the copies are linked to each other. The risk with duplication is that during refinements, updates might be made to just one of the user stories when in reality you want them carried out across all of them.
  • Do periodic cleanups – This cleanup is not and should not be a one-time thing; keep at it to keep the backlog relevant.
  • Be mindful of what you add to the backlog – You don’t want the backlog items to keep growing and growing and revert back to a state you find less desirable. An idea I picked up here is setting a limit on your Design in Progress (DIP), or the number of items you have in the backlog.

So there, future me, keep the backlog clean. Keep it useful not only for yourself, but more importantly, for the rest of the team.

One link leads to another

Sometimes I come across posts or material on the internet on topics that pique my interest. It could be something I want to know or understand more about. Or it could be related to a conversation or two I’ve had within the day that makes me question certain things. So sometimes I google, and sometimes I just stumble upon them through various feeds — could be Twitter, email, Medium, IG, and even Facebook. And then one link leads to another, and before I know it, it’s 2AM and I should be getting some sleep. So anyway, here’s a dump of some recent links, in no particular order. I hope someone finds them as helpful or interesting as I have.

Agile Product Ownership in a Nutshell (15 minute video) – I like how the content was easy to follow. There were a lot of points worth highlighting, but I guess what hits home the most is the mention of three things that need to be balanced:

  • Build the right thing (PO tends to focus here)
  • Build the thing right (dev team)
  • Build it fast (SM or Agile coach)

So you want to be a Scrum Master (book) – This is a Leanpub book which you can get for free, or not if you can afford to make a payment / contribution. It’s written by an Agile community of interest with the intent of sharing what they’ve learned and what they’ve seen to have worked.

The 3 most effective ways to build trust as a leader (post/article) – Got this from Rob Lambert but I can’t remember where exactly — “Three typical management activities that get poor results and three that get good results”. I’m not really a leader by title but the three ways of building trust that the post enumerates are still relevant to me and they emphasize points that I value: Empathy, clarity of intent, and follow through.

DISC Profile Types (personality test) – This is something I picked up from Rob Lambert’s webinar. For each profile type, there are recommended ways on how to better communicate with them, and inversely there are recommended ways on how to encourage others to better communicate with you. Took the test myself and got 48% Compliance, then Dominance, Steadiness, and lastly Influence.

12 common mistakes made when using Story Points (post/article) – This reminded me of something a colleague had shared wherein their Scrum Master wants them to estimate in hours rather than in story points, and also her thinking that story points can be easily translated to hours.

Agile Makes No Sense (post/article) – Let me just quote some lines (actually last 2 paragraphs) that I liked…

What is the smallest thing that could add value (and make sense)? A better standup? A better retrospective? Inviting a customer to a demo? Pairing for a day? Agreeing to get something into product in a couple days? Try that. Make one thing make sense as in “wow, I can see how that created value”.

When you take this humble approach — instead of “installing” a bunch of artifacts, tools, roles, and rituals AKA doing Agile — I think you’re embracing the true spirit of Agile.

Something to google: Sprint 0

There was an interesting topic while a couple of colleagues and I were on our way to buy coffee. It was triggered by a question about Sprint 0. Based on my limited working experience, Sprint 0 is like an initiation phase wherein the project gets set up. Dev environments get set up. The first set of epics and user stories are created. Some initial designs get created. Project team members get on-boarded. Working agreements get defined. Collaboration tools (like where to capture user stories and clarifications) get finalized. Etc, etc. Basically, the team buys itself some prep time so that it can hit the ground running by the time Sprint 1 comes — ideally, by then, the team could commit to completing user stories and actually have features working at the end of the sprint. But then I thought, we don’t necessarily release anything after Sprint 1, so would calling Sprint 0 “Sprint 1” make any difference?

Idk. Maybe there’s this extremely high expectation about being or switching to Agile that makes it feel like you’re doing it wrong if you don’t have anything visible to some of your stakeholders by the end of a Sprint N. Being part of the development team, I know that infra setup is important, design work is important, all those other prep work are important. I know how 2 weeks could so easily fly by without seeing something that an end-user could potentially see. But to someone outside of the development team who might not be so familiar with Agile, they might have this extreme notion that “Hey, you’ve just completed 2 weeks! Where’s my working software?” And so maybe project teams resort to having a Sprint 0 to “protect” themselves or the concept of Agile to better manage expectations. Idk.

I went out of that conversation thinking that’s something I’d google. Here’s just some of the interesting stuff I found, and I’m sure I barely scratched the surface:

  • Sprint 0 (forum topic) – there’s mention of using Sprint 0 as a crutch
  • Scrubbing Sprint Zero – Apparently, there’s no official “Sprint 0”; there’s the idea that it’s an anti-pattern, and it’s been discussed by Agile Manifesto signatories all the way back in 2008. I loved the Alistair Cockburn (often pronounced like “Co-burn”) quote:

I have a sneaking feeling that someone was pressed about his use of Scrum when he did something that had no obvious business value at the start, and he invented “Oh, that was Sprint Zero!” to get the peasants with the pickaxes away from his doorstep.

… and then others thought that was a great answer and started saying it, too. … and then it became part of the culture.

  • Sprint Zero: A Good Idea or Not? – There’s mention of the “project before the project”; and it links to another post about using Scrum for an analysis project whose output is not necessarily immediately working software
  • Antipattern of the Month: Sprint Zero – Lol, that quote: “Sprint 0 is like Casper, the friendly ghost. Well-meaning, but creepy.”

And, that’s all she wrote! Time for bed!

Brain dump: How I’d like my Agile testers to be

So I was just thinking about how I need the testers to work in our project (and maybe in any other project). This started as a brain dump, and the next thing I knew I had this outline. So anyway, if I were to outline what I’d look for in testers, I’d look into the following areas:

  • Attitude towards testing
  • Technical competence
  • Being a team player

Attitude towards testing

An Agile, greenfield project really needs testers who are quick studies, self-sufficient, and proactive. Most of the time we are given just the user story and some mock-ups, so I need testers who can model the user story in their heads or in their notes, ask the right questions about it, and do it on their own (i.e., not wait to be served the information they need on a silver platter).

I think testing is like problem solving. It’s like: Hey, you’re given this user story. How do you test it to make sure it’ll pass the PO’s review? What exactly do you need to test? What do you need to know to test it? It has a save functionality – what does it save, where does it save it, who’s allowed to save, is there stuff that needs to be derived or transformed, etc. It has a read functionality – read from where, how do I know I’m pulling the right data into the fields I’m checking, how do you get the data entered so it can be read in the first place, do we format certain items differently, etc. Hey, look at this screen – what can I do with the controls on the screen, what can I do that I’m not supposed to, etc. Hey, I found a bug – can I consistently replicate it, can I narrow down the cases when this bug would appear, is this just a symptom of an underlying bug, is this even a valid bug, etc. There’s a fix I need to retest – what could possibly be affected, do I need to retest everything, etc. There’s a lot of figuring out and critical thinking involved.

What I don’t like is reducing testing to an activity where we just write test cases and execute the test steps without thinking about how our work can provide value to the team, to the product, to the client, and to the people who’d end up using our product. I really want testers to go into the project with the desire to make their being in the project really matter.

Technical competence

You can’t just WANT to be a solid contributor to the project; you have to BE one. You can be the most idealistic person, but that won’t get you where you want or need to be; you need to be able to execute.

For testing, I don’t necessarily equate technical competence with being able to automate. Being able to automate tests doesn’t mean much if your tests can’t find the issues that need fixing. We have to keep in mind that the product is the actual product — not the test automation scripts.

As I mentioned earlier, testing is like problem solving. Part of this includes modeling the application or feature you need to test — figuring out where there’d be if clauses or drawing up some decision tables, figuring out the data flows, the state transitions, the combinations of valid/invalid input, etc. Figuring out what you actually need to test, given that the lines defining the scope can sometimes get blurry. Then there are instances where you have to work with the database, parse some flat files, or deal with an API. There are also instances when you need to simulate a certain scenario — and you have to figure out how to do this right, otherwise you might just raise an invalid test case or bug. There are also instances when looking under the hood allows for more efficient testing; for instance, I’ve reviewed database scripts, and that reduced the effort compared to executing the test cases in fully black-box mode.
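
To give a flavor of the valid/invalid input modeling, here’s a small, hypothetical pytest sketch. The function and its rules are invented for illustration; the point is laying out the boundary and invalid cases explicitly so a failing case points straight at the offending input:

```python
import pytest

def is_valid_quantity(value):
    """Made-up rule under test: quantity must be an integer from 1 to 99."""
    return isinstance(value, int) and 1 <= value <= 99

# Boundary values plus invalid values and types, each case spelled out
# so a failure immediately identifies which input broke.
@pytest.mark.parametrize("value, expected", [
    (1, True), (99, True),        # boundaries, valid
    (0, False), (100, False),     # boundaries, invalid
    (-5, False), ("10", False),   # wrong sign, wrong type
])
def test_quantity_validation(value, expected):
    assert is_valid_quantity(value) is expected
```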

You’ll also need to collaborate with developers, and you need to be able to keep up with the discussions. You can’t rely on the layer of having the test lead interpret stuff for you. And it saves people time from having to explain things if you can keep up with the technical discussions. When you report bugs, it is also very helpful if you’ve done your own investigation to narrow down the possible causes. When it comes to bug reporting, I always say, “A problem well stated is a problem half solved.”

There’s a lot of collaboration within the Agile project. Roles of people you’ll engage with include fellow testers, devs, UX designer, BA, PO, and possibly support. You’ll need to share status updates, raise impediments, raise bugs, raise potential enhancements, raise a lot of clarifications, and possibly conduct demos of the user story. There’s going to be a lot of communication going on so you really need to know what you’re talking about, and you have to know how to talk about it.

Being a good team player

This is good to have in any project or in any team. You want to work with people who are responsible, reliable, and who keep each other informed as needed. You want people to pull their own weight in the project, and to help each other out, especially when the load gets heavier for some. And it’s all the more appreciated when people help without having to be asked.

Meeting the Sprint goals is the primary focus, so when needed, the lines defining the roles get blurred and folks try to contribute whenever and wherever they can. For instance, I’ve taken on the BA role while another tester has taken on the PM/Scrum Master role. We have front-end devs who also work on back-end tasks. When we needed load testing done and we couldn’t get another tester to work on it, one of our devs took on the task. When there were data updates needed, the team split the task among those who could help so as to get the job done faster.

Summing it up

It’s hard to come up with a checklist of traits for what I’d like in the testers on my team. Essentially, I want testers who sincerely want to contribute to the project. I want testers who can actually test, who respect testing per se, and who can build their credibility within the team. And of course, I want team players to help make the not-so-easy task of building software hopefully less hard. People won’t always fit the bill off the bat, but what’s important is to keep moving towards improvement.

How our team “does Agile”

Super quick background: Our project started Jan 2015. To kick things off on the Agile methodology, our Scrum Master conducted a brief training (less than half a day) for the team, and we’ve been playing it by ear ever since.

Over the course of many sprints, retros, and releases, we’ve made adjustments to how we’re doing Agile. I’m not sure if there are Agile purists who would frown and shake their heads at us for the variations we’ve made. But the thing is, despite our possibly non-canon approaches, what still matters most is the team closely working together to deliver working software.

This post might be TL;DR. But in case you’re still interested in having a peek at how our little team does Agile Scrum, go on and have a read…


Read: Leading the Transformation

Our product owner is one of the rare few individuals I know at work who actually still reads books. Last month, he recommended that we read Leading the Transformation: Applying Agile and DevOps Principles at Scale by Gary Gruver and Tommy Mouser. It’s a thin book, only 112 pages in paperback and around a 3-hour read. It’s intended for leaders/executives, so it gives a high-level overview of the changes teams and the organization need to make and the benefits of those changes, and it repeatedly emphasizes management’s role in pushing for those changes. In particular, the changes they want to drive center around Agile, DevOps, and Continuous Delivery (CD).

At work, small teams have been shifting to Agile, our own team has been on this Agile project since January of last year, and I’ve heard of proposals where the suggested methodology is already Agile instead of Waterfall. But then I pick up from the book that trying to scale up Agile adoption across the board with small teams as the starting point doesn’t quite work for large organizations. Whoops. The book suggests that if you want enterprise-level change, you have to plan for it and drive it from the management level down to us lowly minions. A key difference, though, is that within our organization (at least locally, that I know of), we don’t really have hundreds of developers working on the same product or code base. In our case, we’re under 20 in the team, but even so the book still offers a good introduction to a lot of mature development practices that we need to look into.

Key items highlighted in the book that I’d like to reiterate further:

Importance of having quick feedback loops

Unit tests and static analysis tools can already weed out a lot of problems so that defective code won’t even get committed to the repository to begin with. And having those fixes done even before passing it to the test team — instead of fixing them only after the code has been deployed and testers found issues that were caused by those defects — will definitely help reduce the turnaround time.

Quick feedback loops will also help the team work and resolve issues while the code or user story is still relatively fresh in their heads. It’s more difficult for both the devs and testers to fix and retest an issue on a behavior that they’ve pretty much forgotten about.
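
As a sketch of what that kind of early check could look like, here’s a small Python script a team might wire into a pre-commit hook. It assumes pytest and flake8 are installed and that code lives in src/ and tests/; swap in whatever your team actually uses:

```python
import subprocess
import sys

# Run fast local checks before code ever reaches the shared repository.
# Both tools and paths are assumptions for this sketch.
checks = [
    ["python", "-m", "flake8", "src"],         # static analysis
    ["python", "-m", "pytest", "-q", "tests"]  # fast unit tests
]

for cmd in checks:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"Check failed: {' '.join(cmd)} -- fix before committing.")
        sys.exit(result.returncode)

print("All local checks passed.")
```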

Having builds as release- or production-ready as possible

With regular and stable builds in place, it’ll be easier to spot when a commit breaks the build. And since you don’t have to backtrack through days or weeks of commits, it’ll be easier to narrow down and identify the problematic commit.

Having dev/test environments as close to production as possible

One problem we’ve personally encountered from not having a test environment in sync with the production version: whenever we encountered an odd behavior in the test environment, we had to double-check whether the issue was also in prod. We also had to be mindful of issues that were already resolved in prod but not in the test environment. I guess this problem speaks both to why it’s good to have the test environment as close to prod as possible, and to the next item, on why it’s good to have solid deployment procedures in place.

Having repeatable build, deploy and test processes

From experience, and from the example above, having a reliable and repeatable deployment process could’ve saved us all effort and heartache. It can be so frustrating to test (supposedly) the same build but get different outputs even though you’ve done the same steps using the same test data. In the same vein, you’d hate for a feature not to work in prod even if it had already been thoroughly code-reviewed, tested, and signed off in UAT/PO review.

And last, but not the least, having test automation

You simply will never achieve the full benefit of Agile development until you get your automated testing properly built out and integrated into the development pipeline.

Test automation is key to the first item I mentioned, since it enables quick feedback loops. It also allows repeatable tests to be executed across the different environments, and it allows repeated execution of the regression tests, which you might not be able to afford to do manually.

Having test automation, by itself, will not suffice. Tests have to be designed such that it’s easy to localize the cause of any failed tests. Maintainability of the automated tests also has to be considered. Otherwise, the benefits of test automation won’t be realized, since the team ends up ignoring the test results on account of not being sure whether an issue encountered is a code issue or a test issue.
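
On making failures easy to localize, a lot of it is just how the checks are written: small, single-purpose tests with clear names and messages. A tiny illustrative example in Python’s unittest (the function under test is made up):

```python
import unittest

def apply_discount(price, rate):
    """Made-up function under test: apply a percentage discount to a price."""
    return round(price * (1 - rate), 2)

class DiscountTests(unittest.TestCase):
    # Small, single-purpose tests: a red test immediately tells you *which*
    # behavior regressed, instead of one giant end-to-end check failing.
    def test_regular_discount_is_applied(self):
        self.assertEqual(apply_discount(100.0, 0.10), 90.0, "10% off 100 should be 90")

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0.0), 59.99, "0% discount should not change the price")

if __name__ == "__main__":
    unittest.main()
```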

One last thing… it’s a cultural shift

You can’t just invest in tools for CD or test automation or announce “Let’s do the Agile thing” and expect the benefits to magically follow right away. This kind of thing takes time, because there’s the technical learning overhead, plus shifting to a new way of doing things requires discipline and resolve so that folks won’t revert to the old habits they’re trying to change.

It is important for executives to understand early on if the organization is embracing this cultural change, because if it doesn’t, all the investments in technical changes will be a waste of time.

It’s not going to be enough for the project team alone to be invested in the changes. Management and executives need to be aligned with this. In fact, they should help drive it. Otherwise, they might make demands that bypass the adoption of change and instead force people back to their old habits (just because the old way might appear faster, but only in the short term).

The book, after all, isn’t titled “Leading the Transformation” for nothing. Management’s presence and push isn’t merely a suggestion; it’s a necessity. Sure, the project teams are the ones making the technical changes, but management needs to understand and support those changes. Essentially, people need to be on the same page in order to move in the same direction.

Sprint retro: Testing worked!

So my friend is in this other Agile project, doing 1-week sprints, implementing a web-based system. Until last week, their team didn’t have a tester. Of course, they probably dev-tested their own work, and they did have sprint reviews with their product owner. But what’s great about their team is that they recognize the need for and the value of testing, something they also brought up in their retro (I quote: “need proper testing”).

A couple of weeks ago, my friend asked me to take a look at their site, give it a quick run-through, and give feedback. So I sat down with it for an hour or so and did some exploratory testing on a few available features. After that session, I spewed out as many comments (bugs) as I could and consolidated them along with screenshots. In the sprint they were doing the retro on, they fixed some of the items I raised and finally got a tester on their team. It felt great to see this dev team post under “what worked”:

  • Tester
  • Bugs are raised
  • Some bugs are fixed / completed
  • KC (that’s me!)

Yay for testing!