Notes from webinar: Write Better User Stories…

Last week, I attended a free webinar by Mike Cohn of Better User Stories on the topic of “Write Better User Stories in Less Time With Less Aggravation”. Right after, I shared the replay link with some colleagues along with a few bullet points of pros and cons.

(+) interesting, well-organized explanation
(+) the Q&A was also good
(+) insightful enough; gives you something to think about, stuff to google further
(-) promotional; the full course is expensive at $395

Posting my notes here since what I learned from the webinar is worth revisiting.

3 Techniques

  1. Conduct a quarterly story-writing workshop
  2. Split stories to demonstrate progress even if the result is not truly shippable
  3. Strive to add just enough detail, just in time

Technique #1: Conduct a quarterly story-writing workshop

  • Deliberate, focused meeting
  • Brainstorm the stories needed to achieve the period’s most important goal
  • Short-term focus causes teams to give in to what’s urgent over what’s important
  • The team is able to step away from the day-to-day crisis… without that big goal, the crisis always wins

Focus on a single objective

  • “What shall we build?” — Wrong question, too broad, anything is fair game
  • PO selects the significant objective (SO)
  • SO typically achievable in about 3 months
  • MVP is sometimes overused; it seems like it can only be used once
  • MMF = Minimum Marketable Feature = a subset of the overall feature that delivers value when released independently; smaller than an MVP

Involve the whole team in writing stories

  • It’s a time investment, but it pays back in time savings once the team works on the user stories
  • They’ll have fewer questions later, they’ll have more context
  • Fewer interruptions to devs’ day
  • Devs may come up with better implementation, increased creativity

Visualize user stories with a story map

  • Story maps invented by Jeff Patton
  • Each card = 1 user story (1 thing the user needs to do)
  • Horizontally = the sequence of activities (don’t obsess over the exact sequence at this point; some steps may be optional)
  • Vertically = alternatives, with the most important on top (see the sketch after this list)
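
Not from the webinar itself, but here’s a minimal sketch of how I picture a story map’s structure in code (all the activity and story names below are made up):

```python
# Hypothetical story map for an online store (illustrative only).
# Keys run left to right as the horizontal sequence of activities;
# each list runs top to bottom with the most important story first.
story_map = {
    "Browse products": [
        "Search by keyword",       # most important alternative
        "Filter by category",
        "Sort by price",           # lower priority / optional
    ],
    "Check out": [
        "Pay by credit card",
        "Pay by bank transfer",
    ],
    "Track order": [
        "View order status",
        "Get email notifications",
    ],
}

# A thin first slice could take just the top card under each activity.
first_slice = {activity: stories[0] for activity, stories in story_map.items()}
print(first_slice)
```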

Technique #2: Split stories to demonstrate progress even if the result is not truly shippable

  • The 90% joke – Ask a dev how done they are and they reply 90%. Come back after a week, and the answer is still 90%.
  • Devs are not evil or liars; estimating how done we are with something is just notoriously difficult.
  • In Agile, it’s easier; no need to estimate percent complete. Just 2 states: Not Started or Done
  • 5 techniques for splitting stories (look up SPIDR); the ones shared in the webinar were splitting by Interface and by Rules
  • When you split stories, remember the goal is to be potentially shippable — (1) high quality, (2) tested, (3) what it does, it does well

Technique #3: Strive to add just enough detail, just in time

  • Too much detail, too early vs Too little detail, too late
  • Bad habit – wanting to know everything before starting. When teams do that, they’re not doing overlapping work (analysis first, then coding, then testing…). Overlapping work is a central tenet of most Agile processes (that’s why we don’t have phases in Agile). Time to market gets stretched.
  • Err on the side of too little, too late — you can improve by adding more detail next time
  • Question 1 (during refinement or other discussions on a user story): Do you need the answer before you start on that story? Sometimes you need it before you finish work on that story, not before you start.
  • Question 2 (during retro): Did we get answers just in time in just enough detail?


Keep the Backlog clean

I just mariekondo’d our backlog. So far, so good: I’ve removed 85 items from the backlog — 53 of which were over 200 days old! My thinking is, if we won’t be touching them anytime soon or at all, I want them out of the backlog.

Idk, but here are some lessons to share, or possibly reminders to my future self…

  • Get to know your tool – Find out how you can “archive” user stories that you want shelved, and also how you can access the shelved items in the future just in case you need them. Until recently, my options in the tool were limited to either delete or mark as done (which I didn’t want to do for items we won’t actually work on). We then found out that we could do project customization in the tool, contrary to what we’d initially been told, so I’ve tweaked the workflow to also account for user stories that I want shelved.
  • Housekeeping keeps the backlog tool more usable – At some point, it was hard to move things around the backlog because of too many useless items that cluttered the list. Having a lean backlog also makes the items we actually need to work on more visible.
  • Maybe it shouldn’t be a list of wishful thinking, or a place for idea dumps – And TIL, using the backlog as storage for ideas is a Product Backlog Anti-pattern.
  • Keep it aligned with the roadmap – Again, another anti-pattern (“The product backlog is not reflecting the roadmap.”). I guess, in conjunction with the previous item, a lot of the user stories that I cleaned up were raw ideas that they had wanted to build “someday”. Keep it real by keeping the backlog items a list of things the team will actually work on.
  • Avoid / minimize duplication – If for some reason a user story has to be kept duplicated, ensure the copies are linked to each other. The risk with duplication is that during refinements, updates might be made to just one of the user stories when you actually want them carried out across all copies.
  • Do periodic cleanups – To keep the backlog relevant, this cleanup is not and should not be a one-time thing.
  • Be mindful of what you add to the backlog – You don’t want the backlog items to keep growing and growing until it reverts back to a state you find less desirable. An idea I picked up here is setting a limit on your Design in Progress (DIP), i.e., the number of items you have in the backlog.

So there, future me, keep the backlog clean. Keep it useful not only for yourself, but more importantly, for the rest of the team.

One link leads to another

Sometimes I come across posts or material on the internet on topics that pique my interest. It could be something I want to know or understand more about. Or it could be related to a conversation or two I’ve had within the day that makes me question certain things. So sometimes I google, and sometimes I just stumble upon them through various feeds — could be Twitter, email, Medium, IG, and even Facebook. And then one link leads to another and before I know it, it’s 2AM and I should be getting some sleep. So anyway, here’s a dump of some recent links, in no particular order. I hope someone finds them as helpful or interesting as I have.

Agile Product Ownership in a Nutshell (15 minute video) – I like how the content was easy to follow. There were a lot of points worth highlighting, but I guess what hits home the most is the mention of three things that need to be balanced:

  • Build the right thing (PO tends to focus here)
  • Build the thing right (dev team)
  • Build it fast (SM or Agile coach)

So you want to be a Scrum Master (book) – This is a Leanpub book which you can get for free, or pay for if you can afford to make a payment / contribution. It’s written by an Agile community of interest with the intent of sharing what they’ve learned and what they’ve seen work.

The 3 most effective ways to build trust as a leader (post/article) – Got this from Rob Lambert but I can’t remember where exactly — “Three typical management activities that get poor results and three that get good results”. I’m not really a leader by title but the three ways of building trust that the post enumerates are still relevant to me and they emphasize points that I value: Empathy, clarity of intent, and follow through.

DISC Profile Types (personality test) – This is something I picked up from Rob Lambert’s webinar. For each profile type, there are recommended ways on how to better communicate with them, and inversely there are recommended ways on how to encourage others to better communicate with you. Took the test myself and got 48% Compliance, then Dominance, Steadiness, and lastly Influence.

12 common mistakes made when using Story Points (post/article) – This reminded me of something a colleague had shared, wherein their Scrum Master wants them to estimate in hours rather than in story points, and of her thinking that story points can be easily translated into hours.

Agile Makes No Sense (post/article) – Let me just quote some lines (actually last 2 paragraphs) that I liked…

What is the smallest thing that could add value (and make sense)? A better standup? A better retrospective? Inviting a customer to a demo? Pairing for a day? Agreeing to get something into product in a couple days? Try that. Make one thing make sense as in “wow, I can see how that created value”.

When you take this humble approach — instead of “installing” a bunch of artifacts, tools, roles, and rituals AKA doing Agile — I think you’re embracing the true spirit of Agile.

Something to google: Sprint 0

An interesting topic came up while a couple of colleagues and I were on our way to buy coffee. It was triggered by a question about Sprint 0. Based on my limited working experience, Sprint 0 is like an initiation phase wherein the project gets set up. Dev environments get set up. The first set of epics and user stories gets created. Some initial designs get created. Project team members get on-boarded. Working agreements get defined. Collaboration tools, like where to capture user stories and clarifications, get finalized. Etc, etc. Basically, the team buys itself some prep time so that it can hit the ground running by the time Sprint 1 comes — ideally, by then, the team could commit to completing user stories and actually have features working at the end of the sprint. But then I thought, we don’t necessarily release anything after Sprint 1, so would calling Sprint 0 “Sprint 1” make any difference?

Idk. Maybe there’s this extremely high expectation about being or switching to Agile that makes it feel like you’re doing it wrong if you don’t have anything visible to show some of your stakeholders by the end of Sprint N. Being part of the development team, I know that infra setup is important, design work is important, and all that other prep work is important. I know how 2 weeks could so easily fly by without producing something that an end user could potentially see. But to someone outside the development team who might not be so familiar with Agile, there might be this extreme notion of “Hey, you’ve just completed 2 weeks! Where’s my working software?” And so maybe project teams resort to having a Sprint 0 to “protect” themselves or the concept of Agile, to better manage expectations. Idk.

I went out of that conversation thinking that’s something I’d google. Just a few of the interesting stuff I found, and I’m sure I barely scratched the surface:

  • Sprint 0 (forum topic) – there’s mention of using Sprint 0 as a crutch
  • Scrubbing Sprint Zero – apparently, there’s no official “Sprint 0”; there’s the idea that it’s an anti-pattern; it was discussed by Agile Manifesto signatories all the way back in 2008. I loved the Alistair Cockburn (often pronounced like “Co-burn”) quote:

I have a sneaking feeling that someone was pressed about his use of Scrum when he did something that had no obvious business value at the start, and he invented “Oh, that was Sprint Zero!” to get the peasants with the pickaxes away from his doorstep.

… and then others thought that was a great answer and started saying it, too. … and then it became part of the culture.

  • Sprint Zero: A Good Idea or Not? – there’s mention of the “project before the project”; and it links to another post about using Scrum for an analysis project whose output is not necessarily immediately working software
  • Antipattern of the Month: Sprint Zero – Lol, that quote: “Sprint 0 is like Casper, the friendly ghost. Well-meaning, but creepy.”

And, that’s all she wrote! Time for bed!

Brain dump: How I’d like my Agile testers to be

So I was just thinking about how I need the testers to work in our project (and maybe in any other project). This started as a brain dump, and the next thing I knew I had this outline already. So anyway, if I were to outline what I’d look for in testers, I’d look into the following areas:

  • Attitude towards testing
  • Technical competence
  • Being a team player

Attitude towards testing

Working on an Agile, greenfield project really requires testers who are quick studies, self-sufficient, and proactive. Most of the time we are given just the user story and some mock-ups, so I need testers who can model the user story in their heads or in their notes, ask the right questions about it, and do it on their own (i.e., not wait to be served the information they need on a silver platter).

I think testing is like problem solving. It’s like: Hey, you’re given this user story. How do you test it to make sure it’ll pass the PO’s review? What exactly do you need to test? What do you need to know to test it? It has a save functionality – what does it save, where does it save it, who’s allowed to save, is there stuff that needs to be derived or transformed, etc. It has a read functionality – read from where, how do I know I’m pulling the right data into the fields I’m checking, how do you get data entered so it can be read in the first place, do we format certain items differently, etc. Hey, look at this screen – what can I do with the controls on the screen, what can I do that I’m not supposed to, etc. Hey, I found a bug – can I consistently replicate it, can I narrow down the cases when this bug would appear, is this just a symptom of an underlying bug, is this even a valid bug, etc. There’s a fix I need to retest – what could possibly be affected, do I need to retest everything, etc. There’s a lot of figuring out and critical thinking involved.

What I don’t like is reducing testing to an activity where we just write test cases and execute the test steps without thinking about how our work can provide value to the team, to the product, to the client, and to the people who’d end up using our product. I really want testers to go into the project with the desire to make their being in the project really matter.

Technical competence

You can’t just WANT to be a solid contributor to the project, you have to BE one. You can be the most idealistic person, but that won’t get you to where you want or need to be; you need to be able to execute.

For testing, I don’t necessarily equate technical competence with being able to automate. Being able to automate tests doesn’t mean much if your tests can’t find the issues that need fixing. We have to keep in mind that the product is the actual product — not the test automation scripts.

As I mentioned earlier, testing is like problem solving. Part of this includes modeling the application or feature you need to test — figuring out where there’d be if clauses or drawing up some decision tables, figuring out the data flows and state transitions, figuring out combinations of valid/invalid input, etc. Figuring out what you actually need to test given that the lines defining the scope can sometimes get blurry. Then there are instances where you have to work with the database, parse some flat files, or work with some API. There are also instances when you need to simulate a certain scenario — and you have to figure out how to do this right, otherwise you might just bring up an invalid test case or bug. There are also instances when looking under the hood allows for more efficient testing; for instance, I’ve reviewed database scripts and that reduced the effort compared to executing the test cases in fully black box mode.
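
To illustrate what that modeling can translate into (not from any real project; the save_record function, its fields, and its rules below are all hypothetical), a small decision table of valid/invalid input combinations maps naturally onto a parametrized test:

```python
import pytest

# Hypothetical validation rules for a "save" feature:
# - name is required and capped at 50 characters
# - amount must be a non-negative number
def save_record(name, amount):
    if not name or len(name) > 50:
        return "rejected"
    if amount < 0:
        return "rejected"
    return "saved"

# Each row is one cell of the decision table: a combination of
# valid/invalid inputs and the outcome we expect for it.
@pytest.mark.parametrize("name, amount, expected", [
    ("Coffee", 120.00, "saved"),     # all valid
    ("", 120.00, "rejected"),        # missing name
    ("x" * 51, 120.00, "rejected"),  # name over the limit
    ("Coffee", -1, "rejected"),      # negative amount
    ("x" * 50, 0, "saved"),          # boundary values still valid
])
def test_save_record(name, amount, expected):
    assert save_record(name, amount) == expected
```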

You’ll also need to collaborate with developers, and you need to be able to keep up with the discussions. You can’t rely on the layer of having the test lead interpret stuff for you. And it saves people the time of having to explain things if you can keep up with the technical discussions. When you report bugs, it is also very helpful if you’ve done your own investigation to narrow down the possible causes. When it comes to bug reporting, I always say, “A problem well stated is a problem half solved.”

There’s a lot of collaboration within the Agile project. Roles of people you’ll engage with include fellow testers, devs, UX designer, BA, PO, and possibly support. You’ll need to share status updates, raise impediments, raise bugs, raise potential enhancements, raise a lot of clarifications, and possibly conduct demos of the user story. There’s going to be a lot of communication going on so you really need to know what you’re talking about, and you have to know how to talk about it.

Being a good team player

This is good to have in any project or in any team. You want to work with people who are responsible, reliable, and who keep each other informed as needed. You want people to pull their own weight in the project, and to help each other out, especially when the load gets heavier for some. And it’s all the more appreciated when people help without having to be asked.

Meeting the Sprint goals is the primary focus, and so when needed, the lines defining the roles get blurred and folks try to contribute whenever and wherever they can. For instance, I’ve taken on the BA role while another tester has taken on the PM/scrum master role. We have front end devs who also work on back end tasks. When we needed load testing done and couldn’t get another tester to work on it, one of our devs took on the task. When some data updates were needed, the team split the task among those who could help so as to get the job done faster.

Summing it up

It’s hard to come up with a checklist of traits for what I’d like in the testers in my team. Essentially, I want testers who sincerely want to contribute to the project. I want testers who can actually test, who respect testing per se, and who can build their credibility within the team. And of course, I want team players to help make the not-so-easy task of building software hopefully less hard. People won’t always fit the bill off the bat, but what’s important is to keep moving toward improvement.

How our team “does Agile”

Super quick background: Our project started in Jan 2015. To kick things off on the Agile methodology, our Scrum Master conducted a brief training (less than half a day) for the team, and we’ve been playing it by ear ever since.

Over the course of many sprints, retros, and releases, we’ve made adjustments to how we’re doing Agile. I’m not sure if there are Agile purists who would frown and shake their heads at us for the variations we’ve made. But the thing is, despite our possibly non-canon approaches, what still matters most is the team closely working together to deliver working software.

This post might be TL;DR. But in case you’re still interested in having a peek at how our little team does Agile Scrum, go on and have a read…


Read: Leading the Transformation

Our product owner is one of the rare few individuals I know at work who still actually reads books. Last month, he recommended that we read Leading the Transformation: Applying Agile and DevOps Principles at Scale by Gary Gruver and Tommy Mouser. It’s a thin book, only 112 pages in paperback and around a 3-hour read. It’s intended for leaders/executives, so it gives a high-level overview of the changes teams and the organization need to make and the benefits of those changes, and it repeatedly emphasizes management’s role in pushing for those changes. In particular, the changes they want to drive center around Agile, DevOps, and Continuous Delivery (CD).

At work, small teams have now been shifting to Agile, our own team has been on this Agile project since January of last year, and I’ve heard of proposals wherein the suggested methodology is already Agile instead of Waterfall. But then, I picked up from the book that trying to scale up Agile adoption across the board with small teams as the starting point doesn’t quite work for large organizations. Whoops. The book suggests that if you want an enterprise-level change, you have to plan for it and drive it from the management level down to us lowly minions. A key difference, though, is that within our organization (at least locally, that I know of), we don’t really have hundreds of developers working on the same product or code base. In our case, we’re fewer than 20 in the team, but even so the book still offers a good introduction to a lot of mature development practices that we need to look into.

Key items highlighted in the book that I’d like to reiterate further:

Importance of having quick feedback loops

Unit tests and static analysis tools can already weed out a lot of problems so that defective code won’t even get committed to the repository in the first place. And having those fixes done even before the code reaches the test team — instead of only after the code has been deployed and testers have found issues caused by those defects — will definitely help reduce the turnaround time.

Quick feedback loops will also help the team work and resolve issues while the code or user story is still relatively fresh in their heads. It’s more difficult for both the devs and testers to fix and retest an issue on a behavior that they’ve pretty much forgotten about.
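
As a minimal sketch of what that fast feedback loop can look like (assuming a Python codebase where tests run locally with pytest; the apply_discount function and its rules are made up):

```python
# A fast unit test that runs in seconds on the dev's machine, so a broken
# change gets caught before it's committed — not days later by the test team.
import pytest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_full_discount_makes_item_free():
    assert apply_discount(100.0, 100) == 0.0

def test_out_of_range_discount_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```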

Having builds as release- or production-ready as possible

With regular and stable builds in place, it’s easier to spot when a commit breaks the build. Since you don’t have to backtrack through days or weeks of commits, narrowing down and identifying the problematic commit becomes much faster.

Having dev/test environments as close to production as possible

One problem we’ve personally encountered from not having a test environment in sync with the production version was that whenever we hit an odd behavior in the test environment, we had to double-check whether the issue was also in prod. We also had to be mindful of issues that were already resolved in prod but not in the test environment. But I guess this problem touches on both this item (keeping the test environment as close to prod as possible) and the next one (having good deployment procedures in place).

Having repeatable build, deploy and test processes

From experience and the example above, having a reliable and repeatable deployment process could have saved us a lot of effort and heartache. It can be so frustrating to test (supposedly) the same build but get different outputs even when you’ve done the same steps with the same test data. In the same vein, you’d hate for a feature not to work in prod even though it had already been thoroughly code reviewed, tested, and signed off in UAT/PO review.

And last, but not least, having test automation

You simply will never achieve the full benefit of Agile development until you get your automated testing properly built out and integrated into the development pipeline.

Test automation is key to the first item I mentioned since it enables quick feedback loops. It also allows repeatable tests to be executed across the different environments, and it allows repeated execution of the regression tests, which you might not be able to afford to do manually.

Having test automation, by itself, will not suffice. Tests have to be designed such that it’s easy to localize the cause of any failed test. Maintainability of the automated tests also has to be considered. Otherwise, the benefits of test automation won’t be realized, since the team ends up ignoring the test results on account of not being sure whether a failure is a code issue or a test issue.
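
A tiny sketch of what “easy to localize” can mean in practice (the Order class and its rules are invented for illustration): instead of one giant end-to-end check, keep the tests small and independent so a red result already points at the behavior that broke.

```python
import pytest

# Illustrative only: a tiny order model so the tests below are self-contained.
class Order:
    def __init__(self):
        self.status = "pending"

    def pay(self):
        self.status = "processing"

    def ship(self):
        if self.status != "processing":
            raise RuntimeError("cannot ship an unpaid order")
        self.status = "shipped"

# One behavior per test, named so a failure message localizes the problem.
def test_new_order_starts_as_pending():
    assert Order().status == "pending"

def test_paid_order_moves_to_processing():
    order = Order()
    order.pay()
    assert order.status == "processing"

def test_unpaid_order_cannot_be_shipped():
    order = Order()
    with pytest.raises(RuntimeError):
        order.ship()
```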

One last thing… it’s a cultural shift

You can’t just invest in tools for CD or test automation, or announce “Let’s do the Agile thing”, and expect the benefits to magically follow right away. This kind of thing takes time because there’s the technical learning overhead, plus shifting to a new way of doing things requires discipline and resolve so that folks won’t revert to the old habits they’re trying to change.

It is important for executives to understand early on if the organization is embracing this cultural change, because if it doesn’t, all the investments in technical changes will be a waste of time.

It’s not going to be enough for the project team alone to be invested in the changes. Management and executives need to be aligned with this. In fact, they should help drive it. Otherwise, they might make demands that bypass the adoption of change and instead force people back to their old habits (just because the old way might appear faster, though only in the short term).

The book, after all, isn’t titled “Leading the Transformation” for nothing. Management’s presence and push isn’t merely a suggestion; it’s a necessity. Sure, the project teams are the ones making the technical changes, but management needs to understand and support those changes. Essentially, people need to be on the same page in order to move in the same direction.

Sprint retro: Testing worked!

So my friend is in this other Agile project, doing 1-week sprints implementing a web-based system. Until last week, their team didn’t have a tester. Of course, they probably dev-tested their own work, and they did have sprint reviews with their product owner. But, as they also brought up in their retro (and what’s great about their team), they recognize the need for and value of testing (I quote: “need proper testing”).

A couple of weeks ago, my friend asked me to take a look at their site, give it a quick run-through, and give feedback. So I sat down with it for an hour or so and did some exploratory testing on a few available features. After that session, I spewed out as many comments (bugs) as I could and consolidated them along with screenshots. For the sprint they were doing a retro on, they fixed some of the items I raised and finally got a tester on their team. It felt great to see this dev team post under “what worked”:

  • Tester
  • Bugs are raised
  • Some bugs are fixed / completed
  • KC (that’s me!)

Yay for testing!

Active testers should be the norm

So yesterday we had a sprint review wherein the testers conducted the demo to our product owner. I reckon it went rather smoothly. I am pleased with how my tester team mates did their presentations. The delivery team overall got good feedback from our PO (yay!), and we testers got some kudos from our PM (yay, again!).

Later in the day, our team had our sprint retrospective. Under the header of what worked well, one of our team mates wrote “DEV-TEST” (which someone corrected to DEV♥TEST) referring to how well the devs and testers were working together. It was actually one of the most upvoted items. And from the retro deck I quote: “Strong and instantaneous link-up between developers and testers. Notification and resolution of defects are faster due to great work relationships between the team”.

Under the what-to-do-differently header of our sprint retro, not wanting to make scrummerfall the norm in our coming sprints, I suggested planning sprints such that there are testable features earlier in the sprint (and definitely not 3 days before the end of the sprint, as we encountered in Sprints 0 and 1). This concern didn’t go unheard, and the team actually brought up a couple of action items which address it.

Earlier today, we got word from our PO on the priorities for this sprint. The devs had to prioritize several bug fixes and that left one of the high-priority items on data encoding to us testers. It’s not exactly testing work, and it’s not exactly dev work either. But it needs to be done this week, so we went ahead and owned it. We actually completed the task in half the time we estimated. Yay for efficiency (or overestimation, but I’d say the former).

So where exactly am I going with this post? I guess I’m just glad that the testers are playing such an active role in this project. We’re being heard and (I’d like to think) appreciated. And I guess this shouldn’t be an expectation only in Agile projects. This should go for any project.

Hello again, world

I haven’t been in this space for some time. Lately, I’ve been instagramming about my lovely dogs and tumblring my attempts at doing art. I am still in software testing. I still feel that the work we testers do contributes greatly to making software better (or at least less sucky).

I’m around a month and a week into my 2nd mobile testing engagement. We’re in the 2nd week of our 2nd Sprint. There are a lot of things that are new in this project, so it’s a really great learning opportunity for me and even more so for my younger tester team mates. For one, we’re adopting an Agile Scrum methodology, and it’s probably the closest we’ve gotten to an actual Agile project (we have some scrumbuts). This is also an implementation project, so it provides them with the opportunity to see something get built from scratch. I think testers who haven’t tried working on an implementation project are missing out. This is where the fun is, and where testing can make the most impact. Plus, this one’s building a mobile app, so it has a really modern feel to it.

So tomorrow’s a holiday, we still haven’t received a build for half of the user stories in scope for this sprint or for the fixes of the bugs that were already reported, and Friday is the last day of the sprint. Ah, scrummerfall! It’s a word I picked up yesterday. One of the questions in a webinar I attended touched on the topic of agilefall or scrummerfall. It happens in Agile projects when the testing work starts at around the 8th day, the testers are scrambling to test the features and doing overtime work to catch up, and the developers have already moved on to other things. If you’re experiencing scrummerfall, sorry, but you planned it that way. The good thing is you can choose not to plan it that way (or so the speaker says). Someone needs to raise their hand and point it out so that the team plans for a sprint that has something testable as early as day 2 or 3.

Something to raise for our retro. :p