Read: A Practical Guide to Testing in DevOps

Reading this book was long overdue, and I’m glad I can finally scratch it off my TBR list. I purchased it within a week of its release back in 2017, but only got around to reading it this week. Oh, well. The book is Katrina Clokie’s A Practical Guide to Testing in DevOps. It’s such a mine of references, and a jumping-off point for so many other things to read or watch.

For each of the chapters in the book, I tried to highlight some of the references or topics that I found interesting.

Testing in a DevOps Culture

  • Test strategy retrospective
  • Agile assessment “How agile is your testing?”, and book recommendations if you scored less than five out of ten
  • Two-pizza rule for team sizes — I just thought that, depending on the restaurant and the sizes they offer, I could easily eat a whole pizza by myself.
  • Section on blazing a trail – “When I’m blazing a trail I’m building a path to someone new in the organisation…” — This isn’t limited to DevOps; it feels relatable as we’re building the local practice and the CoP.
  • Ioana Serban’s “Three Little Questions” from TestOps: Chasing the White Whale (YouTube)
  • Links to blog posts on applying visualization when testing a product
  • Pair testing experiment framework

Testing in Development

Testing in Production

  • Sections on testing monitoring, analytics and logging
  • “There’s the well-known example of how Google used A/B testing to determine which shade of blue to use on their toolbar. They tested 41 gradients to learn the colour people preferred, which was seen by some as a trivial attribute to experiment with.”
  • “Google are well known for running lengthy beta programs. Gmail was labelled as beta for five years, from 2004 to 2009.”
  • Should Tesla be ‘beta testing’ autopilot if there is a chance someone might die?
  • TIL passive validation. The reference for the differences between active and passive validation is Testing in Production (Vimeo) by Seth Eliot.
  • The terms to describe variations of exposure control, such as canary release, staged rollout, dogfooding, and dark launching — see the small sketch after this list
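
To make one of those terms concrete, here’s a minimal sketch (my own illustration, not from the book) of how a staged rollout or canary release is often gated: hash each user into a stable bucket, and expose the feature only to buckets below the current rollout percentage. The function and feature names are hypothetical.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a staged rollout (0-100%)."""
    # Hashing feature + user_id gives each user a stable bucket from 0-99,
    # so the same user keeps seeing the same variant as the rollout widens.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Expose a hypothetical feature to 5% of users first (the canary),
# then widen to 25%, 50%, 100% as monitoring stays healthy.
print(in_rollout("user-42", "new-toolbar", 5))
```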

Testing in DevOps Environments

Industry Examples

No bullet points here. It’s not that I didn’t find anything in this section interesting. On the contrary, I feel like this section is something you can check out to look for content that fits or closely aligns with your own testing context.

Test Strategy in DevOps

  • Conducting a Risk Workshop
  • Several links around rethinking and reimagining the testing pyramid – 1, 2, 3, 4
  • The testing pendulum depicting the extremes of whether your testing is too deep or too shallow
  • A couple of links around the shift to TestOps – 1, 2
  • A section on heuristics for removing a tester
  • A couple of links on visual test strategies – 1*, 2

*Original links no longer worked so I had to google for the latest link.

And again, here’s the book’s link: https://leanpub.com/testingindevops

TIL Zeigarnik effect

I attended a knowledge sharing session on behavioral science last week (ok, so technically, it’s not “TIL”… but as of that time, I thought “TIL!”). It was particularly in the context of designing products and services. It touched on concepts—some I’ve come across before, some new. And among the new, what piqued my interest the most was the…

Zeigarnik effect:
not finishing a task creates mental tension, which keeps it at the forefront of our memory

I guess it struck a chord because I was dealing with something that I felt I shouldn’t still be dealing with had someone done their work more promptly (which, I later found out, should be corrected to “had someone done their work more correctly”). Anyways, I feel that mental tension — that extra cognitive load — brought about by that unfinished thing that you have to remember.

It also made me think about some agile stuff, particularly limiting WIP. In limiting WIP, the team ideally works within their capacity and prioritizes completing work over starting new work, so that they aren’t juggling so many open items at a time. There’s also lead time, which is roughly the time between when work is requested and when it’s completed. The longer the lead time — say, from when you start discussing a possible feature to when that feature actually gets implemented — the longer you have to remember the details around that feature. The longer you have to remember stuff, and the more stuff you need to remember: both are mentally taxing, especially when the work piles up.

Then there’s also the definition of done. This is something that’s part of the team’s working agreements, wherein you align on what has to be completed to consider a story as “Done.” And it gives some sort of comfort in knowing that once a story is done in a Sprint, you sort of have closure, and you can move on to other stories without having to worry about that story coming back to life and haunting you.

Anyways, I haven’t really read up much on the subject. It was just one of the concepts mentioned in the session, and I just find it interesting how some of those agile practices or concepts help combat that mental tension that is the Zeigarnik effect.

Is it ready?

Whether a user story is ready or not is a question I get asked during Sprint Planning. I reckon it’s not really a question that I alone (as BA/PO/PPO) ought to answer; the Scrum Team answers that question. Prior to Sprint Planning, those user stories have been groomed with the architects and dev leads, and covered in the team backlog grooming sessions. Also prior to Sprint Planning, the user stories for possible inclusion in the coming sprint are added to the Sprint Backlog for the rest of the team to preview, so they can get an idea of what makes sense to assign to themselves and can ask questions. During Sprint Planning, those stories are covered again and the floor is opened to questions, if any. And even after Sprint Planning, the floor remains open for questions. The floor is just always open for conversations.

Now whether a user story can be absolutely ready is another thing. This is not a waterfall project where the designs had been laid out upfront. And even with a waterfall project, some questions arise only as you are implementing the functionality, or even as it gets tested in UAT, or even when it’s already out in production.

This is where the agility and the self-management of team members are invaluable in Agile. The grooming of user stories becomes a conversation (ideally among the three amigos – PO, Dev, Test) that feeds into the readiness of the user story. Is it ready? We make it ready. And as things arise (as they almost always do), it’s the agility and the self-management of team members that again become necessary, so they can navigate through rather than be stalled by each and every hiccup that comes along, or whine about how the user story was not ready. It’s as ready as we can make it.


I think I’ve digressed in this post. I initially wanted to write about how the Definition of Ready (DoR) is not even in the Scrum Guide. There’s this interesting post on Medium that details the history: The rise and fall of the Definition of Ready in Scrum (estimated by Medium as a 7-minute read).

Some of the points I highlighted:

  • “All you need is a product vision and enough priority items on the backlog to begin one iteration, or Sprint, of incremental development of the product.” — Ken Schwaber and Mike Beedle 2001
  • 2008 — First definition and inclusion in official Scrum Alliance training material… The DoR has the following traits:
    • A user story exists;
    • The “formulation” is general enough;
    • The user story is small enough to fit inside a Sprint;
    • It has its unique priority;
    • It has a description of how it can be tested;
    • It has been estimated.
  • 2010 — First edition of Scrum Guide… 15 years after Ken and Jeff started discussing Scrum, they created the first Scrum Guide. This first guide doesn’t mention the Definition of Ready.
  • “Product Backlog items that can be “Done” by the Development Team within one Sprint are deemed “ready” or “actionable” for selection in a Sprint Planning meeting.” — Ken Schwaber and Jeff Sutherland 2011
  • “Product Backlog items that can be “Done” by the Development Team within one Sprint are deemed “Ready” for selection in a Sprint Planning.” — Ken Schwaber and Jeff Sutherland 2013
  • “Product Backlog items that can be Done by the Scrum Team within one Sprint are deemed ready for selection in a Sprint Planning event.” — Ken Schwaber and Jeff Sutherland 2020

If I could, I wouldn’t: Separate back-end and front-end user stories

In my experience with Agile projects, we usually have user stories that somewhat correspond to features. Under the user stories, there’d be the front-end task, the back-end task, the testing task, and so on. There’d occasionally be user stories that only have the front-end part or only the back-end part, as needed by the requirement. But it wasn’t a case wherein you have a particular requirement and you split it into back-end and front-end stories.

I guess it’s always the team’s call how they work. That’s what makes agile agile. But I’ve tried it, i.e., having separate user stories for the back-end and front-end parts. And it’s not my favorite, for the various reasons I’ll enumerate below. Maybe it’s also just me and how I agree with the Agile principle that “Working software is the primary measure of progress.” Technically, the APIs could work and function as expected, and the screen components and front-end validations could also work. But it feels incomplete to me.

Anyways, onto the reasons….

Repetition, Overhead

There are a lot of details that we have to repeat in both stories for the same feature. Why not just link them to each other? Well, we do that, but the back-end guys expect that they can just focus on the BE story, and the same goes for the front-end guys. So there’s overhead in the repetition and in keeping things consistent.

Dependencies

We have to line up the BE user stories first, or a sprint ahead. Then while we’re grooming for the next sprint or a couple of sprints down the line, we have to remember whether the FE story has already been covered by a BE story. Of course, that’s what related links are for. But still, humans are human; linkages can be missed.

When there’s a bug in a user story that wasn’t really in scope, in previous projects we’d typically just create the needed user story for it. We’d then decide if it’s something we can take on within the sprint, or something we’d have to defer. But in this case, we’d need to line up two user stories — first for the BE and then another for the FE, and there’s the possibility that we can’t get them into the same sprint.

Maybe it’s just psychological, or maybe it actually is slower, but it feels like it takes longer to complete a user story (feature).

Bloated Backlog

You have nearly twice as many user stories, so the backlog feels bloated. You see a lot of user stories and it feels like a lot of work, but it’s all for that single feature or requirement. Again, maybe psychological.

Demos during Sprint Review

There’d be demos of the FE story, which already integrates the back-end, so that’s the actual demo of the feature. But there are also demos of the BE stories, where the interface would be Postman, Chrome DevTools, or the SQL client. As a tester, you have some appreciation for it; but as someone who empathizes with the client or business, I wonder if they’re as keen on seeing that API response as opposed to seeing the integrated feature.

Coincidentally, a teammate just asked me for certain details of a user story

So while writing this post, a teammate asked me for certain details of a user story. As expected, the related API story was linked to a corresponding FE story. However, she was working on a related feature, and that one wasn’t linked to the original API story. So it took some searching to find the answers for her. I ended up answering from memory to give her quick feedback, but of course I had to do due diligence, cross-check with the actual story, and then add the linkages. Anyway, I guess this is my cue to get back to work. Although at 1:27 AM, I think I can already call it a day.

Potentially releasable

[Edit Oct 22] At the end of the Sprint, I feel a better sense of accomplishment if there’s a feature that’s actually built. Not just the data model, not just the API, not just the front-end that’s blocked because it’s still waiting on the API. This ties back to the third item I mentioned above. But as I was browsing, I came across this page again: What does it mean to be potentially releasable?

  • “The goal for any Scrum or agile team is the same: develop a potentially releasable product increment by the end of each sprint.”
  • And of course, it goes on to describe what is meant by potentially releasable — emphasizing “potentially” meaning it doesn’t always have to mean that you release every Sprint.
  • It shares and expounds on three key characteristics of a potentially releasable product increment: high quality, well tested, complete
  • “…reaching a potentially releasable state as often as possible is simply a good habit for Scrum teams. Teams that miss reaching this standard often begin to do so more frequently or for prolonged periods.”

Interesting… DoD is defined by the development organization

All along I thought it was supposed to be defined by the Scrum Team — with the Development Team and the Product Owner aligning on the Definition of Done. That’s how we’ve done it in past projects. But then I came across material saying otherwise. Apparently, based on the 2017 Scrum Guide:

…If the definition of “Done” for an increment is part of the conventions, standards or guidelines of the development organization, all Scrum Teams must follow it as a minimum.

If “Done” for an increment is not a convention of the development organization, the Development Team of the Scrum Team must define a definition of “Done” appropriate for the product. If there are multiple Scrum Teams working on the system or product release, the Development Teams on all the Scrum Teams must mutually define the definition of “Done”.

To be fair though, there was no convention that we knew of or that we were advised to adopt. And so the Scrum Team went ahead and defined “Done” (which apparently is more consistent with the 2020 version).

My past projects happened to be consistent around our DoDs. Unless there was some exception (which then had to be documented in the acceptance criteria), to be considered “Done” user stories generally were…

  • dev tested
  • peer reviewed
  • deployed in the test environment (via automated scripts)
  • tested by testers in the test environment, with all major to critical bugs resolved
  • reviewed by the BA in the test environment
  • reviewed and accepted by the PO in the test environment

Looking into the 2020 Scrum Guide, it seems they changed it such that if there’s no such organizational DoD, it’s the Scrum Team rather than the Development Team (as in the 2017 version) who defines it.

If the Definition of Done for an increment is part of the standards of the organization, all Scrum Teams must follow it as a minimum. If it is not an organizational standard, the Scrum Team must create a Definition of Done appropriate for the product.

The Developers are required to conform to the Definition of Done. If there are multiple Scrum Teams working together on a product, they must mutually define and comply with the same Definition of Done.

Do we size bugs with story points?

I found a question posted in one of the social channels at work: How should one give out story points to bugs/defects that one does not know how to fix yet and that require investigation? The original question asks how, but I think even before we go there, it would be nice to know whether we need to in the first place. I plan to write about this in two parts: (1) how I might go about it — no explanations, just based on past experiences as a member of Agile Scrum teams and what I’ve read on the topic, and (2) links and quotes galore.

How I might go about with it

  • If it’s a bug found during testing of a user story we’re working on in the sprint AND it’s small enough (implicitly sized) to be fixed within the same sprint: It goes into the sprint backlog. No need to size it. Just prioritize it accordingly.
  • If it’s a bug unrelated to the user stories we’re testing this sprint (say, from an older feature) OR it’s too big or complex a bug (again, implicitly sized) to be fixed within the sprint: It goes to the product backlog. It’ll be groomed as you would other user stories, to give it enough detail for the team to work with. And if it makes its way into Sprint Planning, then size the bug.
  • Now what if the bug that goes into the product backlog requires more investigation than usual (all bugs require investigation, but in some cases I suppose devs already have an idea of how to fix it, and in others no idea at all, hence more investigation is needed): Tag it as a spike (not a term in the Scrum Guide, FYI). If it goes into the Sprint Backlog, meaning the team agrees to invest time investigating that bug within the Sprint, no need to size it.
    • For that spike in the Sprint, it’ll just mean there’s a time-box (1-3 days of effort) for investigating that bug. At the end of the time-box, whoever works on it reports their findings and the team can discuss next steps.
    • Assuming the team agrees on a resolution, duplicate the bug with the spike tag and close the original one. In the duplicate, remove the Spike label. If it’s to remain in the Sprint Backlog, meaning the team will fix it within the Sprint, then size the bug. Otherwise, the new bug (the duplicate) goes to the Product Backlog, and there’s no need to size it yet.
    • But what if there’s still no resolution or identified workaround? The team can opt to extend the time-box, but at some point, you can’t just keep extending it forever. Once a threshold is met (is 3 months too long/short?): Tag it with a label your team agrees to use on such items, and then archive it. (A small sketch after this list encodes these triage rules.)
  • At the end of the Sprint, the Scrum Master will be able to gather the following data in case they want to use it for some forecasting:
    • User Stories – total story points, bugs per user story
    • Bugs – total story points, total number of bugs
    • Spikes – total number of Spikes worked on, total number of Spikes closed, total points from Spikes that were converted to new bugs
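
Just for fun, here’s the gist of the triage rules above as a tiny sketch (the field names and return strings are my own hypothetical shorthand, not anything from the Scrum Guide or a real tool):

```python
from dataclasses import dataclass

@dataclass
class Bug:
    from_current_sprint_story: bool  # found while testing this sprint's stories?
    fits_in_sprint: bool             # small enough (implicitly sized) to fix now?
    needs_investigation: bool        # no idea yet how it could be fixed?

def triage(bug: Bug) -> str:
    """Return where the bug goes and whether to size it."""
    if bug.from_current_sprint_story and bug.fits_in_sprint:
        return "sprint backlog; don't size; just prioritize"
    if bug.needs_investigation:
        return "tag as spike; time-box 1-3 days in the sprint; don't size"
    return "product backlog; groom it; size only if pulled into a sprint"

# e.g. a legacy bug nobody knows how to fix yet:
print(triage(Bug(False, False, True)))  # -> spike
```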

That turned out longer than I expected. The next part is some links on the topic, which could give you opposing views to help you come up with your own answer.

12 common mistakes made when using Story Points – This has a lot of other interesting points, not just about whether you size bugs or not.

  • “Story Points represent the effort required to put a PBI (Product Backlog Item) live.” So story points are not limited to user stories.
  • “Story Points are about effort. Complexity, uncertainty and risk are factors that influence effort, but each alone is not enough to determine effort.”
  • [Common mistake #5: Never Story Pointing Bugs] “A bug which is unrelated to the current sprint should just be story pointed. The bug represents work the team needs to complete. This does not apply if the team reserves a fixed percentage of time for working on bugs during the sprint. A bug related to an issue in the sprint should not be story pointed as this is part of the original estimation.”

Should Story Points Be Assigned to a Bug Fixing Story?

  • [I think this is with respect to legacy bugs or when the team is dealing with a large database of agile defects] “My usual recommendation is to assign points to bug fixing the agile defects. This really achieves the best of both worlds. We are able to see how much work the team is really able to accomplish, but also able to look at the historical data and see how much went into the bug-fixing story each sprint.”

Should you ‘Story Point’ everything? – This is a thread in the Scrum.org forum.

  • (No points for bugs) ‘They are called story points for a reason. They are not call[ed] “Item Points”. Ideally you should only have stories in your backlog and the technical tasks should be inside…’
  • (Yes or no points for bugs) “It is critical as a Scrum Master to ensure that story points are being used properly within an organization. They serve two purposes only: to help the Development Team and Product Owner plan future sprints, and to be accumulated for done items at the end of a sprint for velocity calculation purposes. They are not a proxy for value delivery. … That said, it seems there are a number of different items (bugs, technical tasks, spikes) that have a capacity impact on the Development Team each sprint. For planning purposes, if the team prefers to not point these items, a mechanism to determine the capacity impact is still desired….”
  • (No points altogether) ‘I have found, and this may depend on your team, that removing story points entirely helps the team and stakeholders focus on the sprint goal instead of “How many points”….’

What’s a spike, who should enter it, and how to word it? Since I mentioned “spikes”, I’ve put in this other link about it.

  • “A spike is an investment to make the story estimable or schedule-able.”
  • “Teams should agree that every spike is, say, never more than 1 day of research. (For some teams this might be, say, 3 days, if that’s the common situation.) At the end of the time-box, you have to report out your findings. That might result in another spike, but time-box the experiments. If you weren’t able to answer the question before time runs out, you must still report the results to the team and decide what to do next. What to do next might be to define another spike.”
  • “It’s also best if a spike has one very clear question to answer. Not a bunch of questions or an ambiguous statement of stuff you need to look into. Therefore, split your spikes just as you would large user stories.”

Let me know if you find anything more conclusive or helpful.

Retrospection and learning time

Just recently, my engagement on the project I’d been on since Nov of last year ramped down. So the past couple of weeks have been a time of transition: for me into my upcoming project, and also from me to the new PO of my previous project. This allowed an opportunity for retrospection, and also a chance to pick up new stuff.

While looking up the available knowledge sharing platforms within the company, I came across the option to host stuff in our enterprise GitHub instance. One link led to another, and I came across…

  • MkDocs – This is for project documentation; it lets you write content in Markdown and then generate a static site from it.
  • Documentation as Code (docs-as-code) – While I’m not a programmer, getting familiar with the concept wouldn’t hurt. And as I read more, it turns out it’s not really exclusive to programmers.
  • Diagrams as Code with Mermaid – Part of the family of stuff-as-code, this doesn’t trail far behind. I think what I find promising about this (apart from being free) is that it’s going to make comparing versions easier since you’re comparing text files. (See the small example after this list.)
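
As a taste of what diagrams-as-code looks like, here’s a toy Mermaid flowchart of my own (not from any of the linked material); the whole diagram lives in the repo as plain text, so comparing versions is just a text diff:

```mermaid
graph LR
    A[Write docs in Markdown] --> B[Commit to Git]
    B --> C[MkDocs builds a static site]
    C --> D[Publish to GitHub Pages]
```

If I understand the tooling right, GitHub renders fences like this natively, and MkDocs can do so via a Mermaid plugin.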

As mentioned, I did some retrospection. I collated some of my personal lessons learned and posted them on our project’s Confluence page. I also revisited Software Development’s Classic Mistakes. I tried rereading The Scrum Guide and some material on anti-patterns to see where we’re somewhat deviating (for guidance on whether it’s something we should continue or whether we should “realign”). Then I tried to pull out project-agnostic stuff that could be helpful for starting a new Agile Scrum project and collated my notes.

With the notes in hand, I’m starting to use them as a reference for the new project, and I plan to tweak them as I find other useful stuff to add. At this stage, there’s already a team working on the prototypes, and in theory, they’re prepping the solution or design that will be handed over to the implementation team. So I’ll be keen on learning a lot more and looking for process improvements in the handover from Design Thinking to prototyping to implementation. Exciting stuff! 🙂

Notes from webinar: Write Better User Stories…

Last week, I attended a free webinar by Mike Cohn of Better User Stories on the topic of “Write Better User Stories in Less Time With Less Aggravation”. Right after, I shared the replay link with some colleagues, along with a few bullet points of pros and cons.

(+) interesting, well-organized explanation
(+) the Q&A was good too
(+) insightful enough, gives you something to think about, stuff to google further
(-) promotional; the whole course is expensive at $395

Posting my notes here since the learning from the webinar is something worth revisiting.

3 Techniques

  1. Conduct a quarterly story-writing workshop
  2. Split stories to demonstrate progress even if the result is not truly shippable
  3. Strive to add just enough detail, just in time

Technique #1: Conduct a quarterly story-writing workshop

  • Deliberate, focused meeting
  • Brainstorm the stories needed to achieve the period’s most important goal
  • Short-term focus causes teams to give in to what’s urgent over what’s important
  • Team able to step away from the day-to-day crisis… Without that big goal, the crisis always wins

Focus on a single objective

  • “What shall we build?” — Wrong question, too broad, anything is fair game
  • PO selects the significant objective (SO)
  • SO typically achievable in about 3 months
  • MVP: sometimes overused; seems it can only be used once
  • MMF = Minimum Marketable Feature = subset of overall feature that delivers value when released independently, smaller than MVP

Involve the whole team in writing stories

  • Time investment pays back in time savings when the team works on the user stories
  • They’ll have fewer questions later, they’ll have more context
  • Fewer interruptions to devs’ day
  • Devs may come up with better implementation, increased creativity

Visualize user stories with a story map

  • Story maps invented by Jeff Patton
  • Each card = 1 user story (1 thing the user needs to do)
  • Horizontally = sequence of activities (don’t obsess over the exact sequence at this point, some steps may be optional)
  • Vertically = alternatives (with most important on top)

Technique #2: Split stories to demonstrate progress even if the result is not truly shippable

  • 90% joke – Ask a dev how done they are and they reply 90%. Come back after a week, and the answer is still 90%.
  • Devs are not evil or liars; estimating how done we are with something is notoriously difficult.
  • In Agile it’s easier, no need to estimate. Just 2 states: Not Started or Done
  • 5 techniques for splitting stories (look up SPIDR); the ones shared in the webinar were splitting by Interface and by Rules
  • When you split stories, remember the goal is to be potentially shippable — (1) high quality, (2) tested, (3) what it does, it does well

Technique #3: Strive to add just enough detail, just in time

  • Too much detail, too early vs Too little detail, too late
  • Bad habit – wanting to know everything before starting; when teams do that, they’re not doing overlapping work (analysis first, before coding, testing…). Overlapping work is a central tenet in most Agile processes (that’s why we don’t have phases in Agile). Time to market gets stretched.
  • Err on the side of too little, too late — you can improve by adding more detail next time
  • Question 1 (during refinement or other discussions on a user story): Do you need the answer before you start on that story? Sometimes you need it before you finish work on that story, not before you start.
  • Question 2 (during retro): Did we get answers just in time in just enough detail?

Keep the Backlog clean

I just mariekondo’d our backlog. So far, so good: I’ve removed 85 items from the backlog — 53 of which were over 200 days old! My thinking is, if we won’t be touching them anytime soon or at all, I want them out of the backlog.

Idk but some lessons to share or possibly reminders to my future self here…

  • Get to know your tool – Find out how you can “archive” user stories that you want shelved, and also how you can access the shelved items in the future just in case you need to. Until recently, my options in the tool were limited to either deleting or marking as done (which I didn’t want to do for items we won’t actually work on). We then found out that we could do project customization in the tool, contrary to what we’d initially been told, and so I’ve tweaked the workflow to also accommodate user stories that I want shelved.
  • Housekeeping keeps the backlog tool more usable – At some point, it was hard to move things around the backlog because of too many useless items that cluttered the list. Having a lean backlog also makes the items we actually need to work on more visible.
  • Maybe it shouldn’t be a list of wishful thinking, or a place for idea dumps – And TIL, using the backlog as storage for ideas is a Product Backlog Anti-pattern.
  • Keep it aligned with the roadmap – Again, (“The product backlog is not reflecting the roadmap.”) another anti-pattern. I guess in conjunction with the previous item, a lot of the user stories that I cleaned up were raw ideas that they had wanted to build “someday”. Keep it real by keeping the backlog items as a list of things the team will actually work on.
  • Avoid / minimize duplication – If for some reason a user story has to be kept duplicated, ensure the duplicates are linked to each other. The risk with duplication is that during refinements, updates might be made to just one of the user stories when you really want them carried across all.
  • Do periodic cleanups – This cleanup is not, and should not be, a one-time thing if the backlog is to stay relevant.
  • Be mindful of what you add to the backlog – You don’t want the backlog items to keep growing and growing and revert back to a state you find less desirable. An idea I picked up here is setting a limit on your Design in Progress (DIP), or the number of items you have in the backlog.

So there, future me, keep the backlog clean. Keep it useful not only for yourself, but more importantly, for the rest of the team.

One link leads to another

Sometimes I come across posts or material on the internet on topics that pique my interest. It could be something I want to know or understand more about. Or it could be related to a conversation or two I’ve had within the day that makes me question certain things. So sometimes I google, and sometimes I just stumble upon them through various feeds — could be Twitter, email, Medium, IG, and even Facebook. And then one link leads to another, and before I know it, it’s 2AM and I should be getting some sleep. So anyways, here’s a dump of some recent links, in no particular order. I hope someone finds them as helpful or interesting as I have.

Agile Product Ownership in a Nutshell (15-minute video) – I like how the content was easy to follow. There were a lot of points worth highlighting, but I guess what hits home the most is the mention of three things that need to be balanced:

  • Build the right thing (PO tends to focus here)
  • Build the thing right (dev team)
  • Build it fast (SM or Agile coach)

So you want to be a Scrum Master (book) – This is a Leanpub book which you can get for free, or for a payment/contribution if you can afford one. It’s written by an Agile community of interest with the intent of sharing what they’ve learned and what they’ve seen work.

The 3 most effective ways to build trust as a leader (post/article) – Got this from Rob Lambert but I can’t remember where exactly — “Three typical management activities that get poor results and three that get good results”. I’m not really a leader by title but the three ways of building trust that the post enumerates are still relevant to me and they emphasize points that I value: Empathy, clarity of intent, and follow through.

DISC Profile Types (personality test) – This is something I picked up from Rob Lambert’s webinar. For each profile type, there are recommended ways on how to better communicate with them, and inversely there are recommended ways on how to encourage others to better communicate with you. Took the test myself and got 48% Compliance, then Dominance, Steadiness, and lastly Influence.

12 common mistakes made when using Story Points (post/article) – This reminded me of something a colleague had shared wherein their Scrum Master wants them to estimate in hours rather than in story points, and also her thinking that story points can be easily translated to hours.

Agile Makes No Sense (post/article) – Let me just quote some lines (actually the last 2 paragraphs) that I liked…

What is the smallest thing that could add value (and make sense)? A better standup? A better retrospective? Inviting a customer to a demo? Pairing for a day? Agreeing to get something into product in a couple days? Try that. Make one thing make sense as in “wow, I can see how that created value”.

When you take this humble approach — instead of “installing” a bunch of artifacts, tools, roles, and rituals AKA doing Agile — I think you’re embracing the true spirit of Agile.