One of my best friends: Chrome DevTools

A colleague recently shared in one of the channels at work that there’s this YouTube video covering Chrome DevTools for web testing. I pretty much use Chrome DevTools day in and day out so of course it piqued my interest. If you search “BFF” in my blog, the result that’ll turn up is a recent post where I said Chrome DevTools is a BFF.

I can’t recall taking an actual course on it since way back when I started using it. I pretty much just dove in and figured things out as I went along. And when you get to collaborate with devs and see how they’re using it, you also pick up pointers on what you can use for your own testing.

So I checked the shared link and it turns out to be a video by Alan Richardson aka the Evil Tester. I’ve watched some of his videos/courses in the past and I’ve also bought one of his books on LeanPub. On an unrelated note, I love his accent. And going back to the video, I think it serves as a good introduction. There’s no need to wait for a bug that you and your dev troubleshoot together before getting exposed to the DevTools features they’re using. The video goes through the tool from left to right and shares how it’s usually used for testing.

I don’t have much to supplement as he already covers a lot of its usage. I totally agree that the Elements and Network tabs are the most used. I also often use the mobile view device toolbar for testing responsiveness across different resolutions. Let me just add a few more notes here.

  • Mobile view device toolbar – What I’ve done is add the usual resolutions I need to test against to the list of emulated devices. There are also throttling options that can be set here if I need to simulate some network performance lag. For instance, I had to test whether we’re preventing duplicate API calls when the user clicks repeatedly in quick succession, and simulating the lag helped me test that.
  • Elements selector + Elements tab – I often use this for inspecting an element to confirm its style. On list pages where I want to add more entries, my options include modifying the mock data or using the duplicate element feature in the Elements tab. He already mentioned you can manipulate stuff around here — so when I need a quick view of longer text or a change in a CSS property, I can do it here.
  • Console tab – Recently I’ve been using the verbose option, because the default level setting wasn’t exposing some violations we were coming across. I don’t really use it for JavaScript; I pretty much just check ‘innerWidth’ when I come across a bug that only gets triggered at a particular range of widths (see the snippet after this list).
  • Network tab – I use this a lot for checking the requests and their payloads, and also for checking the API calls being made when I load a particular screen or call a particular function. When checking the Response tab, it’s easy to miss the pretty print icon at the bottom; the “{ }” looks like plain text rather than something clickable, but clicking it will make the JSON “prettier”. There is also a dropdown for throttling options. I haven’t looked up how that one differs from the mobile view device toolbar or if they’re just the same.
  • Application tab – Useful for testing where there’s an expectation that the data must be stored and retrieved from the local or session storage.
  • Lighthouse – I’m not sure if it shows up in your extensions bar by default; it’s just been that way with my Chrome setup. So with my setup, I can access it either from the extensions bar or from DevTools.
  • Kebab menu at the far right – It has the options for where you want to dock DevTools, or whether you want it undocked. The More tools option can also be reached from that kebab menu.
  • Sensors – This one I used when I was testing something time zone related and I had to simulate different locations. This is accessed from the kebab menu > More tools > Sensors.
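
Since I mentioned ‘innerWidth’, here’s a rough sketch of that kind of Console check. The 1024px figure below is just an example breakpoint, not one from our actual project, and matchMedia is another handy check along the same lines.

window.innerWidth;                                // current viewport width in pixels
window.matchMedia("(max-width: 1024px)").matches; // true when at or below the example breakpoint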

I guess that’s it for what I can recall off the top of my head. And just sharing my keycaps for two of my favorite keys on the keyboard…

Astronaut helmet keycap = F12, for exploring with Chrome DevTools

Cheese keycap = PrintScreen for taking screenshots

First 180 days as Test Manager

The first 180 days in my new job as Test Manager have gone by without much fanfare. I’ve been so preoccupied with my project and admin work that I haven’t really had a chance to celebrate. But now that I’m writing about it, here’s a virtual pat on the back, self.

So what’s been up since the last time?

  • Still engaged in the same project as before
  • Mokku continues to be my bestest friend. Chrome DevTools is a BFF.
  • Fun times were had when the dev and I created test users with names like Pol Axes, Patty Kim Lang, May Tama. But the humor will likely be lost for non-Filipinos.
  • Finally got the answer to the mystery of how Benny (our scrum master’s dog) almost died. Backstory: She intro’d the story while we were doing small talk in our daily stand-up. But then we had to get down to business, not to defeat the Huns, but to go through the scrum updates. We ended up leaving the call without getting the details, and I was like the last person to disconnect because I thought it was going to be shared after the updates.
  • Interviews, hiring, and onboarding are still ongoing. And I’m sort of helping onboard folks, even those outside of our test team.
  • Miro, which I used long before I transferred, continues to be so helpful that I haven’t given up my consultant plan even though I pay for it myself.
  • Test Assessments seem to be a thing, so I’m glad I had a chance to experience a bit of it in my previous project.
  • Career progression has also been a recent topic within the wider QA/Test team (not just PH). It’s just something I haven’t been prioritizing within the local team because we’re all newly hired so I don’t think anyone’s getting promoted at 6 months in.
  • Experienced a couple of site visits
  • Met one of the NZ managers who flew over
  • We also had somewhat of a COVID scare
  • And a mold problem (not me personally, the office had one)
  • We had two workspace changes: we moved into a new workspace (the one that had the mold problem) last June, and moved again this August.
  • Tried to engage bench team mates in team building (not the fun-and-games kind, more like building up the test team)
  • Had a team lunch, which we of course extended to the dev team. First time making an expense claim.

I guess those are pretty much the highlights off the top of my head.

I’ll close this off by sharing an interesting bug I found recently. It’s something we were able to replicate on the iPhone when the regional settings are for a location that uses the date format ‘dd/mm/yyyy’. As captured in the gif, what happened was I tapped on Aug 4, but what got selected was April 8. I was more amused than troubled by the bug because it reminded me of when I got a free birthday cake because of the date-month mix-up.

🍰

Testing project lessons learned

I was just thinking about where to post or how to go about knowledge sharing of lessons learned within the team. I ended up collating a bullet list of what could be classified as my lessons learned (or relearned), or just some stuff I found interesting that I managed to keep note of. Just to give some context, I joined this project in March and it’s still ongoing. We’re building the front-end of a web application, so for the data and the back-end we’re relying on mocking. There are two of us testers on this project: I do the functional testing manually, and my fellow tester is responsible for the test automation. Now I’ll dive right into my bullet lists.

Responsive Breakpoints

I’ve already posted on this topic previously in Testing responsive breakpoints so I won’t elaborate on the items I’ve covered there.

Mocking

  • WireMock
  • Mokku
  • Emulating delays or a slow network can be done via Chrome DevTools:
    • Chrome DevTools > Toggle Device Toolbar > (Adjust the option from “No throttling”)
    • Chrome DevTools > Network tab > (Adjust option from “No throttling”)
  • MS Excel – This is old school but still comes in pretty handy for generating test data via concatenation and formulas.
  • Mockaroo – This is also for generating test data. What’s interesting about it is that it has this built-in option for naughty strings.
  • Free Online GUID Generator – This was also just the first search result when I googled for a GUID generator (there’s a quick Console alternative too; see the snippet after this list).
  • HTTP Response Codes – It’s a bit more fun to read with dog and cat photos involved via https://httpstatusdogs.com/ and https://http.cat/, respectively.
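
On the GUID generator bit: if you only need a one-off value and you already have DevTools open anyway, modern Chrome exposes crypto.randomUUID() in secure contexts (an https page or localhost), so a quick Console alternative could be:

// Generate a random v4 UUID/GUID straight from the Console.
const guid = crypto.randomUUID();
console.log(guid); // e.g. "3b241101-e2bb-4255-8caf-4136c566a962"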

Time and Time Zones

  • MS Excel’s TIME(h, m, s)
    • Given a time value in UTC format, say in cell A1, that I’d want to convert to my local UTC+8, I can use the formula A1+TIME(8, 0, 0).
    • Interestingly though, when I wanted to add more than 23 hours, TIME(24, 0, 0) didn’t work. Based on their documentation, that’s not a bug, that’s how the feature works: hour values greater than 23 are divided by 24 and only the remainder is kept, so TIME(24, 0, 0) is effectively TIME(0, 0, 0). Since a whole day is 1 in Excel, adding 25 hours can be done with A1 + 25/24 instead.
  • Changing the time zone
    • This is usually done by adjusting your Windows date/time settings.
    • Alternatively, you can override your Chrome browser’s time zone via: Chrome DevTools > Kebab menu > More tools > Sensors (see the quick check after this list).
  • Just an interesting read: A literary appreciation of the Olson/Zoneinfo/tz database
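
A quick way to confirm that the Sensors override (or the OS-level change) actually took effect is to check what time zone the page resolves to in the Console, for example:

// Confirm the time zone the browser currently resolves to.
console.log(Intl.DateTimeFormat().resolvedOptions().timeZone); // e.g. "Asia/Manila"
console.log(new Date().toString()); // local time string reflecting the current override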

Form Field Validation

  • We do it on lost focus or on save. But here’s an interesting post in Medium about the topic: Inline validation in forms — designing the experience
  • Some interesting bugs around our numeric fields which are supposedly not allowed to accept non-numeric values
    • Some characters that slipped through the cracks: “e”, “-”, “+”, and “.”. For the numeric fields that did allow decimals, the “.” was fine; and similarly, for those that allowed negative values, the “-” was fine. As for “e”, the dev and I agreed afterwards that we’ll just try to start saying 1e0 instead of 1 in normal conversation.
    • We also had a validation that the input must not be bigger than 180. Interestingly, 180.00000000000001 didn’t trigger the error, while it did with one zero less (see the sketch after this list for why).
    • Typing in some characters replaced the value with NaN. No consecutive NaNs bug found though so we didn’t get to enjoy the Batman NaN joke.
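
On that 180.00000000000001 case, my understanding is that it comes down to floating-point precision rather than the validation logic itself: JavaScript numbers are IEEE 754 doubles, and near 180 they are spaced roughly 2.8e-14 apart, so adding 1e-14 simply rounds back to exactly 180. A minimal Console sketch (not our actual validation code):

// Why 180.00000000000001 slips past a "must not be greater than 180" check.
console.log(180.00000000000001 > 180); // false: the literal parses to exactly 180
console.log(180.0000000000001 > 180);  // true: with one zero less, the value is a distinct double above 180
console.log(Number("180.00000000000001") === 180); // true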

Others

  • WhatFont Chrome Extension – When all you need to check is the font, it’s an alternative to inspecting the element with Chrome DevTools.
  • The card PAN or primary account number gets truncated because of some Payment Card Industry Data Security Standard. I had to look up PAN because I’ve never heard the card number referred to as “PAN” before.
  • For cases where the field is null, we’re currently not expecting the field in the JSON at all. I found content both for and against that approach. One of the posts in favor is this: JSON null property values (see the sketch after this list).
  • It’s “Chrome DevTools” — not dev tools, not levioSA — based on their documentation.
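
On the null-versus-omitted point, here’s a tiny sketch (with hypothetical field names) of the two shapes being weighed. JSON.stringify happens to mirror the “omit it” option, since it drops undefined properties but keeps explicit nulls.

// Two ways to represent a missing value in the response JSON.
const explicitNull = { name: "KC", middleName: null };
const omitted = { name: "KC", middleName: undefined };
console.log(JSON.stringify(explicitNull)); // {"name":"KC","middleName":null}
console.log(JSON.stringify(omitted));      // {"name":"KC"}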

First 90 days in my new work as Test Manager

I received a notification that I’ve just gone through my first 90 days. And I thought: wow, that was fast! They say time flies when you’re having fun, or maybe when you’re busy, or a mix of both. I remember having a Covid scare early this year, having to self-isolate, and stressing out about completing both my exit and entry requirements amidst the concern of needing to go out for some of them while the country was around the peak of its Covid cases. I wanted to fast forward to February. And now it’s already May!

Had I written this in April, I would’ve put in “It’s gonna be May!” along with an NSYNC or Justin Timberlake photo.

So what’s been up so far, me?

  • Started in another company this 2/2/22
  • Been part of two projects so far
  • Worked on a test maturity assessment for one project
  • Collaborated with other testers for end-to-end scenarios
  • Conducted interviews, reviewed CVs
  • Had a couple or so iterations of my pool of interview questions
  • Did admin and figured out onboarding stuff for our growing team of testers here in Manila
  • Currently involved in an Agile project that’s building the front-end of a web application
  • Collaborated with an SDET, providing him with my test cases and consultation on his Gherkin scenarios
  • Worked with Mokku, one of my best friends for the past three sprints
  • Worked with WireMock
  • Reconnected with Postman
  • Tested a few things around breakpoints and time (time zones, time differences)
  • Recently received our project team’s access to BrowserStack

Outside of work, but somewhat work-related

  • Moved some of my project-notes into GitHub
  • Reread The Effective Executive, and read The Making of a Manager

It’s been a busy (a happy kind) quarter!

Totally outside of work…


Checking out WireMock

Our project is currently using WireMock as our mock server. From their documentation:

“WireMock is an HTTP mock server. At its core it is web server that can be primed to serve canned responses to particular requests (stubbing) and that captures incoming requests so that they can be checked later (verification).”

Due to some constraints though, I’ve been using Mokku to mock the mock API to serve my own canned responses. But of course, I was still curious about WireMock. So this Good Friday, I went and checked it out. Since I was just going to have a little play around, I went for running it as a standalone process.

Step 1 was to install JDK.

Step 2 was to download the standalone jar and then run it. Both the download link and the command to run are in the previous link for running it as a standalone process. After running it, a couple of folders get generated, one of which is the mappings folder.

Step 3 was to create a sample mapping file inside the mappings folder. After making changes, I just needed to restart by rerunning the java command. And with the sample below, I was able to get a response when I tried to access: http://localhost:8080/records.

get-records.json

{
  "request": {
    "url": "/records",
    "method": "GET"
  },
  "response": {
    "status": 200,
    "jsonBody": {
      "msg": "One response to one request"
    }
  }
}

Then I tried to check out multiple responses from within the same file. For that, the needed request-response pairs were nested under a mappings array. The results for the example below are:

  • http://localhost:8080/record?name=KC gets a 200 response with the message “Viewing KC”.
  • http://localhost:8080/record?name=Mario gets a 400 error.
  • If I try other name values, I get a 200 response with a message saying “Viewing everyone else”.

get-record.json

{
  "mappings": [
    {
      "priority": 1,
      "request":  { "url": "/record?name=KC",  "method": "GET" },
      "response": { "status": 200, "jsonBody": { "msg": "Viewing KC" } }
    },
    {
      "priority": 2,
      "request":  { "url": "/record?name=Mario",  "method": "GET" },
      "response": { "status": 400, "jsonBody": { "msg": "Error! Itsameee Mario!" } }
    },
    {
      "priority": 3,
      "request": {
        "urlPattern": "/record\\?name=.*",
        "method": "GET"
      },
      "response": { "status": 200, "jsonBody": { "msg": "Viewing everyone else" } }
    }
  ]
}
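
To sanity-check the stubs, a quick script like the one below should do; it assumes Node 18+ (where fetch is built in) and that the standalone WireMock is still on its default port 8080.

// Hit one of the stubbed endpoints and log the mocked message.
fetch("http://localhost:8080/record?name=KC")
  .then((res) => res.json())
  .then((body) => console.log(body.msg)); // expected: "Viewing KC"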

Of course, these are very simple examples, but they illustrate how feasible it is to get started with WireMock in under an hour or so, and how you can have this playground before pushing stuff into the mock server being used by the rest of the team.

Using the Mokku chrome extension

Mokku is this Chrome extension that I’ve been using this sprint. It was pretty easy to pick up and I was able to almost immediately use it for my testing. For my particular use case, I needed to confirm that the page is able to display a list of items based on the response of an API. The catch is I don’t have the usual access to a database where the data is pulled from. I also can’t use the current implementation to create test records (that part’s not yet built). And the actual API isn’t available — only a mock one so I just have that and some information on the expected JSON.

So with Mokku, I mocked the mock API to simulate other responses. Like modifying the number of records returned, changing the data around to see if certain values would render as expected, emptying the list, delaying the response so I could view the loading animation, and triggering errors.

Installing Mokku is a cinch. Just search for it in the Chrome web store and install from there.

Using Mokku is easy.

  1. Open Chrome DevTools (use it docked rather than in a separate window) and go to the Mokku tab.
  2. Go to the page you need to test.
  3. In Mokku, there’s a tab for Logs. You can look for the one that you need to mock, and click the corresponding “Mock” link.
  4. Added mocks show up under the Mocks tab. And from there you can edit.
  5. Then you just need to navigate to the page that will call the mock, and hopefully your changes will be reflected.

And it’s pretty much a matter of repeating steps 4 and 5 for the test cases that you’d like to cover with Mokku.


P.S. I wanted to use “Gamitin Mokku” as the title. It sounds like something that would translate to “use me”. Future me might cringe or do a facepalm, and think that’s such a tita joke.

Testing responsive breakpoints

Switching between the different breakpoints is something I’ve been needing to do quite frequently. It’s part of testing our screens that are supposed to be responsive. And in conversations with the developers about a particular bug, it helps if I can easily jump to the breakpoint in question. I’ll be listing here what I’ve been using so far–not just the ones for resizing, but the tools that have been handy in this area.

  • Chrome DevTools a.k.a. my work bestie. I’ve added the resolutions I particularly need a lot to the list of Emulated Devices.
  • mattkersley/Responsive-Design-Testing – I downloaded the files in this repo and tweaked accordingly so that the frames correspond to the ones I need. This is primarily just for a quick view across the different resolutions. You can’t really test the functionality of the page via those iFrames.
  • JRuler – Sometimes you just need a ruler because your man-calipers (i.e., your fingers) just aren’t that accurate. It’s a portable app that’s been in my Dropbox for who knows how long. But since it’s still able to do what it’s supposed to, I haven’t felt compelled to look up a replacement.

That’s about it. Maybe I’ll just edit this post in case I come across other tools for responsive web testing while I’m in my current project.

Is it ready?

Whether a user story is ready or not is a question I get asked during the Sprint Planning. I reckon it’s not really a question that I alone (as BA/PO/PPO) ought to answer. The Scrum Team answers that question. Prior to the Sprint Planning, those user stories had been groomed with the architects and dev leads, and they’d have been covered in the team backlog grooming sessions. And again prior to the Sprint Planning, the user stories for possible inclusion in the coming sprint are added into the Sprint Backlog for the rest of the team to preview so that they can have an idea of what makes sense for them to assign to themselves and so they can ask questions. During Sprint Planning, those stories are covered again and the floor is opened to questions if any. And even after Sprint Planning, the floor remains open for questions. The floor is just always open for conversations.

Now whether a user story can be absolutely ready is another thing. This is not a waterfall project where the designs had been laid out upfront. And even with a waterfall project, some questions arise only as you are implementing the functionality, or even as it gets tested in UAT, or even when it’s already out in production.

This is where the agility and the self-management of team members are invaluable in Agile. The grooming of user stories becomes a conversation (ideally among the three amigos–PO, Dev, Test) that feeds into the readiness of the user story. Is it ready? We make it ready. And as things arise (as they almost always do), it’s the agility and the self-management of team members that again become necessary for them to navigate through this rather than be stalled by each and every hiccup that comes along, or rather than whining about how the user story was not ready. It’s as ready as we can make it.


I think I’ve digressed in this post. I initially wanted to write about how the Definition of Ready (DoR) is not even in the Scrum Guide. There’s this interesting post in Medium that details the history: The rise and fall of the Definition of Ready in Scrum (estimated by Medium as a 7-minute read).

Some of the points I highlighted:

  • “All you need is a product vision and enough priority items on the backlog to begin one iteration, or Sprint, of incremental development of the product.” — Ken Schwaber and Mike Beedle 2001
  • 2008 — First definition and inclusion in official Scrum Alliance training material… The DoR has the following traits:
    • A user story exists;
    • The “formulation” is general enough;
    • The user story is small enough to fit inside a Sprint;
    • It has its unique priority;
    • It has a description of how it can be tested;
    • It has been estimated.
  • 2010 — First edition of Scrum Guide… 15 years after Ken and Jeff started discussing Scrum, they created the first Scrum Guide. This first guide doesn’t mention the Definition of Ready.
  • “Product Backlog items that can be “Done” by the Development Team within one Sprint are deemed “ready” or “actionable” for selection in a Sprint Planning meeting.” — Ken Schwaber and Jeff Sutherland 2011
  • “Product Backlog items that can be “Done” by the Development Team within one Sprint are deemed “Ready” for selection in a Sprint Planning.” — Ken Schwaber and Jeff Sutherland 2013
  • “Product Backlog items that can be Done by the Scrum Team within one Sprint are deemed ready for selection in a Sprint Planning event.” — Ken Schwaber and Jeff Sutherland 2020

RIMGEA (2nd ed.) aka RIMGEN

Wow, it’s been 10 years since this blog post: RIMGEA – 6 approaches to bug reporting. There I wrote about a very useful mnemonic I picked up from a bug advocacy class (it’s more about bug reporting, than loving bugs). Lately, I’m not in a role where functional testing is part of the official R&R. But I think I’m a tester through and through — I still test or review, and I still log bugs or give feedback. And I still care about how bad bug reporting can give testing a bad rap.

So anyways, now’s as good a time as any to retouch that post.


RIMGEN (and maybe plus S)

Originally, the mnemonic I picked up from the bug advocacy class was “RIMGEA”. It stood for the 6 factors or approaches to bug reporting: Replicate, Isolate, Maximize, Generalize, Externalize, And say it dispassionately. Then there was a suggestion in the comments before, and some recent BBST material I found, that made the shift from “A” to “N” to suggest neutrality in tone. The last letter, S, was from the comments, with a suggestion to spellcheck. I would generalize it to proofread, but RIMGENP doesn’t roll off the tongue quite as easily.

Replicate it – Try to replicate the bug.

If you can’t replicate it yourself, then it might be harder to persuade your developer to fix a problem that they can’t see. This also doesn’t only benefit the developer. Somewhere down the line, you or a fellow tester will also need the same information. The replication steps would be helpful in preparing test data or setting up the conditions to test the fix for the bug.

Related Link:  How helpful are my bug reports

Isolate it – Limit the steps or the conditions that trigger the bug.

Here you try to narrow down your repro steps or find exactly which conditions are critical to triggering the bug. You want to get to the bug in the easiest way possible and rule out factors that have nothing to do with it.

Related Link: Bug isolation

Maximize it – Try to trigger a worse or bigger failure.

The bug you find might just be the tip of the iceberg, or a symptom of an even bigger bug. Follow-up tests could help uncover the bigger problem if there is indeed one. For instance, you find a bug regarding the positioning of a button in a modal when you tried it out on mobile. Turns out, if you try it on your laptop browser, the problem still exists. Turns out, other similar modals have a different positioning of the buttons. Turns out, the bigger problem is the consistency of the modals across your app — and not just the particular modal screen you tried on mobile.

There are various tactics that you can try:

  • vary your behavior – e.g., instead of doing A then B, try changing the sequence; or try a different way of doing the task like using a shortcut key instead of the button
  • vary your program settings – e.g., there could be program settings that you can toggle on or off or adjust; for browser testing, you can try adjusting the zoom sizes or enable/disable caching
  • vary your inputs – e.g., if the bug was encountered when file X was used, try a similar file Y or another file Z
  • vary your configuration – e.g., try using a different OS/browser combination

Generalize it – Try to broaden the extent of the bug.

Here we try to uncorner corner cases. We try to find other ranges in which the bug can be reproduced. For instance, we find that a bug occurs when we have 1M records. In generalizing, we try to see if the bug occurs at lower, more realistic values, or if the 1M records is truly a critical condition.

I find this quite similar to Maximize. But essentially, with both, you are trying to push or find out how damaging the bug actually is in depth (how much does it cascade) and in breadth (in terms of the users and cases it can affect).

Externalize it – Try to see the value or impact of the bug from other stakeholders’ perspective.

Here we try to go beyond our roles as testers and try to get a sense of the bigger picture. In understanding the value that could be lost, we could paint a more compelling picture of why the bug needs to be fixed in our bug reports.

As testers, we might also find ourselves focusing too much on the functional specs. The devil’s in the details and zooming in on the details is important. But so is taking a step back and seeing the bigger picture on whether the functionality can actually deliver what the users need.

Neutral tone – Try to write our bug reports as clearly and as neutrally as possible.

Having worked in building apps for over ten years, I know that there are some egos that bruise far too easily. The bug report isn’t really about the people working on the project, so keep it that way. It only has to be about the bug. Keep it on point and add in only what would be relevant to help address the bug.

Related Link: Article: How to report bugs effectively

Spellcheck – Proofread your bug report.

It’s amazing how often people forget to proofread. So when and if you can, just do a quick pass to check that there are no glaring errors and that it’s clear enough. Be mindful that the audience of that bug report could be future you, so you’d literally be helping yourself if you write it clearly.


So there. Let me awkwardly close with one of my favorite quotes. It’s one that I often associate with bug reporting.

A problem well stated is a problem half solved.

If I could, I wouldn’t: Separate back-end and front-end user stories

In my experience with Agile projects, we usually have user stories that somewhat correspond to features. Under the user stories, there’d be the front-end task, the back-end task, the testing task, and so on. There’d occasionally be user stories that only have the front-end part or only the back-end part, and that’s done as needed by the requirement. But it wasn’t a case where you’d take a particular requirement and split it into back-end and front-end stories.

I guess it’s always the team’s call on how they would work. That’s what makes agile agile. But I’ve tried it, i.e., having separate user stories for the back-end and front-end parts. And it’s not my favorite, for the various reasons I’ll enumerate below. Maybe it’s also just me and how I agree with that Agile principle that “Working software is the primary measure of progress.” Technically, the APIs could work and function as expected, and the screen components and front-end validations could also work. But it feels incomplete to me.

Anyways, onto the reasons….

Repetition, Overhead

There are a lot of details that we have to repeat in both stories for the same feature. Why not just link them with each other? Well, we do that but the back-end guys expect that they can just focus on the BE story, same with the front-end guys. So there’s overhead in the repetition and in keeping things consistent.

Dependencies

We have to line up the BE user stories first or a sprint ahead. Then while we’re grooming for the next sprint or a couple of sprints down the line, we have to remember whether the FE story has already been covered by a BE story. Of course, that’s what related links are for. But still, humans are human, linkages could be missed.

If there’s a bug related to the user story but it wasn’t really in scope, in previous projects we’d typically just create the needed user story for it. We’d then decide if it’s something we can take on within the sprint, or if it’s something we’d have to defer. But in this case, we’d need to line up two user stories — first for the BE and then another one for the FE, and there’s the possibility that we can’t get them in the same sprint.

Maybe it’s just psychological, or maybe it actually does take longer, but it feels like completing a user story (feature) takes more time this way.

Bloated Backlog

You end up with nearly twice as many user stories, so it feels like the backlog is bloated. You see a lot of user stories and it feels like a lot of work, but it’s really just for that single feature or requirement. Again, maybe psychological.

Demos during Sprint Review

There’d be demos of the FE story, which already integrates the back-end along with it, so that’s the actual demo of the feature. But there are also the demos of the BE stories, where the interface would be via Postman, Chrome DevTools, or the SQL client. As a tester, you have some appreciation for it; but as someone who empathizes with the client or business, I wonder if they’re as keen on seeing that API response as opposed to seeing the integrated feature.

Coincidentally, a team mate just asked me for certain details of a user story

So while writing this post, a team mate asked me for certain details of a user story. As expected, the related API story was linked to a corresponding FE story. However, she was working on a related feature, and that one wasn’t linked to the original API story. So it would have taken some searching on her part to find the answers. I ended up answering from memory to give her quick feedback, but of course I had to do due diligence and cross-check with the actual story and then add the linkages. Anyway, I guess this is my cue to get back to work. Although at 1:27 AM, I think I can already call it a day.

Potentially releasable

[Edit Oct 22] At the end of the Sprint, I feel a better sense of accomplishment if there’s a feature that’s actually built. Not just the data model, not just the API, not just the front-end that’s blocked because it’s still waiting on the API. This ties in with the 3rd item I mentioned above. But as I was just browsing, I came across this page again: What does it mean to be potentially releasable?

  • “The goal for any Scrum or agile team is the same: develop a potentially releasable product increment by the end of each sprint.”
  • And of course, it goes on to describe what is meant by potentially releasable — emphasizing “potentially” meaning it doesn’t always have to mean that you release every Sprint.
  • It shares and expounds on 3 key characteristics for the product increment to be potentially releasable: high quality, well tested, complete
  • “…reaching a potentially releasable state as often as possible is simply a good habit for Scrum teams. Teams that miss reaching this standard often begin to do so more frequently or for prolonged periods.”