One of my best friends: Chrome DevTools

A colleague recently shared in one of the channels at work that there’s this YouTube video covering Chrome DevTools for web testing. I pretty much use Chrome DevTools day in and day out so of course it piqued my interest. If you search “BFF” in my blog, the result that’ll turn up is a recent post where I said Chrome DevTools is a BFF.

I started using it way back when, and I can’t recall ever taking an actual course to learn it. You pretty much just dove in and figured things out as you went along. And when you get to collaborate with devs and see how they’re using it, you pick up pointers on what you can use for your own testing.

So I checked the shared link and it turns out to be a video by Alan Richardson aka the Evil Tester. I’ve watched some of his videos/courses in the past and I’ve also bought one of his books on LeanPub. On an unrelated note, I love his accent. And going back to the video, I think it serves as a good introduction. No need to wait for a bug that you and your dev can troubleshoot together to expose yourself to the DevTools feature that they’re using. The video goes through the tool from left to right and shares how each part is usually used for testing.

I wouldn’t even have much to supplement as he covers a lot of its usage already. Totally agree on having the Elements and Network tabs as the most used. I also often use the mobile view device toolbar for testing responsiveness across different resolutions. Let me just add a few more notes here.

  • Mobile view device toolbar – What I’ve done is add the usual resolutions I need to test against to the list of emulated devices. There are also throttling options here for when I need to simulate some network lag. For instance, I had to test whether we were preventing duplicate API calls when the user clicks repeatedly very fast. Simulating the lag helped me test that.
  • Elements selector + Elements tab – I often use this for inspecting an element to confirm its style. On list pages where I want to add more entries, my options include modifying the mock data, or using the duplicate element feature in the Elements tab. He already mentioned you can manipulate stuff around here — so when I need a quick view of longer text or a change in a CSS property, I can do it here.
  • Console tab – Recently I’ve been using the verbose log level, because the default setting wasn’t surfacing some violations that we were coming across. I don’t really use it for JavaScript; I pretty much just use ‘innerWidth’ when I come across a bug that gets triggered at a particular range of widths.
  • Network tab – I use this a lot for checking requests and their payloads, and also for checking which API calls are being made when I load a particular screen or call a particular function. When checking the response tab, it’s easy to miss the pretty print icon at the bottom — the “{ }” looks like plain text rather than something clickable. Clicking it will make the JSON “prettier”. There is also a dropdown for throttling options. I haven’t looked up how that one differs from the one in the mobile view device toolbar, or if they’re just the same.
  • Application tab – Useful for testing where there’s an expectation that the data must be stored and retrieved from the local or session storage.
  • Lighthouse – I’m not sure if it shows up in your extensions bar by default; it’s just been that way with my Chrome setup. With mine, I can access it either from the extensions bar or from within DevTools.
  • Kebab menu at the far right – It has the options for where you want to dock the DevTools, or if you want it undocked. The More tools option can also be reached from that kebab menu.
  • Sensors – This one I used when I was testing something time zone related and I had to simulate different locations. This is accessed from the kebab menu > More tools > Sensors.

I guess that’s it for what I can recall off the top of my head. And just sharing my keycaps for two of my favorite keys on the keyboard…

Astronaut helmet keycap = F12, for exploring with Chrome DevTools

Cheese keycap = PrintScreen for taking screenshots

First 180 days as Test Manager

The first 180 days in my new work as Test Manager have gone by without much fanfare. I’d been quite preoccupied with my project and admin work, so I haven’t really had a chance to celebrate. But now that I’m writing about it: here’s a virtual pat on the back, self.

So what’s been up since the last time?

  • Still engaged in the same project as before
  • Mokku continues to be my bestest friend. Chrome DevTools is a BFF.
  • Fun times were had when the dev and I created test users with names like Pol Axes, Patty Kim Lang, May Tama. But the humor will likely be lost for non-Filipinos.
  • Finally got the answer to the mystery of how Benny (our scrum master’s dog) almost died. Backstory: She intro’d the story while we were doing small talk in our daily stand up. But then we had to get down to business, not to defeat the Huns, but to have the scrum updates. We ended up leaving the call without getting the details. And I was practically the last person to disconnect from the call because I thought it was going to be shared after the updates.
  • Interviews, hiring, and onboarding are still ongoing. And I’m sort of helping onboard folks even those outside of our test team.
  • Miro, which I used long before I transferred, continues to be so helpful I haven’t given up my consultant plan even though I pay for it myself
  • Test Assessments seem to be a thing, so I’m glad I had a chance to experience a bit of it in my previous project
  • Career progression has also been a recent topic within the wider QA/Test team (not just PH). It’s just something I haven’t been prioritizing within the local team because we’re all newly hired so I don’t think anyone’s getting promoted at 6 months in.
  • Experienced a couple of site visits
  • Met one of the NZ managers who flew over
  • We also had somewhat of a COVID scare
  • And a problem with mold (not me personally; the office had a mold problem)
  • Two developments on our workspace: we moved into the new workspace (the one with the mold problem) last June, and then moved again this August
  • Tried to engage bench teammates in team building (not the fun-and-games kind, more like building the test team)
  • Had a team lunch which we of course extended to the dev team. First time making an expense claim.

I guess those are pretty much the highlights off the top of my head.

I’ll close this off by sharing an interesting bug I found recently. It’s something we were able to replicate on the iPhone when the regional settings are for a location that uses the date format ‘dd/mm/yyyy’. As captured in the gif, what happened was I tapped on Aug 4, but what got selected was April 8. I was more amused than troubled by the bug because it reminded me of when I got a free birthday cake because of the date-month mix-up.

🍰

Testing project lessons learned

I was just thinking about where to post or how to go about knowledge sharing of lessons learned within the team. I ended up collating a bullet list of what could be classified as my lessons learned (or relearned), plus some stuff I found interesting that I managed to keep note of. To give some context, I joined this project in March and it’s still ongoing. We’re building the front-end of a web application, so for the data and the back-end we’re relying on mocking. We are two testers on this project: I do the functional testing manually, and my fellow tester is responsible for the test automation. Now I’ll dive right into my bullet lists.

Responsive Breakpoints

I’ve already posted on this topic previously in Testing responsive breakpoints so I won’t elaborate on the items I’ve covered there.

Mocking

  • WireMock
  • Mokku
  • Emulating delays or a slow network can be done via Chrome DevTools:
    • Chrome DevTools > Toggle Device Toolbar > (Adjust the option from “No throttling”)
    • Chrome DevTools > Network tab > (Adjust option from “No throttling”)
  • MS Excel – This is old school but still comes in pretty handy for generating test data via concatenation and formulas.
  • Mockaroo – This is also for generating test data. What’s interesting about it is that it has this built-in option for naughty strings.
  • Free Online GUID Generator – This was also just the first search result when I googled for a GUID generator.
  • HTTP Response Codes – It’s a bit more fun to read with dog and cat photos involved via https://httpstatusdogs.com/ and https://http.cat/, respectively.
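On that note, a few lines of Python can stand in for both the Excel concatenation and the online GUID generator when quick, predictable test data is all that’s needed. A minimal sketch (the email pattern here is just made up for illustration):

```python
import uuid

# Generate a handful of test users, Excel-concatenation style
users = [f"testuser{n:03d}@example.com" for n in range(1, 4)]

# And a GUID per user, instead of an online generator
records = [{"id": str(uuid.uuid4()), "email": email} for email in users]

for record in records:
    print(record["id"], record["email"])
```

For naughty strings and richer data shapes, Mockaroo still earns its keep; this is just for the boring-but-frequent cases.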

Time and Time Zones

  • MS Excel’s TIME(h, m, s)
    • Given a time value in UTC format, say in cell A1, that I’d want to convert to my local UTC+8, I can use the formula A1+TIME(8, 0, 0).
    • Interestingly though, when I wanted to add more than 23 hours, TIME(24, 0, 0) didn’t work. And based on their documentation, that’s not a bug, that’s how the feature works: hour values greater than 23 are divided by 24 and the remainder is used as the hour.
  • Changing the time zone
    • This is usually done by adjusting your Windows date/time settings.
    • Alternatively, you can change your Chrome browser’s time via: Chrome DevTools > Kebab menu > More tools > Sensors.
  • Just an interesting read: A literary appreciation of the Olson/Zoneinfo/tz database
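For comparison, here’s how the same UTC-to-UTC+8 conversion looks with Python’s datetime, which also sidesteps the 23-hour cap of Excel’s TIME(). Just a sketch, with an arbitrary example timestamp:

```python
from datetime import datetime, timedelta, timezone

# A UTC timestamp, like the values I'd paste into cell A1
utc_time = datetime(2022, 8, 4, 16, 30, tzinfo=timezone.utc)

# Convert to UTC+8 (the equivalent of A1 + TIME(8, 0, 0) in Excel)
local = utc_time.astimezone(timezone(timedelta(hours=8)))
print(local.isoformat())  # 2022-08-05T00:30:00+08:00

# Unlike Excel's TIME(), timedelta happily goes past 23 hours
print(utc_time + timedelta(hours=24))  # 2022-08-05 16:30:00+00:00
```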

Form Field Validation

  • We do it on lost focus or on save. But here’s an interesting post in Medium about the topic: Inline validation in forms — designing the experience
  • Some interesting bugs around our numeric fields which are supposedly not allowed to accept non-numeric values
    • Some characters that slipped through the cracks: e, -, +, and .. For the numeric fields that did allow decimals, the “.” was fine; similarly, for those that allowed negative values, the “-” was fine. As for e, the dev and I agreed afterwards that we’ll just try to start saying 1e0 instead of 1 in normal conversation.
    • We also had a validation that the input must not be bigger than 180. Interestingly, 180.00000000000001 didn’t trigger the error; while it worked if there’s one zero less.
    • Typing in some characters replaced the value with NaN. No consecutive NaNs bug found though so we didn’t get to enjoy the Batman NaN joke.
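Those quirks are straightforward to reproduce anywhere input gets parsed as an IEEE 754 double, which is my guess for what’s behind the 180.00000000000001 bug: the extra digits simply round away.

```python
# "e", "+", "-", and "." are all legal in float syntax,
# which is why they slip through naive numeric-input checks
assert float("1e0") == 1.0
assert float("+180.") == 180.0

# 180.00000000000001 is closer to 180.0 than to the next
# representable double, so it parses as exactly 180.0 and
# sails past a "must not be bigger than 180" check
assert float("180.00000000000001") == 180.0

# Drop one zero and the difference no longer rounds away
assert float("180.0000000000001") > 180.0

# And "NaN" is a valid float too, though it never equals itself
nan = float("nan")
assert nan != nan
print("all checks passed")
```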

Others

  • WhatFont Chrome Extension – With respect to the font, it’s an alternative to using Chrome DevTools for inspecting the element.
  • The card PAN or primary account number gets truncated because of some Payment Card Industry Data Security Standard. I had to look up PAN because I’ve never heard the card number referred to as “PAN” before.
  • For cases wherein the field is null, we’re currently not expecting the field in the JSON anymore. I found content for and against that. One of the contents for it is this: JSON null property values.
  • It’s “Chrome DevTools” — not dev tools, not levioSA — based on their documentation.

One link leads to another: TestOps

This is just a rambly post of some interesting stuff I read this morning in between chores.

One of the links shared in the book I recently read was a reference to “Three Little Questions” from a talk by Ioana Serban. I decided to check out the video, posted on YouTube in 2016 and just under 40 minutes long. The title of the talk was “TestOps – Chasing the White Whale“. It was pretty interesting. The presenter shared her lines of reasoning, briefly touched on oracles, gave three very concrete examples, and (this last one’s just a personal aside) one of the images used seemed quite timely because I’ve been seeing a lot of Johnny Depp in my social media feeds.

Coming from there, I went on to check out her twitter @ioanasaysgo, and the latest post was a retweet about HTTP status codes by @b0rk. I found that timely because in my current testing project I often need to trigger or simulate various response codes. Several of the replies were just as interesting, particularly about 1xx, 418 (I’m a Teapot), and a link to a decision tree for choosing which status code to return. And from there, I went down a couple more rabbit holes:

Taking a step back to @b0rk’s tweets. One of the recent things she posted was this link to this blog post, Not My Job.

Taking a step back to @ioanasaysgo’s profile. I saw a link she posted in her profile to her Medium account, and from there I came across her post from 2014, The Resident Expert. In that post, there was mention of some slides from a presentation on raising the test bar. Too bad, the link no longer works. But there was a link to the presenter’s twitter, @testchick, and in turn to her site, roadlesstested.com. Next up, I’ll probably check out the interview-related links mentioned in that site’s list of resources.

And now it’s time to go look for lunch.

Read: A Practical Guide to Testing in DevOps

Reading this book was long overdue, and I’m glad I can finally scratch it off my TBR list. I purchased it within a week of its release back in 2017 but only got around to reading it this week. Oh, well. The book is Katrina Clokie’s A Practical Guide to Testing in DevOps. It feels like such a mine of references, and a jumping-off point for so many other things to read or watch.

For each of the chapters in the book, I tried to highlight some of the references or topics that I found interesting.

Testing in a DevOps Culture

  • Test strategy retrospective
  • Agile assessment How agile is your testing?, and book recommendations if you scored less than five out of ten
  • Two pizza rule for team sizes — I just thought that, depending on the restaurant and the sizes they offer, I can easily eat a whole pizza by myself.
  • Section on blazing a trail – “When I’m blazing a trail I’m building a path to someone new in the organisation…” — This isn’t limited to DevOps; feels relatable as we’re building the local practice and the CoP.
  • Ioana Serban’s “Three Little Questions” from TestOps: Chasing the White Whale (YouTube)
  • Links to blog posts of applying visualization when testing a product
  • Pair testing experiment framework

Testing in Development

Testing in Production

  • Sections on testing monitoring, analytics and logging
  • “There’s the well-known example of how Google used A/B testing to determine which shade of blue to use on their toolbar. They tested 41 gradients to learn the colour people preferred, which was seen by some as a trivial attribute to experiment with.”
  • “Google are well known for running lengthy beta programs. Gmail was labelled as beta for five years from 2004 to 2009.”
  • Should Tesla be ‘beta testing’ autopilot if there is a chance someone might die?
  • TIL passive validation. The reference for the differences between active and passive validation is Testing in Production (Vimeo) by Seth Eliot.
  • The terms to describe variations of exposure control such as canary release, staged rollout, dogfooding, and dark launching

Testing in DevOps Environments

Industry Examples

No bullet points here. It’s not that I didn’t find anything in this section interesting. On the contrary, I feel like this section is one you can mine for content that fits or closely aligns with your own testing context.

Test Strategy in DevOps

  • Conducting a Risk Workshop
  • Several links around rethinking and reimagining the testing pyramid – 1, 2, 3, 4
  • The testing pendulum depicting the extremes of whether your testing is too deep or too shallow
  • A couple of links around the shift to TestOps – 1, 2
  • A section on heuristics for removing a tester
  • A couple of links on visual test strategies – 1*, 2

*Original links no longer worked so I had to google for the latest link.

And again, here’s the book’s link: https://leanpub.com/testingindevops

First 90 days in my new work as Test Manager

I received a notification that I’ve just gone through my first 90 days. And I thought: wow, that was fast! They say time flies when you’re having fun, or maybe when you’re busy, or a mix of both. I remember having a COVID scare early this year, having to self-isolate, and stressing out about completing both my exit and entry requirements amid the concern of needing to go out for some of them while the country was around the peak of its COVID cases. I wanted to fast-forward to February. And now it’s already May!

Had I written this in April, I would’ve put in “It’s gonna be May!” along with an NSYNC or Justin Timberlake photo.

So what’s been up so far, me?

  • Started in another company this 2/2/22
  • Been part of two projects so far
  • Worked on a test maturity assessment for one project
  • Collaborated with other testers for end-to-end scenarios
  • Conducted interviews, reviewed CVs
  • Had a couple or so iterations of my pool of interview questions
  • Did admin and figured out onboarding stuff for our growing team of testers here in Manila
  • Currently involved in an Agile project that’s building the front-end of a web application
  • Collaborated with an SDET, providing him with my test cases and consultation on his Gherkin scenarios
  • Worked with Mokku, one of my best friends for the past three sprints
  • Worked with WireMock
  • Reconnected with Postman
  • Tested a few things around breakpoints and time (time zones, time differences)
  • Recently received our project team’s access to BrowserStack

Outside of work, but somewhat work-related

  • Moved some of my project-notes into GitHub
  • Reread The Effective Executive, and read The Making of a Manager

It’s been a busy (a happy kind) quarter!

Totally outside of work…


Checking out WireMock

Our project is currently using WireMock as our mock server. From their documentation:

“WireMock is an HTTP mock server. At its core it is a web server that can be primed to serve canned responses to particular requests (stubbing) and that captures incoming requests so that they can be checked later (verification).”

Due to some constraints though, I’ve been using Mokku to mock the mock API to serve my own canned responses. But of course, I was still curious about WireMock. So this Good Friday, I went and checked it out. Since I was just going to have a little play around, I went for running it as a standalone process.

Step 1 was to install JDK.

Step 2 was to download the standalone JAR and then run it. Both the download link and the command to run are in the previous link for running it as a standalone process. After running it, a couple of folders get generated, one of which is the mappings folder.

Step 3 was to create a sample mapping file inside the mappings folder. After making changes, I just needed to restart by rerunning the java command. And with the sample below, I was able to get a response when I tried to access: http://localhost:8080/records.

get-records.json

{
  "request": {
    "url": "/records",
    "method": "GET"
  },
  "response": {
    "status": 200,
    "jsonBody": {
      "msg": "One response to one request"
    }
  }
}
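Since a typo in a mapping file only surfaces after a restart, it can help to sanity-check the JSON first. A minimal sketch in Python, using the example mapping above (the fields checked are just the ones this example relies on):

```python
import json

# The get-records.json mapping from above, pasted in for checking
mapping = json.loads("""
{
  "request": { "url": "/records", "method": "GET" },
  "response": {
    "status": 200,
    "jsonBody": { "msg": "One response to one request" }
  }
}
""")

# The minimum this example needs: a request matcher and a response
assert mapping["request"]["url"] == "/records"
assert mapping["response"]["status"] == 200
print("mapping looks well-formed")
```

json.loads will throw on a stray comma or unbalanced brace, which is exactly the kind of typo that’s otherwise only discovered after rerunning the java command.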

Then I tried checking out multiple responses from within the same file. For that, the request-response pairs need to be nested under a mappings array. The results for the example below are:

  • http://localhost:8080/record?name=KC gets a 200 response with the message “Viewing KC”.
  • http://localhost:8080/record?name=Mario gets an error 400.
  • If I try other name values, I get a 200 response with the message “Viewing everyone else”.

get-record.json

{
  "mappings": [
    {
      "priority": 1,
      "request":  { "url": "/record?name=KC",  "method": "GET" },
      "response": { "status": 200, "jsonBody": { "msg": "Viewing KC" } }
    },
    {
      "priority": 2,
      "request":  { "url": "/record?name=Mario",  "method": "GET" },
      "response": { "status": 400, "jsonBody": { "msg": "Error! Itsameee Mario!" } }
    },
    {
      "priority": 3,
      "request": {
        "urlPattern": "/record\\?name=.*",
        "method": "GET"
      },
      "response": { "status": 200, "jsonBody": { "msg": "Viewing everyone else" } }
    }
  ]
}
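The stub selection above can be sketched as “sort by priority, first match wins”. This is just my toy model of the behaviour for illustration (using regex matching for all three stubs), not WireMock’s actual matching code:

```python
import re

# Toy model of the three stubs above: (priority, url pattern, response)
stubs = [
    (3, r"/record\?name=.*", (200, "Viewing everyone else")),
    (1, r"/record\?name=KC", (200, "Viewing KC")),
    (2, r"/record\?name=Mario", (400, "Error! Itsameee Mario!")),
]

def match(url):
    # Lower priority number wins; the first full match is returned
    for _, pattern, response in sorted(stubs):
        if re.fullmatch(pattern, url):
            return response
    return (404, "No stub matched")

print(match("/record?name=KC"))     # (200, 'Viewing KC')
print(match("/record?name=Mario"))  # (400, 'Error! Itsameee Mario!')
print(match("/record?name=Luigi"))  # (200, 'Viewing everyone else')
```

Without the priorities, the catch-all .* pattern could shadow the two specific stubs, which is why the example JSON assigns it the largest priority number.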

Of course, these are very simple examples, but they also illustrate how feasible it is to get started with it in under an hour or so, and that you can have this playground before pushing stuff to the mock server used by the rest of the team.

Using the Mokku chrome extension

Mokku is this Chrome extension that I’ve been using this sprint. It was pretty easy to pick up and I was able to almost immediately use it for my testing. For my particular use case, I needed to confirm that the page is able to display a list of items based on the response of an API. The catch is I don’t have the usual access to a database where the data is pulled from. I also can’t use the current implementation to create test records (that part’s not yet built). And the actual API isn’t available — only a mock one so I just have that and some information on the expected JSON.

So with Mokku, I mocked the mock API to simulate other responses. Like modifying the number of records returned, changing the data around to see if certain values would render as expected, emptying the list, delaying the response so I could view the loading animation, and triggering errors.

Installing Mokku is a cinch. Just search for it in the Chrome web store and install from there.

Using Mokku is easy.

  1. Open Chrome DevTools (use it docked rather than in a separate window) and go to the Mokku tab.
  2. Go to the page you need to test.
  3. In Mokku, there’s a tab for Logs. You can look for the one that you need to mock, and click the corresponding “Mock” link.
  4. Added mocks show up under the Mocks tab. And from there you can edit.
  5. Then you just need to navigate to the page that will call the mock, and your changes should hopefully be reflected.

And then it’s pretty much repeating steps 4 and 5 for the test cases that you’d like to cover with Mokku.


P.S. I wanted to use “Gamitin Mokku” as the title. It sounds like something that would translate to “use me”. Future me might cringe or do a facepalm, and think that’s such a tita joke.

Testing responsive breakpoints

Switching between the different breakpoints is something I’ve been needing to do quite frequently. It’s part of testing our screens that are supposed to be responsive. And in conversations with the developers when discussing a particular bug, it helps if I can easily jump to the breakpoint in question. I’ll list here what I’ve been using so far: not just the ones for resizing, but all the tools that have been handy in this area.

  • Chrome DevTools a.k.a. my work bestie. I’ve added the resolutions I particularly need a lot to the list of Emulated Devices.
  • mattkersley/Responsive-Design-Testing – I downloaded the files in this repo and tweaked accordingly so that the frames correspond to the ones I need. This is primarily just for a quick view across the different resolutions. You can’t really test the functionality of the page via those iFrames.
  • JRuler – Sometimes you just need a ruler because your man-calipers (i.e., your fingers) just aren’t that accurate. It’s a portable app that’s been in my Dropbox for who knows how long. But since it’s still able to do what it’s supposed to, I haven’t felt compelled to look up a replacement.

That’s about it. Maybe I’ll just edit this post in case I come across other tools for responsive web testing while I’m in my current project.

TIL Zeigarnik effect

I attended a knowledge sharing session on behavioral science last week (ok, so technically, it’s not “TIL”… but as of that time, I thought “TIL!”). It was particularly in the context of designing products and services. It touched on concepts—some I’ve come across before, some new. And among the new, what piqued my interest the most was the…

Zeigarnik effect:
not finishing a task creates mental tension, which keeps it at the forefront of our memory

I guess it struck a chord because I was dealing with something that I felt I shouldn’t still be dealing with had someone done their work more promptly (which, I later found out, should be corrected to “had someone done their work more correctly”). Anyways, I feel that mental tension, that extra cognitive load, brought about by that unfinished thing that you have to remember.

It also made me think about some agile stuff, particularly limiting WIP. In limiting WIP, the team ideally works within its capacity and prioritizes completing work over starting new work, so that they aren’t juggling so many open items at a time. There’s also lead time, which is the time between when work is started and when it’s completed. The longer the lead time (say, from when you start discussing a possible feature to when that feature actually gets implemented), the longer you have to remember the details around that feature. The longer you have to remember stuff and the more stuff you need to remember: both are mentally taxing, especially when the work piles up.

Then there’s also the definition of done. This is part of the team’s working agreements, wherein you align on what has to be completed to consider a story as “Done.” And it gives some sort of comfort in knowing that once a story is done in a Sprint, you have closure, and you can move on to other stories without having to worry about that story coming back to life and haunting you.

Anyways, I haven’t really read up much on the subject. It was just one of the concepts mentioned in the session, and I just find it interesting how some of those agile practices or concepts help combat the mental tension that is the Zeigarnik effect.