Lightweight QA Strategies For Startups That Care About Quality

Reading Time: 9 minutes

Note: I originally posted this article on 11Sigma blog.

This article is "part two" of two articles about setting a QA process in a startup. I recommend reading part 1 first: Setting a QA process in a startup from scratch.

Spoiler alert: references to the last Harry Potter book 😉

Define QA role, QA goals, and a vision

When I started setting up QA, we already had some automated e2e tests and a test management system in place. Other than that, we had no QA process at all and didn't have definite answers to most of the questions from part 1.

Once I established the baseline, the first thing I did was put together short-, mid-, and long-term visions. I knew that the long-term vision should be as broad as possible because everything might change before I had a chance to get to it.

Roughly speaking, the short-term (1 month) plan assumed we would cover all existing features with manual test cases. At the same time, we started doing on-schedule, monthly releases, each followed by a QA hardening period.

The long-term vision was all about Continuous Delivery, and integrating QA in multiple aspects of the software development lifecycle across the organization.

We defined four main QA goals:

  • Increase the product's quality, stability, and our confidence
  • Reduce the QA effort (cost)
  • Define QA processes and best practices
  • Improve the release process

Each of the above goals has measurable objectives. For example, one of the aims of the first goal is:

Reduce the number of bugs in production (especially critical & blocker).

The 3F Technique: Focus, Focus, Focus

Startups have to operate at a high pace with drastically limited resources. Working efficiently in such an environment requires knowing precisely what to do. Otherwise, you are wasting time and potentially risking the company's failure.

When starting to build a QA process in a startup, identify all "low-hanging fruit" first. Don't waste time building complex systems. Deal with what's most pressing and can be done fast. Aim for quick, powerful wins. Whenever possible, reduce the number of things you do until you can actually take care of them.

If, for example, you're struggling with on-time releases (which puts contracts with your customers at risk), focus on shipping your product on schedule. Find a way to minimize the effort of assuring the release's quality while achieving the best outcome possible.

Here is a bunch of ideas for cutting corners in the first month:

  • reduce the number of browsers you test (if possible, focus on the most used ones - see the config sketch after this list)
  • focus efforts on automating critical flows or building a manual test plan - not both
  • ditch all automation that takes too much time to implement or maintain - focus on the critical ones
  • test wide vs. deep - don't bother testing dozens of corner cases - you will have time for that later
  • if you have many products but few people, test only critical products - that will help you build processes and experience to scale to other products later
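
As an illustration of the first point, one cheap way to cut the browser matrix is to configure your test runner to target a single engine at first. Here is a minimal sketch assuming Playwright as the runner; the project names come from Playwright's defaults, and nothing else is specific to any particular product:

```ts
// playwright.config.ts - run e2e tests against the most-used browser only (for now).
// A sketch assuming Playwright; re-enable the other engines once the basics are stable.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    // { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    // { name: 'webkit',  use: { ...devices['Desktop Safari'] } },
  ],
});
```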

Humans & Robots

When it comes to testing the application, there are - pretty much - two approaches:

  • manual testing
  • automated testing

Discussing the pros and cons of each is a topic for a separate article, so let me briefly touch on the matter.

Choosing what works for you will obviously depend on your team structure (if no one on your crew can write code, automation is not an option). Another aspect to consider is that manual test cases are easy to write but slow to execute, while automated tests are harder to write but faster to execute. You need to weigh those two trade-offs.

Focusing on one of the approaches first makes a lot of sense when you begin.

However, you may later want to mix the two.

If I were starting a QA process entirely from scratch, I would recommend writing several automated test cases first. Once you get critical paths covered (logging in, signing up, etc.), I recommend you start using a Test Management System (TMS) to track manual cases additionally.
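
To make that concrete, here is a minimal sketch of a "critical path" test, assuming Playwright as the e2e framework; the URL, labels, and credentials are placeholders, not a real application:

```ts
// login.spec.ts - the first "critical path" e2e test: logging in.
// A sketch assuming Playwright; the URL, selectors, and test account are placeholders.
import { test, expect } from '@playwright/test';

test('user can log in with valid credentials', async ({ page }) => {
  await page.goto('https://app.example.com/login');
  await page.getByLabel('Email').fill('qa-user@example.com');
  await page.getByLabel('Password').fill('correct-horse-battery-staple');
  await page.getByRole('button', { name: 'Log in' }).click();

  // The assertion that matters: the user ends up on the authenticated welcome page.
  await expect(page).toHaveURL(/\/welcome/);
});
```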

There are good reasons to use a TMS. To name a few:

  • adding manual test cases to automate when you have the time
  • calculating test coverage
  • "documenting" existing features
  • designing release test plans
  • performing manual (and automated) test runs
  • monitoring regressions

Hint: choose a system that allows you to document your features and test cases fast. Simplicity is key.

I, Robot - notes on automation

The first excitement of witnessing your tests run automatically is priceless. Seeing pages flicker "magically" before your very eyes feels powerful. Over time, though, that excitement wears off.

Usually, automating the first couple of test cases is brutally simple. When choosing an automation framework, you will read about "how fast and easy" writing tests is. Marketing pages will show you sped-up screencasts of code editors where writing a test takes seconds.

Don't worry, though, you will smash into reality very soon, and you will be able to adjust your strategy.

Three main problems you will face are:

  • test flakiness (don't trust frameworks that say they are flake-free)
  • test speed
  • seeding app state

Test flakiness is a common problem. If you don't know what that is: it's when your tests fail intermittently and give false negatives. For example, they fail because a DOM element you tried clicking was detached from the DOM for a split second. A user would never notice it, but your test will randomly fail.
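
To make that failure mode concrete, here is a sketch (again assuming Playwright, with made-up selectors) of a click that can flake when the app re-renders, and a retry-based way to harden it:

```ts
import { test, expect } from '@playwright/test';

test('saving a document survives a momentary re-render', async ({ page }) => {
  await page.goto('https://app.example.com/editor');

  // Flaky pattern: an element handle captured here can be detached from the DOM
  // by the time click() runs, if the app re-renders in between:
  //   const handle = await page.$('button.save');
  //   await handle?.click(); // may throw "element is detached from the DOM"

  // More resilient: locators re-query the DOM on every attempt, and toPass()
  // retries the whole interaction until it succeeds or the timeout is reached.
  await expect(async () => {
    await page.getByRole('button', { name: 'Save' }).click();
    await expect(page.getByText('Saved')).toBeVisible();
  }).toPass({ timeout: 10_000 });
});
```

Retries like this are a band-aid rather than a cure - they hide the symptom while you track down the underlying race - but they keep the suite trustworthy in the meantime.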

Reproducing and fixing flaky tests can take hours. They can be the result of many weird problems, including:

  • stale DOM elements
  • database deadlocks
  • app's routing races
  • app's worker races
  • framework bugs

I do not mean to discourage you from automating tests. I'm telling you that so that you can make an educated decision on how much time to spend automating vs. manual testing.

Flaky and slow tests can cause distrust in QA. You may think it's just a technical issue to overcome. However, if your tests fail regularly or are slow, they will "get in the way" and become a blocker more than a helper. That will cause frustration among other engineers. Be aware of that and prioritize fixing flaky tests.

Seeding your state may be non-trivial. When testing the UI and isolating features, you may want to set the app to a particular state programmatically. For example, there is no reason to use the UI to log in for every single test case that requires an authenticated user. You can test that flow once, then use your API to log in everywhere else. Depending on your codebase, you may not have public APIs, or they may not be documented.
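
Here is a sketch of what "use your API to log in" could look like, assuming Playwright and a hypothetical /api/login endpoint; the endpoint, payload, and token storage are placeholders you'd adapt to your own app:

```ts
import { test, expect } from '@playwright/test';

// Log in through a (hypothetical) API endpoint instead of driving the login UI
// in every test that merely needs an authenticated user.
test.beforeEach(async ({ page, request }) => {
  const response = await request.post('https://app.example.com/api/login', {
    data: { email: 'qa-user@example.com', password: 'correct-horse-battery-staple' },
  });
  expect(response.ok()).toBeTruthy();
  const { token } = await response.json();

  // Seed the authenticated state directly; adjust to however your app actually
  // stores sessions (a cookie is just as common as localStorage).
  await page.addInitScript((t) => localStorage.setItem('auth_token', t), token);
});

test('authenticated user sees their projects', async ({ page }) => {
  await page.goto('https://app.example.com/projects');
  await expect(page.getByRole('heading', { name: 'Projects' })).toBeVisible();
});
```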

I recommend setting up at least one type of seeding in the beginning: resetting your database. Write a script that wipes the DB so that every test case starts from a clean slate.
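
A sketch of such a script, assuming a Postgres test database and the node-postgres (pg) client; the connection string, schema, and migration step are placeholders:

```ts
// reset-db.ts - wipe the test database so every run starts from a clean slate.
// A sketch assuming Postgres and the 'pg' client; adapt to your own stack.
import { Client } from 'pg';

async function resetDatabase(): Promise<void> {
  const client = new Client({ connectionString: process.env.TEST_DATABASE_URL });
  await client.connect();
  try {
    // Dropping and recreating the schema is usually faster than truncating
    // tables one by one, and it also removes leftover test artifacts.
    await client.query('DROP SCHEMA IF EXISTS public CASCADE');
    await client.query('CREATE SCHEMA public');
    // Re-apply migrations here, e.g. by shelling out to your migration tool.
  } finally {
    await client.end();
  }
}

resetDatabase().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Run it before the whole suite (or before each spec file, if your tests can't share data) from your test runner's global setup.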

Test Cases Kon-Mari

The first couple of test cases you write are dead-simple. They'll probably be "logging in" and similarly trivial workflows.

Eventually, though, you will get to a point where organizing tests becomes complex. For instance, you may become quite confused about where to add a new test case and whether you have covered something already.

You don't even need to have 1000 test cases to struggle with this. At 100, you will already spot the problem.

It's the typical "how to organize stuff" problem.

There are several ways you can approach this. Each has pros and cons.

Keep it flat

Just don't organize at all, or only very lightly. It works in the beginning, when you have ten tests or so. Ideal when you start.

Organize by page and components

That works until you realize that some of your tests span across multiple pages.

For example:

  • Login Page
    • As a user, log into the system with a correct email and password
    • As a user, log into the system with an incorrect email and password
  • Editor Page
    • Model Editor Component
      • As a logged-in user, use the JSON Schema Editor to create an API model
    • Sidebar Component
      • As a logged-in user, use the Sidebar to navigate to an API

This structure becomes flawed rather quickly. Your components will be reused between several pages, so nesting them under a specific page will not make sense.

You can imagine you are reusing the top navigation component on every single page. It wouldn't make sense to nest it under a single page.

You can also imagine that some tests will start on page A, but the expected results can be observed on page B. Even something as simple as logging in starts at the login screen but ends at the user's welcome page.

Besides, things don't even have to start on a particular page. A server event can trigger them, and the result can be observable on multiple pages at the same time.

To sum up: this is the right approach when you're first trying to understand "where" things are, but it won't scale.

Organize by features

Compared to the previous, this focuses on something more abstract. For example, a feature can be:

  • Designing OpenAPI Documents
  • Creating an organization workspace
  • Allowing users from a particular domain to sign up

That is a compelling strategy. It doesn't organize your test cases "structurally" (like the "per page" approach) and is very flexible.

Your test cases can span multiple pages, and you don't need to care about components. If you think about it from the user's point of view, they don't care about components either. They care about the value they get.

One caveat here is that it's sometimes tricky to decide whether something is an independent feature or just a case of an existing feature.

Imagine your application allows adding new text files via the UI's top navigation menu.

You have covered that with a test case, and the feature sounds like "Adding text files".

As your product continues to grow, engineers add new ways to create a file:

  • via the sidebar
  • via push events from the server
  • via cloning git repositories, etc

All those operations result in creating text files. The trick here is to draw a line between a feature and a "case".

Organize by flows

There are situations where organizing your tests by specific features is too limiting. I'm sure you can think of scenarios where a user must go through multiple steps before reaching their goal. Additionally, those steps shouldn't be tested in isolation, because each one is fragile and critical to the result.

For instance, a user may want to clone a git repository, open it in an editor, verify that files were cloned correctly, go back to the previous page, and open the project again. It's a longer and more realistic "flow" than only testing each step in isolation.
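
That flow, written as a single test, could look roughly like the sketch below (Playwright assumed; every URL, selector, and repository name is invented):

```ts
import { test, expect } from '@playwright/test';

// One "flow" test covering several features in sequence: clone a repository,
// open it in the editor, verify the files, go back, and reopen the project.
test('clone a git repository and reopen the project', async ({ page }) => {
  await page.goto('https://app.example.com/projects');

  await page.getByRole('button', { name: 'Clone repository' }).click();
  await page.getByLabel('Repository URL').fill('https://github.com/example/demo.git');
  await page.getByRole('button', { name: 'Clone' }).click();

  // Cloning should land us in the editor with the repository files visible.
  await expect(page).toHaveURL(/\/editor/);
  await expect(page.getByText('README.md')).toBeVisible();

  // Go back to the project list and open the same project again.
  await page.goBack();
  await page.getByRole('link', { name: 'demo' }).click();
  await expect(page.getByText('README.md')).toBeVisible();
});
```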

It's easier to reason about it when you think about extremes. On one extreme, the longest possible flow would be walking through the entire application and using all existing features. On the other extreme, a minimal flow could be opening a single page and verifying that a single component renders.

Organizing by flows is a good trade-off when you first try to identify areas to test. Be careful not to go overboard, though.

Conclusion

The organization you use depends on the size of your project, documentation, dependencies between components, etc.

In short, the relations between the presented approaches are:

  • A flow may span one or many pages and may test one or many features.
  • A page may be the source of one or many features.
  • A feature may span one or many pages.
  • A feature might be built using one or many components.
  • A component may be reused in one or many features.

We use a mix of those approaches. We first started with a couple of well-defined flows and then began to automate testing features in isolation. To ensure we achieve the highest cost/impact ratio, we focus on covering happy paths first (also known as "wide and shallow" testing).

If you can assure that all happy paths work as expected, you're on your way to establishing a high quality for the majority of users.

You can approach corner cases reactively - whenever a new bug is reported and fixed, write an e2e test or let engineers cover it with unit tests.
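
For example, a reactive regression test might look like the sketch below; the bug, ticket reference, and selectors are invented purely for illustration:

```ts
import { test, expect } from '@playwright/test';

// Regression test added after a (hypothetical) bug was reported and fixed:
// renaming a file used to wipe the editor buffer.
test('renaming a file keeps its contents (hypothetical BUG-123)', async ({ page }) => {
  await page.goto('https://app.example.com/editor');

  await page.getByText('notes.txt').click({ button: 'right' });
  await page.getByRole('menuitem', { name: 'Rename' }).click();
  await page.getByRole('textbox', { name: 'File name' }).fill('meeting-notes.txt');
  await page.keyboard.press('Enter');

  // Before the fix, the rename cleared the buffer; the contents must survive.
  await expect(page.getByRole('textbox', { name: 'Editor' })).not.toBeEmpty();
});
```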

Establish "any" QA process

Heads up! If you are managing the QA efforts in your organization, prepare yourself for coordinating many departments. Keep in mind that each unit in the organization will have its own ideas, requirements, limitations, and constraints. It will take time - embrace the process.

You will need to persevere, have a solid vision, and be able to negotiate, articulate your ideas well, and take the lead on most of the things you want to achieve - but these are general qualities you need to have as a leader.

There may be many problems that you will need to resolve (either on your own or with the help of other departments). They may include:

  • establish release dates, frequency, steps
  • figure out product versioning strategy and implementation
  • build continuous integration
  • set up a testing environment (servers, builds, etc.)

That's a lot to tackle. Start simple.

A quick win that builds your team's credibility is spotting (and preventing) regressions before they reach production.

Here is a step-by-step example of a simple process you can implement first. We will assume you focus on manual testing, but similar steps can apply to automation.

  1. Identify major existing features in the system.
  2. Write test cases covering critical flows.
  3. Agree on the next release date and negotiate a hardening period (e.g., a week).
  4. On "day one" of hardening, freeze adding new features.
  5. When hardening, test all "net-new" features first. Report any regression immediately to give engineers time to fix it.
  6. Run the regression test plan for existing features.
  7. Once all tests pass (might be after several rounds), give the green light for release.
  8. After releasing, publish a simple report with the number of prevented regressions, new features, performed test cases, etc.

The actual steps you implement will vary, of course. We did something a bit more involved than the above, but our goals were the same. The results were very positive: we released on time, prevented regressions, and increased confidence in releases.

Wrapping it all up

Introducing a QA process at a startup is possible, and (at some point) necessary.

If you work at a small startup, you will need support from the CEO. The business needs to understand why it is spending money on quality. We were lucky enough to get fantastic support from the management.

In retrospect, I think the most significant challenges are finding your role/place in the organization, and adjusting other people's workflows.

Communicating your goals and process across the board, and ensuring things work the way you planned, is hard. QA is all about teamwork. You cannot do it alone.

You will need a lot of patience and perseverance. I believe we live in a world where people often take high quality for granted. If you are a startup, your early adopters will have a higher tolerance for defects. The competition is fierce, though.

As new tools help us build exceptional products faster than ever, customers' expectations will inevitably rise. The bar is set high, but the reward is great. Keep on creating excellent products and keep the quality high. Or, in other words: don't ship crap!
