Miklós Szeles

Posted on • Originally published at mszeles.com

How Google Tests Software - What can we learn from Google? - A book extract by Miki Szeles

By documenting much of the magic of Google's test engineering practices, the authors have written the equivalent of the Kama Sutra for modern software testing. (Alberto Savoia, Engineering Director, Google)

I've just finished reading the book "How Google Tests Software" by James A. Whittaker, Jason Arbon, and Jeff Carollo, and I have to say it is awesome. It provides so many insights into how Google tests software.

Even though the book is more than 10 years old, I could still learn a lot from it. I have collected the parts that were most relevant to me, both to share with you and to reinforce my own learning, and I have also summarized my takeaways to share with my team and my company.

Extracts

The foreword

One foreword is written by Alberto Savoia:

Writing a foreword for a book you wish you had written yourself is a dubious honor; it's a bit like serving as best man for a friend who is about to spend the rest of his life with the girl you wanted to marry.

[I] first joined Google as engineering director in 2001. At the time, we had about two hundred developers and...a whopping three testers!

Another foreword is written by Patrick Copeland:

[I] joined Google when engineering was just shy of 1,000 people. The testing team had about 50 full timers and some number of temporary workers I never could get my head around. The team was called “Testing Services” and focused the majority of its energy on UI validation and jumping into projects on an as-needed basis. As you might imagine, it wasn’t exactly the most glorified team at Google.

In all my interactions up to this point, one thing about Google was clear. It respected computer science and coding skill. Ultimately, if testers were to join this club, they would have to have good computer science fundamentals and some coding prowess. First-class citizenship demanded it.

The only way a team can write quality software is when the entire team is responsible for quality. That meant product managers, developers, testers...everyone. From my perspective, the best way to do this was to have testers capable of making testing an actual feature of the code base. The testing feature should be equal to any feature an actual customer might see. The skill set I needed to build features was that of a developer.

Developers had the attitude that if we were going to hire people skilled enough to do feature development, then we should have them do feature development. Some of them were so against the idea that they filled my manager’s inbox with candid advice on how to deal with my madness. Fortunately, my manager ignored those recommendations. Testers, to my surprise, reacted similarly. They were vested in the way things were and quick to bemoan their status, but slow to do anything about it. My manager’s reaction to the complaints was telling: “This is Google, if you want something done, then do it.”

Preface

Software development is hard. Testing that software is hard, too.

As it stands today, Google testers are paid on the same scale as developers with equivalent bonuses and promotion velocity.

From the preface, we can learn that there are two very important roles in testing at Google: the Software Engineer in Test (SET) and the Test Engineer (TE).

About the authors

James Whittaker is an engineering director at Google and has been responsible for testing Chrome, maps, and Google web apps. He used to work for Microsoft and was a professor before that. James is one of the best-known names in testing the world over.

Jason Arbon is a test engineer at Google and has been responsible for testing Google Desktop, Chrome, and Chrome OS. He also served as development lead for an array of open-source test tools and personalization experiments. He worked at Microsoft prior to joining Google.

Jeff Carollo is a software engineer in test at Google and has been responsible for testing Google Voice, Toolbar, Chrome, and Chrome OS. He has consulted with dozens of internal Google development teams helping them improve initial code quality. He converted to a software engineer in 2010 and leads development of Google+ APIs. He also worked at Microsoft prior to joining Google.

Chapter 1. Introduction to Google Software Testing

At Google, software testing is part of a centralized organization called Engineering Productivity that spans the developer and tester tool chain, release engineering, and testing from the unit level all the way to exploratory testing.

Engineering Productivity was rocketing from a couple of hundred people to the 1,200 it has today.

Google testers are willing to try anything once but are quick to abandon techniques that do not prove useful.

At times, testing is interwoven with development to the point that the two practices are indistinguishable from each other, and at other times, it is so completely independent that developers aren’t even aware it is going on.

Testing must not create friction that slows down innovation and development. At least it will not do it twice.

Google Test is no million-man army. We are small and elite Special Forces that have to depend on superior tactics and advanced weaponry to stand a fighting chance at success.

The first piece of advice I give people when they ask for the keys to our success: Don’t hire too many testers.

If you are an engineer, you are a tester. If you are an engineer with the word test in your title, then you are an enabler of good testing for those other engineers who do not.

Quality ≠ Test

Although it is true that quality cannot be tested in, it is equally evident that without testing, it is impossible to develop anything of quality.

Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.

If a product breaks in the field, the first point of escalation is the developer who created the problem, not the tester who didn’t catch it. This means that quality is more an act of prevention than it is detection. Quality is a development issue, not a testing issue.

Testing must be an unavoidable aspect of development, and the marriage of development and testing is where quality is achieved.

At Google, we have created roles in which some engineers are responsible for making other engineers more productive and more quality-minded. These engineers often identify themselves as testers, but their actual mission is one of productivity.

That’s right, if a SWE has to modify a function and that modification breaks an existing test or requires a new one, they must author that test. SWEs spend close to 100 percent of their time writing code.

The software engineer in test (SET) is also a developer role, except his focus is on testability and general test infrastructure. SETs review designs and look closely at code quality and risk. They refactor code to make it more testable and write unit testing frameworks and automation. They are a partner in the SWE codebase, but are more concerned with increasing quality and test coverage than adding new features or increasing performance. SETs also spend close to 100 percent of their time writing code, but they do so in service of quality rather than coding features a customer might use.

The test engineer (TE) is related to the SET role, but it has a different focus. It is a role that puts testing on behalf of the user first and developers second. Some Google TEs spend a good deal of their time writing code in the form of automation scripts and code that drives usage scenarios and even mimics the user. They also organize the testing work of SWEs and SETs, interpret test results, and drive test execution, particularly in the late stages of a project as the push toward release intensifies. TEs are product experts, quality advisers, and analyzers of risk. Many of them write a lot of code; many of them write only a little.

Clearly, an SET’s primary focus is on the developer. Individual feature quality is the target and enabling developers to easily test the code they write is the primary focus of the SET. User-focused testing is the job of the Google TE.

Testers are essentially on loan to the product teams and are free to raise quality concerns and ask questions about functional areas that are missing tests or that exhibit unacceptable bug rates.

The on-loan status of testers also facilitates movement of SETs and TEs from project to project, which not only keeps them fresh and engaged, but also ensures that good ideas move rapidly around the company. A test technique or tool that works for a tester on a Geo product is likely to be used again when that tester moves to Chrome. There’s no faster way of moving innovations in test than moving the actual innovators. It is generally accepted that 18 months on a product is enough for a tester and that after that time, he or she can (but doesn’t have to) leave without repercussion to another team. One can imagine the downside of losing such expertise, but this is balanced by a company full of generalist testers with a wide variety of product and technology familiarity.

One of the key ways Google achieves good results with fewer testers than many companies is that we rarely attempt to ship a large set of features at once. In fact, the exact opposite is the goal: Build the core of a product and release it the moment it is useful to as large a crowd as feasible, and then get their feedback and iterate.

Google often builds the “minimum useful product” as an initial version and then quickly iterates successive versions allowing for internal and user feedback and careful consideration of quality with every small step. Products proceed through canary, development, testing, beta, and release channels before making it to users.

Instead of distinguishing between code, integration, and system testing, Google uses the language of small, medium, and large tests

Small tests cover a single unit of code in a completely faked environment. Medium tests cover multiple and interacting units of code in a faked or real environment. Large tests cover any number of units of code in the actual production environment with real and not faked resources.

The general rule of thumb is to start with a rule of 70/20/10: 70 percent of tests should be small, 20 percent medium, and 10 percent large. If projects are user-facing, have a high degree of integration, or complex user interfaces, they should have more medium and large tests. Infrastructure or data-focused projects such as indexing or crawling have a very large number of small tests and far fewer medium or large tests.
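The book does not prescribe any particular tooling for these labels, but the idea maps naturally onto test tagging in most frameworks. Here is a minimal JUnit 5 sketch of my own (the class and test names are made up, and this is not Google's actual mechanism) showing how a team could mark tests as small, medium, and large:

```java
import java.net.URI;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class TestSizeExamples {

    // Small: a single unit of code, pure in-memory logic, no network, disk, or database.
    @Tag("small")
    @Test
    void parsesHostFromUrl() {
        assertEquals("example.com", URI.create("https://example.com/path").getHost());
    }

    // Medium: multiple interacting units, typically against local fakes
    // (e.g. an in-process HTTP server) rather than real remote services.
    @Tag("medium")
    @Test
    void clientAndFakeServerInteract() {
        // ...start an in-process fake server and exercise the client against it...
    }

    // Large: end-to-end against real resources; kept rare per the 70/20/10 rule of thumb.
    @Tag("large")
    @Test
    void endToEndScenario() {
        // ...drive the deployed system through a real browser or public API...
    }
}
```

The tags then let the build run the fast small tests on every change and schedule the slower medium and large ones less frequently.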

Whatever a test’s size, Google’s test execution system requires the following behavior. It is, after all, a shared environment:

• Each test must be independent from other tests so that tests can be executed in any order.

• Tests must not have any persistent side effects. They must leave their environment exactly in the state it was in when they started.
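To make these requirements concrete, here is a small JUnit 5 sketch of my own (not code from the book) of a hermetic, order-independent test: each test builds its own state in a temporary directory and leaves nothing behind.

```java
import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

class SettingsFileTest {

    // No static fields: nothing leaks from one test (or one ordering) to the next.

    @Test
    void savedValueCanBeReadBack(@TempDir Path dir) throws Exception {
        // Each test gets its own scratch directory, deleted by JUnit afterwards,
        // so the environment is left exactly as it was found.
        Path file = dir.resolve("settings.txt");
        Files.writeString(file, "threshold=42");

        assertEquals("threshold=42", Files.readString(file));
    }

    @Test
    void missingFileIsAbsent(@TempDir Path dir) {
        // Builds its own state from scratch, so it can run in any order,
        // or in parallel with the test above.
        assertFalse(Files.exists(dir.resolve("settings.txt")));
    }
}
```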

The primary driver of what gets tested and how much is a very dynamic process and varies from product to product.

Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated.

Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation.

We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors, and files a bug automatically.
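Google's internal tooling for this is not public, so here is only a rough sketch of the idea using a JUnit 5 extension. The BugTracker and VersionControl interfaces below are hypothetical stand-ins for whatever bug tracker and VCS tooling a team actually has; they are not real libraries.

```java
import java.util.List;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Sketch only: file a bug automatically when an automated test fails and
// point it at the authors of the most recent changes.
public class FileBugOnFailure implements TestWatcher {

    private final BugTracker bugTracker;
    private final VersionControl vcs;

    public FileBugOnFailure(BugTracker bugTracker, VersionControl vcs) {
        this.bugTracker = bugTracker;
        this.vcs = vcs;
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        String testName = context.getDisplayName();

        // Guess the most likely culprits: authors of changes since the last green run.
        List<String> suspects = vcs.changeAuthorsSinceLastGreenBuild();

        // File the bug automatically; a real system would also email the suspects.
        bugTracker.fileBug(
                "Automated test broke: " + testName,
                "Failure: " + cause + "\nSuspected changes by: " + suspects);
    }

    // Minimal hypothetical interfaces so the sketch is self-contained.
    interface BugTracker { void fileBug(String title, String description); }
    interface VersionControl { List<String> changeAuthorsSinceLastGreenBuild(); }
}
```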

Chapter 2. The Software Engineer in Test

There is a different kind of thinking involved in writing feature code and writing test code.

Google SWEs are feature developers. Google SETs are test developers. Google TEs are user developers.

In the early days of any company, testers generally don't exist. Neither do PMs, planners, release engineers, system administrators, or any other role. Every employee performs all of these roles as one.

• All engineers must reuse existing libraries, unless they have very good reason not to based on a project-specific need.

• All shared code is written first and foremost to be easily located and readable. It must be stored in the shared portion of the repository so it can be easily located. Because it is shared among various engineers, it must be easy to understand. All code is treated as though others will need to read or modify it in the future.

• Shared code must be as reusable and as self-contained as possible. Engineers get a lot of credit for writing a service that is picked up by multiple teams. Reuse is rewarded far more than complexity or cleverness.

• Dependencies must be surfaced and impossible to overlook. If a project depends on shared code, it should be difficult or impossible to modify that shared code without engineers on dependent projects being made aware of the changes.

• If an engineer comes up with a better way of doing something, he is tasked with refactoring all existing libraries and assisting dependent projects to migrate to the new libraries. Again, such benevolent community work is the subject of any number of available reward mechanisms.

• Code in the shared repository has a higher bar for testing (we discuss this more later).

Keeping it simple and uniform is a specific goal of the Google platform: a common Linux distribution for engineering workstations and production deployment machines; a centrally managed set of common, core libraries; a common source, build, and test infrastructure; a single compiler for each core programming language; language independent, common build specification; and a culture that respects and rewards the maintenance of these shared resources.

A typical Google product is a composition of many services and the goal is to have a 1:1 ratio between a SWE and a service on any given product team. This means that each service can be constructed, built, and tested in parallel and then integrated together in a final build target once they are all ready. To enable dependent services to be built in parallel, the interfaces that each service exposes are agreed on early in the project.

Who Are These SETs Anyway?

SETs are the engineers involved in enabling testing at all levels of the Google development process we just described. SETs are Software Engineers in Test. First and foremost, SETs are software engineers and the role is touted as a 100 percent coding role in our recruiting literature and internal job promotion ladders.

Test is just another feature of the application, and SETs are the owner of the testing feature.

SETs sit side by side with feature developers (literally, it is the goal to have SWEs and SETs co-located). It is fair to characterize test as just another feature of the application and SETs as the owner of the testing feature. SETs also participate in reviewing code written by SWEs and vice versa.

Focusing on quality before a product concept is fully baked and determined to be feasible is an exercise in misplaced priorities. Many of the early prototypes that we have seen come from Google 20 percent efforts end up being redesigned to the point that little of the original code even exists by the time a version is ready for dogfood or beta. Clearly testing in this experimental context is a fool’s errand.

Google rarely creates projects as a big bang event where months of planning (which would include quality and test) are followed by a large development effort. Google projects are born much less formally.

No project gets testing resources as some right of its existence. The onus is on the development teams to solicit help from testers and convince them that their project is exciting and full of potential.

SWEs often get caught up in the code they are writing and generally that code is a single feature or perhaps even smaller in scope than that. SWEs tend to make decisions optimized for this local and narrow view of a product. A good SET must take the exact opposite approach and assume not only a broad view of the entire product and consider all of its features but also understand that over a product’s lifetime, many SWEs will come and go and the product will outlive the people who created it.

Every project at Google has a primary design doc. It is a living document, which means that it evolves alongside the project it describes. In its earliest form, a design doc describes the project’s objective, background, proposed team members, and proposed designs. During the early phase, the team works together to fill in the remaining relevant sections of the primary design doc. For sufficiently large projects, this might involve creating and linking to smaller design docs that describe the major subsystems. By the end of the early design phase, the project’s collection of design documents should be able to serve as a roadmap of all future work to be done. At this point, design docs might undergo one or more reviews by other tech leads in a project’s domain. Once a project’s design docs have received sufficient review, the early phase of the project draws to a close and the implementation phase officially begins.

As SETs, we are fortunate to join a project during its early phase. There is significant high-impact work to be done here. If we play our cards right, we can simplify the life of everyone on the project while accelerating the work of all those around us. Indeed, SETs have the one major advantage of being the engineer on the team with the broadest view of the product. A good SET can put this breadth of expertise to work for the more laser-focused developers and have impact far beyond the code they write. Generally broad patterns of code reuse and component interaction design are identified by SETs and not SWEs. The remainder of this section focuses on the high-value work an SET can do during the early phase of a project.

A good SET is eager to review such documents and proactively volunteers his time to review documents written by the team and adds quality or reliability sections as necessary. Here are several reasons why:

•An SET needs to be familiar with the design of the system she tests (reading all of the design docs is part of this) so being a reviewer accomplishes both SET and SWE needs.

• Suggestions made early on are much more likely to make it into the document and into the codebase, increasing an SET’s overall impact.

• By being the first person to review all design documents (and thus seeing all their iterations), an SET’s knowledge of the project as a whole will rival that of the tech lead’s knowledge.

• It is a great chance to establish a working relationship with each of the engineers whose code and tests the SET will be working with when development begins.

A good SET is purposeful during his review. Here are some things we recommend:

• Completeness: Identify parts of the document that are incomplete or that require special knowledge not generally available on the team, particularly to new members of the team. Encourage the document's author to write more details or link to other documentation that fills in these gaps.

• Correctness: Look for grammar, spelling, and punctuation mistakes; this is sloppy work that does not bode well for the code they will write later. Don’t set a precedent for sloppiness.

• Consistency: Ensure that wording matches diagrams. Ensure that the document does not contradict claims made in other documents.

• Design: Consider the design proposed by the document. Is it achievable given the resources available? What infrastructure does it propose to build upon? (Read the documentation of that infrastructure and learn its pitfalls.) Does the proposed design make use of that infrastructure in a supported way? Is the design too complex? Is it possible to simplify? Is it too simple? What more does the design need to address?

• Interfaces and protocols: Does the document clearly identify the protocols it will use? Does it completely describe the interfaces and protocols that the product will expose? Do these interfaces and protocols accomplish what they are meant to accomplish? Are they standard across other Google products? Can you encourage the developer to go one step further and define his protocol buffers? (We discuss more about protocol buffers later.)

• Testing: How testable is the system or set of systems described by the document? Are new testing hooks required? If so, ensure those get added to the documentation. Can the design of the system be tweaked to make testing easier or use pre-existing test infrastructure? Estimate what must be done to test the system and work with the developer to have this information added to the design document.

Documenting interfaces and protocols at Google is easy for developers as it involves writing code! Google's protocol buffer language is a language-neutral, platform-neutral, extensible mechanism for serializing structured data—think XML, but smaller, faster, and easier. A developer defines how data is to be structured once, in the protocol buffer language, and then uses generated source code to read and write the structured data from and to a variety of data streams in a variety of languages (Java, C++, or Python). Protocol buffer source is often the first code written for a new project. It is not uncommon to have design docs refer to protocol buffers as the specification of how things will work once the full system is implemented.
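As a simplified illustration of what such first code might look like (the message and its fields are made up for this post, not taken from the book), a small .proto definition could be:

```proto
// search_request.proto - a hypothetical message definition.
// From this single file, protoc generates Java, C++, and Python classes
// for reading and writing SearchRequest messages.
syntax = "proto3";

package demo.search;

option java_package = "com.example.demo.search";

message SearchRequest {
  string query = 1;           // the user's query text
  int32 page_number = 2;      // which result page to return
  int32 results_per_page = 3; // how many results per page
}
```

Running protoc on this file produces the classes that the rest of the system, and its tests, can build against, which is why such definitions can serve as the project's interface specification.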

Automation Planning

A SET's time is limited and spread thinly and it is a good idea to create a plan for automating testing of the system as early as possible and to be practical about it. Designs that seek to automate everything end-to-end all in one master test suite are generally a mistake.

At Google, SETs take the following approach. We first isolate the interfaces that we think are prone to errors and create mocks and fakes (as described in the previous section) so that we can control the interactions at those interfaces and ensure good test coverage.
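As a small illustration of that approach (my own sketch, not code from the book): put the error-prone boundary behind an interface and substitute an in-memory fake in tests, so the interaction at that interface is fully under the test's control.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class UserGreeterTest {

    // The error-prone boundary (say, a remote user service) sits behind an interface...
    interface UserDirectory {
        Optional<String> displayNameFor(String userId);
    }

    // ...so tests can use an in-memory fake and control every interaction at that boundary.
    static class FakeUserDirectory implements UserDirectory {
        private final Map<String, String> names = new HashMap<>();
        void add(String id, String name) { names.put(id, name); }
        @Override public Optional<String> displayNameFor(String id) {
            return Optional.ofNullable(names.get(id));
        }
    }

    // The unit under test depends only on the interface, never on the real service.
    static String greeting(UserDirectory directory, String userId) {
        return "Hello, " + directory.displayNameFor(userId).orElse("stranger") + "!";
    }

    @Test
    void greetsKnownUserByName() {
        FakeUserDirectory fake = new FakeUserDirectory();
        fake.add("u1", "Ada");
        assertEquals("Hello, Ada!", greeting(fake, "u1"));
    }

    @Test
    void fallsBackForUnknownUser() {
        assertEquals("Hello, stranger!", greeting(new FakeUserDirectory(), "nobody"));
    }
}
```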

In addition to the automation (mocks, fakes, and frameworks) to be delivered by the SET, the plan should include how to surface information about build quality to all concerned parties. Google SETs include such reporting mechanisms and dashboards that collect test results and show progress as part of the test plan. In this way, an SET increases the chances of creating high-quality code by making the whole process easy and transparent.

SWEs and SETs work closely on product development. SWEs write the production code and tests for that code. SETs support this work by writing test frameworks that enable SWEs to write tests. SETs also take a share of maintenance work. Quality is a shared responsibility between these roles.

A SET’s first job is testability. They act as consultants recommending program structure and coding style that lends itself to better unit testing and in writing frameworks to enable developers to test for themselves. We discuss frameworks later; first, let’s talk about the coding process at Google.

To make SETs true partners in the ownership of the source code, Google centers its development process around code reviews. There is far more fanfare about reviewing code than there is about writing it.

Reviewing code is a fundamental aspect of being a developer and it is an activity that has full tool support and an encompassing culture around it that has borrowed somewhat from the open source community’s concept of “committer” where only people who have proven to be reliable developers can actually commit code to the source tree.

At Google everyone is a committer, but we use a concept called readability to distinguish between proven committers and new developers. Here’s how the whole process works: Code is written and packaged as a unit that we call a change list, or CL for short. A CL is written and then submitted for review in a tool known internally as Mondrian (named after the Dutch painter whose work inspired abstract art). Mondrian then routes the code to qualified SWEs or SETs for review and eventual signoff.

SWEs and SETs who are new to Google will eventually be awarded with a readability designation by their peers for writing consistently good CLs. Readabilities are language specific and given for C++, Java, Python, and JavaScript, the primary languages Google uses. They are credentials that designate an experienced, trustworthy developer and help ensure the entire codebase has the look and feel of having been written by a single developer.

Software development at Google happens quickly and at scale. The Google codebase receives over 20 changes per minute and 50 percent of the files change every month! Each product is developed and released from “head” relying on automated tests verifying the product behavior. Release frequency varies from multiple times per day to once every few weeks, depending on the product team.

Hiring highly technical testers was only step one. We still needed to get developers involved. One of the key ways we did this was by a program called Test Certified. In retrospect, the program was instrumental in getting the developer-testing culture ingrained at Google.

Test Certified started out as a contest. Can we get developers to take testing seriously if we make it a prestigious matter? If developers follow certain practices and achieve specific results, can we say they are “certified” and create a badge system that provides some bragging rights?

Summary of Test Certified Levels

Level 1

• Set up test coverage bundles.

• Set up a continuous build.

• Classify your tests as Small, Medium, and Large.

• Identify nondeterministic tests.

• Create a smoke test suite.

Level 2

• No releases with red tests.

• Require a smoke test suite to pass before a submit.

• Incremental coverage by all tests >= 50%.

• Incremental coverage by small tests >= 10%.

• At least one feature tested by an integration test.

Level 3

• Require tests for all nontrivial changes.

• Incremental coverage by small tests >= 50%.

• New significant features are tested by integration tests.

Level 4

• Automate running of smoke tests before submitting new code.

• Smoke tests should take less than 30 minutes to run.

• No nondeterministic tests.

• Total test coverage should be at least 40%.

• Test coverage from small tests alone should be at least 25%.

• All significant features are tested by integration tests.

Level 5

• Add a test for each nontrivial bug fix.

• Actively use available analysis tools.

• Total test coverage should be at least 60%.

• Test coverage from small tests alone should be at least 40%.
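Several of the levels above revolve around a smoke test suite that runs automatically before code is submitted. Outside Google's own infrastructure, one way to express that (a sketch using JUnit 5's suite support; the package name is hypothetical) is to tag the critical-path tests and collect them into a pre-submit suite:

```java
import org.junit.platform.suite.api.IncludeTags;
import org.junit.platform.suite.api.SelectPackages;
import org.junit.platform.suite.api.Suite;

// Pre-submit smoke suite: runs only tests tagged "smoke" (i.e. @Tag("smoke") on the
// critical-path tests). Wiring this into CI or a pre-submit hook, and keeping its
// runtime well under 30 minutes, mirrors the Level 4 requirements above.
@Suite
@SelectPackages("com.example.myproduct") // hypothetical package, adjust to the codebase
@IncludeTags("smoke")
public class SmokeTestSuite {
}
```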

The program was piloted slowly with a few teams of testing-minded developers who were keen to improve their practices. After the kinks were worked out of the program, a big contest to get certified was held as a companywide push and adoption was brisk. It wasn't as hard a sell as one might think. The benefit to development teams was substantial:

• They got lots of attention from good testers who signed up to be Test Certified Mentors. In a culture where testing resources were scarce, signing up for this program got a product team far more testers than it ordinarily would have merited.

• They received guidance from experts and learned how to write better small tests.

• They understood which teams were doing a better job of testing and thus who to learn from.

• They were able to brag to the other teams that they were higher on the Test Certified scale!

And two final points:

The Future of the SET

Simply put, we don't believe there is a future SET. The SET is a developer. Period. Google pays them as developers, calibrates their performance reviews against developers, and calls both roles software engineers. So many similarities can only lead to one conclusion: They are the exact same role.

Today, I’m happy to announce that we’re open sourcing Test Analytics, a tool built at Google to make generating an ACC simple. Test Analytics has two main parts; first and foremost, it’s a step-by-step tool to create an ACC matrix that’s faster and much simpler than the Google Spreadsheets we used before the tool existed. It also provides visualizations of the matrix and risks associated with your ACC Capabilities that were difficult or impossible to do in a simple spreadsheet. The second part is taking the ACC plan and making it a living, automatic-updating risk matrix. Test Analytics does this by importing quality signals from your project: Bugs, Test Cases, Test Results, and Code Changes. By importing this data, Test Analytics lets you visualize risk that isn’t just estimated or guessed, but based on quantitative values. If a Component or Capability in your project has had a lot of code change or many bugs are still open or not verified as working, the risk in that area is higher. Test Results can provide a mitigation to those risks; if you run tests and import passing results, the risk in an area gets lower as you test.

My takeaways for the team and the company

  1. It is high time to tear down the silos and connect the test automation engineers of the company so they can share their experiences and help each other. As we are using Microsoft Teams for communication, I suggest creating a Teams group where we can connect. I would gladly take the first few steps.

  2. Create bug reports automatically on failing automated tests. With this, we could save a lot of time.

  3. Consider introducing Google's Test Certified concept to evaluate where we currently stand in terms of automation, or adapt it to our needs.

  4. Offer test automation engineers the option to take part in developer code reviews. It is a great opportunity to understand the internals of a component, get ideas for how it can be broken, and let test engineers share their own suggestions for improvements. I would happily review the developers' merge requests.

  5. Consider using Protocol Buffers for defining DTOs, so we can easily generate the code for any language, including Java and JavaScript. At my previous company, we used Protocol Buffers, and it is a really great tool.

  6. Share reusable components between teams. At the moment, each test automation team works independently, but I am pretty sure there are many libraries and tools that could increase the productivity of the other teams too.

  7. Experiment with Google's Test Analytics to see whether we can benefit from using it.

Have you read the book? What were your main takeaways? If you haven't read it yet, will you consider using any of my takeaways?

Share this article if you found it useful. 😊

Buy me a coffee if you like this article! 😊

📚 Join the Selenide community on LinkedIn! ✌
