Google Testing Blog
Interview with Copeland
Tuesday, February 23, 2010
I recently did an interview with Matt Johnston of uTest (a community-based testing company) that talks about our philosophy and approach to testing at Google. Let me know what you think.
Part 1, Part 2, Part 3
Posted by Patrick Copeland
Testing in the Data Center (Manufacturing No More)
Tuesday, February 09, 2010
By James A. Whittaker
W. Edwards Deming helped revolutionize the process of manufacturing automobiles in the 1970s, and a decade later the software industry ran with the manufacturing analogy; the result was nearly every waterfall, spiral or agile method we have. Some, like TQM, Cleanroom and Six Sigma, are obvious descendants of Deming, while others were just heavily influenced by his thinking. Deming was the man. I repeat, was. My time testing in Google's data center makes it clear that this analogy just doesn't fit anymore. I want a new one. And I want one that helps me as a tester. I want one that better guides my behavior.
We just don't write or release software the way we used to. Software isn't so much built as it is grown. Software isn't shipped ... it's simply made available by, often literally, the flip of a switch. This is not your father's software. 21st century development is a seamless path from innovation to release where every phase of development, including release, is happening all the time. Users are on the inside of the firewall in that respect and feedback is constant. If a product isn't compelling we find out much earlier and it dies in the data center. I fancy these dead products serve to enrich the data center, a digital circle of life where new products are built on the bones of the ones that didn't make it.
In our father's software and Deming's model we talk about quality control and quality assurance while we play the role of inspector. In contrast, my job seems much more like that of an attending physician. In fact, a medical analogy gives us some interesting parallels to think about software testing. A physician's hospital is our data center, there is always activity and many things are happening in parallel. Physicians have patients; we have applications and features. Their medical devices are our infrastructure and tools. I can picture my application's features strewn across the data center in little virtual hospital beds. Over here is the GMail ward, over there is Maps. Search, of course, has a wing of its own and Ads, well, they all have private rooms.
In a hospital, records are important. There are too many patients with specific medical conditions and treatment histories for any physician to keep straight. Imagine walking up to the operating table without examination notes and diagnoses. Imagine operating without a constant stream of real-time health data.
Yet as software testers we find ourselves in this situation often. That app lying in our data center has been tested before. It has been treated before. Where are our medical notes?
So let's add little clipboards to the virtual data center beds in which our apps lie. Let's add equipment to take vitals and display them for any attending tester to see. Like human patients, apps have a pulse; data runs through code paths like blood through veins. There are important things happening: countable events that yield statistics and indicators and build up a medical history for an attending tester to use in whatever procedure they must now perform. The work of prior testers need not be ignored.
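To make the clipboard idea concrete, here's a minimal sketch of what a per-app vitals recorder might look like. This isn't Google infrastructure, just an illustration: the Vitals class, the event names and the example app are all hypothetical.

```python
import time
from collections import Counter

class Vitals:
    """Hypothetical per-app 'chart': counts notable events and keeps
    timestamped notes so the next attending tester can read the history."""

    def __init__(self, app_name):
        self.app_name = app_name
        self.pulse = Counter()  # countable events: requests, errors, restarts...
        self.chart = []         # timestamped examination notes

    def beat(self, event):
        """Record one occurrence of a countable event."""
        self.pulse[event] += 1

    def note(self, tester, text):
        """Attach an examination note to the patient's chart."""
        self.chart.append((time.time(), tester, text))

    def readout(self):
        """The bedside display for whoever is on call."""
        return {"app": self.app_name,
                "pulse": dict(self.pulse),
                "notes on chart": len(self.chart)}

# The attending tester checks in on a ward.
patient = Vitals("mail-frontend")
patient.beat("request")
patient.beat("error")
patient.note("jaw", "re-examined send path after storage-layer surgery")
print(patient.readout())
```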
It's an unsettling aspect of the analogy that I have put developers in the role of creator, but so be it. Like other metaphorical creators before them they have spawned intrinsically flawed creatures. Security is their cancer, privacy their aging. Software is born broken and only some things can be fixed. The cancer of security can only be managed. Like actual aging, privacy is a guarantee only young software enjoys. Such is the life of a data center app.
But it is the monitors and clipboards that intrigue me. What do they say of our digital patients? As an app grows from concept into adolescence, what parts of its growth do we monitor? Where is the best place to put our probes? How do we document treatment and evaluations? Where do we store the notes about surgeries? What maladies have been treated? Are there problematic organs and recurrent illnesses? The documents and spreadsheets of the last century are inadequate. A patient's records are only useful if they are attached to the patient, up to date and in full living color, to be read by whatever attending tester happens to be on call.
This is the challenge of the new century of software. It's not a process of get-it-as-reliable-as-possible-before-we-ship. It's health care, cradle to grave health care ... prevention, diagnosis, treatment and cure.
So slip into your scrubs, it's going to be a long night in the ER.
Interviewing Insights and Test Frameworks
Tuesday, January 05, 2010
By James A. Whittaker
Google is hiring. We have openings for security testers, test tool developers, automation experts and manual testers. That's right, I said manual testers.
As a result of all this interviewing I've been reading a lot of interview feedback and wanted to pass along some insights about how these applicants approach solving the testing problems we ask in our interviews. I think the patterns I note in this post are interesting insights into the mind of the software tester, at least the ones who want to work for Google.
One of the things our interviewers like to ask is "how would you test product xyz?" The answers help us judge a tester's instincts, but after reading many hundreds of these interviews I have noticed marked patterns in how testers approach solving such problems. It's as though testers have a default testing framework built into their thinking that guides them in choosing test cases and defines the way they approach test design.
In fact, these built-in frameworks seem to drive a tester's thinking to the extent that when I manage to identify the framework a tester is using, I can predict with a high degree of accuracy how they will answer the interviewers' questions. The framework defines what kind of tester they are. I find this intriguing and wonder if others have similar or counter examples to cite.
Here are the frameworks I have seen just in the last two weeks:
The Input Domain Framework treats software as an input-output mechanism. Subscribers of this framework think in terms of sets of inputs, rules about which inputs are more important, and relationships between inputs, input sequences and outputs. This is a common model in random testing, model-based testing and the testing of protocols and APIs. An applicant who uses this framework will talk about which inputs they would use to test a specific application and try to justify why those inputs are important.
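As an illustration only, here's a rough sketch of the input domain mindset in Python: partition the inputs of a toy function into classes, then generate from those classes. The parse_port function stands in for "product xyz" and is entirely made up.

```python
import random
import string

# Hypothetical function under test: a 'host:port' parser (stand-in for product xyz).
def parse_port(url):
    """Return the port in a 'host:port' string, or 80 if absent."""
    host, _, port = url.partition(":")
    return int(port) if port else 80

# Partition the input domain into classes the tester considers important.
def gen_input():
    kind = random.choice(["no_port", "valid_port", "boundary", "junk"])
    host = "".join(random.choices(string.ascii_lowercase, k=5))
    if kind == "no_port":
        return host
    if kind == "valid_port":
        return f"{host}:{random.randint(1, 65535)}"
    if kind == "boundary":
        # 65536 parses but is out of range -- exactly the kind of finding
        # this framework is designed to surface.
        return f"{host}:{random.choice([0, 1, 65535, 65536])}"
    return f"{host}:{''.join(random.choices(string.printable.strip(), k=3))}"

for _ in range(20):
    s = gen_input()
    try:
        print(s, "->", parse_port(s))
    except ValueError as e:
        print(s, "-> rejected:", e)  # the 'junk' class usually lands here
```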
The Divide and Conquer Framework treats software as a set of features. Subscribers begin by decomposing an app into its features, prioritizing them and then working through that list in order. Often the decomposition is multi-layered, creating a bunch of small testing problems out of one very large one. You don't test the feature so much as you test its constituent parts. An applicant who uses this framework is less concerned with actual test cases and more concerned with reducing the size of the problem to something manageable.
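A hedged sketch of the same idea as data: decompose a hypothetical mail app into features and sub-features, then flatten the decomposition into an ordered queue of small testing problems. The feature names and priorities are invented for illustration.

```python
# Hypothetical decomposition of a mail app into prioritized features and parts.
FEATURES = {
    "compose": {"priority": 1, "parts": ["to-field", "attachments", "drafts"]},
    "search":  {"priority": 1, "parts": ["query parsing", "ranking"]},
    "labels":  {"priority": 2, "parts": ["create", "nest", "delete"]},
    "themes":  {"priority": 3, "parts": ["gallery", "custom colors"]},
}

def work_list(features):
    """Flatten the decomposition into an ordered queue of small test problems."""
    queue = []
    for name, info in sorted(features.items(), key=lambda kv: kv[1]["priority"]):
        for part in info["parts"]:
            queue.append((info["priority"], f"{name}/{part}"))
    return queue

for prio, problem in work_list(FEATURES):
    print(f"P{prio}: test {problem}")
```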
The Fishbowl Framework is a big-picture approach to testing in which we manipulate the application while watching and comparing the results. Put the app in a fishbowl, swirl it around in the water and watch what happens. The emphasis is more on the watching and analyzing than it is on exactly how we manipulate the features. An applicant who uses this framework chooses tests that cause visible output and large state changes.
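One way to picture the fishbowl in code, as a sketch only: apply stimuli to a toy app and diff full state snapshots before and after each one, since the watching matters more than the poking. The CounterApp and its actions are hypothetical.

```python
# Hypothetical app under test: a tiny counter service.
class CounterApp:
    def __init__(self):
        self.state = {"count": 0, "log": []}
    def increment(self, n=1):
        self.state["count"] += n
        self.state["log"].append(f"inc {n}")
    def reset(self):
        self.state["count"] = 0
        self.state["log"].append("reset")

def snapshot(app):
    """Capture the whole 'fishbowl' so we can compare before and after."""
    return {"count": app.state["count"], "log_len": len(app.state["log"])}

def swirl_and_watch(app, actions):
    """Apply stimuli and report every visible state change."""
    for name, action in actions:
        before = snapshot(app)
        action()
        after = snapshot(app)
        changes = {k: (before[k], after[k]) for k in before if before[k] != after[k]}
        print(f"{name}: {changes or 'no visible change (worth investigating)'}")

app = CounterApp()
swirl_and_watch(app, [
    ("increment", lambda: app.increment(5)),
    ("reset", app.reset),
    ("no-op increment", lambda: app.increment(0)),  # count unchanged, log grows
])
```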
The Storybook Framework consists of developing specific scenarios and making sure the software does what it is supposed to do when presented with those scenarios. Stories start with the expected path and work outward. They don't always get beyond the expected. This framework tests coherence of behavior more than subtle errors. Applicants who employ this framework often take a user's point of view and talk about using the application to get real work done.
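A minimal sketch of the storybook style, assuming a made-up FakeMailApp: each story is an ordered list of user steps; run the expected path first, then work one step outward from it.

```python
# Hypothetical app object; its methods stand in for real user actions.
class FakeMailApp:
    def __init__(self):
        self.outbox = []
        self.draft = None
    def compose(self, to, body):
        self.draft = {"to": to, "body": body}
    def send(self):
        if not self.draft or not self.draft["to"]:
            raise ValueError("nothing sendable")
        self.outbox.append(self.draft)
        self.draft = None

def run_story(name, steps):
    """Play a scenario through and report whether behavior stayed coherent."""
    app = FakeMailApp()
    try:
        for step in steps:
            step(app)
        print(f"{name}: completed; outbox has {len(app.outbox)} message(s)")
    except Exception as e:
        print(f"{name}: stopped with {e!r} -- is that what the story calls for?")

run_story("send a note", [
    lambda a: a.compose("bob@example.com", "lunch?"),
    lambda a: a.send(),
])
run_story("send with no recipient", [  # one step outward from the happy path
    lambda a: a.compose("", "lunch?"),
    lambda a: a.send(),
])
```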
The Pessimist's Framework starts with edge cases. Subscribers test erroneous input, bad data, misconfigured environments and so on. This is a common strategy on mature products where the main paths are well trodden. Applicants who use this framework like to assume that the main paths will get tested naturally as part of normal dev use and dogfooding, and that the testing challenge is concentrated on lower-probability scenarios. They are quick to take credit for prior testing, assume its rationality and pound on problematic scenarios.
There are more and I am taking furious notes to try and make sense of them all. As I get to know the testers who work in my organization, it doesn't take long to see which frameworks they employ and in what order (many are driven by multiple frameworks). Indeed, after studying an applicant's first interview, I can almost always identify the framework they use to answer testing questions and can often predict how they are going to answer the questions other interviewers ask even before I read that far.
Now some interesting questions come out of this that I am still looking into. Which of these frameworks is best? Which is best suited to certain types of functionality? Which is better for getting a job at Google? Already patterns are emerging.
One thing is for sure, we're interviewing at a rate that will provide me with lots of data on this subject. Contact me if you'd like to participate in this little study!
http://twitter.com/googletesting
Monday, December 14, 2009
Google Testing Blog is now live on Twitter. Follow us here:
http://twitter.com/googletesting
By Patrick Copeland
"If you were a brand new QA manager ..." (cont)
Friday, December 04, 2009
By James A. Whittaker
More thoughts:
Understand your org's release process and priorities
Late-cycle pre-release testing is the most nerve-racking part of the entire development cycle. Test managers have to strike a balance between doing the right testing and ensuring a harmonious release. I suggest attending all the dev meetings, but as release approaches you certainly shouldn't miss a single one. Pay close attention to their worries and concerns. Nightmare scenarios have a tendency to surface late in the process. Add test cases to your verification suite to ensure these scenarios won't happen.
The key here is to get late-cycle pre-release testing right, without any surprises. Developers can get skittish, so make sure they understand your test plan going into the final push. The trick isn't to defer to development on how to perform release testing, but to make sure they are on board with your plan. I find that at Google, increasing the team's focus on manual testing is wholeheartedly welcomed by the dev team. Find your dev team's comfort zone and strike a balance between doing the right testing and making the final hours/days as wrinkle-free as possible.
Question your testing process
Start by reading every test case and reviewing all automation. Can you map these test cases back to the test plan? How many tests do you have per component? Per feature? If a bug is found outside the testing process did you create a test case for it? Do you have a process to fix or deprecate broken or outdated test cases?
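Answering those questions is easier if the inventory is queryable. A hedged sketch, assuming test names encode their component as a dotted prefix (a convention invented here for illustration, not a Google standard): tally tests per component and flag gaps in both directions against the plan.

```python
import collections

# Made-up inventory: test names encode component as "component.feature.case".
TEST_CASES = [
    "compose.attachments.add_one",
    "compose.attachments.add_many",
    "compose.drafts.autosave",
    "search.query.empty",
]
PLAN_COMPONENTS = ["compose", "search", "labels"]

counts = collections.Counter(name.split(".")[0] for name in TEST_CASES)

# Plan components with no tests are the gaps a new manager should spot first.
for component in PLAN_COMPONENTS:
    n = counts.get(component, 0)
    flag = "  <-- gap: in the plan but untested" if n == 0 else ""
    print(f"{component}: {n} test(s){flag}")

# And the reverse mapping: tests that trace back to nothing in the plan.
for component in counts:
    if component not in PLAN_COMPONENTS:
        print(f"{component}: tests exist but no plan entry")
```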
As a test manager, you own the completeness and thoroughness of the set of tests. You may not be writing or running many of them yourself, but you should have them all in your head and be the first to spot gaps. This is something a new manager should tackle early and stay on top of at all times.
Look for ways to innovate
The easiest way to look good in the eyes of developers is to maintain the status quo. Many development managers appreciate a docile and subservient test team, and many like a predictable and easily understood testing practice. It's one less thing to worry about (even in the face of obvious inefficiencies, the familiar path is often the most well worn).
As a new manager it is your job not to let them off so easy! You should make a list of the parts of the process that concern you and the parts that seem overly hard or inefficient. These are the places to apply innovation. Prepare for nervousness from the developer ranks, but do yourself and the industry a favor and place some bets for the long term.
I have found no universally applicable advice on how best to foster innovation. What works for me is to find the stars on the team and make sure they are working on something they can be passionate about. As a manager, this is the single most important thing you can do to increase productivity and foster innovation.