Software Testing Economics. Testing Outcomes

Another article in our series on software testing economics. Looking forward to your thoughts.

Version 1.0 in detail


Underestimating testing effort usually leads not to a change of test strategy but to even more effort (additional time, staff, and so on), and consequently to longer timelines and/or higher project costs. So, we underestimated the labor effort – or it was underestimated for us. What do we do then?

“Give us what we need! Yes, we estimated 3,500 man-hours, but everything has gone wrong, so we need more time and people.” Is that bad? We are exceeding the agreed limits, deadlines and costs, and there is little the customer dislikes more, because time is everything. Of course, doing the job on time but badly is unacceptable; but doing it well and six months late... Think about which is worse.

Case one: “The main thing is to win the project, and then we’ll see.” We bid in a tender and estimate the testing effort at 500 man-hours. That is expensive; we are instructed to cut the effort – but not the developers'. With such labor costs the project cannot be sold, so testing is cut down to 300 man-hours. After all, it is not quite clear what software testers even do on the project; if defective software is delivered, that only makes things worse. “And if they write test cases – who needs those test cases anyway?!” I have heard that phrase from top managers many times.

OK, we agreed to 300 man-hours, even though we have no idea how to do what we promised within them. Still, we accepted the estimate and therefore promised to deliver everything in full, with all the artifacts. Now those hours have run out. What next? There is no miracle: either you stop testing, which harms the project in technical terms, or you continue testing at a loss, and people from finance come asking why the company is incurring losses. And if the company is listed on a stock exchange, they add: “Your margin is very low; investors will run away from us.” The result is that product quality becomes unpredictable.
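To make "continuing at a loss" concrete, here is a back-of-the-envelope sketch in Python. Only the 500-to-300 man-hour cut comes from the case above; the hourly rates are hypothetical, chosen just to show the shape of the problem:

```python
# Hypothetical rates; only the 500 -> 300 man-hour cut comes from the case.
SOLD_HOURS = 300        # hours the customer actually paid for
REQUIRED_HOURS = 500    # realistic effort behind the original estimate
PRICE_PER_HOUR = 50     # assumed billing rate, USD
COST_PER_HOUR = 40      # assumed internal cost per man-hour, USD

revenue = SOLD_HOURS * PRICE_PER_HOUR        # 15,000 USD coming in
full_cost = REQUIRED_HOURS * COST_PER_HOUR   # 20,000 USD if we test in full

profit = revenue - full_cost
margin = profit / revenue

print(f"Profit: {profit} USD")   # -5000 USD: testing continues at a loss
print(f"Margin: {margin:.0%}")   # -33%: the number finance will ask about
```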

Case two: “Anything you wish for your money.” How many times have we said that to a customer? The customer takes it literally and starts demanding all kinds of things for their money. The project runs under a Fixed Price model, so everyone tries to minimize costs; meanwhile changes keep arriving and retesting keeps being done. We got our 500 man-hours and spent them, because we signed up for all those “wishes”. Yet the result is disastrous. Testing is either stopped or continues at a loss. Product quality is unpredictable.

Case three: “Testing from here till lunch.” We estimated the effort at 500 man-hours and got approval. We have a team, and we start testing without worrying about priorities and goals – we test everything in a row, because that is what the customer says they need. We love the customer and want to do our job well, but now the 500 man-hours have run out. And what state is the application in at that point? God knows. Some things have been tested, some have not; some automation scripts have been written – they don't work, but they exist. Automated regression testing is exactly the kind of thing that can deliver nothing on the one hand and consume lots of money on the other. Testing is either stopped or continues at a loss. Product quality is unpredictable.
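A standard way to sanity-check regression automation spend is the break-even question: how many regression runs does it take for the scripts to pay for themselves? The formula below is the common one, not something the article prescribes, and all figures are assumed:

```python
# All figures are hypothetical; the break-even formula itself is standard.
AUTOMATION_COST = 120      # man-hours to write and stabilize the scripts
MANUAL_RUN_COST = 10       # man-hours per manual regression run
AUTOMATED_RUN_COST = 1     # man-hours per automated run (incl. upkeep)

saving_per_run = MANUAL_RUN_COST - AUTOMATED_RUN_COST
break_even_runs = AUTOMATION_COST / saving_per_run

print(f"Automation pays off after {break_even_runs:.1f} regression runs")
# ~13.3 runs here. If the project plans fewer runs than that, automation
# really does consume lots of money while delivering nothing in return.
```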

Case four: “Cavalry charge, or on the off chance.” Yes, we are a team of professionals; we can do anything. All the testers in the company are called to arms, and we test as best we can – we may lack test design expertise, but we have a huge number of zealous junior testers who even work weekends. Testing finishes on the very day the project ends. And what do we have in the end? God knows; it depends on how the cards fall – and as a rule, they fall badly. Product quality is unpredictable.



Version 2.0 in brief


Let’s discuss the difference between Versions 1.0 and 2.0. Version 1.0 is the paradigm we are, unfortunately, working in today: we are given certain resources and must produce a certain result. Why does it work that way? Because we know how to utilize those resources. For instance, we know that 200 man-hours out of 500 will go to writing test cases, 100 to automated tests, and 200 to manual tests. But all of that holds only in an ideal world – no radical changes, every tester working properly, and so on. And once we justify our estimate, demonstrate that it is realistic and efficient, and assess all the risks, we are responsible for it.

Finally, we have a plan in place; it is fixed, and we try to stick to it.
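As a minimal illustration of this Version 1.0 mindset: the 500-hour budget and its 200/100/200 split come from the paragraph above, while the structure and the escalation logic are my own sketch:

```python
# The budget and its split are from the article; the rest is illustrative.
BUDGET = 500  # total man-hours

plan = {
    "writing test cases": 200,
    "automated tests": 100,
    "manual tests": 200,
}
assert sum(plan.values()) == BUDGET  # the plan exactly consumes the resources

# Version 1.0 in action: track consumption against the fixed plan and
# escalate ("ask for more hours") as soon as any line item overruns.
spent = {"writing test cases": 230, "automated tests": 90, "manual tests": 150}
for activity, planned in plan.items():
    overrun = spent[activity] - planned
    if overrun > 0:
        print(f"{activity}: {overrun} h over plan -> request more resources")
```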

And what is Version 2.0? Well, we have our estimates; they play the role of a reserve parachute – that is how we will work if we cannot find anything more efficient. But Version 2.0 says we should use the justified effort efficiently by analyzing the project situation. Shall we write test cases, and if so, which ones? Which test data will we use, and where will we get it – from the customer, or our own? What kind of software testers do we need: a few with strong expertise (but expensive), or many with minimal experience (but inexpensive)? Do we need a stable team for this project or not? Do we need automation, and if so, which tools and what coverage?

While for Version 1.0 the main criterion is return on investment (ROI), for Version 2.0 it is profit maximization. These are two different things.
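To see why the two criteria diverge, consider a deliberately simplified choice between two testing scopes (the numbers are invented for illustration):

```python
# Invented figures: "value" stands for the savings testing delivers
# (defects prevented, rework avoided), "cost" for the testing spend.
options = {
    "A (small scope)": {"cost": 100, "value": 180},
    "B (large scope)": {"cost": 400, "value": 560},
}

for name, o in options.items():
    profit = o["value"] - o["cost"]
    roi = profit / o["cost"]
    print(f"{name}: ROI = {roi:.0%}, profit = {profit}")

# A wins on ROI (80% vs 40%), yet B wins on absolute profit (160 vs 80).
# Optimizing the ratio (Version 1.0) and maximizing the outcome
# (Version 2.0) can point to different decisions.
```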

Main points


  • Exhaustive testing is impossible
  • Testing, among other things, is an economic activity
  • Version 1.0, or What should be done. We work to the plan and estimate; if we lack resources, we ask for more (an extensive approach)
  • Version 2.0, or How it should be done. We work within the plan and estimate as effectively as possible (an intensive approach)

Check out our software testing training courses and start developing or expanding your software testing skills.

Come learn with us!

Alexandr Alexandrov
Software Testing Consultant
Still have questions?
Get in touch with us