5 Ways to Fail With Your Test Automation

In theory, Test Automation tools are supposed to make things easier for testers, save time, and provide an additional comfort blanket for our test coverage. And yet, so many organizations fail to implement a good, solid, and scalable Test Automation infrastructure that could, in fact, save them time and return the invested resources. Some throw away entire written test-automation projects; others remain locked in an ongoing struggle to reach their goal of a stable automation suite.
Over the years I have seen many companies try to implement their automation initiatives. I have seen written projects thrown away, frameworks switched, and managers fired because they failed to deliver what was promised or set as a goal.

There are many reasons for failure, and though automation makes perfect sense in the long run, we need to realize that it is a lot more challenging than it seems.
Here are some of the things to be aware of before you jump ahead and start building your Test Automation project:

A Test Automation project is a project like any other software project and should be treated like one: How would you approach planning a software project? Let that thought guide you as you think about implementing your Test Automation project. Who would you choose for the job? Who will be using the produced product? Which language or framework is most suitable for the job? Would you look at similar projects and learn from their guidelines and challenges? Would you design an architecture you know nothing about by yourself, or would you call in a consultant to advise you on some initial ideas?
All of these questions are just the tip of the iceberg of what you should be asking yourself before you write a single line of code or test case.

Inadequate Skillset: Let me ask you a question. Would you let an inexperienced developer, or worse, someone who does not know how to write good, clean, scalable code, design your app's architecture and write it from scratch? I think the answer is no. So why do so many companies think that someone can implement a quality Test Automation infrastructure without the appropriate experience and skillset? If the dedicated team does not have the appropriate coding knowledge and experience, and you do not plan to hire someone who does, then you should consider using codeless automation tools.

Unrealistic Expectations/Approach: I will divide this one into two parts. The first is a wrong perception of the ROI (return on investment) of test automation, and the second is unrealistic expectations for a given time frame. A project is as good as you make it, and you cannot expect something to deliver value without investing enough time. I recently talked to a QA Engineer who told me he was given 6 hours a week to write an automation infrastructure. What do you expect him to accomplish in 6 hours a week?
In order to make this work you have to ask yourself:
 - What are the goals you want to achieve with your test automation?
 - What would be considered a good value for your efforts/time?
 - What are the criteria for success?
 - How much time/money are you willing to invest in order to make it happen?
Automation is not a "Launch and Forget" mission. You need to acknowledge that it is an ongoing process that requires a team effort, maintenance and continuous progress.
I have to admit that, sadly, from what I have seen, some companies treat giving a QA Engineer an automation task as just a way to retain an employee or add some "spice" to his weekly routine. Don't get me wrong, I'm not saying we should not retain our employees or give them the time to learn and progress; I'm just saying that it should be aligned with realistic expectations and be transparent to the employee.

Insufficient planning and lack of design: A good automation project should begin with a high-level strategy and plan that evolves into a detailed technical design. The "one size fits all" approach can be devastating to your success. What worked great for one company will not necessarily work for you. The same is true of tool choice; it goes as far as chasing a buzzword when you do not really need the technology in question. This is highly important to note. There is a wide variety of open-source and commercial tools out there that can deliver high value for money, and yet so many companies rush to Selenium without even considering the alternatives and the benefits and solutions these tools can provide. Choose a tool that will meet your needs. Here are my two cents: it is often better to pay for a solution than to waste a lot of time, which will add up to cost far more than the tool's price tag.

Now, as we proceed to plan our design, we need to consider our overall technical architecture and business needs. Here are just some of the aspects you should take into consideration:

- What projects, modules, business flows will we test?
- What is the planned coverage for each implementation stage? (Sanity, Regression, CI/CD dedicated suite?)
- What are the main shared/common/repeating flows/functionality?
- How do we make each test a short, standalone artifact? (SetUp and TearDown, dependence on external data versus produced test data, what needs to run before each test or suite class?)
- How do we prevent one test from "breaking" another when they run concurrently?
- How do we produce a good, scalable, and sterile environment? (Mock servers, database instances, cleanup scripts, browser settings, Grid/server settings, installation and cleanup, virtual environments or Docker containers.)
- What build tools should we use?
- How would our dependencies be managed?
- Where will our project be stored?
- How many engineers will be working on the project, and how will version control be managed?
- Which CI tools are best to execute our builds?
- Will you be implementing CI/CD workflow?
- Where will you run your tests and which resources can you use to design your execution platform? (Are you provided with the budget to purchase cloud solution licenses? Do you have the resources or support to set up your own Selenium Server infrastructure?)
- What kinds of tests will be automated? (API, visual validation, server processes, Web/UI automation, mobile - Android/iOS.)
- Are there any large data sets, long flows, complex processes, integrations that would require additional assessment?
- What should we test via the GUI and what would be better and more stable to test via the API?
- What Logs, Reporting mechanisms, Handlers and listeners do we need to implement in order to make our root cause analysis and debugging easier? (What good is a one-hour automation suite if you spend a day understanding the output?)
- What metrics/report/output should the runs provide?
- What integrations do we need? (Reporting systems, ALM Tools, Bug Trackers, Test management tools).
- Who will write the infrastructure and who is supposed to implement the tests? (Will there even be a division between the two?)
- Who is responsible for providing the Automation Engineers with the business flows/test cases/business logic meant to be automated?
- Is there a POC (Proof of concept) stage defined to decide about future goals?
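To make questions like test independence and SetUp/TearDown concrete, here is a minimal Python sketch using the standard library's unittest. The `UserService` class and its in-memory store are hypothetical stand-ins for a real system under test; the point is that each test creates and cleans up its own data, so tests stay standalone and cannot "break" each other when run concurrently.

```python
import unittest
import uuid


class UserService:
    """Hypothetical in-memory service standing in for the system under test."""

    def __init__(self):
        self._users = {}

    def create(self, name):
        user_id = str(uuid.uuid4())
        self._users[user_id] = name
        return user_id

    def get(self, user_id):
        return self._users.get(user_id)

    def delete(self, user_id):
        self._users.pop(user_id, None)


class UserTests(unittest.TestCase):
    def setUp(self):
        # Fresh service and fresh, uniquely named test data per test:
        # no shared state between tests, so parallel runs are safe.
        self.service = UserService()
        self.user_id = self.service.create("test-user-" + uuid.uuid4().hex[:8])

    def tearDown(self):
        # Clean up everything this test created.
        self.service.delete(self.user_id)

    def test_created_user_is_retrievable(self):
        self.assertIsNotNone(self.service.get(self.user_id))

    def test_deleted_user_is_gone(self):
        self.service.delete(self.user_id)
        self.assertIsNone(self.service.get(self.user_id))
```

Run it with `python -m unittest`. The same pattern applies whether the "service" is a REST API, a database, or a Selenium session: produce your own data in SetUp, tear it down after, and never depend on what another test left behind.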

These are just some of the questions, in a nutshell, and what you should take from them is that the right tool choice and good planning/design will make the difference between success and failure.

Automating everything: An important thing to note is that not everything can or needs to be automated. At times, due to a wrong perception or wrong management decisions, we automate everything that comes to the manual tester's mind, or just blindly convert manual test cases into test-automation scripts. First of all, not all tests are suitable for automation. Second, the test cases themselves are not always suitable for the job. The result is an unmaintainable cluster of automated tests, and sadly, most of the time, much of our written work ends up being scrapped. There is no real added value in automating thousands of test cases that at times check the same business logic, if your organization does not have the resources to maintain several automation teams.
We also need a clear plan for what goes into each automation build. What is included in our regression? What is defined as our sanity? What is stable and reliable enough to go into our CI/CD pipeline?
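One simple way to keep that division explicit is to tag each test with the suites it belongs to. The decorator-registry sketch below is hypothetical and framework-free; in practice, pytest markers (`@pytest.mark.sanity`, run via `pytest -m sanity`) or TestNG groups serve the same purpose.

```python
# Registry mapping suite names to the test functions tagged for them.
SUITES = {"sanity": [], "regression": [], "ci": []}


def suite(*names):
    """Decorator: register a test function in one or more named suites."""
    def register(test_fn):
        for name in names:
            SUITES[name].append(test_fn)
        return test_fn
    return register


@suite("sanity", "ci")
def test_login():
    # Fast, stable check: safe to include in the CI/CD pipeline.
    assert True


@suite("regression")
def test_full_checkout_flow():
    # Long end-to-end flow: runs in the nightly regression only.
    assert True


def run_suite(name):
    """Run every test registered under the given suite name."""
    for test_fn in SUITES[name]:
        test_fn()
    return len(SUITES[name])
```

Calling `run_suite("ci")` executes only the tests deemed stable enough for the pipeline, while `run_suite("regression")` picks up the longer flows. The mechanism matters less than the discipline: every test should declare which build it belongs to.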

Test automation needs solid planning, knowledge, dedication, and commitment to deliver proper value.
Lack of skills or proper training, wrong tool selection, inadequate resources, lack of proper planning, unrealistic expectations: any one of these can make a project fail.
I hope that this article will have a positive impact and help some of you to adopt a productive mindset.

Daniel Gold - Head of QA & Automation at Testim.io

Learn more about Testim.io - AI & Machine Learning based Test Automation


  1. Very enlightening article! I have also seen many failures, many of them because managers have high expectations and a large budget but don't give the project the attention it needs, which includes all the considerations you listed and more. Amazing when you think about it.

    1. Thank you very much, Doron! Always happy to hear your feedback.

  2. Wow!! Such a relevant post for me, just in time. Thank you very much.

  3. Brilliant article, especially for managers who know little or nothing about test automation and its intricacies. I always say treat test automation as production code if you want to get value from it.

