A part of my fascinating role at Testim.io is arriving at our customers' headquarters and helping them achieve their automation goals. This type of solution architecture is truly challenging and interesting because there is no "one size fits all" approach, and each company has its own unique challenges, agenda, mindset, and skill set.
There are many difficulties in implementing test automation, and one of them is shifting your mindset from manual testing to an automation-oriented one.
One of the first things I tell everyone I meet is: "Before a single line of code is written, stop! Take a pen and paper and write down your goals. Plan, in as much detail as you can, what the result should look like."
Talking to testers, I have noticed that it is very difficult for some of them to detach themselves from what they are used to performing manually and to think about automated tests from a different point of view.
One of the biggest and most common mistakes is attempting to automate every manual test case you have ever written. Besides the fact that I see no value in planning, authoring, and maintaining thousands of "backlog tests", that quantity-first approach often leads to throwing away or quarantining a significant part of the tests, and it is simply not the right way to decide what should be automated.
I recently had a chance to sit down with a few testers and let them show me what they planned to automate. One of the things I wanted to bring to their attention is the difference between what they would do manually and what an automated test suite would look like. Here is how it went.
- "Okay, please show me the flow you would like to automate"
- They opened their application (granted, they were already logged in), performed a few actions on the screen, selected a listing from a list, and checked their expected outcome.
- "Okay, now let's think about what you have just done.
This is your workstation. Your browser 'remembers' you, so you are logged in as soon as you navigate to your application. What would happen when you try to execute this test on a 'clean' browser instance or a remote client? Secondly, you selected a listing from your table. Will that listing still be there in a day? A week? A month? How many listings would you like to test? How do they differ from one another? Is the same business flow using different data and expecting a different output on the screen?"
After we settled that, I proceeded to ask some guiding questions so we could understand what we would be dealing with:
- What are the pre-conditions you need in order to perform your flows?
- Do you need different users/credentials/permissions for your tests?
- Do you rely on constant data that exists in your environment when you start your tests? If so, who produces it, and how will you ensure it is there?
- On which environments do you need to execute your flows? Is there more than one? What is the difference between them?
- Are you using different inputs for the same exact flow? If so, where does the input data come from?
- Are your tests producing data that can "break" the flow of another test? If so, what do you need to do in order to clean this data and return to "point zero" after the flow is executed?
- What are the common actions you perform in these flows?
- What is the constant and dynamic data you need?
- Do your flows depend on 3rd party services?
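Questions like "same flow, different inputs, different expected output" often translate directly into a data-driven test. Here is a minimal sketch in Python with pytest; the flow under test (`apply_coupon`) and all names and values are hypothetical stand-ins for your own business logic:

```python
import pytest

# Hypothetical stand-in for the real business flow under test,
# here only so the sketch is runnable on its own.
def apply_coupon(price: float, code: str) -> float:
    discounts = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - discounts.get(code, 0.0)), 2)

# Same flow, different data, different expected output -- each row
# answers "how many inputs do we want to cover, and how do they differ?"
@pytest.mark.parametrize(
    "price,code,expected",
    [
        (100.0, "SAVE10", 90.0),
        (100.0, "SAVE25", 75.0),
        (100.0, "BOGUS", 100.0),  # unknown coupon leaves the price intact
    ],
)
def test_apply_coupon(price, code, expected):
    assert apply_coupon(price, code) == expected
```

Keeping the input data in one table like this also makes it obvious where the data comes from and who owns it.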
This conversation was the beginning of a wonderful process in which we learned how to acquire an "automation mindset".
Further down the road, I try to assist by asking guiding questions and giving some tips for best practices and overall guidelines.
Simplify and break down your tests into independent, standalone units - break your tests down into single-purpose, short units. In my entire career, I have never met a QA manager or tester who said their system isn't "complex". That said, long flows are more susceptible to flakiness and instability. Your tests need to be reliable to produce value. They need to be as short and simple as possible, where the guideline is that a single test performs and validates a single piece of business logic. Many of the issues I see in solution design come from testers who have difficulty understanding that a good automated test is different from a good manual test.
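To make the guideline concrete, here is a sketch of what "single-purpose units" can look like in Python. The cart module is a hypothetical stand-in for your system; the point is that each test performs and validates exactly one business rule instead of one long end-to-end scenario:

```python
# Hypothetical cart module standing in for the system under test.
def add_item(cart: list, item: str) -> list:
    return cart + [item]

def cart_total(cart: list, prices: dict) -> float:
    return sum(prices[i] for i in cart)

# Instead of one long "login -> browse -> add -> checkout" scenario,
# each test validates exactly one business rule and nothing else.
def test_adding_an_item_grows_the_cart():
    assert add_item([], "book") == ["book"]

def test_total_sums_item_prices():
    assert cart_total(["book", "pen"], {"book": 12.0, "pen": 3.0}) == 15.0
```

If one of these fails, the name alone tells you which rule broke; a failure in the middle of a long flow tells you much less.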
What is our "point zero"? - We might need some setup and cleanup flows. They need to be defined for each test and for each suite/execution. These flows have multiple roles, chief among them preventing dependencies between tests.
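A yield-based pytest fixture is one common way to express setup and cleanup around a test. This is a minimal sketch: the in-memory `DB` dict is an assumption standing in for your real environment, and the seed/remove helpers are hypothetical:

```python
import pytest

DB = {}  # in-memory stand-in for the environment's data store (assumption)

def seed_listing():
    # Setup: bring the environment to "point zero" plus this test's preconditions.
    DB["listing-1"] = {"title": "Blue bike", "price": 120}
    return DB["listing-1"]

def remove_listing():
    # Teardown: remove what the test relied on, so the next test
    # starts from the same point zero.
    DB.pop("listing-1", None)

@pytest.fixture
def seeded_listing():
    listing = seed_listing()
    yield listing          # the test runs here
    remove_listing()       # runs even if the test fails

def test_listing_is_visible(seeded_listing):
    assert seeded_listing["price"] == 120
```

Because the teardown runs whether the test passes or fails, no test can leave data behind that breaks the next one.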
Produce standalone, independent flows - Our tests cannot depend on each other. This is a big no-no. Each test needs to be a standalone artifact. If we do need shared state, then the correct way is to use fixtures or to have the test produce its own data.
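"Produce its own data" can be as simple as a small factory each test calls for a fresh entity, instead of one test consuming what another test left behind. A minimal Python sketch, where `make_user` and its fields are hypothetical:

```python
import itertools

_ids = itertools.count(1)

def make_user(name: str) -> dict:
    """Hypothetical factory: each test creates the data it needs,
    rather than relying on a user created by a previous test."""
    return {"id": next(_ids), "name": name, "active": True}

def test_rename_user():
    user = make_user("dana")      # this test's own, fresh data
    user["name"] = "dan"
    assert user["name"] == "dan"

def test_deactivate_user():
    user = make_user("avi")       # fully independent of the test above
    user["active"] = False
    assert user["active"] is False
```

Either test can now run alone, first, last, or in parallel, and the result is the same.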
Do we even need the UI? - What exactly are we trying to test? Is it actually the UI, or are we trying to reach a state or precondition by clicking through long flows? Many preconditions, and even validations, can be performed via API calls. When possible, this is a much preferred and more stable approach.
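The shape of that idea, sketched in Python: the precondition is created through an API call rather than by clicking through creation screens. The `ApiClient` here is a hypothetical in-memory stand-in so the sketch runs; in real code it would wrap HTTP calls against your application's API:

```python
# Hypothetical API client -- a real one would wrap HTTP calls
# (e.g. via the requests library) against your application's API.
class ApiClient:
    def __init__(self):
        self.listings = {}

    def create_listing(self, title: str, price: int) -> int:
        listing_id = len(self.listings) + 1
        self.listings[listing_id] = {"title": title, "price": price}
        return listing_id

    def get_listing(self, listing_id: int) -> dict:
        return self.listings[listing_id]

def test_listing_price():
    api = ApiClient()
    # Precondition via API: one call instead of a long UI creation flow.
    listing_id = api.create_listing("Blue bike", 120)
    # Only the behaviour you actually care about needs to go through
    # the UI; here even the validation is done via the API.
    assert api.get_listing(listing_id)["price"] == 120
```

The test now fails only when the behaviour under test breaks, not when some unrelated creation screen changes.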
Identify your building blocks - A good start is identifying the "shared" actions and flows that recur in many of your tests. These flows are your building blocks. Depending on the selected framework, they can be divided into pages, models, business flows, validations, common utilities, etc.
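One common way to package such a building block is a page-object-style class: one place that knows how to log in, reused by every test that needs it. This is a sketch; the `FakeDriver` is an assumption that lets it run without a browser, where a real implementation would wrap your UI-automation framework's driver:

```python
# Minimal fake "driver" so the sketch runs without a browser; a real
# implementation would wrap your UI-automation framework's driver.
class FakeDriver:
    def __init__(self):
        self.fields, self.clicked = {}, []

    def type(self, selector: str, text: str):
        self.fields[selector] = text

    def click(self, selector: str):
        self.clicked.append(selector)

class LoginPage:
    """A building block: the one place that knows how to log in."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user: str, password: str):
        self.driver.type("#user", user)
        self.driver.type("#password", password)
        self.driver.click("#submit")

driver = FakeDriver()
LoginPage(driver).login("dana", "s3cret")
```

When the login screen changes, you update `LoginPage` once instead of fixing every test that logs in.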
Design before implementation; start small and always improve - I have said it before: test automation is a project like any other software project and should be treated like one. Technical debt can turn into a three-headed monster quicker than you think. Start small and produce value. Strive to stabilize and improve a good test suite before jumping to scale. If your selected framework does not have an out-of-the-box reporting solution, implement a good, scalable, and detailed one yourself. It will save you a lot of time and headaches in the future.
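"Start small" applies to the reporting solution too. A roll-your-own report can begin as one function that appends a structured line per result; the function name, fields, and file format here are assumptions, not any framework's API:

```python
import json
import time

def report(path: str, test_name: str, status: str, **details):
    """Append one structured result record as a JSON line.
    A minimal sketch of a home-grown, detailed report; scale it later
    (e.g. ship the lines to a dashboard) without changing the callers."""
    record = {"test": test_name, "status": status,
              "timestamp": time.time(), **details}
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```

One line per result is trivial to grep, parse, and aggregate, which is what "scalable and detailed" mostly means at the start.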
Strive to be sterile - A good practice is to create a dedicated environment for executing your test automation. This environment should be free of any interference that could affect the reliability and stability of your tests. Unexpected pop-ups, captchas, and security blockers are just the tip of the iceberg of what can be eliminated from your automation environment to allow better, more stable, and more robust executions. At the end of the day, your tests are not supposed to maneuver around obstacles. They are meant to save you time and provide you with a reliable picture of your system with a minimum of wasted effort.