Performance testing is an important part of our product development life cycle: we need to make sure that whatever software we are building performs at an optimal level and can handle the load we need it for. What we tend to forget is that our automation frameworks need to be built with the same performance in mind. While it’s unlikely that you will place high load or stress on your automation framework, the speed at which it executes is incredibly important, especially if it is to deliver the results you need in a reasonable amount of time.
More than ever in today’s DevOps-ready world, we want to write code, run it through an automated test suite and get results within a few hours so we know whether it’s good enough to push to production. The problem is that with thousands of tests of different types, particularly those that are UI-driven or end-to-end focused, getting any form of execution results back can take many hours, which limits your ability to respond quickly and get fixes or updates into production.
There are many things you can do, though, to ensure that your test automation framework executes efficiently without compromising on your quality risk. Unit tests are by nature quick to execute and small in scope; where companies struggle is with the bigger integration, component or end-to-end tests, which are far more onerous to execute. The suggestions below focus mostly on this form of automated testing, though they can be applied to unit tests just as easily.
Choose the appropriate tools and language to allow for rapid execution
While unit tests can be executed without too much difficulty, testing the software end to end may require tools that slow down your execution. Building your own framework means you don’t have to rely too heavily on your tools, but it’s unlikely you can eliminate the need for them completely. Ensure that you choose a tool that is not unwieldy and that executes your tests quickly. Creating your framework in a language without a heavy-handed compiler like Scala or .Net helps to ensure that your tests execute quickly, and if the tool is used purely for object interaction, it isn’t needed for processing and your framework should be able to execute what you require at high speed.
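As a rough illustration of keeping the tool purely for object interaction, here is a minimal sketch assuming Selenium WebDriver and a hypothetical `OrderPage`: the driver is only used to locate elements and read values, while all processing and assertions happen in plain Python.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class OrderPage:
    """Thin wrapper: the UI tool is only used to find elements and read
    values; no test logic or calculation lives in the tool layer."""

    def __init__(self, driver):
        self.driver = driver

    def line_totals(self):
        # Read the raw values once, then hand them back as plain data.
        cells = self.driver.find_elements(By.CSS_SELECTOR, ".order-line .total")
        return [float(cell.text.strip("$")) for cell in cells]

    def grand_total(self):
        return float(self.driver.find_element(By.ID, "grand-total").text.strip("$"))

def test_order_totals_add_up():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/orders/42")  # hypothetical URL
        page = OrderPage(driver)
        # All the calculation and decision making happens in plain Python,
        # not through the UI tool.
        assert sum(page.line_totals()) == page.grand_total()
    finally:
        driver.quit()
```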
Write fast-executing code instead of easy-to-write code
This is tricky to do but incredibly important. We often write our test frameworks with the simplicity of test scripting in mind rather than efficiency of execution. This is one of the reasons why a code-driven framework is effective: it removes a layer of abstraction in the processing. From a coding perspective, this means you should avoid complicated loops that require unnecessary processing and keep the number of decisions required in your test to a minimum. We also tend to confuse the number of lines of code with performance, when in reality, just because your code is written in as few lines as possible doesn’t mean it’s quicker when executed by a CPU. Choose the simplest form of decision making rather than the simplest code to write.
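To make the point concrete, here is a small, hypothetical comparison: both versions check that every expected record shows up in the results, but the first does a nested scan for each record while the second makes a single pass and uses a set lookup. The second is slightly more code to write, yet far fewer decisions for the CPU as the data grows.

```python
def missing_records_nested(expected, results):
    # Easy to write: for each expected record, scan the whole result list.
    # This is O(n * m) comparisons and gets slow as the data grows.
    return [record for record in expected if record not in results]

def missing_records_fast(expected, results):
    # A touch more code, but one pass to build a set and cheap lookups after that.
    seen = set(results)
    return [record for record in expected if record not in seen]

expected = [f"order-{i}" for i in range(5000)]
results = [f"order-{i}" for i in range(4990)]

assert missing_records_nested(expected, results) == missing_records_fast(expected, results)
```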
Simplify your tests
While each test should remain independent (more on this next week), that doesn’t mean you can’t test multiple requirements in one test, provided there is no immediate dependency or duplication along the way. Some tests, especially end-to-end tests, get weighed down because a certain element of an object is verified every time a function is called, and that function may be called by practically every test. Test something once; every other time that function is called, it should bypass the verification steps that slow it down. If something does fail, your error handling should take care of it.
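One way to do this, sketched below with hypothetical helper names, is to verify a shared step in detail only the first time it runs in a session; every later call trusts that earlier verification.

```python
_login_verified = False  # module-level flag shared across tests in the run

def login(app, user, password):
    """Hypothetical shared step: log in, but only verify the landing page
    in detail the first time it is exercised in this test run."""
    global _login_verified
    app.submit_login(user, password)

    if not _login_verified:
        # Expensive checks run exactly once per run.
        assert app.dashboard_banner_visible()
        assert app.current_user_label() == user
        _login_verified = True
    # Subsequent callers skip the verification; if login ever breaks,
    # the tests that depend on it will still fail through normal error handling.
```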
Error Handling
A lot of the time wasted in automated test execution is the result of errors in your application or tests. Your framework should be able to handle any unexpected response or system delay by failing or bypassing the affected tests. Along with this, if a part of the code fails and many tests further down the line call on it, your framework should be intelligent enough to ensure all subsequent tests calling that function are skipped. This is another reason why your test framework needs to be modular.
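In pytest, for example, one way to sketch this is a session-scoped fixture around a shared, expensive setup step: pytest reuses the fixture’s outcome for the rest of the session, so if setup fails once, every test depending on it is skipped rather than each one failing slowly on its own. The `connect_to_backend` helper below is purely illustrative.

```python
import pytest

def connect_to_backend():
    """Hypothetical shared setup; stands in for logging in, seeding data, etc."""
    raise ConnectionError("backend not reachable in this example")

@pytest.fixture(scope="session")
def backend():
    # A session fixture runs once and its outcome is reused, so a single
    # failure here skips everything that depends on it.
    try:
        connection = connect_to_backend()
    except Exception as exc:
        pytest.skip(f"backend unavailable, skipping dependent tests: {exc}")
    yield connection

def test_create_order(backend):
    ...  # would run only if the backend fixture succeeded

def test_cancel_order(backend):
    ...  # skipped together with everything else if setup failed
```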
Remove any latency
At times we run into problems with automated testing because of unnecessary latency while waiting for a system response, especially if a third-party system is involved. This can be eliminated by using stub data where possible to simulate the desired responses rather than waiting on certain events to happen. It might sound like a bit of a cheat, but well-defined stub data is as accurate as production, and if you are unable to work with stub data at any level of reliability, it’s a sign that the APIs are perhaps not defined as well as they should be.
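As a minimal sketch, assuming the system under test talks to a third-party payment provider through a small client interface, a stub can return the agreed response shape immediately instead of waiting on the real service. All class and function names here are hypothetical.

```python
import time

class LivePaymentClient:
    """Stand-in for a real third-party client that introduces latency."""
    def authorise(self, amount):
        time.sleep(5)  # network round trip, queues, retries...
        return {"status": "approved", "amount": amount}

class StubPaymentClient:
    """Returns the agreed response shape immediately; no waiting on the
    third party. The shape should mirror the real API contract."""
    def authorise(self, amount):
        return {"status": "approved", "amount": amount}

def checkout(payment_client, amount):
    # Hypothetical piece of the system under test.
    response = payment_client.authorise(amount)
    return response["status"] == "approved"

def test_checkout_with_stubbed_payments():
    assert checkout(StubPaymentClient(), amount=100)
```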
The same applies to unnecessary logging, which is great for debugging and tracking what your tests do, but is perhaps not suited to your daily integration environment. While logging is not taxing on processing, it still adds to it, and copious amounts of logging add up. Keep logs for debugging, but strip them out, unless absolutely necessary, when you release your framework.
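A simple way to keep the debug logs without paying for them on every run is to drive the log level from the environment, here assuming Python’s standard logging module and a hypothetical `TEST_LOG_LEVEL` variable: CI runs quietly while local debugging can turn everything back on.

```python
import logging
import os

# CI runs at WARNING; set TEST_LOG_LEVEL=DEBUG locally when you need the detail.
logging.basicConfig(level=os.environ.get("TEST_LOG_LEVEL", "WARNING"))
log = logging.getLogger("automation")

log.debug("full request/response payloads go here")    # dropped in CI
log.warning("something worth knowing about on every run")
```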
Remove unnecessary tests
There can be a lot of duplication in testing: from unit tests and end-to-end tests covering the same thing, to legacy tests that, while still relevant, might be exercising barely used functionality. Remove what is not necessary. Using test design techniques like CTD (Combinatorial Test Design) will also help you to identify the smallest number of tests that achieve a high level of coverage. Rather let your unit tests worry about 100% code coverage, and design your bigger end-to-end tests using this method to reduce wastage.
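To give a feel for the numbers, here is a rough, self-contained sketch of pairwise selection (one common form of CTD): instead of running every combination of browser, OS and user role, it greedily picks cases until every pair of parameter values is covered at least once. Dedicated tools do this far better; the point is only the reduction in test count.

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedy sketch: keep picking the case that covers the most not-yet-seen
    value pairs until every pair is covered at least once."""
    keys = list(params)
    all_cases = list(product(*params.values()))

    def pairs(case):
        return {(i, case[i], j, case[j]) for i, j in combinations(range(len(case)), 2)}

    needed = set().union(*(pairs(case) for case in all_cases))
    chosen = []
    while needed:
        best = max(all_cases, key=lambda case: len(pairs(case) & needed))
        chosen.append(dict(zip(keys, best)))
        needed -= pairs(best)
    return chosen

params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "macos", "linux"],
    "role": ["admin", "guest"],
}

print(len(list(product(*params.values()))))  # 18 exhaustive combinations
print(len(pairwise_cases(params)))           # noticeably fewer, same pair coverage
```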
Along with this is a process many companies forget about – test maintenance. Just because a test has been added to your automated regression suite doesn’t mean it should stay there. Regularly review your tests and update, replace or delete them where necessary to remove anything that no longer earns its place.
Performance test your automation
And the last, but most important, thing is to measure the performance of your automation and find where its bottlenecks lie. You do not need performance tests written specifically for this, though they will certainly help. Most integration tools can track how long each test takes to execute, and teams can use this to identify specific bottlenecks and address them where possible. Do this on a regular basis and make a concerted effort to prevent long-running modules or tests from entering your automation build. It might seem like overkill, but again, this is all about building a scalable test framework: as your applications grow, and with them the number of tests needed to ensure their quality, performance becomes key.
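If your suite runs on pytest, for instance, the built-in `--durations` option already reports the slowest tests, and a small hook in conftest.py can flag anything creeping over a time budget. This is only a sketch; the 30-second threshold is an arbitrary assumption for illustration.

```python
# conftest.py -- run with `pytest --durations=10` to also list the slowest tests
SLOW_TEST_BUDGET_SECONDS = 30  # arbitrary threshold for this sketch

def pytest_runtest_logreport(report):
    # Flag any test whose call phase exceeds the budget, so slow tests are
    # spotted before they pile up in the automation build.
    if report.when == "call" and report.duration > SLOW_TEST_BUDGET_SECONDS:
        print(f"\n[slow test] {report.nodeid} took {report.duration:.1f}s")
```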
Again, I guess it’s worth saying that you need to treat your automation framework just as you would any of your other products. It is essentially a product that is used to test your other products and the better you build it, the better it will meet the needs of your company.