When architecting a system, we often consider how it will scale with the number of users interacting with it at any point in time. Indeed, it was this very design philosophy that led to the rise of cloud computing. However, when companies build their test frameworks, they don't always apply the same principle, one which would allow them to scale their test automation to meet the demands of their delivery.
Too often, when companies design their test framework, they look at the existing needs of their organization and develop from there. These designs typically account for the applications, technologies and tools currently in use, as well as the number of tests currently known. The problem is that tools change, applications under test change, and the size and scale of your testing is likely to grow and diversify as your products do. It's important to keep this in mind when designing your automation framework: while you are building a solution that helps you deliver high-quality software quickly today, you also want it to meet that need for the foreseeable future.
So, how do we do this when we don't always know what technologies and applications lie in our future, and how do we ensure that as our testing efforts grow, our delivery is not hampered? Here are a few things to consider:
Don’t build around your tools
Instead of building a framework that maximizes your current testing tools, build one that is completely tool agnostic and allows you to change tools easily should you ever need to. This can be done by writing a centralized framework that dictates how automated tests look and work, with only one or two functions responsible for invoking the tool and translating your tests into an output it can use. This may require more effort, as you are unable to leverage that particular tool's core feature set and will need to develop around those gaps. However, the long-term benefit of not needing to rewrite aspects of your automation every time you change tools should far outweigh the added effort invested in building a tool-agnostic framework.
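To make that concrete, here is a minimal sketch in Python of what such a tool-agnostic layer might look like. The interface and method names are my own illustrative choices, and the adapter assumes Selenium as the current tool; swapping to a different tool would mean writing one new adapter, while every test keeps coding against the same interface.

```python
from abc import ABC, abstractmethod


class UIDriver(ABC):
    """Tool-agnostic contract that all tests code against."""

    @abstractmethod
    def click(self, locator: str) -> None: ...

    @abstractmethod
    def type_text(self, locator: str, text: str) -> None: ...


class SeleniumAdapter(UIDriver):
    """The only place that knows about the underlying tool (Selenium here)."""

    def __init__(self, webdriver):
        self._driver = webdriver

    def click(self, locator: str) -> None:
        self._driver.find_element("css selector", locator).click()

    def type_text(self, locator: str, text: str) -> None:
        self._driver.find_element("css selector", locator).send_keys(text)
```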
Keep your framework modular
I've mentioned this before in previous articles, but it's important that any framework be designed around robustness and modularity. Functions should be as small as possible so that when aspects of the application change, or features and technologies evolve, only the functions affected by the change need updating. To achieve this, the core functions of a framework should be fully decoupled from the application itself, and any function that wraps an individual object of the application under test should focus on only that one object and its features.
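As a rough illustration of that wrapping rule, here is one hypothetical wrapper per UI object, building on the UIDriver interface sketched earlier; the class names and locators are invented for the example. If the login button's locator or behavior changes, only LoginButton needs to be edited.

```python
class UsernameField:
    """One object, one wrapper: only a change to this field touches this class."""

    LOCATOR = "#username"  # hypothetical locator

    def __init__(self, driver: "UIDriver"):
        self._driver = driver

    def fill(self, value: str) -> None:
        self._driver.type_text(self.LOCATOR, value)


class LoginButton:
    """Wraps exactly one UI object and exposes only its features."""

    LOCATOR = "#login-submit"  # hypothetical locator

    def __init__(self, driver: "UIDriver"):
        self._driver = driver

    def press(self) -> None:
        self._driver.click(self.LOCATOR)
```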
Build for performance
A big aspect that doesn't always factor into test automation design is performance; not so much in test development as in execution. It's often said that automated tests should be repeatable, robust and easy to maintain, but they should also be fast to execute. We have the technology today to build frameworks that execute quickly. When designing yours, it's important to regularly integrate performance testing into the process, write your code efficiently, and choose a language and toolchain that run quickly. For instance, Scala is a popular and growing development language that can be used to write effective test cases quickly, but its compilation is slow, meaning it doesn't scale well to a larger number of tests.
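One practical way to bake execution speed into the framework itself is to give every test a time budget. Below is a minimal sketch, assuming pytest; the fixture name and the two-second budget are illustrative choices, not a standard. Note that a breach surfaces during teardown, so pytest reports it as an error on the offending test rather than an ordinary failure.

```python
import time

import pytest

# Hypothetical per-test time budget in seconds; tune this to your own suite.
TIME_BUDGET = 2.0


@pytest.fixture(autouse=True)
def enforce_time_budget(request):
    """Flag slow tests so execution-time regressions surface early."""
    start = time.perf_counter()
    yield  # the test body runs here
    elapsed = time.perf_counter() - start
    if elapsed > TIME_BUDGET:
        pytest.fail(f"{request.node.name} took {elapsed:.2f}s, budget is {TIME_BUDGET}s")
```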
Structuring tests so that they are not reliant on asynchronous communication or third-party systems, and making use of reliably stubbed data sets and API interactions, will also help to reduce performance bottlenecks in your framework, though these stubs can add a further quality risk if not correctly designed.
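Here is a small, self-contained sketch of that stubbing idea; the payment gateway, the checkout function and the canned response are all hypothetical. The test never touches the network, so it is both fast and repeatable.

```python
class StubPaymentGateway:
    """Deterministic stand-in for a third-party payment API."""

    def charge(self, amount: int, card: str) -> dict:
        # Always returns the same canned response, so the test is
        # repeatable and never waits on a real external system.
        return {"status": "approved", "transaction_id": "T-123"}


def checkout(gateway, amount: int, card: str) -> str:
    """Hypothetical code under test; it depends only on the gateway interface."""
    response = gateway.charge(amount, card)
    return response["status"]


def test_checkout_with_stubbed_gateway():
    assert checkout(StubPaymentGateway(), 100, "4111-1111-1111-1111") == "approved"
```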
Another aspect that can drastically improve performance is the design of your framework's error handling. Test failures should be easy to identify and should allow a run to move on quickly. Most frameworks do this; where people go wrong is in not recording particular object failures in a variable that can be used to skip tests reliant on the same failing object, or to let a separate test run a setup script for data an earlier failed test could not create. Again, this takes time and can make scripting tests considerably harder, but it will save you an enormous amount of time during execution.
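A minimal sketch of such a failure registry, again assuming pytest, might look like the following; the registry, helper names and the simulated locator failure are all invented for the example. The second test skips in milliseconds instead of repeating a slow failure against the same broken object.

```python
import pytest

# Registry of UI objects that have already failed during this execution run.
FAILED_OBJECTS = set()


def require_object(name):
    """Skip the current test if an earlier test already saw this object fail."""
    if name in FAILED_OBJECTS:
        pytest.skip(f"'{name}' failed earlier in the run; skipping dependent test")


def locate(name):
    """Hypothetical lookup that records the object on failure before re-raising."""
    try:
        raise LookupError(f"could not find '{name}'")  # simulated locator failure
    except LookupError:
        FAILED_OBJECTS.add(name)
        raise


def test_login_button_exists():
    require_object("login_button")
    locate("login_button")  # fails and records the object


def test_login_flow():
    require_object("login_button")  # skips quickly instead of failing again
```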
Test Independence
It's been said many times that tests should remain independent of each other to ensure they are robust and not prone to failure just because something before them could not execute. Another benefit of test independence, one that aids scalability, is that tests can execute concurrently on separate build instances against the same code. This is especially useful for integration testing, where the number of test cases is often massive and their execution time is not always easy to reduce. To keep testing your code in a short space of time, your code can be built onto multiple VMs which divide the tests up for execution. This lets you deliver quality code in a timely fashion that doesn't hamper your drive towards continuous delivery.
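As a rough sketch of how independent tests can be divided across VMs, here is a deterministic sharding helper; it assumes each VM is given its index and the total shard count through environment variables, whose names here are hypothetical.

```python
import os
import zlib

# Hypothetical environment variables set per VM by your CI system.
SHARD_INDEX = int(os.environ.get("SHARD_INDEX", "0"))
SHARD_TOTAL = int(os.environ.get("SHARD_TOTAL", "1"))


def belongs_to_this_shard(test_id: str) -> bool:
    """Deterministically assign each test to exactly one VM by hashing its id."""
    return zlib.crc32(test_id.encode()) % SHARD_TOTAL == SHARD_INDEX


def select_tests(all_test_ids):
    """Filter the full suite down to this VM's share of the tests."""
    return [t for t in all_test_ids if belongs_to_this_shard(t)]


# Example: with SHARD_TOTAL=3, each VM runs roughly a third of these.
print(select_tests(["test_login", "test_checkout", "test_search", "test_profile"]))
```

For parallelism within a single machine, off-the-shelf options such as the pytest-xdist plugin (for example, pytest -n auto) can spread independent tests across CPU cores without any custom sharding code.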
Over the coming weeks, I will try to delve a little further into the different technical aspects of these approaches and what to look out for when building a framework to meet these requirements. There are important aspects of all test frameworks that I've already mentioned, like robustness and maintainability. I won't cover these separately, but will touch on them in the different areas, as they naturally form part of any well-designed testing solution.