Craig Risi

Effective Test Automation Approaches for Modern CI/CD Pipelines



I'm taking a break today from my tool evaluations to post an article I previously wrote for InfoQ. It feels like a relevant topic at the moment, and given the range of tools we've been looking at, it's important to understand how to implement your tests effectively in a CI/CD pipeline.


The rise of CI/CD has had a massive impact on the software testing world. With developers requiring pipelines to provide quick feedback on whether their software update has been successful, many testing teams have had to revisit their existing test automation approaches and find ways of speeding up their delivery without compromising on quality. These two goals often contradict each other in the testing world, as time is the biggest enemy in a tester’s quest to be as thorough as possible in achieving their desired testing coverage.


So, how do teams deal with this significant change and deliver high-quality automated tests while meeting the expectation that the CI pipeline returns feedback quickly? Well, there are many different ways of looking at this, but what is important to understand is that the solutions are less technical than cultural - it is the approach to testing that needs to shift, rather than the testing frameworks needing big technical enhancements.


Shifting Left


Perhaps the most obvious thing to do is to shift left. The idea of "shifting left" (moving testing earlier in the development cycle, primarily to the design and unit testing level) is already well established in the industry and becoming increasingly commonplace. Having a strong focus on unit tests is a good way of testing code quickly and providing fast feedback. After all, unit tests execute in a fraction of the time (they run as part of the build and don’t require any further integration with the rest of the system) and can provide good testing coverage when done right.


I’ve seen many testers shy away from the notion of unit testing because it means writing tests for a very small component of the code, and there is a danger that key things will be missed. This is often just a fear born of a lack of visibility into the process, or a lack of understanding of unit tests, rather than a failure of unit tests themselves. A strong base of unit tests works because they execute quickly as the code builds in the CI pipeline, so it makes sense to have as many as possible and to cover as many scenarios as possible.
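
To make this concrete, here is a minimal sketch of the kind of fast, dependency-free unit test that fits this stage (I'm using Python and pytest here; the calculate_discount function and its rules are just hypothetical examples):

```python
import pytest

# Hypothetical function under test: applies a percentage discount to a price.
def calculate_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_discount():
    assert calculate_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)
```

Tests like these run in milliseconds with no environment or integration required, which is exactly why they belong in the build stage of the pipeline.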


The biggest problem is that many teams don’t always know how to get it right. Firstly, unit testing shouldn’t be treated as a checkbox activity, but rather approached with the proper analysis and commitment to test design that testers would ordinarily apply. This means that rather than just leaving unit testing in the hands of the developers, you should get testers involved in the process. Even if a tester is not strong in coding, they can still assist in identifying which parameters to test and the right places to assert, so that the integrated functionality tested later delivers the right results. Excluding your testing experts from the unit testing approach means unit tests could miss some key validation areas. This is often why you might hear many testers give unit tests a bad rap. It’s not because unit testing is ineffectual, but simply that the tests often didn’t cover the right scenarios.
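
As a sketch of what that collaboration can produce, a tester can draft the scenario table and expected results while a developer encodes it, for example as a parametrized pytest test (the username rules here are hypothetical):

```python
import pytest

# Hypothetical function under test: validates a username against agreed rules.
def is_valid_username(name: str) -> bool:
    return name.isalnum() and 3 <= len(name) <= 20

# Scenario table drafted by the tester, encoded by the developer.
@pytest.mark.parametrize("name,expected", [
    ("abc", True),         # minimum length
    ("ab", False),         # below minimum
    ("a" * 20, True),      # maximum length
    ("a" * 21, False),     # above maximum
    ("user name", False),  # whitespace not allowed
    ("", False),           # empty input
])
def test_username_scenarios(name, expected):
    assert is_valid_username(name) == expected
```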


A second benefit of involving testers early is adding visibility to the unit testing effort. Teams probably waste a considerable amount of time (and therefore money) duplicating effort, with testers simply re-testing things that were already covered by automated tests. That’s not to say independent validation shouldn’t occur, but it shouldn’t be excessive if scenarios have already been covered. Instead, the tester can focus on providing better exploratory testing, as well as focusing their own automation efforts on integration testing the edge cases they might never have otherwise covered.


It’s all about design and preparation


Doing this effectively, though, requires a fair amount of deliberate effort and design. It’s not just about making an effort to focus more on the unit tests, or perhaps bringing in a person with strong test analysis skills to ensure test scenarios are suitably developed. It also requires user stories and requirements to be more specific to allow for appropriate testing. Often user stories end up high-level, focusing on detail only at a user level and not a technical level. How individual functions should behave and interact with their corresponding dependencies needs to be clear for good unit testing to take place.


Much of the criticism leveled at unit testing by the testing community concerns the poor integration coverage it offers. Just because a feature works in isolation doesn’t mean it will work in conjunction with its dependencies, which is often why testers find so many defects early in their testing effort. This doesn’t need to be the case: more detailed specifications can lead to more accurate mocking, allowing the unit tests to behave realistically and provide better results. There will always be "mocked" functionality that is not accurately known or designed, but with enough early thought, this amount of rework is greatly reduced.
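
As a rough sketch of what that looks like in practice, if the specification states that a (hypothetical) payment gateway returns a status and a transaction id, the mock can honor that documented contract instead of being guessed at (using Python's unittest.mock):

```python
from unittest.mock import Mock

# Hypothetical code under test: charges a customer via an injected gateway.
def charge(gateway, customer_id: str, amount: int) -> str:
    response = gateway.charge(customer_id, amount)
    if response["status"] != "approved":
        raise RuntimeError("payment declined")
    return response["transaction_id"]

def test_charge_follows_gateway_contract():
    # The mock mirrors the response shape from the spec, so the unit
    # test behaves the way the real integration eventually will.
    gateway = Mock()
    gateway.charge.return_value = {"status": "approved", "transaction_id": "tx-123"}

    assert charge(gateway, "cust-1", 500) == "tx-123"
    gateway.charge.assert_called_once_with("cust-1", 500)
```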


Design is not just about unit tests though. One of the biggest barriers to test automation executing directly in the pipeline is that the team that deals with the larger integrated system only starts much of its testing and automation effort once the code has been deployed into a bigger environment. This wastes critical time in the development process, as certain issues will only be discovered later. With enough detail in the design, testers can at least start writing the majority of their automated tests while the developers are still coding on their side.


This doesn’t mean that manual verification, exploratory testing, and actually using the software shouldn’t take place. Those are critical parts of any testing process and important steps in ensuring software behaves as desired; they are also effective at finding faults with the proposed design. However, automating the integration tests allows the process to be streamlined. These tests can then be included in the initial pipelines, improving the overall quality of the delivered product by providing quicker feedback to the development team on failures without the testing team even needing to get involved.


So what actually needs to be tested then?


I’ve spoken a lot about specific approaches to design and shifting left to achieve the best testing results. But you still can’t go ahead and automate everything you test, because it's simply not feasible and adds too much to the execution time of the CI/CD pipelines. So knowing which scenarios to cover at the unit level and which at the integration level is crucial, while avoiding unnecessary duplication of the testing effort.

Before I dive into these different tests, it’s worth noting that while the aim is to remove duplication, there is likely to always be a certain level of duplication that will be required across tests to achieve the right level of coverage. You want to try and reduce it as much as possible, but erring on the side of duplication is safer if you can’t figure out a better way to achieve the test coverage you need.


Areas to be unit tested


When it comes to building your pipeline, your unit tests and scans should typically fall into the CI portion of your pipeline, as they can all be evaluated as the code is being built.


Entry and exit points: All code receives input and then provides an output. Essentially, you want to unit test every input a piece of code can receive and ensure it sends out the correct output. By catching everything that flows through each piece of code in the system, you greatly reduce the number of failures that are likely to occur when the pieces are integrated as a whole.
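
A minimal sketch of what this looks like (the parse_quantity function is hypothetical): the tests pin down what the code accepts on the way in and what it must send out:

```python
import pytest

# Hypothetical function under test: parses raw user input into a quantity.
def parse_quantity(raw: str) -> int:
    value = int(raw.strip())  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

def test_valid_input_produces_correct_output():
    assert parse_quantity(" 7 ") == 7

@pytest.mark.parametrize("bad_input", ["abc", "-1", ""])
def test_invalid_input_is_rejected(bad_input):
    with pytest.raises(ValueError):
        parse_quantity(bad_input)
```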


Isolated functionality: While most code will operate on an integrated level, there are many functions that will handle all computation internally. These can be unit-tested exclusively and teams should aim to hit 100% unit test coverage on these pieces of code. I have mostly come across isolated functions when working in microservice architectures where authentication or calculator functions have no dependencies. This means that they can be unit tested with no need for additional integration.
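
For example, a pure calculation like the hypothetical one below has no dependencies at all, so every branch can be covered at the unit level and 100% coverage is a realistic target:

```python
import pytest

# Hypothetical isolated function: no I/O or external dependencies.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0                      # flat rate up to 1 kg
    return 5.0 + (weight_kg - 1) * 2.0  # 2.0 per additional kg

@pytest.mark.parametrize("weight,cost", [(0.5, 5.0), (1.0, 5.0), (3.0, 9.0)])
def test_all_pricing_branches(weight, cost):
    assert shipping_cost(weight) == cost

def test_invalid_weight_is_rejected():
    with pytest.raises(ValueError):
        shipping_cost(0)
```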


Boundary value validations: Code behaves the same way when handling valid or invalid arguments regardless of whether they are entered from a UI, an integrated API, or directly through the code. There is no need for testers to work through exhaustive scenarios at those levels when much of this can be covered in unit tests.
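
A sketch of what boundary coverage looks like in a unit test (the age rules here are hypothetical) - each edge of the valid range is checked once here rather than repeatedly through the UI:

```python
import pytest

# Hypothetical validator: the same rule applies whether the value
# arrives from a UI, an API, or a direct call, so test it once here.
def validate_age(age: int) -> bool:
    return 18 <= age <= 120

@pytest.mark.parametrize("age,expected", [
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (120, True),   # upper boundary
    (121, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected
```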


Clear data permutations: When the data inputs and outputs are clear, it makes that code or component an ideal candidate for a unit test. If you’re dealing with complex data permutations, then it is best to tackle these at an integration level. The reason for this is that complex data is often difficult to mock, slow to process, and will slow down your coding pipeline.
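
When the permutations are this explicit, covering all of them is cheap at the unit level, as in this hypothetical sketch:

```python
import pytest

# Hypothetical function with small, clearly defined data permutations.
def format_name(first: str, last: str, formal: bool) -> str:
    return f"{last}, {first}" if formal else f"{first} {last}"

@pytest.mark.parametrize("first,last,formal,expected", [
    ("Ada", "Lovelace", False, "Ada Lovelace"),
    ("Ada", "Lovelace", True, "Lovelace, Ada"),
    ("Alan", "Turing", False, "Alan Turing"),
    ("Alan", "Turing", True, "Turing, Alan"),
])
def test_every_permutation(first, last, formal, expected):
    assert format_name(first, last, formal) == expected
```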


Security and performance: While the majority of load, performance, and security testing happens at an integration level, these concerns can also be tested at a unit level. Each piece of code should be able to handle invalid authentication, redirection, or SQL/code injection attempts, and process data efficiently; unit tests can be created to validate these behaviors. After all, a system’s security and performance are only as effective as its weakest part, so ensuring there are no weak parts is a good place to start.
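
As one example of a security-focused unit test, the sketch below (with a hypothetical query builder) asserts that user input always travels as a bound parameter and never leaks into the SQL text itself:

```python
import pytest

# Hypothetical query builder: must always parameterize user input.
def build_user_query(username: str) -> tuple:
    return "SELECT * FROM users WHERE name = ?", (username,)

@pytest.mark.parametrize("payload", [
    "alice",
    "'; DROP TABLE users; --",
    "1 OR 1=1",
])
def test_input_is_never_inlined_into_sql(payload):
    query, params = build_user_query(payload)
    assert payload not in query  # payload must not appear in the SQL text
    assert params == (payload,)  # it is passed as a bound parameter instead
```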


Areas for integration automation


These are tests that will typically run after your code is deployed into a bigger environment - though it doesn’t have to be a permanent environment, and something utilizing containers works equally well. I’ve seen many teams still try to test everything in this phase, and this can lead to a very long portion of your pipeline execution - not great if you’re looking to deploy to production regularly each day.


So, the important thing is to test only those areas that your unit tests won’t cover satisfactorily, while also focusing on functionality and performance in your overall test design. Some design principles that I give later in this article will help with this.


Positive integration scenarios: We still need to automate integration points to ensure they work correctly. However, the trick is not to focus too much on exhaustive error validation, as errors are often triggered by specific outputs that can be unit tested. Rather, focus on ensuring that successful integration takes place.
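
A rough sketch of such a happy-path integration test (the staging URL and orders endpoint are hypothetical, using Python's requests library):

```python
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical test environment

def test_order_happy_path():
    # Positive scenario only: create an order and read it back.
    # Exhaustive error permutations are left to the unit tests.
    created = requests.post(
        f"{BASE_URL}/orders", json={"sku": "ABC-1", "qty": 2}, timeout=10
    )
    assert created.status_code == 201

    order_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["sku"] == "ABC-1"
```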


Test backend over frontend: Where possible, focus your automation effort on backend components rather than frontend components. While the user might interact with the front end more often, it is typically not where most of the functional complexity lies, and backend testing is a lot faster and therefore better for your test automation execution.
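
For instance, a business rule like bulk discounting can be asserted directly at the API level in milliseconds, rather than by driving a browser through the whole checkout UI (the endpoint and rule below are hypothetical):

```python
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical test environment

def test_bulk_discount_rule_via_backend():
    # The pricing logic lives in the backend, so assert it there directly
    # instead of inferring it from rendered UI elements.
    response = requests.get(
        f"{BASE_URL}/pricing", params={"sku": "ABC-1", "qty": 10}, timeout=10
    )
    assert response.status_code == 200
    assert response.json()["bulk_discount_applied"] is True
```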


Security: A common mistake is for teams to rely on security scans for the majority of their security testing and then not automate the other critical penetration tests performed on the software. While some penetration tests can’t be executed effectively in a pipeline, many can, and these should be automated and run regularly given their importance - especially when dealing with any functionality that covers access, payment, or data privacy. These are areas that can’t be compromised and should be covered.
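
A simple sketch of the kind of access check that can run on every pipeline execution (the endpoint and token are hypothetical):

```python
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical test environment

def test_protected_endpoint_requires_auth():
    # No Authorization header at all: the request must be rejected.
    response = requests.get(f"{BASE_URL}/accounts/me", timeout=10)
    assert response.status_code == 401

def test_expired_token_is_rejected():
    headers = {"Authorization": "Bearer expired-token"}  # hypothetical stale token
    response = requests.get(f"{BASE_URL}/accounts/me", headers=headers, timeout=10)
    assert response.status_code == 401
```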


Are there automated tests that shouldn’t be included in the CI/CD pipelines?


When it comes to automation, it’s not just about understanding what to automate, but also what not to automate - and even when tests are automated, they shouldn’t always land in your CI/CD pipelines. While the goal is always to shift left as much as possible and avoid these areas, for some architectures it’s not always possible, and some additional level of validation may be required to achieve the needed test coverage.


This doesn’t mean that these tests shouldn’t be automated or placed in pipelines, just that they should be separated from your CI/CD processes and executed on a daily basis as part of a scheduled run rather than as part of your code delivery.


End-to-end tests with high data requirements: Anything that requires complex data scenarios should be reserved for execution in a proper test environment outside of a pipeline. While these tests can be automated, they are often too complex or specific for regular execution in a pipeline, and they take a long time to execute and validate, making them ill-suited to pipelines.


Visual regression: Outside of functional testing, it is important to regularly perform visual regression testing against any site UI to ensure it looks consistent across a variety of devices, browsers, and resolutions. This is an important aspect of testing that often gets overlooked. However, as it doesn’t deal with actual functional behavior, it is often best executed outside of your core CI/CD pipelines, though it remains a requirement before major releases or UI updates.
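
At its simplest, a visual regression check compares a current screenshot against an approved baseline. Here's a minimal sketch using the Pillow imaging library (it assumes the screenshots have already been captured by whatever tooling you use):

```python
from PIL import Image, ImageChops

def images_match(baseline_path: str, current_path: str, tolerance: int = 0) -> bool:
    """Return True if the current screenshot matches the approved baseline."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    diff = ImageChops.difference(baseline, current)
    if diff.getbbox() is None:  # None means the images are pixel-identical
        return True
    # Otherwise allow small per-channel differences up to the tolerance.
    return max(channel_max for _, channel_max in diff.getextrema()) <= tolerance
```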


Mutation testing: Mutation testing is a fantastic way of checking the coverage of your unit testing efforts: it adjusts different decisions in your code (creating "mutants") and reveals which changes your tests fail to catch. However, the process is quite lengthy and is best done as part of a review process rather than forming part of your pipelines.
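
Conceptually, a mutation testing tool flips small decisions in your code and checks whether any test fails. This hypothetical sketch shows the idea by hand - a boundary test that would "kill" a mutant where >= was changed to >:

```python
def can_vote(age: int) -> bool:
    return age >= 18  # original decision

def can_vote_mutant(age: int) -> bool:
    return age > 18   # the mutation a tool might generate

def test_voting_age_boundary():
    # Passes against the original, but would fail if the mutant replaced it,
    # proving the suite genuinely exercises that decision point.
    assert can_vote(18) is True
```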


Load and stress testing: While it is important to test the performance of different parts of code, you don’t want to put a system under any form of load or stress in a pipeline. To best do this testing, you need a dedicated environment and specific conditions that will stress the limits of your application under test. Not the sort of thing you want to do as part of your pipelines.


Conclusion


A good DevOps testing strategy requires a solid base of unit tests to provide most of the coverage, with mocking to help drive the rest of the automation effort, leaving only a few end-to-end automated tests to ensure everything works together - and allowing your team to be confident that the pipeline tests will deliver on their quality needs.
