Craig Risi

Predicting the Unpredictable


One of the tenets of good software quality is predictability. You can’t test for what you don’t know, so the idea is to either predict customer needs as best you can or design your product in a way that cuts down on unexpected behaviour. This applies across all facets of quality: functional, usability, performance, security and load. It also increases in difficulty as the scope of your product increases.


We often try to mitigate this as much as possible by defining our requirements as clearly as we can, and while this remains the most cost-effective step in test optimisation, it still doesn’t enable us to predict everything. This becomes especially true for things that are likely to change, like operating systems, browser versions or security updates (even in the space of a week or two, this can be a challenge). In these areas, we need to keep elements of agility and flexibility in our development processes, and it becomes more about ensuring we can deliver updates and changes as quickly as possible rather than feeling constrained by the design of the solution.


This makes the adaptability and scalability of our test effort incredibly important, because it’s the testing effort – the ability to run predictable, stable regression against ever-changing software and customer usage patterns – that teams struggle with the most. So, how do you ensure that your test automation framework remains modular and robust enough to avoid falling victim to the constant unknowns of change?


Well, it starts with knowing the variables of our design. As much as we can’t test what we don’t know, if we don’t know what we don’t know – we are in bigger trouble. You solve this through a detailed functional matrix that maps your different code and functional UI modules against client behaviour and configuration, identifying the areas that are covered and those that are not. It’s a cumbersome exercise, but if you’re serious about quality, it’s worth investing the effort. Once the initial work is done, the matrix becomes easier to maintain.
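
As a rough sketch of what such a matrix can look like in code, here is a minimal example; the module and configuration names are purely hypothetical:

# A minimal sketch of a functional matrix: product modules mapped
# against client configurations. All names are hypothetical examples.
modules = ["login", "checkout", "reporting"]
configurations = ["Windows/Chrome", "macOS/Safari", "Android app"]

# Record which module/configuration pairs currently have test coverage.
covered = {
    ("login", "Windows/Chrome"),
    ("login", "macOS/Safari"),
    ("checkout", "Windows/Chrome"),
}

# Surface the gaps: the uncovered pairs are your known unknowns.
for module in modules:
    for config in configurations:
        if (module, config) not in covered:
            print(f"GAP: {module} is not covered on {config}")

Even a simple structure like this makes the uncovered combinations visible, which is the whole point of the exercise.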


There will still be unknowns: the future changes we can’t predict, and customers behaving in unusual ways. However, one thing you can do is look to the past.

Unless your product is completely brand new with an unknown client base, there is a good chance you already have a detailed repository of customer defects and issues. With easily searchable root causes or keywords in your issue management system, it should be relatively easy to map out the issue and defect trends customers have logged in the past and ensure you cater for those scenarios in your testing effort. As much as we think our customers are unpredictable, the truth is we are just not learning our customers’ behaviours. Customers repeat mistakes more often than you can imagine, and ensuring your software handles this is paramount to removing areas of uncertainty in your testing.
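
As a sketch of what mining that repository might look like, assuming your issue management system can export records with a tagged root cause (the field names and data below are hypothetical):

from collections import Counter

# Hypothetical export from an issue management system, where each
# record was tagged with a root-cause keyword during triage.
issues = [
    {"id": 101, "root_cause": "timezone-handling"},
    {"id": 102, "root_cause": "invalid-input"},
    {"id": 103, "root_cause": "timezone-handling"},
    {"id": 104, "root_cause": "session-expiry"},
    {"id": 105, "root_cause": "timezone-handling"},
]

# Count recurring root causes to find the trends worth covering
# explicitly in the regression pack.
trends = Counter(issue["root_cause"] for issue in issues)
for cause, count in trends.most_common():
    print(f"{cause}: {count} past issues")

The scenarios at the top of that list are exactly the “unpredictable” customer behaviours that are, in fact, predictable.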


The last part of becoming more predictable lies in the design of your framework itself. Any truly robust automated framework should be architected around the following design principles:


- Modularity: Every part of your automated test suite that interacts with your product, the OS or any form of UI needs to be broken down into small, modular components, so that when you are forced to update your system, your automation still executes. Similarly, there is no point running full regression on your entire product for every change – it often takes a long time – so you should easily be able to execute focused regression on the changed components while smoke testing the areas that haven’t changed (see the first sketch after this list).


- Repeatable: Any test should behave exactly the same way each time it executes. If you’re getting varying responses when your test executes, it is a sign that either something is significantly wrong with the underlying architecture of your product, or you haven’t catered for a wide enough range of variables in your setup or configuration.


- Framework/test and toolset independence: Just as software keeps changing, so do your test tools and your test framework itself. Your framework should never be tied to any tool that changes over time or could become obsolete. While you may need these tools to automate with, your scripts should operate independently, and any interaction with a tool should be reduced to one or two modules in your framework (see the adapter sketch after this list). Similarly, as you make improvements to your framework, your tests should continue to operate without relying on large parts of it. There will always be dependencies, but they should be small and modular enough that change is easy to manage. Even as your framework changes and improves, the tests themselves should remain the same and not require change; if you tie your tests to your framework, you are going to have a maintenance nightmare on your hands as your software evolves.


- Version-Controlled: A lot of teams are intent on keeping their automation pack updated with the latest changes, but forget that customers tend not to be as responsive to change as we would like. What this often means is that clients are running different versions of your software against different versions of operating systems or configurations, each with different code or endpoint behaviours. The ability to quickly execute regression not just against the latest configuration, but against a variety of older configurations, builds and product changes, ensures you are addressing quality concerns across a wider range of software configurations (see the parametrised sketch after this list).


- Progressive: I wrote about this previously, so I won’t go into too much detail here, but it needs to be stated once again that automation goes beyond the realm of functional testing and includes all aspects of your testing effort, along with your continuous integration and deployment efforts.
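
To make the modularity point concrete, here is a minimal pytest sketch; the component names and checks are hypothetical. Markers let you slice the suite into focused regression for a changed component and a quick smoke pass over everything else:

import pytest

# Custom markers should be registered under "markers" in pytest.ini
# to avoid unknown-marker warnings.

@pytest.mark.checkout
def test_checkout_totals():
    # Hypothetical focused regression check for the checkout component.
    assert 2 * 49.99 == pytest.approx(99.98)

@pytest.mark.smoke
def test_application_starts():
    # Hypothetical smoke check for areas that haven't changed.
    assert True

Running pytest -m checkout executes only the checkout regression, while pytest -m smoke gives the quick confidence pass over the rest.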
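
For the toolset-independence principle, the common approach is a thin adapter layer: tests talk to a neutral interface, and only one module knows about the actual tool. A minimal sketch, with a stand-in driver instead of a real tool binding (all names here are hypothetical):

from abc import ABC, abstractmethod

# Neutral interface the tests talk to. Tests never import the tool
# directly, so swapping the underlying tool only touches the adapter.
class BrowserDriver(ABC):
    @abstractmethod
    def open(self, url: str) -> None:
        ...

    @abstractmethod
    def click(self, locator: str) -> None:
        ...

# A stand-in adapter; a real one would wrap the actual tool's API
# (for example, Selenium WebDriver calls) inside these two methods.
class FakeDriver(BrowserDriver):
    def open(self, url: str) -> None:
        print(f"open {url}")

    def click(self, locator: str) -> None:
        print(f"click {locator}")

# A test written only against the interface stays unchanged when the
# underlying tool does.
def test_login(driver: BrowserDriver) -> None:
    driver.open("https://example.com/login")  # hypothetical URL
    driver.click("#submit")  # hypothetical locator

test_login(FakeDriver())

Swapping Selenium for Playwright, or one tool version for another, then means rewriting one adapter module rather than every test.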
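
And for the version-controlled principle, parametrising tests over a configuration matrix is one way to keep older builds and platforms in the regression net. A pytest sketch, with hypothetical versions and a placeholder check:

import pytest

# Hypothetical matrix of still-supported product builds and platforms.
SUPPORTED_CONFIGS = [
    ("2.1", "Windows 10"),
    ("2.1", "Windows 11"),
    ("3.0", "Windows 11"),
]

@pytest.mark.parametrize("product_version,os_version", SUPPORTED_CONFIGS)
def test_startup(product_version, os_version):
    # A real test would launch the given build against the given
    # platform image; here we only assert the pair is well-formed.
    assert product_version and os_version

Each configuration shows up as its own test result, so a regression against an older build is just as visible as one against the latest.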
