Our Blog

Ensuring payment switch performance with automated tests

Our team of developers at Aviso has led a number of large-scale, multi-year payment software development projects. This experience is invaluable: there are few things easier to learn from than your own mistakes! One of the key lessons we have taken from earlier projects is the importance of testing.

Building a world-class payment switch is complex – far too complex for all of that complexity to fit in any single developer's brain. The development team is always under pressure to add new features to Novate, and to execute those features to a high standard. The only way we can add these features while maintaining code quality standards across the entire project is with a comprehensive suite of tests.

There is a plethora of payment messages that Novate must support. We have the standard interfaces that you would typically expect (Visa, Mastercard, Amex, APACS 70, and so on). Additionally, within these supported interfaces, there's a virtual alphabet soup of transaction types that can be encountered: CPS (Custom Payment Service), E-Commerce, E-Commerce AVS, E-Commerce non-secure, E-Commerce secure (with CAVV), MOTO, Mag-stripe, Keyed, ICC, ICC Online PIN, DCC, Cashback, Online fallback, recurring payment and many more (whew!).

These and other transaction types need to be supported through the different interfaces that Novate provides. While Novate’s declarative message configuration enables this support in an elegant and extensible way, it is still critical to ensure that Aviso’s core development team can add new features to Novate, secure in the knowledge that the existing functionality is going to continue to behave as expected.
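Novate's actual configuration format is internal, but the general idea of declarative message definitions can be sketched as follows. Everything here – the field names, widths, and message layout – is invented for illustration and is not Novate's real schema:

```python
# Illustrative sketch of declarative message configuration.
# Field names, widths, and the message layout are invented for this
# example -- they are not Novate's real schema.

MESSAGE_SPECS = {
    "AUTH_REQUEST": [
        # (field name, fixed width in characters)
        ("pan", 19),
        ("amount", 12),
        ("currency", 3),
        ("entry_mode", 2),
    ],
}

def encode(message_type, values):
    """Serialise a dict of field values into a fixed-width string."""
    spec = MESSAGE_SPECS[message_type]
    return "".join(str(values[name]).rjust(width, "0") for name, width in spec)

def decode(message_type, raw):
    """Parse a raw fixed-width string back into a dict of field values."""
    result, offset = {}, 0
    for name, width in MESSAGE_SPECS[message_type]:
        result[name] = raw[offset:offset + width].lstrip("0") or "0"
        offset += width
    return result
```

The appeal of this style is that adding a new message type is a data change, not a code change – the encode and decode logic stays the same.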

We ensure this with thousands of automated tests that exercise each piece of functionality within Novate. We have several layers of tests: unit tests (to ensure each individual function within Novate does what it's supposed to do), functional tests (to ensure that larger pieces of business-specific functionality behave as desired), integration tests (to ensure that the Novate platform as a whole works as expected – in all supported configurations) and load tests (to ensure that Novate not only works correctly, but performs to an acceptable level).
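To give a flavour of the lowest layer, here is a minimal unit-test sketch in Python. The function under test and its amounts are invented for illustration – Novate's own tests live in its internal suite – but the shape is the same: one small function, several tests pinning down its behaviour:

```python
import unittest

def apply_cashback(auth_amount, cashback_amount):
    """Invented example: total authorised amount for a cashback transaction."""
    if cashback_amount < 0:
        raise ValueError("cashback cannot be negative")
    return auth_amount + cashback_amount

class CashbackTests(unittest.TestCase):
    def test_cashback_added_to_total(self):
        # Amounts in minor units (e.g. pence): 10.00 + 5.00 = 15.00
        self.assertEqual(apply_cashback(1000, 500), 1500)

    def test_negative_cashback_rejected(self):
        with self.assertRaises(ValueError):
            apply_cashback(1000, -1)
```

Tests like these run in milliseconds, so a developer can run thousands of them on every change.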

All of these tests are executed by our continuous integration server whenever a developer commits new code to the Novate code base. Whenever new functionality is added to Novate, tests that exercise this functionality are also added to our testing tools. We use state-of-the-art integration testing tools like Pallet to automate our test deployment. We have a private cloud of virtual machines available, allowing us to deploy and test Novate on all supported operating systems, as well as against either Oracle or Postgres (or a mixture of both).
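Conceptually, that deployment step is just a test matrix: every supported operating system crossed with every supported database. A sketch of the idea (the OS names below are placeholders, not Novate's actual support list):

```python
from itertools import product

# Placeholder values -- the real OS and database support list is
# maintained in Novate's build configuration, not here.
OPERATING_SYSTEMS = ["rhel-7", "ubuntu-20.04", "windows-server-2019"]
DATABASES = ["oracle", "postgres"]

def build_test_matrix():
    """Every supported OS/database combination gets its own test run."""
    return [{"os": os_name, "db": db}
            for os_name, db in product(OPERATING_SYSTEMS, DATABASES)]
```

The CI server then spins up one virtual machine per matrix entry and runs the full integration suite against each.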

Performance is a key driver for many customers, and we recognize that. We have invested significant time and thought into our load testing harness. This harness is a distributed, high-performance tool that can be configured to fire a wide variety of transactions at Novate to simulate production conditions. For example, we can simulate the presence of many thousands of APACS or ISO 8583 terminals authorising into Novate (including proper MAC generation for APACS 40 messages!), and run these tests at the click of a button against the latest Novate build. Because this harness is (like Novate itself) distributed, if we need to simulate a very large load authorising into Novate, we can simply spin up more virtual machines and deploy more instances of the harness onto those servers – giving us the ability to fire an almost arbitrary volume of traffic at Novate during our test cycles. If we wish to precisely mimic a production load volume, we can do that too – we can configure our harness to send a certain number of transactions per second to Novate and watch for the results.
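The "certain number of transactions per second" part comes down to rate pacing. Here is a simplified, single-threaded sketch of the technique – the real harness is distributed across many machines, and `send_fn` stands in for whatever actually fires a transaction:

```python
import time

def run_at_fixed_rate(send_fn, tps, duration_seconds):
    """Invoke send_fn at a steady rate of `tps` calls per second.

    A simplified, single-threaded sketch of rate pacing; a production
    harness would distribute this across many machines and threads.
    """
    interval = 1.0 / tps
    sent = 0
    start = time.monotonic()
    end = start + duration_seconds
    next_due = start
    while next_due < end:
        send_fn()
        sent += 1
        # Schedule against the start time, not the previous send, so
        # small delays don't accumulate into rate drift.
        next_due = start + sent * interval
        delay = next_due - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    return sent
```

Scheduling each send relative to the start time (rather than sleeping a fixed interval after each send) keeps the long-run rate accurate even when individual sends are slow.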

Within our automated build, the load tests themselves will cause the build to fail should the performance of Novate drop below a certain threshold – allowing developers to quickly spot changes that have led to unintended performance degradation.
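The threshold check itself is simple in principle: compare measured throughput against an agreed baseline and fail the build on a regression. A sketch, with purely illustrative numbers and a hypothetical tolerance parameter:

```python
def check_performance(measured_tps, baseline_tps, tolerance=0.10):
    """Raise AssertionError if measured throughput drops more than
    `tolerance` below the baseline, causing the CI build to fail.

    The baseline and 10% tolerance here are illustrative, not
    Novate's actual thresholds.
    """
    floor = baseline_tps * (1.0 - tolerance)
    if measured_tps < floor:
        raise AssertionError(
            f"throughput regression: {measured_tps:.0f} tps "
            f"is below the floor of {floor:.0f} tps"
        )
    return True
```

Because the check raises like any other test failure, the CI server reports a performance regression exactly the way it reports a broken unit test.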

This might seem like a lot of work – and it was! However, the benefits we’ve seen in terms of developer productivity and build quality have been more than worth it. We’re confident that our build system will enable us to serve our customers better, and ensure that the end product that we ship will be of an exceptionally high quality – and consistently so.