The importance of testing in development

Building a mission-critical payment switching system that is relied on to process thousands (even millions) of dollars a minute is a complex undertaking. Such a system is far too large for all of its complexity to fit in any single developer’s brain.

Development teams building such systems are always under pressure to add new features, and to execute those features to a high standard. The only way these features can be added while code quality is maintained across the entire project is with a comprehensive suite of tests.

Variety is the spice of life … and in payment systems, the cause of great complexity

There exists a plethora of payment messages that a modern payment switch must support. You have the standard interfaces that you would typically expect (Visa, Mastercard, Amex, APACS 70, and so on). Additionally, within these different supported interfaces, there’s a virtual alphabet soup of transaction types that can be encountered: CPS (Custom Payment Service), E-Commerce, E-Commerce AVS, E-Commerce non-secure, E-Commerce secure (with CAVV), MOTO, Mag-stripe, Keyed, ICC, ICC Online PIN, DCC, Cashback, Online fallback, recurring payment and many more (whew!).

These and other transaction types need to be supported through the different interfaces that the application provides.

It is imperative that the development team can safely add new features, such as new interfaces, secure in the knowledge that the existing functionality will continue to behave as expected.

Testing for accuracy, performance and stability

When dealing with a complex application such as the one described above, what is needed is thousands of automated tests that exercise each and every piece of functionality in the system within a short timeframe.

These can consist of several layers of tests, such as:

–  unit tests (to ensure each individual function does what it’s supposed to do)

–  functional tests (to ensure that larger pieces of functionality that perform a business-specific function perform as desired)

–  integration tests (to ensure that the platform as a whole works as expected – in all supported configurations)

–  load tests (to ensure that the application not only works correctly, but performs to an acceptable level)

The ideal scenario is one where all of these tests are executed automatically whenever a developer commits new code to the code base. Whenever new functionality is added, tests that exercise this functionality should also be added to the test suite. This means that if a change in one part of the system impacts another, you find out straight away. This limits the amount of testing needed in the future – a benefit that builds over time.
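To make the first of those layers concrete, here is a minimal sketch of what a unit test could look like, written in Python with pytest. The parse_amount function and its behaviour are invented for illustration and are not drawn from Novate’s actual code base.

# Hypothetical example only: parse_amount and its behaviour are invented
# for illustration and are not part of Novate's code base.
import pytest


def parse_amount(field: str) -> int:
    """Parse ISO-8583 field 4 (amount, 12 numeric digits) into minor units."""
    if len(field) != 12 or not field.isdigit():
        raise ValueError(f"invalid amount field: {field!r}")
    return int(field)


def test_parse_amount_valid():
    assert parse_amount("000000012345") == 12345


def test_parse_amount_rejects_malformed_input():
    with pytest.raises(ValueError):
        parse_amount("12AB34")

Tests like these run in milliseconds, which is what makes it practical to execute thousands of them on every commit.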

At Aviso we like to use state-of-the-art integration testing tools like Pallet to automate our test deployments for major systems. We have a private cloud of virtual machines available, allowing us to deploy and test these systems on all supported operating systems, as well as against either Oracle or Postgres (or a mixture of both).
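Pallet itself is driven from Clojure, but the underlying idea – run the same integration test against every supported configuration – can be sketched in a few lines. The Python/pytest example below is purely illustrative: the connection strings and the deploy_and_authorise helper are invented, and the real matrix also spans multiple operating systems.

import pytest

# Hypothetical stand-in for deploying the switch against a given database
# and sending one authorisation; in reality this step is driven by Pallet
# against real virtual machines.
def deploy_and_authorise(dsn: str) -> str:
    return "00"  # ISO-8583 response code meaning "approved"


DATABASES = [
    "postgresql://novate@test-db:5432/novate",  # invented DSNs for illustration
    "oracle://novate@test-db:1521/novate",
]


@pytest.mark.parametrize("dsn", DATABASES)
def test_authorisation_round_trip(dsn):
    assert deploy_and_authorise(dsn) == "00"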

Performance Testing – where payment systems meet F1

Performance is a key driver for many customers, and it is important that this is fully recognized in testing.

With this in mind, at Aviso we have invested significant time and thought into our load-testing harness. This harness is a distributed, high-performance tool that can be configured to fire a wide variety of transactions at our switching system (‘Novate’) to simulate production conditions.

For example, we can simulate the presence of many thousands of APACS or ISO-8583 terminals authorising into Novate (including proper MAC generation for APACS 40 messages!), and run these tests at the click of a button against the latest build. Because this harness is (like Novate itself) distributed, if we need to simulate a very large load authorising into Novate, we can simply spin up more virtual machines and deploy more instances of the harness onto those servers – giving us the ability to fire an almost arbitrary volume of traffic at the application during our test cycles.

If we wish to mimic a production load volume precisely, we can do that too – we can configure our harness to send a specific number of transactions per second to Novate and watch the results.
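The core of such a rate-controlled generator is simple to sketch. The Python below is an illustration only, assuming a hypothetical send_authorisation function; the real harness is distributed and speaks full APACS and ISO-8583.

# Minimal sketch of a rate-controlled load generator: send a fixed number of
# transactions per second and record each latency. send_authorisation is a
# stand-in for building, sending and awaiting a real authorisation message.
import time


def send_authorisation() -> float:
    """Stand-in: send one authorisation and return its latency in seconds."""
    start = time.perf_counter()
    # ... build the message, send it over TCP, wait for the response ...
    return time.perf_counter() - start


def run_at_fixed_rate(tps: int, duration_s: int) -> list[float]:
    """Fire `tps` transactions per second for `duration_s` seconds."""
    interval = 1.0 / tps
    latencies = []
    end = time.monotonic() + duration_s
    next_send = time.monotonic()
    while time.monotonic() < end:
        latencies.append(send_authorisation())
        next_send += interval
        time.sleep(max(0.0, next_send - time.monotonic()))
    return latencies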

Within our automated build, the load tests themselves will cause the build to fail should performance drop below a certain threshold – allowing developers to quickly spot changes that have led to unintended performance degradation.
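A performance gate of this kind can be as simple as a check that exits non-zero when measured throughput falls below an agreed floor, which in turn fails the CI build. The threshold and script below are illustrative numbers, not our actual figures.

# Illustrative pass/fail gate: exit non-zero if throughput is below the floor.
import sys

MIN_TPS = 500  # invented threshold for illustration


def check_throughput(measured_tps: float) -> None:
    if measured_tps < MIN_TPS:
        print(f"FAIL: {measured_tps:.0f} tps is below the {MIN_TPS} tps floor")
        sys.exit(1)  # non-zero exit fails the CI build
    print(f"OK: {measured_tps:.0f} tps")


if __name__ == "__main__":
    check_throughput(float(sys.argv[1]))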

To conclude – investing in testing in the short term saves time and money (and many tears) in the long term

Building the extent of test suites described above might seem like a lot of work – and it is.

However, over the years we have seen that upfront investment in an integrated and comprehensive testing harness more than pays off in terms of developer productivity and build quality.

Over the next few months we will share more views on this area of testing, and we look forward to starting a conversation with like-minded developers and switch administrators about what kind of test nightmares – or dreams – they have had.

In the meantime, if you’d like to talk to us directly about our experience just call us on +353 66 979 6523 or email our MD on denis.mccarthy@aviso.io – we’d be more than happy to have a chat.