In this article, I will discuss the software tests that can be carried out at the system level. These tests are an essential link in the continuous integration chain; they may or may not be functional, and they have very specific characteristics. I will cover the best practices to follow and common implementation issues, based on the example of system tests of an API. First, let’s see what the test levels are and what we call a system test.

The test levels

In accordance with the recommendations of the ISTQB (International Software Testing Qualifications Board), software tests are classified into different levels. Here are the traditional test levels, which are found in the classic test pyramid:

Pyramid of the traditional test levels
  • Unit tests: these tests aim to validate each code unit in isolation (using stubs, mocks, fakes, etc.)
  • Integration tests: these tests focus on communications or message exchanges between different code units
  • System tests: these tests verify that the different aspects of a complete software system meet the specifications
  • Acceptance tests: these tests aim to obtain acceptance of the system as a whole by the customer, user or a representative

At AT Internet, as our software solution is very extensive and complex, we have chosen to add two levels of testing:

Pyramid of AT Internet test levels
  • Systems integration tests: these tests focus on the exchanges between the different systems that make up our solution
  • Solution tests: these tests validate the proper functioning, as a whole, of our solution, which is composed of different systems

The system tests

These “system” tests are the first level of the so-called “black box” tests, i.e. they are defined on the basis of the product specifications (business rules, performance requirements, etc.) without any access to the source code. The so-called “white box” tests, on the other hand, are defined on the basis of the internal structure of the product (architecture, code, code portions, etc.).

Black box and white box tests

These tests must be completely independent of the internal structure of the system (language and technologies used, implementation details, etc.). They must simply focus on the inputs and outputs of the system, observing it completely objectively. This is what makes them powerful and useful: they make it possible to validate business behaviours that are directly perceptible to the customer.

System tests are essential when modifying the internal structure of the system (refactoring, replacing internal components, adding functionalities, etc.): they ensure that these modifications do not lead to regressions for customers.

In an Agile context, we have no option but to automate these tests; otherwise, continuous integration is compromised by longer delivery times due to manual testing phases that can be endless. I will therefore only deal here with automated system tests.

While unit and integration tests are essential and must be written as much as possible, it is just as important to invest in tests at the system level to ensure that the expected business behaviour is delivered to the customer.

Each business brick can indeed do what it is designed for correctly, yet their assembly can sometimes lead to peculiar behaviour! Conversely, the business behaviour can be respected even while parts of the system are in poor condition, although this is rarer.

System tests are the first tests in the test process that relate directly to the customer’s acceptance criteria. If automated, they make it possible to detect regressions as early as possible; otherwise there is a risk of unpleasant surprises in later phases of the project: when the system is integrated into the complete solution or, worse, at the premises of the customer or one of their representatives (the product owner, for example).

It is then much more expensive to correct the problems detected, and the risk of delaying the delivery of the project is much greater.

It is not always easy to find the resources to set up and automate this type of test: it requires multiple skills, both in test design and in automation development, and we are not always lucky enough to have a test developer on the team. In some cases, product developers will be called upon to carry out some of these implementations.

Testing is, after all, part of development activities, and the responsibility for delivering a product that works lies with the entire team, doesn’t it?

What’s the approach to testing a system?

Diagram of a system with inputs and outputs

To be able to properly test a system, we must first understand various aspects of it:

The system inputs

These are all triggers for system behaviour. We often think of user actions as obvious triggers, but there may be many others. Here are some examples:

  • A user action
  • Receipt of notifications
  • A change in system status
  • The passage of time (yes, it can trigger actions, and it even happens very often: synchronisation mechanisms, task scheduling, etc.); a sketch of how a test can control this input follows below
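In an automated test, a time-based input cannot simply be waited for; in a black-box system test this usually means controlling the clock of the test environment. The idea can be illustrated at a smaller scale with fake timers. Here is a minimal sketch, assuming a Jest-based test runner, where startHourlySync is a hypothetical scheduling function written for the example:

```typescript
// Hypothetical scheduling logic: trigger the given synchronisation every hour.
function startHourlySync(sync: () => void): NodeJS.Timeout {
  return setInterval(sync, 60 * 60 * 1000);
}

test('the hourly synchronisation is triggered by the passage of time', () => {
  jest.useFakeTimers(); // replace real timers with controllable ones
  const sync = jest.fn();
  const timer = startHourlySync(sync);

  // Simulate three hours passing without actually waiting.
  jest.advanceTimersByTime(3 * 60 * 60 * 1000);

  expect(sync).toHaveBeenCalledTimes(3);
  clearInterval(timer);
  jest.useRealTimers();
});
```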

The system outputs

We often think here of the response provided to the user, but there are other very common outputs that we do not always think about. For example:

  • The response to the user
  • Writing logs
  • Writing data to a database
  • Emitting a notification

Expected behaviours (or business rules)

We now know how to activate the system by playing on its different inputs. We also know what we can observe by looking at the different outputs. We then need to know the relationships between inputs and outputs. These relationships are described as the system’s business rules and can take different forms: from the acceptance criteria of a user story in an Agile context to more detailed specifications in other settings.

These more or less formally described behaviours will be used directly to define the different test cases that make up the system tests.

Execution contexts

Some system behaviours depend not only on the inputs received but also on the context in which the system is located at the time of the test.

Let’s take the example of a call to an API to record new information in the system. We then have two cases, with different imaginable behaviours:

  • The information is already present in the system:
    • We store both pieces of information
    • We update existing information
    • We store the history of the information values
    • We trigger an error
  • The information is not yet present in the system:
    • We record the information received
    • We do not perform any processing
    • We trigger an error

In both situations, the execution context has a direct influence on the expected behaviour of the system. This is where the test dataset becomes important: we generate the context we want to be in before each test, so that we can validate the different behaviours of our system. We then have a number of combinations of all these elements that will constitute our test cases (a sketch of such a test follows the list below):

  • A test data set
  • One or more inputs to “activate”
  • An expected behaviour
  • One or more outputs to check
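As an illustration, here is a minimal sketch of one such combination, based on the API example above; seedProperty, findProperty and apiClient are hypothetical test helpers written for the example, not a real library API:

```typescript
// Hypothetical test helpers: seed the dataset, call the API, read the outputs.
import { seedProperty, findProperty } from './test-dataset'; // hypothetical helper module
import { apiClient } from './api-client';                    // hypothetical helper module

test('updating information that already exists overwrites its value', async () => {
  // 1. Test data set: put the system in the desired context.
  await seedProperty({ id: 'colour', value: 'blue' });

  // 2. Input to activate: call the API with a new value for the same information.
  const response = await apiClient.put('/properties/colour', { value: 'red' });

  // 3. Expected behaviour: the call is accepted.
  expect(response.status).toBe(200);

  // 4. Outputs to check: the stored data reflects the update.
  const stored = await findProperty('colour');
  expect(stored.value).toBe('red');
});
```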

System tests can then be implemented. This requires a test mechanism that is able to play on the system’s inputs and check the validity of its behaviour by observing its different outputs:

Test mechanism monitoring the system

In some cases, this test mechanism is directly available in tools on the market, depending on the system we want to test:

  • API tests: SOAP UI, supertest, Postman, etc. (a minimal supertest example follows this list)
  • Interface tests: Selenium, Cypress, NightwatchJS, etc.
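For example, a minimal API system test written with supertest (one of the tools listed above) might look like the sketch below; the API_URL address, the /status route and the response payload are assumptions made for illustration:

```typescript
import request from 'supertest';

// Address of the system under test (assumption for the example).
const API_URL = process.env.API_URL ?? 'http://localhost:3000';

test('GET /status answers with a JSON payload', async () => {
  const response = await request(API_URL)
    .get('/status')
    .expect(200)                     // output: HTTP status code
    .expect('Content-Type', /json/); // output: response format

  expect(response.body.status).toBe('ok'); // hypothetical payload content
});
```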

In other cases, it will be necessary to implement a mechanism adapted to your needs, which must allow you to play with the different inputs and outputs of the system under test:

  • Reading/writing in a Kafka topic (see the sketch after this list)
  • Sending / retrieving notifications
  • Insertion of data into databases
  • Receiving emails
  • etc.
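As an example of such a custom mechanism, here is a minimal sketch using kafkajs, assuming a system that consumes an “events-in” topic and publishes its result to an “events-out” topic; the topic names and broker address are assumptions for the example:

```typescript
import { Kafka } from 'kafkajs';

// Broker address and topic names are assumptions for the example.
const kafka = new Kafka({ clientId: 'system-tests', brokers: ['localhost:9092'] });

test('a message on the input topic produces a result on the output topic', async () => {
  // Input: write a message into the topic the system listens to.
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'events-in',
    messages: [{ value: JSON.stringify({ id: 42, action: 'create' }) }],
  });
  await producer.disconnect();

  // Output: read the topic the system writes to and wait for its result.
  const consumer = kafka.consumer({ groupId: 'system-tests' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'events-out', fromBeginning: true });

  const received = await new Promise<string>((resolve) => {
    consumer.run({
      eachMessage: async ({ message }) => resolve(message.value?.toString() ?? ''),
    });
  });
  await consumer.disconnect();

  expect(JSON.parse(received).id).toBe(42);
});
```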

Note that the test tooling is not tied to the technology used inside the system under test. It is often very different, because it is driven not by the system’s internal information processing needs but by the constraints imposed on the system’s client(s).

Non-functional aspects such as performance or resilience can also be tested in system tests, with different techniques and tools depending on the needs.

Testing an API

Let’s take the concrete example of an API as the system to be tested. The classic API validation steps are, in the order in which the system must check them (a sketch follows this list):

  • The usage rights for this API
    • An unauthorised call can be rejected immediately, regardless of its validity
  • Parameter validation
    • The absence of mandatory parameters
    • The validity of the received parameter combination
      • Some parameters may sometimes be incompatible and should not be passed in the same call
    • The format of each parameter
      • Type, pattern, permitted values, etc.
    • The consistency of the values received for different parameters
      • For example, a sort request on a field that was not requested can sometimes be refused
  • Relevance related to the business of the system
    • This involves validating the business rules of the system itself. For example, an information request (GET) on a non-existent property can trigger an error, and inserting data into the database via the API triggers the return of its identifier…
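A minimal sketch of these checks with supertest, following the order described above; the routes, parameters and status codes are assumptions about a hypothetical API, not the behaviour of a specific product:

```typescript
import request from 'supertest';

const api = request(process.env.API_URL ?? 'http://localhost:3000'); // assumed address

test('an unauthorised call is rejected first, whatever its content', async () => {
  await api.get('/properties?sort=name').expect(401);
});

test('a missing mandatory parameter is rejected', async () => {
  await api
    .get('/properties') // the hypothetical mandatory "fields" parameter is absent
    .set('Authorization', 'Bearer test-token')
    .expect(400);
});

test('an inconsistent combination of parameters is rejected', async () => {
  await api
    .get('/properties?fields=id&sort=name') // sort on a field that was not requested
    .set('Authorization', 'Bearer test-token')
    .expect(400);
});

test('business rules are checked last: an unknown property triggers an error', async () => {
  await api
    .get('/properties/does-not-exist?fields=id')
    .set('Authorization', 'Bearer test-token')
    .expect(404);
});
```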

These checks are increasingly costly for the system, which should return an error as early as possible when relevant, without initiating the next steps; otherwise it generates unnecessary load on the servers, thereby degrading non-functional aspects of the system, or even its security.

To design these different tests, various design techniques can be used, including boundary value analysis, equivalence classes, state transitions, decision tables, etc.
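For instance, boundary value analysis could translate into a parameterised test like the sketch below, assuming a hypothetical “limit” parameter that must stay between 1 and 100:

```typescript
import request from 'supertest';

const api = request(process.env.API_URL ?? 'http://localhost:3000'); // assumed address

// Boundary value analysis: test just below, on, and just above each boundary.
test.each([
  [0, 400],   // below the lower boundary: rejected
  [1, 200],   // lower boundary: accepted
  [100, 200], // upper boundary: accepted
  [101, 400], // above the upper boundary: rejected
])('GET /properties?limit=%i answers with status %i', async (limit, expectedStatus) => {
  await api
    .get(`/properties?limit=${limit}`)
    .set('Authorization', 'Bearer test-token')
    .expect(expectedStatus);
});
```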

Performance or security tests can be considered with different tools such as LOAD UI, Gatling, JMeter or Neoload, for example. In this type of test, we find well-known techniques such as:

  • Injection tests
  • Fuzz testing
  • Hammering
  • Gradual load increase (a naive ramp-up sketch follows this list)
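As a very naive sketch of a gradual load increase, the loop below doubles the number of parallel requests at each step and reports how many succeed; dedicated tools such as Gatling or JMeter do this far more thoroughly, and the target URL is an assumption:

```typescript
// Target of the load test (assumption for the example); requires Node 18+ for global fetch.
const target = process.env.API_URL ?? 'http://localhost:3000/status';

// Send `count` requests in parallel and return how many succeeded.
async function fireBatch(count: number): Promise<number> {
  const results = await Promise.all(
    Array.from({ length: count }, () =>
      fetch(target).then((response) => response.ok).catch(() => false),
    ),
  );
  return results.filter(Boolean).length;
}

// Double the load at each step and watch when the error rate starts to climb.
async function rampUp(): Promise<void> {
  for (const load of [10, 20, 40, 80, 160]) {
    const succeeded = await fireBatch(load);
    console.log(`${load} parallel requests -> ${succeeded} succeeded`);
  }
}

rampUp();
```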

Conclusion

I hope I have been able to enlighten you about this level of testing, its importance and how to get the most out of it. It is also important to keep in mind that test design and implementation always depend on your context, and “best practices” must be continuously analysed, redesigned and interpreted to be applied to your projects in the best possible way.

It is in this spirit that we at AT Internet invest in the implementation of automated system tests, according to the needs of each project and taking into account the context of each team. We focus our investments in accordance with our test strategy with the objective of ensuring the highest quality of our products for the benefit of our customers.

In the next article, I will discuss 10 pitfalls to avoid when setting up system tests.

Featured photo credits: Markus Spiske

Author

With more than 10 years of experience in software testing strategy and implementation in an Agile environment, Alexandre is responsible for industrialising development at AT Internet. His daily challenge: guide our dev teams through implementing tools and methods with the aim of guaranteeing regular and high-quality deliveries to our customers.
