
Testing the limits

Learn about load, stress, spike, soak, and capacity testing to ensure your systems are ready for any workload. 


by Rajinder Kumar, Senior Performance Test Engineer

Performance testing plays an integral role in bringing an application to market, whether you’re releasing updates to existing software, adjusting infrastructure, anticipating increases in your user base, or ensuring system stability amidst unforeseen events. Its significance lies in its potential to impact your bottom line, especially in today’s business landscape where online transactions through websites and apps, alongside the migration of back-office systems to the cloud, are becoming increasingly prevalent. Poorly performing systems can drive customers away and lead to dissatisfaction among your staff who rely on your company systems. 

Understanding testing terminologies and tools 

In this series of blogs, I will navigate the realm of performance testing, aiming to demystify a domain that is often misunderstood by project teams and even other testers. By the end you’ll have a clearer understanding of the various terminologies, the relevance of different types of performance testing to your projects, the optimal timing for testing, the necessary tooling and frameworks, the key metrics for capturing and analysing system behaviour, and ultimately, how to draw conclusions about system performance. Armed with this knowledge, you’ll be able to make recommendations for improvements, whether in software, infrastructure, or system tuning. You’ll also know when and where to retest as changes are implemented. Only when your stakeholders are confident that the system’s behaviour meets requirements and objectives can it be approved for production. 

Stress, spike, soak testing – what do they really mean? 

The term performance testing is broad and encompasses various types such as load, stress, spike, soak, capacity, and others. While some terms may be used interchangeably, they do have subtle differences, which I’ll cover in future blogs. Here’s a summary, followed by a short sketch of how these load profiles might look in a test script: 

  • Load testing: Assesses how a system behaves under an expected load of a given number of concurrent users over a specified duration.  
  • Stress testing: Evaluates how a system behaves under loads beyond the expected peak, to find its breaking point and see how it degrades and recovers. 
  • Spike testing: Examines the system’s response to sudden, high loads, typically lasting for a short period.  
  • Soak testing: Runs a sustained load over an extended period to assess the system’s behaviour over time and expose issues that only emerge under prolonged use.  
  • Capacity testing: Determines how many users the system can support, providing insight into scalability. This is particularly important with auto-scaling resources in the cloud, where understanding how the system behaves as resources scale is crucial. 
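
To make these profiles concrete, here is a minimal sketch using Locust, one of many open-source load testing tools. The endpoint, stage durations, and user counts are purely illustrative; your own profile should come from your performance requirements.

```python
from locust import HttpUser, LoadTestShape, between, task


class WebsiteUser(HttpUser):
    """A simulated user hitting an illustrative endpoint."""
    wait_time = between(1, 3)  # think time between requests, in seconds

    @task
    def load_home_page(self):
        self.client.get("/")  # replace with the user journeys that matter to your business


class StagesShape(LoadTestShape):
    """Drives the user count over time: steady load, a sudden spike, then recovery.
    Stretch the steady stage to several hours and the same script becomes a soak test."""

    # (end of stage in seconds, target concurrent users)
    stages = [
        (300, 100),   # ramp to and hold the expected load (load test)
        (360, 500),   # sudden jump for one minute (spike test)
        (660, 100),   # return to the expected load and observe recovery
    ]

    def tick(self):
        run_time = self.get_run_time()
        for stage_end, users in self.stages:
            if run_time < stage_end:
                return (users, 50)  # (user count, spawn rate per second)
        return None  # stop the test once all stages have completed
```

A capacity test would instead keep increasing the user count until response times or error rates breach your acceptance criteria.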

Performance testing allows you to assess the behaviour of a system in terms of responsiveness, stability, scalability, reliability, and resource usage of both the software and infrastructure under various workloads and conditions. The testing should identify any bottlenecks causing slowness, which can be rectified to ensure a good user experience under differing loads and adverse conditions. Perceived slowness can significantly impact individual users’ confidence in using the system. 

At the end of any performance testing, you should have an insight into these key areas: 

  • The readiness of your release for production 
  • How well the performance requirements were met 
  • The capacity of your system 
  • The causes of any bottlenecks and slowness 
  • Necessary changes required in hardware/server configuration 
  • Required system tuning 

The essential steps to optimise your performance testing outcomes 

To conduct performance testing successfully and give stakeholders meaningful results and insights on which to base production release decisions, you need to follow a structured process. 

The following eight tasks should be included in your approach: 

  1. Test environment: Ensure the performance test environment resembles production as closely as possible, acknowledging that an exact replica may not be feasible due to cost constraints; a scaled-down environment with fewer servers or reduced specifications is acceptable provided it mirrors the production architecture. Additionally, dedicate the environment solely to performance testing to avoid interference from other activities, such as functional testing, which could distort results and complicate management. 
  2. Performance goals: Establish clear performance goals and acceptance criteria in collaboration with stakeholders, such as business or technical architects. These criteria may include user response times, throughput, concurrent users, and resource utilisation (a sketch after this list shows how such criteria can become explicit pass/fail checks). 
  3. Test plan: Develop a comprehensive test plan outlining meaningful test scenarios aligned with the performance acceptance criteria. 
  4. Test tools: Select appropriate test and monitoring tools for performance testing, considering factors such as functionality, compatibility, and budget. Evaluate both open-source and licensed tools to determine the best fit for testing requirements. 
  5. Environment configuration: Ensure the correct configuration of the test environment whenever software releases or configuration changes are introduced. Accurate configuration is vital to prevent incorrect versions or configurations from being released into production. 
  6. Test implementation: Develop and implement test scenarios along with associated test data (a minimal data-driven example follows this list). Ideally, tests should run from dedicated test servers to minimise resource impact on other servers in the test environment. 
  7. Test execution: Before executing a test scenario, conduct a smoke test with a small number of users for a brief period to validate functionality, confirm test data integrity, and check that the correct metrics are being collected. This early validation helps identify and rectify implementation issues promptly, saving time and effort in the long run. 
  8. Result analysis: Collate, analyse, and share test results with stakeholders. Refine remaining tests as necessary based on analysis findings, and conduct retests following system changes. Once performance goals are achieved, consider the testing complete. 
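
As a small illustration of test implementation (task 6), here is a sketch of a data-driven scenario, again using Locust. The data file name, endpoint, and URL grouping are hypothetical.

```python
import csv
import random

from locust import HttpUser, between, task

# Hypothetical test data file prepared for the test environment: one account id per row.
with open("test_accounts.csv") as f:
    ACCOUNTS = [row[0] for row in csv.reader(f) if row]


class AccountUser(HttpUser):
    wait_time = between(1, 5)

    @task
    def view_account(self):
        account_id = random.choice(ACCOUNTS)
        # name= groups all account URLs under a single entry in the statistics,
        # so results aren't split across thousands of individual ids.
        self.client.get(f"/accounts/{account_id}", name="/accounts/[id]")
```

Before the full run (task 7), the same script can be smoke-tested with a handful of users for a couple of minutes to confirm the data, requests, and metrics all behave as expected.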
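
The performance goals from task 2 can likewise be turned into explicit pass/fail checks during result analysis (task 8). A minimal sketch follows; the thresholds and measured figures are invented purely for illustration and should come from your own acceptance criteria and tool output.

```python
# Invented acceptance criteria, agreed with stakeholders in task 2.
ACCEPTANCE_CRITERIA = {
    "p95_response_ms": 800,   # 95th percentile response time
    "error_rate_pct": 1.0,    # failed requests as a percentage of all requests
    "throughput_rps": 50,     # minimum sustained requests per second
}


def evaluate(results: dict) -> list[str]:
    """Compare measured results (exported from your test tool) against the goals.
    Returns human-readable failures; an empty list means the goals were met."""
    failures = []
    if results["p95_response_ms"] > ACCEPTANCE_CRITERIA["p95_response_ms"]:
        failures.append(f"p95 of {results['p95_response_ms']} ms exceeds the "
                        f"{ACCEPTANCE_CRITERIA['p95_response_ms']} ms target")
    if results["error_rate_pct"] > ACCEPTANCE_CRITERIA["error_rate_pct"]:
        failures.append(f"error rate of {results['error_rate_pct']}% exceeds the "
                        f"{ACCEPTANCE_CRITERIA['error_rate_pct']}% target")
    if results["throughput_rps"] < ACCEPTANCE_CRITERIA["throughput_rps"]:
        failures.append(f"throughput of {results['throughput_rps']} rps is below the "
                        f"{ACCEPTANCE_CRITERIA['throughput_rps']} rps target")
    return failures


# Example with made-up figures from a test run:
print(evaluate({"p95_response_ms": 640, "error_rate_pct": 0.2, "throughput_rps": 72}))
```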

By using these tasks as your guide, you can ensure a systematic and effective performance testing process, leading to informed decision-making and successful production releases. 

Testing services are a critical component of developing software solutions that are reliable, secure, and meet requirements. At Resillion we develop and implement quality processes and practices throughout the full software development lifecycle, while our testing services cover activities such as functional testing, performance testing, and security testing. We’re committed to achieving quality outcomes by assuring that your end product or application works as intended, is secure, and is compliant with all relevant regulations. 

In my next blog I’ll be exploring the different types of performance testing in more detail, providing examples and use cases. 

Want to learn how we can help?

Get in touch with one of our experts.







