The focus of Performance Testing is checking a software program's:
Speed – determines whether the application responds quickly.
Scalability – determines the maximum user load the software application can handle.
Stability – determines whether the application remains stable under varying loads.
Why do Performance Testing?
Features and functionality supported by a software system are not the only concern. A software application's performance characteristics, such as response time, reliability, resource usage, and scalability, also matter. The goal of Performance Testing is not to find bugs but to eliminate performance bottlenecks. Performance Testing is done to provide stakeholders with information about their application's speed, stability, and scalability. More importantly, Performance Testing uncovers what needs to be improved before the product goes to market. Without Performance Testing, the software is likely to suffer from issues such as running slowly while several users use it simultaneously, inconsistencies across different operating systems, and poor usability.
Performance testing determines whether software meets speed, scalability, and stability requirements under expected workloads. Applications sent to market with poor performance metrics due to nonexistent or poor performance testing are likely to gain a bad reputation and fail to meet expected sales goals. Mission-critical applications like space launch programs or life-saving medical equipment should also be performance tested to ensure that they run for long periods without deviation. According to Dun & Bradstreet, 59% of Fortune 500 companies experience an estimated 1.6 hours of downtime every week. Considering that the average Fortune 500 company has a minimum of 10,000 employees paid $56 per hour, the labor portion of downtime costs for such an organization would be $896,000 weekly (10,000 employees × $56/hour × 1.6 hours), translating into more than $46 million per year. A mere 5-minute outage of Google.com (19-Aug-13) was estimated to cost the search giant as much as $545,000. Hence, performance testing is important.
Types of Performance Testing
Load testing – checks the application's ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live (a minimal sketch follows this list).
Stress testing – involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.
Endurance testing – is done to make sure the software can handle the expected load over a long period of time.
Spike testing – tests the software's reaction to sudden large spikes in the load generated by users.
Volume testing – populates a database with a large volume of data and monitors the overall software system's behavior. The objective is to check the software application's performance under varying database volumes.
Scalability testing – determines the software application's effectiveness in "scaling up" to support an increase in user load. It helps plan capacity additions to your software system.
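To make the idea of a load test concrete, here is a minimal sketch in Python. It assumes the third-party requests library and a hypothetical URL; the user and request counts are illustrative, and a real test would typically use a dedicated tool such as JMeter or LoadRunner.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

URL = "https://example.com/"   # placeholder; point at the system under test
USERS = 50                     # concurrent virtual users (illustrative)
REQUESTS_PER_USER = 10         # requests each virtual user sends

def virtual_user(_):
    """Send a series of requests, recording each response time in seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = [t for user in pool.map(virtual_user, range(USERS)) for t in user]

print(f"requests: {len(results)}, "
      f"avg: {sum(results) / len(results):.3f}s, max: {max(results):.3f}s")
```

Ramping USERS up gradually turns this into a crude load test, holding it steady for hours approximates endurance testing, and multiplying it suddenly approximates a spike test.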
Common Performance Problems
Most performance problems revolve around speed, response time, load time, and poor scalability. Speed is often one of the most important attributes of an application. A slow-running application will lose potential users. Performance testing ensures an app runs fast enough to keep a user’s attention and interest. Take a look at the following list of common performance problems and notice how speed is a common factor in many of them:
Long load time – Load time is normally the initial time it takes an application to start. This should generally be kept to a minimum. While some applications are impossible to make load in under a minute, load time should be kept under a few seconds if possible.
Poor response time – Response time is the time from when a user inputs data into the application until the application outputs a response to that input. Generally, this should be very quick; if a user has to wait too long, they lose interest.
Poor scalability – A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide enough range of users. Load testing should be done to make certain the application can handle the anticipated number of users.
Bottlenecking – Bottlenecks are obstructions in a system that degrade overall system performance. Bottlenecking occurs when either coding errors or hardware issues cause a decrease in throughput under certain loads. It is often caused by one faulty section of code; the key to fixing a bottlenecking issue is finding that section and addressing the slowdown there, either by fixing poorly running processes or by adding additional hardware. Some common performance bottlenecks, with a monitoring sketch following the list, are:
CPU utilization
Memory utilization
Network utilization
Operating system limitations
Disk usage
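As a rough sketch of how these resources can be watched while a test runs, the following assumes the third-party psutil library; the sampling interval and iteration count are arbitrary choices for illustration.

```python
import psutil  # third-party: pip install psutil

# Sample the usual bottleneck resources once per second for ten seconds.
for _ in range(10):
    cpu = psutil.cpu_percent(interval=1)    # CPU utilization (%), 1 s window
    mem = psutil.virtual_memory().percent   # physical memory utilization (%)
    disk = psutil.disk_usage("/").percent   # disk space usage (%)
    net = psutil.net_io_counters()          # cumulative network byte counters
    print(f"cpu={cpu:.1f}% mem={mem:.1f}% disk={disk:.1f}% "
          f"sent={net.bytes_sent} recv={net.bytes_recv}")
```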
How to Do Performance Testing
The methodology adopted for performance testing can vary widely, but the objective remains the same: to demonstrate that your software system meets certain predefined performance criteria, to compare the performance of two software systems, or to identify the parts of your software system that degrade its performance. Below is a generic process for performing performance testing.
Step 1) Identify Your Testing Environment
Know your physical test environment, your production environment, and what testing tools are available. Understand the details of the hardware, software, and network configurations used during testing before you begin the testing process. This will help testers create more efficient tests and identify possible challenges they may encounter during the performance testing procedures.
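A minimal sketch of recording the test environment, using only the Python standard library; a real inventory would also capture network topology, database versions, and tool versions.

```python
import os
import platform
import socket

# Record the test environment so results can be reproduced and compared later.
environment = {
    "hostname": socket.gethostname(),
    "os": platform.platform(),
    "machine": platform.machine(),
    "cpu_count": os.cpu_count(),
    "python": platform.python_version(),
}
for key, value in environment.items():
    print(f"{key}: {value}")
```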
Step 2) Identify the Performance Acceptance Criteria
This includes goals and constraints for throughput, response times, and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints. Testers should be empowered to set performance criteria and goals, because project specifications often do not include a wide enough variety of performance benchmarks; sometimes there are none at all. When possible, comparing against a similar application is a good way to set performance goals.
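One lightweight way to keep acceptance criteria explicit is to encode them as data and check measured results against them. The thresholds and measured values below are hypothetical placeholders, not recommendations.

```python
# Hypothetical acceptance criteria; real numbers come from business requirements.
criteria = {
    "avg_response_time_s": 2.0,    # average response time must stay below 2 s
    "p95_response_time_s": 4.0,    # 95th percentile must stay below 4 s
    "error_rate": 0.01,            # no more than 1% failed requests
    "throughput_rps": 100.0,       # must sustain at least 100 requests/second
}

# Measured values would come from a real test run; these are placeholders.
measured = {"avg_response_time_s": 1.4, "p95_response_time_s": 3.1,
            "error_rate": 0.002, "throughput_rps": 130.0}

passed = (measured["avg_response_time_s"] <= criteria["avg_response_time_s"]
          and measured["p95_response_time_s"] <= criteria["p95_response_time_s"]
          and measured["error_rate"] <= criteria["error_rate"]
          and measured["throughput_rps"] >= criteria["throughput_rps"])
print("PASS" if passed else "FAIL")
```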
Step 3) Plan & Design Performance Tests
Determine how usage is likely to vary among end users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end users, plan performance test data, and outline which metrics will be gathered.
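As one possible sketch of simulating a varied user mix, here is a minimal scenario for the open-source Locust tool (a third-party Python package); the endpoints, task weights, and think times are assumptions for illustration.

```python
# locustfile.py
from locust import HttpUser, task, between  # third-party: pip install locust

class WebsiteUser(HttpUser):
    """A virtual user whose task weights mirror the expected usage mix."""
    wait_time = between(1, 5)  # think time between actions, in seconds

    @task(3)  # assume browsing is three times as common as searching
    def browse_products(self):
        self.client.get("/products")  # hypothetical endpoint

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "laptop"})  # hypothetical endpoint
```

Such a file could be run with locust -f locustfile.py --host https://your-app.example (a placeholder host), setting the user count and spawn rate when the test starts.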
Step 4) Configure the Test Environment
Prepare the test environment before execution, and arrange the tools and other resources needed.
Step 5) Implement Test Design
Create the performance tests according to your test design.
Step 6) Run the Tests
Execute and monitor the tests.
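Execution is usually scripted so it can be repeated. As a sketch, the following launches a JMeter test plan in non-GUI mode via Python's subprocess module, assuming JMeter is installed on the PATH and that a hypothetical test_plan.jmx exists.

```python
import subprocess

# Launch a JMeter test plan in non-GUI mode; assumes JMeter is on the PATH
# and that test_plan.jmx (a hypothetical plan name) already exists.
subprocess.run(
    ["jmeter",
     "-n",                    # non-GUI mode, recommended for load tests
     "-t", "test_plan.jmx",   # the test plan to execute
     "-l", "results.jtl"],    # file in which to log sample results
    check=True,               # raise if JMeter exits with an error
)
```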
Step 7) Analyze, Tune and Retest
Consolidate, analyze, and share test results. Then fine-tune and test again to see whether performance improved or degraded. Since improvements generally grow smaller with each retest, stop when bottlenecking is caused by the CPU; at that point, you may have to consider the option of increasing CPU power.
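When comparing runs before and after tuning, percentile response times are usually more informative than averages. A minimal sketch using Python's standard statistics module, with placeholder timings, might look like this:

```python
import statistics

def summarize(label, timings):
    """Print the summary figures commonly compared between tuning runs."""
    cuts = statistics.quantiles(timings, n=100)  # 99 percentile cut points
    print(f"{label}: avg={statistics.mean(timings):.3f}s "
          f"p95={cuts[94]:.3f}s p99={cuts[98]:.3f}s")

# Placeholder timings in seconds; real values come from the results log.
before_tuning = [0.8, 1.1, 0.9, 2.4, 1.0, 3.9, 1.2, 0.7, 1.5, 2.1]
after_tuning = [0.6, 0.9, 0.7, 1.8, 0.8, 2.5, 1.0, 0.6, 1.1, 1.4]
summarize("before", before_tuning)
summarize("after", after_tuning)
```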
Performance Testing Metrics: Parameters Monitored
The basic parameters monitored during performance testing include:
Processor usage – the amount of time the processor spends executing non-idle threads.
Memory use – the amount of physical memory available to processes on a computer.
Disk time – the amount of time the disk is busy executing a read or write request.
Bandwidth – the bits per second used by a network interface.
Private bytes – the number of bytes a process has allocated that cannot be shared with other processes. These are used to measure memory leaks and usage.
Committed memory – the amount of virtual memory used.
Memory pages/second – the number of pages written to or read from the disk to resolve hard page faults. Hard page faults occur when code not from the current working set is called up from elsewhere and retrieved from disk.
Page faults/second – the overall rate at which fault pages are processed by the processor. This again occurs when a process requires code from outside its working set.
CPU interrupts per second – the average number of hardware interrupts a processor receives and processes each second.
Disk queue length – the average number of read and write requests queued for the selected disk during a sample interval.
Network output queue length – the length of the output packet queue, in packets. A queue longer than two indicates delay, and the bottlenecking needs to be addressed.
Network bytes total per second – the rate at which bytes are sent and received on the interface, including framing characters.
Response time – the time from when a user enters a request until the first character of the response is received.
Throughput – the rate at which a computer or network receives requests per second.
Amount of connection pooling – the number of user requests met by pooled connections. The more requests met by connections in the pool, the better the performance.
Maximum active sessions – the maximum number of sessions that can be active at once.
Hit ratios – the number of SQL statements handled by cached data instead of expensive I/O operations. This is a good place to start when solving bottlenecking issues.
Hits per second – the number of hits on a web server during each second of a load test.
Rollback segment – the amount of data that can roll back at any point in time.
Database locks – locking of tables and databases needs to be monitored and carefully tuned.
Top waits – monitored to determine which wait times can be reduced when dealing with how quickly data is retrieved from memory.
Thread counts – an application's health can be measured by the number of threads that are running and currently active.
Garbage collection – returning unused memory back to the system; garbage collection needs to be monitored for efficiency.
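Many of these metrics are derived from the raw samples a test tool records. As a sketch, the following computes average response time and throughput from a hypothetical CSV results log with timestamp and elapsed_s columns:

```python
import csv

# Derive response time and throughput from a hypothetical results log with
# columns: timestamp (epoch seconds when the request started), elapsed_s.
with open("results.csv", newline="") as f:
    rows = [(float(r["timestamp"]), float(r["elapsed_s"]))
            for r in csv.DictReader(f)]

duration = max(t for t, _ in rows) - min(t for t, _ in rows)
response_times = [e for _, e in rows]
print(f"avg response time: {sum(response_times) / len(response_times):.3f}s")
print(f"throughput: {len(rows) / duration:.1f} requests/second")
```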
Example Performance Test Cases
Test Case 01: Verify that response time is not more than 4 seconds when 1,000 users access the website simultaneously.
Test Case 02: Verify that the response time of the application under load is within an acceptable range when network connectivity is slow.
Test Case 03: Check the maximum number of users the application can handle before it crashes.
Test Case 04: Check database execution time when 500 records are read/written simultaneously.
Test Case 05: Check CPU and memory usage of the application and the database server under peak load conditions.
Test Case 06: Verify the response time of the application under low, normal, moderate, and heavy load conditions.
During actual performance test execution, vague terms like "acceptable range" and "heavy load" are replaced by concrete numbers. Performance engineers set these numbers per business requirements and the technical landscape of the application.
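As an illustration of turning Test Case 01 into an executable check, here is a sketch using Python threads and the third-party requests library against a placeholder URL. Note that a single machine often cannot generate 1,000 truly simultaneous users, which is why dedicated load generators are used in practice.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

URL = "https://example.com/"  # placeholder; point at the application under test

def one_user(_):
    """One simulated user: send a request and return its response time."""
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

# 1,000 simultaneous users, as in Test Case 01. A single machine often cannot
# generate this load reliably, which is why dedicated load generators are used.
with ThreadPoolExecutor(max_workers=1000) as pool:
    timings = list(pool.map(one_user, range(1000)))

worst = max(timings)
print(f"max response time: {worst:.2f}s -> {'PASS' if worst <= 4.0 else 'FAIL'}")
```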
Performance Test Tools
There is a wide variety of performance testing tools available in the market. The tool you choose will depend on many factors, such as the types of protocols supported, license cost, hardware requirements, and platform support. Below is a list of popular testing tools.
LoadNinja – revolutionizes the way we load test. This cloud-based load testing tool lets teams record and instantly play back comprehensive load tests without complex dynamic correlation, and run those tests in real browsers at scale. Teams can increase test coverage and cut load testing time by over 60%.
HeadSpin – offers industry-leading performance testing capabilities. Users can optimize their digital experiences with the HeadSpin Platform by identifying and resolving performance issues across applications, devices, and networks. HeadSpin provides real-world data from thousands of devices, networks, and locations, removing ambiguity, and its AI capabilities automatically identify performance issues in testing before they impact users.
BlazeMeter – designed and built by engineers who are passionate about open source. BlazeMeter provides massive-scale load and performance testing directly from your IDE, and lets you see what your users see under load with combined UX and load testing. It covers performance, functional, scriptless, and API testing and monitoring, plus test data and mock services.
HP LoadRunner – one of the most popular performance testing tools on the market today. It can simulate hundreds of thousands of users, putting applications under real-life loads to determine their behavior under expected loads. LoadRunner features a virtual user generator which simulates the actions of live human users.
JMeter – one of the leading tools for load testing web and application servers.
FAQ
Which Applications should we Performance Test?
Performance Testing is always done for client-server based systems only. This means that any application which does not have a client-server architecture does not require Performance Testing. For example, Microsoft Calculator is neither client-server based nor does it serve multiple users; hence, it is not a candidate for Performance Testing.
What is the difference between Performance Testing & Performance Engineering?
It is important to understand the difference between Performance Testing and Performance Engineering. Performance Testing is a discipline concerned with testing and reporting the current performance of a software application under various parameters. Performance Engineering is the process by which software is tested and tuned with the intent of realizing the required performance; it aims to optimize the most important application performance trait, i.e., the user experience. Historically, testing and tuning have been distinctly separate and often competing realms. In the last few years, however, several pockets of testers and developers have collaborated independently to create tuning teams. Because these teams have met with significant success, the concept of coupling performance testing with performance tuning has caught on, and now we call it performance engineering.
Conclusion
In Software Engineering, performance testing is necessary before marketing any software product. It ensures customer satisfaction and protects an investor's investment against product failure. The costs of performance testing are usually more than made up for by improved customer satisfaction, loyalty, and retention.