Performance Exploration and Testing of Web-based Software Systems

Research output: Thesis › Doctoral Thesis › Collection of Articles


Abstract

Modern society relies heavily on a wide range of interconnected software systems for finance, energy distribution, communication, and transportation. The era of controlled communication in closed networks for limited purposes is over. With the adoption of the Internet, almost all financial, government, and social sectors have come to depend on web-based information systems. These systems need to be fast and reliable, and they should support a vast number of concurrent users. Because users are highly sensitive to the performance of a software system, companies that rely on web-based applications for their business strive to provide high-quality web services in order to stay competitive in the worldwide market. If their applications fall short in functionality or performance, these companies risk a considerable loss of customers, which can detrimentally affect profits and revenues. Since various reports show that applications are more likely to fail due to performance issues than functional ones, it is essential that web application systems are rigorously tested for performance before deployment.

In this thesis, we propose a set of approaches for performance testing and exploration of web-based software systems. Although we target web-based software systems, our methods can be easily adapted to different types of software systems.

Our contributions fall into two categories: approaches for model-based performance testing and approaches for performance exploration of black-box systems with large input spaces. In the first category, as a first contribution, we provide a model-based performance testing approach in which we generate realistic workloads using Probabilistic Timed Automata (PTA). During the load generation process, we monitor different Key Performance Indicators (KPIs) such as response times, throughput, and memory, CPU, and disk utilization. These KPIs are used to benchmark the performance of the system under test (SUT). As an extension of the first contribution, we provide an approach for extracting the workload models from server logs as an alternative to their manual creation based on the tester’s experience.
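To illustrate the idea of generating workloads from a probabilistic model, the following is a minimal sketch of a probabilistic user-behavior model, not the thesis's actual PTA formalism or tooling: the state names, transition weights, and the session generator are all illustrative assumptions.

```python
import random

# Illustrative probabilistic workload model: each state maps to a list of
# (next_state, probability) pairs. A real PTA would additionally attach
# timing constraints (e.g., think times) to transitions.
MODEL = {
    "browse":      [("search", 0.5), ("view_item", 0.3), ("exit", 0.2)],
    "search":      [("view_item", 0.7), ("browse", 0.3)],
    "view_item":   [("add_to_cart", 0.4), ("browse", 0.4), ("exit", 0.2)],
    "add_to_cart": [("checkout", 0.6), ("browse", 0.4)],
    "checkout":    [("exit", 1.0)],
}

def generate_session(start="browse", max_steps=50, rng=random):
    """Walk the model probabilistically and return one synthetic user scenario."""
    state, trace = start, [start]
    for _ in range(max_steps):
        if state == "exit":
            break
        states, weights = zip(*MODEL[state])
        state = rng.choices(states, weights=weights, k=1)[0]
        trace.append(state)
    return trace
```

In a load-generation setting, many such sessions would be replayed concurrently against the SUT while the KPIs mentioned above are recorded.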

In the second category of contributions, we are interested in exploring the performance of black-box software systems with large input spaces without prior knowledge of the domain. We propose different exploratory performance testing approaches to identify not only the worst user scenario with respect to a given workload model but also a set of input combinations that trigger performance issues and severely degrade the performance of software-intensive systems. Our first contribution in this category is an approach that explores the user scenario space randomly, based on predefined mutation operators, to find the worst user scenario. As a second contribution, we extend the previous work to present an exact approach that uses graph-search algorithms and is guaranteed to find the worst user scenario. However, this approach does not scale well to large workload models with many loops. In our third contribution, we address the scalability issue of the exact approach and present an approach that employs genetic algorithms to identify a near-worst user scenario. As the last contribution, we provide an exploratory performance testing approach where we use reinforcement learning to explore a large input space in order to identify the input combinations that trigger performance issues in the SUT. This contribution is motivated by reports showing that almost two-thirds of performance issues are detectable only on certain input combinations.

All the approaches discussed in this work are accompanied by tool support to automate the tedious tasks. The approaches have been evaluated against different web application case studies, but they can be extended to testing and exploring the performance of software-intensive systems in other domains by adjusting their input artifacts, such as workload models and input spaces, with respect to those specific domains.
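The genetic-algorithm idea above can be sketched as follows. This is a hedged, self-contained illustration, not the thesis's implementation: candidate scenarios are fixed-length action sequences, and the fitness function is a synthetic stand-in for what would, in practice, be a KPI (such as mean response time) measured by replaying the scenario against the SUT.

```python
import random

ACTIONS = ["browse", "search", "view_item", "add_to_cart", "checkout"]

def fitness(scenario):
    # Placeholder cost: in a real setup, execute the scenario as load on the
    # SUT and return a measured KPI (e.g., mean response time) to maximise.
    return sum(len(a) for a in scenario)

def evolve(pop_size=20, length=8, generations=30, rng=random):
    """Search for a near-worst (highest-cost) scenario via a simple GA."""
    pop = [[rng.choice(ACTIONS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)         # rank by cost, worst first
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                  # point mutation
                child[rng.randrange(length)] = rng.choice(ACTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

The exact graph-search variant would instead enumerate paths through the workload model, which is why it struggles with models containing many loops; the GA trades the optimality guarantee for scalability.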
Original language: English
Supervisors/Advisors
  • Truscan, Dragos, Supervisor
  • Porres Paltor, Ivan, Supervisor
Award date: 7 Dec 2020
Publisher
Print ISBNs: 978-952-12-3999-1
Electronic ISBNs: 978-952-12-4000-3
Publication status: Published - 7 Dec 2020
MoE publication type: G5 Doctoral dissertation (article)

Keywords

  • Performance testing
  • Performance exploration
  • Software Testing
  • Web application
  • Model-Based Testing
  • Deep reinforcement learning
