Tuesday, October 02, 2012

Performance Testing a Web Framework Application

Performance testing a web application built on a web framework, whether Java Servlets, Struts, Rails, or another, can pose significant challenges. These challenges are present regardless of the performance testing tool: they appear here with Rational Performance Tester, but are equally evident in JMeter and other load generation tools.
We will define a “normal” web application as one that does not rely on any web framework, uses JSP or ASP pages, contains a limited amount of session information, and is generally stateless. Other attributes of a normal web application include:
  • Each page has a unique URL and serves a specific purpose or function within the web application.
  • Changing the page request attributes will modify the dataset, operations, or view of the requested URL, but will not change the underlying purpose or function of the page.
  • Stateful operations are performed by chaining a series of page URLs together, each serving an appropriate purpose. Out-of-workflow requests (requests for a different URL) are not blocked; they simply terminate the current workflow.
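The stateless model described above can be sketched as a simple dispatch table. This is an illustrative sketch only; the URLs, handlers, and parameters are hypothetical and not tied to any particular framework:

```python
# Sketch of a "normal" stateless web application: each URL maps
# to exactly one handler, and request attributes change only the
# data or view returned, never the page's purpose. No session
# state is consulted. All names here are hypothetical.

def search(params):
    # params alter the dataset returned, not the function of the page
    return {"page": "search", "query": params.get("q", "")}

def report(params):
    return {"page": "report", "format": params.get("format", "html")}

# one unique URL per function
ROUTES = {
    "/search": search,
    "/report": report,
}

def handle(url, params=None):
    path, _, query = url.partition("?")
    args = dict(p.split("=", 1) for p in query.split("&") if "=" in p)
    args.update(params or {})
    return ROUTES[path](args)
```

Because each request is self-contained, a load test can issue these URLs in any order, and a failed request has no effect on the requests that follow it.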
The operation of a normal web application differs from how web framework applications are constructed. Web framework applications generally contain a significant amount of session information and have stateful workflows that limit the options for performing out-of-workflow requests. Other distinguishing attributes include:
  • A single URL encapsulates many different (usually related) purposes and functions.
  • Stateful operations consist of a series of requests that are chained together using a combination of request attributes and stateful information stored in the session.
  • Monolithic applications often appear under a single URL.
  • Changing or breaking out of predetermined workflows may be strictly controlled and prevented by disallowing any unexpected state changes.
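The stateful, workflow-locked behavior described above can be sketched as follows. The wizard steps, single entry URL, and error handling are illustrative assumptions, not the behavior of any specific framework:

```python
# Sketch of a framework-style wizard: a single URL ("/app") serves
# every step, the current position lives in the session, and any
# request that does not match the expected next step is rejected
# rather than starting a new workflow. Step names are hypothetical.

WORKFLOW = ["login", "select_account", "enter_order", "confirm"]

class FrameworkSession:
    def __init__(self):
        self.step_index = 0  # server-side session state

    def request(self, action):
        expected = WORKFLOW[self.step_index]
        if action != expected:
            # out-of-workflow request: blocked, state unchanged
            return {"status": "error", "expected": expected}
        self.step_index += 1
        done = self.step_index == len(WORKFLOW)
        return {"status": "ok",
                "next": None if done else WORKFLOW[self.step_index]}
```

Note that a rejected request leaves the session stuck at the same expected step, which is exactly why a single recording error can invalidate every subsequent request in a test.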
As a result of these differences there are some key considerations when attempting to performance test a web framework application.
  1. Workflows within a web framework may be inflexible. Simple errors can therefore cause cascading problems that invalidate the remainder of a test: if the correct sequence is breached by a page timeout, or by a validation error from an incorrect datapool, the test may be unable to escape from the workflow.
  2. Out-of-flow, looping, and branching tests can be difficult or impossible to record because the request that initiates a loop or branch may change. There may not be a consistent “start of workflow” page request that can be made to initiate or restart a workflow.
  3. Error conditions must be planned for and carefully handled since an error state may negate all other requests until the error is handled correctly.
  4. Minor changes to the application may force entire test suites to be rewritten due to inability to record, insert, and chain new parameters or pages into an existing test.
  5. Similar workflows with different data may require individual test scripts due to changes in screen components.
To make your performance testing of a web framework application as successful as possible, here are some strategies I have adopted to reduce the amount of time I spend rewriting test scripts for each new code deploy, and to reduce the number of page errors I get during the course of a test run.
  1. PLAN your tests before recording! Understand which functions you want to hit with each test and ensure you have scripts that are focused with as few extraneous requests as possible.
  2. Create many short, focused tests instead of long all-encompassing tests to reduce the potential for errors and for error chaining.
  3. Separate read-only tests from data-entry tests and pre-populate data whenever possible to avoid long “setup” sequences.
  4. Avoid test dependency, where later functions or tests heavily depend on earlier parts to configure the environment or data. Either preconfigure your environment and datasets, or have the test itself perform only the critical data entry prior to exercising its assigned function.
  5. Chain multiple, small, independent tests in a schedule to cover desired functionality. Branching and looping should be done at the schedule level whenever possible to avoid needing to record branches and loops within individual tests.
  6. If you do need to include datapools that cause variation in functionality, record the largest, most inclusive workflow and dataset, and then populate your datapool with values that result in restricted subsets of the recorded workflow.
  7. Establish easily accessible, non-blockable base points in your test. These base points or pages must be able to short-circuit functionality that has become error locked and should be returned to as often as possible in your recording to allow “reset” to occur and reduce the impact of error chaining. Using base points allows you to more easily include other constructs such as loops, conditionals, or branches in your recording.