Performance testing is an often neglected component of application development and system replacement projects. It is frequently ignored, or relegated to the end of the project as a period of “performance tuning activities”, in favour of functional development. And when a project starts to go off the rails, targets get pushed, and the QA and tuning time that was allocated gets cut.
Why does this legacy of waterfall planning continue to exist within an Agile world?
Waiting until a system is functionally complete to start performance testing does not save money, and it increases risk that could be mitigated by incorporating performance testing into your cycle of sprints.
Failing to test the performance of your system risks an underbuilt environment, leading to delays, downtime, and unhappy users. Even when the system is up and running, it may be slow and unresponsive, and extended queues can infuriate users.
Performance testing at the end of the project, during stabilization, means you may have time to build out your environment to meet initial demand, but specific performance problems caused by poor code or inefficient architecture will not have time to be resolved unless go-live is delayed. Testing this late risks sluggish performance and intolerable wait times for some operations, which can mean dissatisfied users and cycles of emergency patches to improve performance.
Performance testing can be accomplished in an agile project by incorporating it into the agile process and being willing to prioritize it appropriately.
Effective performance testing is planned for and included from the inception of the project and is part of a continuous cycle of QA. Every story must include, as part of its QA acceptance, a set of performance metrics that it must meet before it can be marked as complete.
The standard story card (“As an xxxx, I want yyyy, so that zzzz”) defines the typical requirements of a story, which are generally further refined by acceptance criteria (“x is working”, “x cannot do y without doing z”, “x is stored in y for use elsewhere”).
Acceptance criteria have the following benefits:
- They get the team to think through how a feature or piece of functionality will work from the user’s perspective
- They remove ambiguity from requirements
- They form the tests that will confirm that a feature or piece of functionality is working and complete.
But acceptance criteria generally define only functional acceptance. You will rarely see acceptance criteria such as the following: “x must return results in less than y seconds when the server is under z load 19 times out of 20, and in less than u seconds when the server is under w load 18 times out of 20.”
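One way to make a criterion like that enforceable is to express it as an automated test that runs alongside the story’s functional tests. The sketch below is a minimal illustration, not a prescription: the endpoint, concurrency level, and thresholds are hypothetical placeholders standing in for the x, y, and z your story would actually specify. It drives concurrent requests with Python’s standard library and asserts on the 95th percentile response time, the “19 times out of 20” in the example above.

```python
"""Minimal sketch of turning a latency acceptance criterion into an automated
check. The endpoint URL, concurrency level, and thresholds are hypothetical
placeholders -- substitute the values agreed in your own story."""
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:8080/api/search?q=test"  # hypothetical endpoint
CONCURRENT_USERS = 20        # the "z load" in the acceptance criterion
REQUESTS_PER_USER = 10
P95_THRESHOLD_SECONDS = 2.0  # "less than y seconds, 19 times out of 20"


def timed_request(_):
    """Issue one request and return its elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=30) as response:
        response.read()
    return time.perf_counter() - start


def test_search_latency_under_load():
    total_requests = CONCURRENT_USERS * REQUESTS_PER_USER
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(timed_request, range(total_requests)))
    # The 95th percentile corresponds to "19 times out of 20".
    p95 = statistics.quantiles(durations, n=20)[18]
    assert p95 <= P95_THRESHOLD_SECONDS, (
        f"95th percentile latency {p95:.2f}s exceeds "
        f"{P95_THRESHOLD_SECONDS:.2f}s under {CONCURRENT_USERS} concurrent users"
    )
```

A check like this can run in the story’s CI pipeline, so the performance metric is verified every time the functional acceptance tests are, rather than once at the end of the project.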
A good performance testing plan will define:
- The performance criteria that the system is required to meet
- An explanation of how the performance criteria will be measured and how they map to business objectives (see the measurement sketch after this list)
- Remediation steps to explain how failures will be prioritized, handled, and resolved.
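How the criteria will be measured is the part that most often stays vague, so it helps to tie the plan to a concrete, repeatable workload. As one illustration only, the sketch below uses the open-source Locust load-testing tool to model a weighted mix of user actions; the endpoints, task weights, and wait times are hypothetical and would come from your own usage profile. The percentile and error-rate reports Locust produces can then be compared directly against the criteria in the plan.

```python
"""Illustrative Locust load profile. The endpoints, task weights, and wait
times are hypothetical placeholders for a real usage profile."""
from locust import HttpUser, task, between


class TypicalUser(HttpUser):
    # Simulated think time between actions (placeholder range).
    wait_time = between(1, 3)

    @task(3)
    def search(self):
        # Weighted 3:1 against checkout to reflect an assumed usage mix.
        self.client.get("/api/search?q=test", name="search")

    @task(1)
    def checkout(self):
        self.client.post(
            "/api/orders",
            json={"sku": "ABC-123", "qty": 1},  # hypothetical payload
            name="create order",
        )
```

With a recent Locust release this could be run headless against a staging host, for example `locust -f loadtest.py --headless --users 200 --spawn-rate 20 --run-time 10m --host https://staging.example.com`, with the user count, ramp rate, and duration taken from the plan’s load profile.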
A team that has a solid grasp of the importance of system performance can incorporate performance testing tasks and remediation into an agile project by defining performance objectives, systematically evaluating the system, and defining failures as remediation stories that get fed into the backlog and prioritized.
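To make “failures become remediation stories” more than a good intention, the evaluation step can end with an automated gate that fails the build and emits backlog-ready stubs whenever a threshold is breached. The sketch below is a simplified, standard-library-only illustration; the metric names and thresholds are hypothetical, the measured values would in practice come from the load-test report, and the stubs would be pushed to whatever backlog tool the team uses.

```python
"""Sketch of a CI gate that turns performance-threshold breaches into
backlog-ready remediation stubs. Metric names and thresholds are hypothetical."""
import json
import sys

# Performance objectives agreed in the test plan (placeholder values).
THRESHOLDS = {
    "search_p95_seconds": 2.0,
    "checkout_p95_seconds": 4.0,
    "error_rate_percent": 1.0,
}


def gate(measured: dict) -> int:
    """Compare measured metrics to thresholds; emit a remediation stub per breach."""
    breaches = [
        {
            "title": f"Performance remediation: {metric} exceeded threshold",
            "measured": measured[metric],
            "threshold": limit,
        }
        for metric, limit in THRESHOLDS.items()
        if metric in measured and measured[metric] > limit
    ]
    if breaches:
        # In a real pipeline this JSON would be pushed to the team's backlog tool.
        print(json.dumps(breaches, indent=2))
        return 1  # fail the build so the breach cannot be quietly ignored
    return 0


if __name__ == "__main__":
    # Expects a path to a JSON results file written by the load-test step.
    measured_metrics = json.load(open(sys.argv[1])) if len(sys.argv) > 1 else {}
    sys.exit(gate(measured_metrics))
```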
A common issue with many development teams is a shortage of people with the depth of experience needed to conduct effective and efficient performance testing. One option to consider is hiring a team with the knowledge and specialization to analyze, manage, educate, and implement a performance testing plan in a cost-effective manner.
The MNP Performance Testing team has the tools and experience to conduct thorough performance analysis while integrating seamlessly within the Agile process on projects both large and small.