If you are working on a new application development project, experiencing performance problems within your application, or simply trying to improve your software quality assurance processes, you may be ready to start web application performance testing. The goal of this post is to provide some simple, practical guidelines to help you get started and be more successful on your web application performance testing projects.
I have had the opportunity to lead and execute performance testing projects covering several different styles of web applications, including: high-traffic, public-facing, dynamic web sites; small and large back-office administration systems with several modules; E-commerce systems; and reporting and analytics dashboards for business users. Through those efforts I developed the guiding principles I use for web application performance test planning and execution, and I'm sharing them here with you. To that end, here are the ten guidelines I follow when getting started with performance testing:
- Start with business goals: You may be planning to build a comprehensive set of tests for all sections of the site when the business only cares that one key sales process be fast, failure-free, and responsive. That is why it is critical to understand what the business wants to accomplish with the application and with the performance testing efforts. It sets the stage for planning your scripts and creating performance targets. You may be able to gather this information from documentation or from user stories and design documents, but I find it's best to have a brief discussion with the key product owner(s) or business user(s) to establish these goals and set expectations.
- Plan test scripts based on user roles: Following on from the principle of starting with business goals, you should align your scripts with user roles (or even personas). For example, if you are testing an E-commerce site you might create scripts for buyers, random browsers (i.e., window shoppers), and information searchers; if you're testing a blog system you might organize scripts by posters, commenters, and readers. The purpose of this is to focus and organize your results and reporting around user goals and metrics. It also allows you to focus more testing on goal paths and to limit the number of scripts. If your project has user stories available, that is a good place to start identifying your roles.
- Inventory your application: List out the sections, categories, or areas within your site along with key pages or paths. If you are working with an existing site you may be able to get this information from a site map, Google webmaster tools, or other crawling tools. You don't have to worry about producing a comprehensive list, but capture enough to make sure you can cover the key goal paths. This inventory is important to help drive your test cases, scripts, or storyboards. It is also very useful for planning coverage of specific sections or pages.
- Gather existing metrics: This is a good tip if you are testing an existing site or one that has already seen some use. You can mine tools such as Google Analytics to understand where the most active pages and problem areas are. This can help you prioritize pages that should get more coverage and simulate more "real-life" scenarios when you're planning your mix of tests.
- Focus effort on goal paths and conversion targets: If you have established clear business goals for testing, this step might be easy. As you begin to plan your scripts, your test mix, and development effort, make sure that you prioritize development, validation, and execution activities around goal paths and conversions. This ensures that the most important areas of your application get the most coverage. If you're using Goals in Google Analytics or have real conversion-rate data, use that to inform your priorities.
- Limit the number of scripts to develop: If the scope of the testing or the site is small, strive for 3 to 5 test scripts. For medium and larger applications you may need more test scripts per application module or area, but still try to limit the number of scripts per area to 3 to 5. If you have many scripts, test maintenance and analysis of the results become a problem; also remember that you may have a larger and more comprehensive set of tests used for functional testing of the site. For performance testing I find it is best to keep things simple up front and focus on key areas of the application. That way you can prove results and deliver value sooner; you can always plan to iterate and expand over time as your application matures.
- Plan scripts and storyboard before building anything: This is where all of the upfront planning really comes together. If you have business goals, an inventory of your application, defined roles, goal paths, and conversion targets, you can use all of this valuable information to plan your scripts, test runs, and coverage. You can even go as far as developing a storyboard for all of your tests (be sure to document it if you use a whiteboard). The good news is that you can do all of this without writing any code or tests. In my experience, applications change a lot during the development cycle, and you want to develop your tests when the application begins to stabilize. Planning upfront means you will be ready for that (usually short) window when you can develop tests.
- Plan for test loss and rewrites: The realities of application development and the need for business agility mean that applications are constantly changing. This is compounded if you are starting your test effort from scratch with a new application. Don't fret if you have to rewrite tests upfront or change direction as you get closer to a release. As the application and test suites mature, loss and rewrites should decrease (if they don't, you may need to revisit your test design and planning).
- Use randomization to gain coverage or build less important cases: To allow you to focus effort on goal paths (tip #5) and to limit the number of scripts to develop (tip #6), utilize the test data randomization, conditional statements, and looping capabilities of your testing tools. For example, let's say you have an E-commerce site and want to simulate the "window shopper" role; you may be able to simply add loops and some random parameters to get a lot of coverage and to simulate a base level of activity on your site. That can provide a lot of value with minimal effort.
- Make sure your Application Performance Management (APM) tools are ready to go: Most testing tools do a good job of monitoring resources under test, response time, etc. There are also advanced tools available (e.g. Dynatrace, AppDynamics, Quest, BMC, Microsoft Operations Manager) to help monitor application performance and server health. You definitely want to leverage these while your tests are running to help understand and debug any performance issues. These tools can take time to set up and gain access to, so to make sure you are ready to run your tests as soon as they're developed, get this detail in order as early as you can.
As you read this you may think to yourself that many of these things are "no-brainers"; that was intended. Oftentimes I find that performance testing discussions quickly begin to focus on tools, testing theory, and implementation details. I find that focusing on these principles and fundamentals up front is a better approach. Once you have a plan driven by business goals and objectives, the technology, tools, and execution can fall into place, and you can iterate, improve, and enhance your approach as you go.
So there you have it; my "Ten tips for planning web application performance testing". I'm eager to hear your comments, feedback, opinions, or experience in this area. Please reach out to me if you have questions, would like to talk more, or need help or advice on your particular project.