Thursday, May 21, 2015

ICPE 2015, City of Austin - Part 1

This post is all about my impressions of my trip to Austin TX for ICPE '15 and the city in general. I have covered details about the conference itself in other posts.
ICPE 2015, Getting Published with ACM, and Performance Management
Book Club - Foundations of Software and System Performance Engineering

Dear Austin Texas - your town motto seems incredibly appropriate. I'm not referring to your official motto of "Live Music Capital of the World", though that was abundantly evident as well; I mean the unofficial one: "Keep Austin Weird."

Flying to Austin is weird. Flying to Austin on the afternoon of Super Bowl Sunday, on a plane that is running an hour late and half-filled with Texan transplants who know they aren't going to land before kickoff, is more so. Especially for a true-blood Canadian who reserves such passion for games that don't prominently feature an attention-hogging, rhythm-impaired southpaw shark.

But the weirdness started earlier than that. For a city that is the capital of the largest state in the lower 48 (and the second-most-populous in the nation), a city that boasts an international reputation as a cultural hub, it is hard to get to. I only had two real options booking my flight from Winnipeg (YWG) to Austin (AUS) if I wanted to avoid an overnight layover (I did) and avoid arriving late at night, risking a delay that would make me late for the start of ICPE '15 on Monday morning (I did too). Since I was travelling on February 1st from Winnipeg, delays or cancellations were very much a possibility. The two options I did have were to leave at 6:30am (please arrive at the airport two hours before departure if you are flying internationally) with a six-hour layover in Minneapolis (MSP), arriving in the early afternoon; or to leave at 7:30am with short layovers (less than an hour) in Edmonton (YEG) and Los Angeles (LAX), arriving at dinnertime.

In the interests of not flying a circuitous route around half the continent and potentially missing connections in two different airports, I chose the earlier departure. After arriving and puttering around the Minneapolis airport for a number of hours, I went to my departure gate early, where the staff let us know that the flight was going to be half an hour late. Half an hour later, it was an hour late. Shortly after that, when our original departure time arrived, they let us know that although we were scheduled to fly on Delta, someone along the way had failed to realize that we would require an actual airplane in order to do so... and that they would try to find a new airplane to replace the non-existent one we had been scheduled to fly on, but we would be facing further delays or possible cancellation.

Fun!

Fortunately they were able to discover an airplane in Wichita that we could use. I won't claim to understand the logistics of airline flight scheduling, but when the plane from Wichita arrived it proceeded to unload a full complement of passengers. I'm not sure if the passengers were intending to come to Minneapolis, or if they were merely abducted in order to justify providing us with transportation, but either way - we had a plane, and one that managed to arrive in Austin less than half an hour ahead of the flight from LAX.

To give you a little context about my experience arriving in Austin, let me tell you a little about myself. I have never been "south". I've never traveled overseas, never been to Mexico, or Cuba, or on any kind of warm vacation. I've driven coast-to-coast in Canada, put my feet in the Pacific Ocean in Victoria BC, and in the Atlantic along the red sand beaches of PEI. But the absolute furthest south I have ever been is Gary Indiana - just at the very tip of Lake Michigan. I have never been anywhere that doesn't have snow in February. When I left home at 4-something in the morning in a sweater and a windbreaker, the temperature was -26 C (windchill of -44 C), and when I arrived in Austin it was +23 C: an apparent difference of 67 C (or 120 F), a change roughly equivalent to that experienced by mall shoppers in Florida when they exit the building on a summer's day. In fact, the day before, it was just as cold and I managed to snap a picture of the most brilliant sun dogs I have ever witnessed.



In Austin I stayed at the Hyatt Place Austin Downtown, which is a beautiful hotel in the heart of downtown, across from the Convention Center, just a few blocks from 6th Street and the Congress Avenue Bridge, and within easy walking distance of the Capitol. The accommodations were fantastic, the staff were wonderful (one of whom, I discovered, was an expat from Winnipeg no less), and they put me on the top floor facing east, which gave me an amazing view of the morning sunrise.



Upon my arrival, seeing that I was in Texas, I heeded that age-old proverb about the Romans - and turned on the Super Bowl just in time to see the halftime show, which proved to be more baffling than I expected. Alas, as I was busy doing my final preparations for ICPE, I was watching it in my room while I worked, so I had no one to translate for me - but I gather that one of Katy Perry's dancing sharks was some kind of fortune teller predicting an apocalypse that would be brought about by an enormous lion? But I may have gotten that wrong. I did, though, appreciate the amazingly entertaining final 5 minutes of the 4th quarter (7 hours in real time), which was about as dramatic an ending to a game as I have ever seen. It's just a pity that Russell Wilson wasn't using the punctured bicycle tire that Brady was playing with, or that last interception might not have happened.

(Part 2 will be coming soon!)

Friday, May 15, 2015

Book Club - Foundations of Software and System Performance Engineering

I recently received, and am currently reading, a copy of Foundations of Software and System Performance Engineering by André Bondi, a Senior Staff Engineer working in performance at Siemens Corp., which I ordered through Amazon.

I had the good fortune of meeting André Bondi this year at ICPE '15 (which is where I learned about Foundations), and he was a fascinating, enthusiastic, and wonderful person to talk to. After I presented my paper on Defining Standards for Web Page Performance, he approached me and we had an engaging discussion about performance requirements and the perspective of the end user, and he had great things to say about the work I was doing. His energy and interest in the subject were plainly obvious. It was a pure pleasure to have the opportunity to meet and talk with him.

As for Foundations, I haven't read very far into it yet (about 60 pages or so), but I have skimmed each section, and the book is shaping up to be a fantastic resource and introductory guide to Performance Engineering. Since it is based on a training course that Dr. Bondi developed to train performance engineering and testing teams, I would expect no less. The book covers the entire software lifecycle from the perspective of Performance Engineering as a practice - requirements, metrics, analysis, workloads, testing, instrumentation, and validation - along with how these fit into Agile environments and how to communicate and work with stakeholders on a project.

I won't write a complete review of Foundations yet as I still have much more to read, but I am already quite sure that this book is going to take a prominent place on my bookshelf, and play an important role while I am training my own performance engineering team.

Monday, May 11, 2015

BAWorld Winnipeg 2015 - Oct 7 - 9

The schedule for BA World Winnipeg 2015 taking place at the RBC Convention Centre from October 7th to 9th has just been released, and I have received notice that my seminar proposal has been accepted.

I will be presenting The Black Art of Performance Requirements for the Modern Web on October 8th at 2:15pm where I will be discussing the disconnect between what we believe the typical end user needs in terms of system performance, what they say they want, and what they actually need. This seminar is based upon a real-life case study and the contrast that it presented with "industry standards" for performance.

Some of the material is based upon a research paper that I presented at ICPE 2015 in Austin TX and is available online at the ACM Digital Library or MNP Media Library.

The complete session description is as follows:
Performance is a critical consideration in any project; many projects fail not only because of poor performance, but because project teams don't give performance due consideration. Much of the reason is a lack of general understanding about how to define performance requirements, what makes a good performance requirement, and how to elicit co-operation to ensure they are met. The Black Art of Performance Requirements sheds light upon what end users want, what they think they want, and what they actually need. It examines the failings of industry standards, the reasons the results of industry studies fail to provide usable recommendations, and how to salvage value out of existing literature. High-profile performance failures such as Healthcare.gov and Examsoft are neither accidental nor unavoidable. They are the result of failure to comprehend, failure to plan, and failure to commit to a set of defined performance requirements. Based on a real-world case study, The Black Art of Performance Requirements presents a process for defining SMART performance requirements in co-operation with business, developers, and analysts. Taking two years of production performance data and connecting it with end-user performance complaints during that time frame, this session demonstrates the effectiveness of defining requirements using this process and explores how to objectively evaluate system performance against those requirements.

  1. Understand why users don't understand performance requirements, and learn how to define quality SMART performance requirements that will satisfy them anyway.
  2. Learn how to obtain buy-in from business, developers, and analysts for meeting performance targets and resolving performance problems.
  3. Learn how to measure, evaluate, and compare performance results against targets objectively.
For more information about performance requirements, testing, and how to improve your chances of project success, please see MNP Consulting - Performance Management.

Wednesday, March 25, 2015

ICPE 2015, Getting Published with ACM, and Performance Management

It has been a busy year so far.

For much of 2014 I had been working on some research around Web Page Performance: understanding users' expectations, what the industry believed about response times and how fast was 'fast enough', and how that compared to the real world. What I ended up finding is that much of the research that had been done in this field was, frankly, worthless. Oh, there were good points to be found, some useful results, and valuable nuggets to be unearthed - but on the whole, most results drew conclusions that were being interpreted far too generally, or their methodology was deeply flawed.

In short, there was a gaping hole that was asking to be filled. And the question was, how fast is 'fast enough'?

The most valuable conclusions I could glean from published works were as follows:

  1. Faster is better
  2. For systems that don't need to rely on real-time feedback, <1s is fast enough to be seamless for most request-response human-computer interactions, whether locally or over the web
  3. People will happily wait longer if they are given responsive feedback while they are waiting
  4. Consistency is important
But from a practical perspective, what kind of performance level should I be aiming for? Is 5s fast enough? What about 3s? Is average response time sufficient, or do we need to use something stricter?
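To make that last question concrete, here is a small illustrative snippet (my own, not drawn from any of the studies) showing why an average alone can be a misleading target: nine quick responses and one very slow one produce a mean that looks tolerable, while the 95th percentile exposes the slow tail users actually notice.

import java.util.Arrays;

public class AverageVsPercentile {
        public static void main(String[] args) {
                //Nine quick responses and one outlier, in seconds
                double[] responseSeconds = {1.2, 1.3, 1.1, 1.4, 1.2, 1.3, 1.1, 1.2, 1.4, 12.0};
                Arrays.sort(responseSeconds);

                double sum = 0;
                for (double t : responseSeconds) {
                        sum += t;
                }
                double mean = sum / responseSeconds.length;

                //Nearest-rank 95th percentile: the value at position ceil(0.95 * n), 1-based
                int rank = (int) Math.ceil(0.95 * responseSeconds.length);
                double p95 = responseSeconds[rank - 1];

                //Prints: mean = 2.32s, p95 = 12.00s
                System.out.printf("mean = %.2fs, p95 = %.2fs%n", mean, p95);
        }
}

This is why stricter measures, such as 90th or 95th percentile targets, are worth considering alongside (or instead of) averages when writing performance requirements.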

These kinds of questions led me to the research that eventually resulted in a paper, which I submitted in September 2014 and which was accepted by the International Conference on Performance Engineering (ICPE 2015).

That research paper is now officially available online at the MNP Library Defining Standards for Web Page Performance (also available via the ACM Digital Library with the following Citation Link).

In February of 2015 I got to travel to Austin TX to present my paper at the conference. I will post my thoughts about the conference and the trip in an upcoming post, but in short: it was a wonderful experience, I met a lot of amazing people, the presentations were fantastic, and I learned quite a bit.

Now, I am pursuing expanding our service offerings as part of MNP's Technology Consulting Team by officially introducing Performance Management.

Performance Management is all about improving the success rate of projects with a technology component. Performance is often a consideration that exists only as an afterthought on projects, something that can be taken care of after the important functional work has been completed. That very thinking has led to some spectacular failures in the real world notably including the 2014 Examsoft debacle, the 2013 launch of Healthcare.gov, the 2008 Edinburgh Fringe Festival ticketing system failure, and the 2007 scrapping of the new UK General Register Office System.

Performance Management is about planning for success; it is about helping organizations invest in quality and avoid repeating the mistakes of the past. This has become a bit of a passion of mine, a purpose, and I intend to spend a lot more time using this space to talk about performance in the future.

Tuesday, August 05, 2014

RPT Custom Code - Ensuring Unique Logins for Looping Tests

A problem I have encountered while performance testing business applications with Rational Performance Tester is the uniqueness of logins. Often, business applications contain logic that prevents duplicate logins from multiple sources at the same time, or they have workflow control, session replication, or persistence that will result in interference if the same login is used by more than one thread at the same time. A quick workaround is to ensure there is a large pool of available logins to reduce the possibility of duplicates; however, this is not always possible if a system uses Active Directory or LDAP, or if it is SSO-enabled. So the challenge is: how do we bind a unique login to each virtual user thread for the duration of the test?

The first approach would be to adjust the settings on your login datapool. When you add a datapool to your test an option exists to "Fetch only once per user".

In theory, this would be sufficient to ensure each thread has a unique login. In practice, however, it is not quite so simple. In a test configuration that uses multiple agents, "Open mode" must be set to "Segmented"; otherwise each agent will have a complete copy of the same list of accounts, resulting in duplication. To use Segmented mode in this manner, though, your datapool must be larger than the number of threads to ensure sufficient rows are available in each segment. (IBM recommends twice as many records as threads to ensure balanced segmentation - so, for example, at least 200 login rows for a 100-user test.)

Despite the theory, I have run into the problem of threads exiting prematurely in a multiple-user-group, multiple-agent, infinite-loop test configuration with the error message "End of datapool reached". This is not an error we should be seeing. Reviewing the saved response data demonstrated that each thread was correctly using a single unique login, yet the error occurred consistently.

While attempting to debug the issue, I tried setting the "Wrap when last row is reached" property. Although this successfully prevented threads from exiting prematurely, the wrap property appears to override the fetch-once property, returning me to a state of duplicate logins. Unfortunately, IBM's documentation does a poor job of explaining how these datapool properties interact with one another, so to overcome the issue I turned to writing my own piece of custom code to manage my logins.

The following custom code solution binds each thread to a specific login on first access, and thereafter always returns the same login for every subsequent request from that thread. It also segments the login map into groups that can be accessed manually (by passing the group name as the first parameter) or automatically (by setting the User Group Name in your schedule).

This code is simplified and has a few limitations.

  1. Distinct Agent Segments - Each Agent must use a distinct User Group Name because static objects are not shared in memory between agents. If two agents are assigned to the same User Group then duplicate logins will occur.
  2. Configurability - The segments and login lists are hardcoded in this code segment; this could be overcome by adding an option to read logins from a file (or other data storage).
  3. Order - Logins will always be returned in the same order for each test run; a piece of randomization code would allow them to be shuffled. (A sketch addressing points 2 and 3 follows the code listing below.)
package export; 

import java.util.Arrays; 
import java.util.HashMap; 
import java.util.List; 
import java.util.Map; 

import com.ibm.rational.test.lt.kernel.IDataArea; 
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices; 
import com.ibm.rational.test.lt.kernel.services.ITestLogManager; 
import com.ibm.rational.test.lt.kernel.services.IVirtualUserInfo; 
import com.ibm.rational.test.lt.kernel.services.RPTCondition; 

/** 
 * !!Warning!!: This code is NOT agent safe. Each unique set of segmentation identifiers MUST be isolated to a single agent for this code to work correctly 
 * @author grempel 
 */ 
public class UniqueLoginManager implements 
                com.ibm.rational.test.lt.kernel.custom.ICustomCode2 { 
        
        private static Map<String, List<String>> loginsBySegment = new HashMap<String, List<String>>(); 
        private static Map<String, String> loginsByThread = new HashMap<String, String>(); 
        private static Map<String, Integer> indexBySegment = new HashMap<String, Integer>(); 
        private static boolean initialized = false; 

        /** 
         * Initializes static login maps by segmentation 
         */ 
        private synchronized void init() { 
                if(!initialized) { 
                        loginsBySegment.put("GRP1", Arrays.asList(new String[]{"grp1_login1","grp1_login2","grp1_login3"...}));
                        loginsBySegment.put("GRP2", Arrays.asList(new String[]{"grp2_login1","grp2_login2","grp2_login3"...}));
                        loginsBySegment.put("GRP3", Arrays.asList(new String[]{"grp3_login1","grp3_login2","grp3_login3"...}));
                        indexBySegment.put("GRP1", 0); 
                        indexBySegment.put("GRP2", 0); 
                        indexBySegment.put("GRP3", 0); 
                        initialized = true; 
                } 
        } 
        
        /** 
         * Instances of this will be created using the no-arg constructor. 
         */ 
        public UniqueLoginManager() { 
        } 
        
        /** 
         * Returns the previous login if the thread has previously requested a login during this test execution. Otherwise retrieves the next login from the list, binds it to the thread, and returns it. 
         * @param segmentId 
         * @param thread 
         * @return String.class login 
         */ 
        public synchronized String getLogin(String segmentId, String thread) { 
                if(loginsByThread.containsKey(thread)) { 
                        return loginsByThread.get(thread); 
                } else { 
                        init(); 
                        List<String> logins = loginsBySegment.get(segmentId); 
                        Integer index = indexBySegment.get(segmentId); 
                        if(logins!=null && logins.size()>0 && index<logins.size()) { 
                                String login = logins.get(index); 
                                indexBySegment.put(segmentId, index+1); 
                                loginsByThread.put(thread, login); 
                                return login; 
                        } else { 
                                //no logins remaining in this segment (or unknown segment id): signal failure
                                return null; 
                        } 
                } 
        } 

        /** 
         * Generate and manage unique logins across multiple user threads and user-defined segments. 
         * Warning: This class is not thread-safe for a segmentation identifier that is distributed across multiple agents. Each segmentation identifier 
         * must exist on a single agent to avoid login duplication. 
         * @param String.class[] args - arg0 = segmentation identifier (optional, if not included will use the UserGroupName as segmentation identifier) 
         */ 
        public String exec(ITestExecutionServices tes, String[] args) { 
                String segmentId = null; 
                ITestLogManager tlm = tes.getTestLogManager(); 
                
                IDataArea dataArea = tes.findDataArea(IDataArea.VIRTUALUSER); 
                IVirtualUserInfo virtualUserInfo = (IVirtualUserInfo)dataArea.get(IVirtualUserInfo.KEY); 
                //The virtual user's name serves as the per-thread key for login binding 
                String user = virtualUserInfo.getUserName(); 
                
                //If args[0] is not provided, use the UserGroupName as the segmentation identifier 
                if(args.length<1) { 
                        segmentId = virtualUserInfo.getUserGroupName(); 
                } else { 
                        segmentId = args[0]; 
                } 
                
                String login = getLogin(segmentId, user); 
                if(login==null) { 
                        tlm.reportErrorCondition(RPTCondition.CustomCodeAlert); 
                        return null; 
                } 
                
                return login; 
        } 
}
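
As a postscript to limitations 2 and 3 above, here is a minimal sketch of how init() could be reworked to read logins from an external file and shuffle them. This is an illustration of one possible approach rather than tested code: the file path and the one-"segment,login"-pair-per-line format are assumptions for the example, and it needs a few additional imports (java.io.BufferedReader, java.io.FileReader, java.io.IOException, java.util.ArrayList, java.util.Collections, java.util.Random).

        /** 
         * Hypothetical variant of init(): reads "segment,login" pairs from an 
         * external file (limitation 2) and shuffles each segment's logins 
         * (limitation 3). File path and format are assumptions for illustration. 
         */ 
        private synchronized void init() { 
                if (initialized) { 
                        return; 
                } 
                BufferedReader reader = null; 
                try { 
                        //Assumed format: one "segment,login" pair per line 
                        reader = new BufferedReader(new FileReader("C:/rpt/logins.csv")); 
                        String line; 
                        while ((line = reader.readLine()) != null) { 
                                String[] parts = line.split(",", 2); 
                                if (parts.length < 2) { 
                                        continue; //skip malformed lines 
                                } 
                                String segment = parts[0].trim(); 
                                List<String> logins = loginsBySegment.get(segment); 
                                if (logins == null) { 
                                        logins = new ArrayList<String>(); 
                                        loginsBySegment.put(segment, logins); 
                                        indexBySegment.put(segment, 0); 
                                } 
                                logins.add(parts[1].trim()); 
                        } 
                } catch (IOException e) { 
                        throw new RuntimeException("Unable to load login file", e); 
                } finally { 
                        if (reader != null) { 
                                try { reader.close(); } catch (IOException ignored) { } 
                        } 
                } 
                //Fixed seed: order is shuffled but repeatable from run to run 
                Random rnd = new Random(42L); 
                for (List<String> logins : loginsBySegment.values()) { 
                        Collections.shuffle(logins, rnd); 
                } 
                initialized = true; 
        } 

The fixed seed keeps the shuffled order repeatable between test runs; substituting new Random() would give a different order on every run instead.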