Monday, March 31, 2014

RPT Custom Code - Saving Response Data

How to build datapools by extracting generated data from a system using custom code in Rational Performance Tester.

A major challenge that I encounter consistently as a Performance Engineer is priming a system with known, consistent data that I can feed into datapools and use for testing. There are a variety of ways to accomplish this: conversion processes, generation tools, database scripts, restore/upgrade processes, etc. However, a project that I have been working on recently has problems with all of these strategies. The first issue is that the database is automatically generated and nearly impossible to decipher, not to mention that it changes with each new code release. The second issue is that the data generation tools are not being kept up to date by the developers, so they are usually broken with each new release. These issues have led me to use my performance testing tool (Rational Performance Tester v8.5.2) as a workaround data priming tool.

This plan has its own challenges: primary identifiers are all system-generated, and the only way I can reliably access the data once it is generated is by using those primary identifiers. Granted, there are ways to obtain this information once it exists in the system, but they involve days if not weeks of effort to extract and cleanse the data into a usable form, and would require significant assistance from a developer. A better way would be to process and extract the desired information from the responses themselves as the data is being generated by RPT.

Currently RPT does not have any simple write-back mechanism for references during a test. There may be a way using references and variables, but if so it is well hidden beneath the primary strengths of the tool: load generation, data substitution, and response analysis. This led me to consider the question "How would I enhance the tool to let me extract response data?" Since RPT is an Eclipse-based tool that supports the addition of custom code and plug-ins, it should be entirely feasible to write some code to extract whatever I want from a response.

So that is what I did. The example and the code presented below are a very simple, bare-bones way of extracting a reference from a page response, passing it into a block of custom code, and writing it to a CSV file in a thread-safe manner, which allows you to import the generated CSV back into your project as a datapool source. There are dozens of ways I can think of to use and enhance this code - writing to a database, performing data transformation, correlating reference data with existing datapools, or producing multiple outputs - but this should be sufficient to get you started if you are having trouble bridging the gap from theoretical to practical.

Step 1: Create a Custom Code Class
With your recorded test open, right-click your top-level test element and select "Add --> Custom Code".

Selecting the new Custom Code element will show the Custom Code element details in the right-side pane, which lets you enter a Class name (in the format "package.Class"). If this is the first time you have created a Custom Code class, click the "Generate Code" button to create a basic default class implementation. If you already have the Custom Code class you want to use, skip down to Step 4 for adding an existing Custom Code element to your test case.

This will generate a simple Custom Code class that you can start using in your test.

package export; 

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices; 

/** 
 * @author unknown 
 */ 
public class Test implements 
                com.ibm.rational.test.lt.kernel.custom.ICustomCode2 { 

        /** 
         * Instances of this will be created using the no-arg constructor. 
         */ 
        public Test() { 
        } 

        /** 
         * For javadoc of ICustomCode2 and ITestExecutionServices interfaces, select 'Help Contents' in the 
         * Help menu and select 'Extending Rational Performance Tester functionality' -> 'Extending test execution with custom code' 
         */ 
        public String exec(ITestExecutionServices tes, String[] args) { 
                return null; 
        } 


}

To organize my library of custom code, I created a new Performance Test Project called "CustomCodeReference". If you create this project as a Performance Test Project it will automatically include all of the required RPT libraries in its build path. This will also allow me to export the project into a .jar file that I can import into other projects, or simply reuse it as a dependency in other Performance Testing projects as needed.


Step 2: Create a Singleton File Writer
Create a new class file that will be used by our custom code as the singleton file writer to manage file creation, access, and synchronization between threads. In this example I have created the class "export.ExportAdministrator" in my CustomCodeReference project to be my singleton file writer class. A suggested enhancement to this structure would be to create a dedicated Factory class that returns an instance of an interface to allow for multiple optional implementations such as a CSV file implementation, an XML file implementation, or a database implementation.

package export;

import java.io.File;
import java.io.FileWriter;
import java.text.DateFormat;
import java.text.SimpleDateFormat;

/**
 * @author grempel
 * @date 2014-03-27
 *
 * Singleton class with factory to manage thread-safe file export
 * Usage:
 *   Call ExportAdministrator.getInstance() to obtain object reference
 *   Use instance write() or writeln() to export to file
 */
public class ExportAdministrator {
    private static ExportAdministrator instance = null;
    private FileWriter fw = null;

    //Private constructor - obtain the shared instance through getInstance()
    private ExportAdministrator() {
        //Build a timestamped output file and wrap it in a FileWriter
        DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH-mm-ss");
        File file = new File("C:\\rpt-log\\" + df.format(System.currentTimeMillis()) + " output.csv");
        try {
            file.getParentFile().mkdirs(); //Make sure the output folder exists
            fw = new FileWriter(file);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Retrieve singleton instance of writer.
     */
    public static synchronized ExportAdministrator getInstance() {
        if (instance == null) {
            instance = new ExportAdministrator();
        }
        return instance;
    }

    /**
     * Write string to file
     */
    public synchronized String write(String str) {
        if (fw != null) {
            try {
                fw.write(str);
                fw.flush();
                return "true";
            } catch (Exception e) {
                return e.toString();
            }
        }
        return "failed";
    }

    /**
     * Write string to file and start a new line
     */
    public synchronized String writeln(String str) {
        if (fw != null) {
            try {
                fw.write(str);
                fw.write("\r\n");
                fw.flush();
                return "true";
            } catch (Exception e) {
                return e.toString();
            }
        }
        return "failed";
    }

    /**
     * Clean up open file references
     */
    @Override
    protected void finalize() throws Throwable {
        if (fw != null) {
            fw.close();
        }
        super.finalize();
    }

}

This class will generate a timestamped output file in the folder "C:\rpt-log\" on each agent that is running the test.

One note about using this class: if you are generating data with more than one virtual user, it is best to assemble the entire set of data you want to write into a single string before calling writeln(). If you call write() multiple times, you cannot guarantee that the writes will be sequential, as another thread may call write() in the meantime.
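
As a rough sketch of the factory enhancement suggested in Step 2, the writer could be hidden behind a small interface so that CSV, XML, or database implementations can be swapped in without changing the custom code. The Exporter and ExporterFactory names below are purely illustrative and not part of the project above; the sketch also assumes ExportAdministrator is modified to implement the interface.

// Exporter.java (hypothetical)
package export;

public interface Exporter {
    String write(String str);
    String writeln(String str);
}

// ExporterFactory.java (hypothetical)
package export;

public class ExporterFactory {
    //Returns a shared Exporter for the requested format.
    //Only the CSV case is sketched here; XML or database writers would follow the same pattern.
    public static synchronized Exporter getExporter(String format) {
        if ("csv".equalsIgnoreCase(format)) {
            return ExportAdministrator.getInstance();
        }
        throw new IllegalArgumentException("Unsupported export format: " + format);
    }
}

The custom code in Step 3 would then call ExporterFactory.getExporter("csv").writeln(result) instead of referring to ExportAdministrator directly.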

Step 3: Call File Writer from Custom Code
With our file writer (ExportAdministrator) singleton in place, we can now modify our custom code to submit whatever we want to write to that file. In this case I have created our Custom Code class called "export.ExportIdentifier" in my CustomCodeReference project.

This custom code expects two arguments. The first is an output type: a simple text string that lets you group outputs based on the type of data being written, which allows you to reuse the code for multiple types of values in a single test. The second is an output value: the value you want to write. In addition to these, the custom code generates a timestamp that is written to the file in order to determine the sequence of write operations.

package export;

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.ITestLogManager;
import com.ibm.rational.test.lt.kernel.services.RPTCondition;

/**
 * @author grempel
 * @date 2014-03-27
 * 
 * Usage:
 *   Add Custom Code reference to a test
 *   Set Class name = /CustomCodeReference/export.ExportIdentifier
 *   Add Arguments
 *     - Text: Value Type String
 *     - Text or Reference: Value String
 */
public class ExportIdentifier implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public ExportIdentifier() {
    }

    /**
     * For javadoc of ICustomCode2 and ITestExecutionServices interfaces, select 'Help Contents' in the
     * Help menu and select 'Extending Rational Performance Tester functionality' -> 'Extending test execution with custom code'
     * @param args - arg0 = value type, arg1 = value to write
     */
    public String exec(ITestExecutionServices tes, String[] args) {
        //Validate arguments
        ITestLogManager tlm = tes.getTestLogManager();
        if (args.length <= 1) {
            tlm.reportErrorCondition(RPTCondition.CustomCodeAlert);
            return null;
        }
        String type = args[0];
        String value = args[1];
        Long timestamp = System.currentTimeMillis();

        //Generate CSV string to write to file
        String result = timestamp.toString() + "," + type + "," + value;

        //Submit string to writer
        String error = ExportAdministrator.getInstance().writeln(result);

        //Catch errors and return to RPT for validation and handling
        if (error.equalsIgnoreCase("true")) {
            return result;
        }

        return error;
    }

}

Step 4: Add Custom Code to a Test
To use this custom code in your test, add a Custom Code element to your test (see Step 1 for reference). Instead of generating a new class, we will be specifying our existing class in the Class Name field with the format: "/ProjectName/package.ClassName". In the above example this will be "/CustomCodeReference/export.ExportIdentifier".

Next we will add the arguments to the custom code element details. The first argument is the output type. Click the "Text" button on the right and specify the text to submit as the type. This can be different for each Custom Code element that you add to your test, but it works best as a hard-coded value in order to ensure it remains the same for each iteration.

Then click "Add" and select a data source object or reference to use as the output value. In this example I have set the output type to "TestAccountName" and the output value to the "Username" variable from my datapool. This example is redundant, I already have my usernames in a datapool - normally I would insert the custom code after a page response that contains some data that I want to write to a file, and select a reference instead.

This will now allow me to export any generated data that exists in the response record to an external source.
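
For reference, each row written by this example follows the format timestamp,type,value, so a run using the "TestAccountName" type would produce a CSV along these lines (the timestamps and usernames below are purely illustrative):

1396281612345,TestAccountName,user001
1396281612398,TestAccountName,user002
1396281612467,TestAccountName,user003

This file can then be imported back into the project as a datapool source, as described at the start of this article.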

Hurdles #4 - Apache Pivot - Scroll Panes

Hurdles article number 4, and we are continuing to overcome challenges with Apache Pivot; this time we are looking at controlling the current position of a Scroll Pane in response to an event.

The premise of this article is that I have a Scroll Pane that contains a running log of information that is updated periodically. This log appends rows to a Table Pane that is the Scroll Pane's view. In order to ensure that any new information is immediately visible, I want to scroll the Scroll Pane to the bottom of the view.

There are a couple of pre-built methods to help with this. The Scroll Pane comes with setScrollTop(int y) and setScrollLeft(int x) to adjust the viewport position relative to the top and left respectively. It also has scrollAreaToVisible(Bounds bounds) which serves as an auto-scroll function to zoom to a sub-component.

The problem with these methods is that they will not work properly inside an event handler that modifies the view component! Within the event handler that is appending information to the view, the following problems become immediately apparent:

  1. The getHeight and getBounds methods on both the Scroll Pane and View components return 0.
  2. scrollAreaToVisible can throw out-of-bounds exceptions.
  3. setScrollTop(y), when given a hard-coded value, will adjust the Scroll Pane, but the scroll bars themselves will not be updated.

Through my investigations into the underlying code I discovered that I could obtain a reference to the Scroll Pane Skin object, which controls the scroll bars, tracks the current and maximum position of the viewport, and handles the rendering of these components. However, many of the objects and methods within the Skin are private and inaccessible without rewriting the entire class, and the ones that are visible have the same problem of returning 0 for height and bounds.

Eventually I discovered a thread in the Apache Pivot Users forum that held the key to the solution: "2.0.2 How to move scroll bar to bottom of ScrollPane?" The issue is that many of the view attributes are invalidated when the View component changes, and are only recalculated as part of the painting code itself, after the event handler has already returned.

The solution to the problem is to queue a callback with the ApplicationContext specifically to force the repositioning of the Scroll Pane after all of the paint operations have completed. I have included the code snippet below.

In the update event handler:
ApplicationContext.queueCallback(new ScrollPaneCallbackHandler(scrollPaneComponent));

Separate ScrollPaneCallbackHandler class:

private class ScrollPaneCallbackHandler implements Runnable {
    private final ScrollPane pane;

    public ScrollPaneCallbackHandler(ScrollPane pane) {
        this.pane = pane;
    }

    /**
     * Resets the ScrollTop position to show the bottom of the view component
     */
    @Override
    public void run() {
        pane.setScrollTop(pane.getView().getHeight() - pane.getHeight());
    }
}

Wednesday, January 29, 2014

Gridiron Solitaire on Gamers with Jobs

I am posting a link to a podcast: Gamers With Jobs - Conference Call Episode 381 (released today). I received a couple of shout-outs on this episode from my friend Bill Harris, who was on to talk about his just-released-on-Steam game Gridiron Solitaire.

I've been helping Bill for the better part of 3 years, teaching him how to code from scratch, helping him with problems, and answering his questions - so getting to see him actually release his brainchild to the public is a very proud moment.

More information can be found at Bill and Eli Productions and on the Steam Gridiron Solitaire Product Page.

Wednesday, January 15, 2014

Gotcha #4 - WPF - Restore Events and Media Elements

Today's article comes courtesy of assisting Bill Harris (Dubious Quality) with his work on Gridiron Solitaire. The original problem came from identifying a bug: sound effects were not playing when the system returned from sleep mode if the game had been left open.

Some basic details on the application: Gridiron Solitaire uses Windows Presentation Foundation (WPF) for its UI framework, it was developed in Visual Basic 2010, and sound effects are played using the System.Windows.Controls.MediaElement class.

The problem in this case was that sound effects that had been initialized prior to the system entering sleep mode did not continue playing after the system returned to an active state. Sounds initialized after the system returned to an active state played fine. When first presented with this problem it did not seem to be a terribly difficult one to solve, nor an uncommon one to encounter. There are many games released by major studios that have difficulty handling system sleep and restore, and even minimization or alt-tab window switching. My most recent experience with these problems is Civilization V, in which I have encountered alt-tab artifacts (the game remains visible in the background), as well as restore-from-minimization (texture corruption) and restore-from-sleep (fatal crash) problems.

The first line of investigation that we followed was the possibility that the MediaElement was failing to play after a restore, so we constructed a plan to catch the MediaElement.MediaFailed event and reset the MediaElement in that instance. On the positive side this approach solved an unrelated application crash problem caused by a missing media file, but it was soon determined that the MediaFailed event was not being fired when the sounds failed to play after a system restore.

PowerModeChanged Event Solution #1
Further research revealed that a MediaElement's Source property becomes invalidated during the sleep/restore cycle, and that resetting the Source property and restarting the MediaElement is required to get sounds playing again. One of the suggestions was to catch the PowerModeChanged Event and check for the PowerModes.Suspend and PowerModes.Resume states; any suspend/resume code needed by your application can be performed in this block. The resulting code catches the PowerModeChanged Event and, on Resume, resets every MediaElement Source property to its correct value.

Private Sub SystemEvents_PowerModeChanged(ByVal sender As Object, ByVal e As PowerModeChangedEventArgs)
    Select Case e.Mode
        Case PowerModes.Resume
            Reinitialize_Sounds()

        Case PowerModes.StatusChange


        Case PowerModes.Suspend

            BackgroundEffectALoopPlayer.Pause()
            BackgroundEffectBPlayer.Pause()
    End Select
End Sub

Private Sub Reinitialize_Sounds()

    SoundEffectA.Source = Nothing
    SoundEffectA.Source = New Uri("Resources/SoundEffectA.mp3", UriKind.Relative)
    '...etc...
End Sub

The effect of this code seemed satisfactory at first. Although it slowed down the application's restore from a suspended state, this wasn't a particular problem, as additional processing time when restoring from a suspended state is normally accepted and expected. A minor issue was that ongoing sounds would not restart until the next time the code required them to be played, which was easily resolved by programmatically restarting ongoing sounds within the restore code.

However, it was only observed later - and generally on lower-end systems - that in some cases a few sound effects were not being restarted after restore. An additional confounding factor was that the sound effects that failed to return were not consistent: an apparently random selection of 2 or 3 would not resume, and this effect was not reproducible in debug mode.

Application_Activated Event Solution #2
Considering the possibility that the PowerModeChanged Event was not firing correctly, or that some other conflict involving MediaElement objects was occurring after the PowerModeChanged Event, the proposed alternative was to move the restore code into an Application_Activated Event handler. The Application_Activated Event fires under a different but related, less specific set of conditions than the PowerModeChanged Event. The theory was that the systems where the continuing sound failure was being observed might be tablets and netbooks, which could affect which events were being fired.

Once the restore code was transferred and redeployed, the same issues were observed in precisely the same circumstances as with the PowerModeChanged Event solution.

Brute-force Solution #3

In an attempt to find any way past the problem, Bill attempted to brute-force a solution by forcing every sound to reload its Source property every time it was activated. Although the performance of this approach was going to be unacceptable (the lag from loading a sound every time it is played would be noticeable to the player), as a debugging approach it was reasonable.

The result: slow, but successful. Despite every sound effect suffering from noticeable lag, after a suspend/restore cycle every sound effect returned and played correctly on every test system configuration.

Problem Identification
It was at this point that the cause of the problem was identified. The process that restores an application from a suspended state causes MediaElement sources to be invalidated. The problem with using either the PowerModeChanged Event or the Application_Activated Event to set the Source property is that the event handlers operate on a different thread than the application restore process, and these threads enter into a race condition.

In the first two solutions, on slower systems a few of the lighter-weight sound effects were being loaded into their MediaElement objects before the application restore process that invalidates the MediaElement sources had completed. As a result, a few of the MediaElement sources were being invalidated after they had already been corrected, causing a few of the sound effects to fail in an inconsistent and unpredictable manner.

On-demand Resource Loading Solution #4
The final solution is to enforce on-demand resource loading and caching whenever a MediaElement is required to play a sound effect, moving the responsibility for checking the existence of a media source from the pre-loader to the code that plays the MediaElement itself.

As a result the PowerModeChanged Event handler is being used to nullify (set to Nothing) all MediaElement sources when a suspend event occurs. Then every time a MediaElement is about to be played the code first rechecks its source to ensure that it is not null, and reloads the source if it does not exist before playing it.

Private Sub SystemEvents_PowerModeChanged(ByVal sender As Object, ByVal e As PowerModeChangedEventArgs)
    Select Case e.Mode
        Case PowerModes.Resume

        Case PowerModes.StatusChange

        Case PowerModes.Suspend

            SoundEffectA.Source = Nothing
            SoundEffectB.Source = Nothing
            '...etc...
    End Select
End Sub

'In sound effect Play code:
If SoundEffectA.Source Is Nothing Then
    SoundEffectA.Source = New Uri("Resources/SoundEffectA.mp3", UriKind.Relative)
End If
SoundEffectA.Play()

The final effect is that sound effect failures have been eliminated. A slight lag occurs the first time each sound effect is played immediately after a system restore, but performance returns to optimal soon after, once each sound effect has been re-cached.

Monday, November 25, 2013

Counterexample #1 - Performance Engineering of Healthcare.gov

For the past few months we have all had ringside seats to the spectacular failure of planning and communication that is Healthcare.gov - the personalized health insurance marketplace run by the United States Federal Government.

We now know that the project team and its managers were aware of problems with the application as early as March, and that insufficient testing, evolving requirements, and performance were all contributors to the limitations seen at launch, when the system could only handle 1,100 users per day. Considering that initial estimates anticipated 50 to 60 thousand simultaneous users, and that the site has in reality seen upwards of 250,000 simultaneous users, this is a remarkable example of the impact of failing to engineer for performance.

On September 27th, four days before go-live, the Acting Director of the CMS Office of Enterprise Management, David Nelson, wrote the following illuminating statement: "We cannot proactively find or replicate actual production capacity problems without an appropriately sized operational performance testing environment." By September 30th, the day before go-live, another email noted: "performance degradation started when there were around 1,100 to 1,200 users".

The Catalogue of Catastrophe, a list of failed or troubled projects around the world, has this to say about the project: "Healthcare.gov joins the list of projects that underestimated the volume of transactions they would be facing (see "Failure to address performance requirements" for further examples)."

If we examine the list of Classic Mistakes as to why projects fail, we can see that Healthcare.gov committed no less than 7 of the top 10. Clay Shirky has written a fabulous article titled Healthcare.gov and the Gulf Between Planning and Reality, which explains the scope and magnitude of the communication failure that occurred on this project, as well as the inherent flaw in the statement "Failure is not an option."