Tuesday, August 05, 2014

RPT Custom Code - Ensuring Unique Logins for Looping Tests

A problem I have encountered while performance testing business applications with Rational Performance Tester is the uniqueness of logins. Business applications often contain logic that prevents duplicate logins from multiple sources at the same time, or have workflow control, session replication, or persistence mechanisms that will cause interference if the same login is used by more than one thread at once. A quick workaround is to ensure there is a large pool of available logins to reduce the possibility of duplicates; however, this is not always possible if a system is using Active Directory or LDAP, or if it is SSO-enabled. So the challenge is: how do we bind a unique login to each virtual user thread for the duration of the test?

The first approach would be to adjust the settings on your login datapool. When you add a datapool to your test, an option exists to "Fetch only once per user".

In theory, this would be sufficient to ensure each thread has a unique login. In practice, however, it is not quite so simple. In a test configuration that uses multiple agents, "Open mode" must be set to "Segmented"; otherwise each agent will have a complete copy of the same list of accounts, resulting in duplication. To use Segmented mode in this manner, though, your datapool must be larger than the number of threads so that sufficient rows are available in each segment (IBM recommends having twice as many records as threads to ensure balanced segmentation).

Despite the theory, I have run into the problem of threads exiting prematurely in a multiple user group/multiple agent/infinite loop test configuration with the error message "End of datapool reached". This is not an error we should be seeing. Reviewing the saved response data confirmed that each thread was correctly using a single unique login, yet the error occurred consistently.

While attempting to debug the issue, I tried setting the "Wrap when last row is reached" property. Although successful in preventing threads from exiting prematurely, the wrap property appears to override the fetch-once property, returning me to a state of duplicate logins. Unfortunately, IBM's documentation does a poor job of explaining how these datapool properties interact with one another, so to overcome this issue I turned to writing my own piece of custom code to manage my logins.

The following custom code solution binds threads to a specific login on first access, and thereafter will always return the same login identifier for each subsequent request. It also segments the login map into groups that can be manually accessed (by passing the group name as the first parameter) or automatically by setting the User Group Name in your schedule.

This code is simplified and has a few limitations.

  1. Distinct Agent Segments - Each Agent must use a distinct User Group Name because static objects are not shared in memory between agents. If two agents are assigned to the same User Group then duplicate logins will occur.
  2. Configurability - The segments and login lists are hardcoded in this code segment; this could be overcome by adding an option to read logins from a file (or other data storage).
  3. Order - Logins will always be returned in the same order on each test run; a piece of randomization code would allow them to be shuffled. (A sketch addressing limitations 2 and 3 follows the code listing below.)
package export; 

import java.util.Arrays; 
import java.util.HashMap; 
import java.util.List; 
import java.util.Map; 

import com.ibm.rational.test.lt.kernel.IDataArea; 
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices; 
import com.ibm.rational.test.lt.kernel.services.ITestLogManager; 
import com.ibm.rational.test.lt.kernel.services.IVirtualUserInfo; 
import com.ibm.rational.test.lt.kernel.services.RPTCondition; 

/** 
 * !!Warning!!: This code is NOT agent safe. Each unique set of segmentation identifiers MUST be isolated to a single agent for this code to work correctly 
 * @author grempel 
 */ 
public class UniqueLoginManager implements 
                com.ibm.rational.test.lt.kernel.custom.ICustomCode2 { 
        
        private static Map<String, List<String>> loginsBySegment = new HashMap<String, List<String>>(); 
        private static Map<String, String> loginsByThread = new HashMap<String, String>(); 
        private static Map<String, Integer> indexBySegment = new HashMap<String, Integer>(); 
        private static boolean initialized = false; 

        /** 
         * Initializes static login maps by segmentation. 
         * Declared static synchronized so that every instance locks on the same monitor 
         * when initializing the shared static maps. 
         */ 
        private static synchronized void init() { 
                if(!initialized) { 
                        loginsBySegment.put("GRP1", Arrays.asList(new String[]{"grp1_login1","grp1_login2","grp1_login3"...}));
                        loginsBySegment.put("GRP2", Arrays.asList(new String[]{"grp2_login1","grp2_login2","grp2_login3"...}));
                        loginsBySegment.put("GRP3", Arrays.asList(new String[]{"grp3_login1","grp3_login2","grp3_login3"...}));
                        indexBySegment.put("GRP1", 0); 
                        indexBySegment.put("GRP2", 0); 
                        indexBySegment.put("GRP3", 0); 
                        initialized = true; 
                } 
        } 
        
        /** 
         * Instances of this will be created using the no-arg constructor. 
         */ 
        public UniqueLoginManager() { 
        } 
        
        /** 
         * Returns the previous login if the thread has previously requested a login during this test execution. Otherwise retrieves the next login from the list, binds it to the thread, and returns it. 
         * @param segmentId 
         * @param thread 
         * @return String.class login 
         */ 
        public static synchronized String getLogin(String segmentId, String thread) { 
                if(loginsByThread.containsKey(thread)) { 
                        return loginsByThread.get(thread); 
                } else { 
                        init(); 
                        List<String> logins = loginsBySegment.get(segmentId); 
                        Integer index = indexBySegment.get(segmentId); 
                        if(logins!=null && logins.size()>0 && index<logins.size()) { 
                                String login = logins.get(index); 
                                indexBySegment.put(segmentId, index+1); 
                                loginsByThread.put(thread, login); 
                                return login; 
                        } else { 
                                //no logins remain for this segment (or the segment is unknown); signal failure to the caller 
                                return null; 
                        } 
                } 
        } 

        /** 
         * Generate and manage unique logins across multiple user threads and user-defined segments. 
         * Warning: This class is not thread-safe for a segmentation identifier that is distributed across multiple agents. Each segmentation identifier 
         * must exist on a single agent to avoid login duplication. 
         * @param String.class[] args - arg0 = segmentation identifier (optional, if not included will use the UserGroupName as segmentation identifier) 
         */ 
        public String exec(ITestExecutionServices tes, String[] args) { 
                String segmentId = null; 
                ITestLogManager tlm = tes.getTestLogManager(); 
                
                IDataArea dataArea = tes.findDataArea(IDataArea.VIRTUALUSER); 
                IVirtualUserInfo virtualUserInfo = (IVirtualUserInfo)dataArea.get(IVirtualUserInfo.KEY); 
                String user = virtualUserInfo.getUserName(); 
                
                //If arg[0] not provided, use the UserGroupName as the segmentation identifier 
                if(args.length<1) { 
                        segmentId = virtualUserInfo.getUserGroupName(); 
                } else { 
                        segmentId = args[0]; 
                } 
                
                String login = getLogin(segmentId, user); 
                if(login==null) { 
                        tlm.reportErrorCondition(RPTCondition.CustomCodeAlert); 
                        return null; 
                } 
                
                return login; 
        } 
}
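
In a test, the value returned by this custom code element is substituted into the login request in place of a datapool value; the segment name can be passed as the first text argument, or omitted so that the schedule's User Group Name is used instead.

As a sketch of how limitations 2 and 3 could be addressed, init() could be replaced by a method that loads each segment's logins from a CSV file on the agent and optionally shuffles them before use. The file path, the one-"segmentId,login"-pair-per-line format, and the shuffle flag below are illustrative assumptions only; getLogin() would call this method instead of init():

        //Sketch only: a drop-in replacement for init() addressing limitations 2 and 3. 
        //Assumed input: a CSV file on each agent at C:\rpt-data\logins.csv (hypothetical path), 
        //containing one "segmentId,login" pair per line. 
        private static synchronized void initFromFile(boolean shuffle) { 
                if(initialized) { 
                        return; 
                } 
                java.io.BufferedReader reader = null; 
                try { 
                        reader = new java.io.BufferedReader(new java.io.FileReader("C:\\rpt-data\\logins.csv")); 
                        String line; 
                        while((line = reader.readLine()) != null) { 
                                String[] parts = line.split(","); 
                                if(parts.length < 2) { 
                                        continue; //skip blank or malformed lines 
                                } 
                                String segment = parts[0].trim(); 
                                List<String> logins = loginsBySegment.get(segment); 
                                if(logins == null) { 
                                        logins = new java.util.ArrayList<String>(); 
                                        loginsBySegment.put(segment, logins); 
                                        indexBySegment.put(segment, 0); 
                                } 
                                logins.add(parts[1].trim()); 
                        } 
                        //Limitation 3: shuffle each segment so logins are handed out in a different order per run 
                        if(shuffle) { 
                                for(List<String> segmentLogins : loginsBySegment.values()) { 
                                        java.util.Collections.shuffle(segmentLogins); 
                                } 
                        } 
                        initialized = true; 
                } catch (Exception e) { 
                        e.printStackTrace(); //leave initialized=false so a later call can retry 
                } finally { 
                        try { if(reader != null) reader.close(); } catch (Exception ignored) { } 
                } 
        } 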

Monday, July 28, 2014

Web Performance Standards: Finding Value in User Surveys

Studies that purport to establish or define performance standards for web page loading times typically take one of three forms: examinations and measurements of physiological traits, empirical studies based on abandonment, or surveys based on participants’ emotional response. Of these three, surveys are the least likely to produce reliable results, as they are based on participants’ subjective self-assessment of their tolerance levels and not on precise, concrete, measurable actions. As well, a participant’s tolerance for loading times may vary significantly based on numerous factors such as age, experience, task, time of day, and others[8].

However, the goal in defining performance standards is to establish a level at which the typical user of a web page will be satisfied. Setting a target performance level at the point where 50% of study participants abandon the web page before it completes loading is a poor target for user satisfaction. To aim for user satisfaction, the targets that are set must be faster than the typical user’s frustration level – the emotional tipping point that must be reached before a user will decide to abandon a web page. Thus surveys, by necessity, play an important part in understanding how to define an effective performance standard.

This article examines two significant web performance surveys, conducted by JupiterResearch in 2006 and Forrester Consulting in 2009, which attempt to produce generalizations about users’ satisfaction with web page performance and their tolerance thresholds. The rest of this article reviews the methodology used by these two surveys and potential deficiencies in those methodologies, and then describes an experimental survey comparing how participant responses differ when similar survey questions are asked but the method of providing a response differs. The results of the survey are then presented and compared, and conclusions are drawn regarding the value and usefulness of the research.

Published Survey-Based Standards

JupiterResearch (2006)

Retail Web Site Performance, Consumer Reaction to a Poor Online Shopping Experience. June 1, 2006 prepared for Akamai Technologies, Inc.

Key Finding:
“Overall, 28 percent of online shoppers will not wait longer than four seconds for a Web site page to load before leaving. Broadband users are even less tolerant of slow rendering. A full one-third of online shoppers with a broadband connection are unwilling to wait more than four seconds (compared with 19 percent of online shoppers with a dial-up connection).”[6]

Methodology:
JupiterResearch conducted a survey of 1,058 online shoppers. Among the questions posed to respondents, the one we are concerned with here is “Typically, how long are you willing to wait for a single Web page to load before leaving the Web site? (Select one.)”.

The options presented were:
  • Less than 1 second
  • 1 to 2 seconds
  • 3 to 4 seconds
  • 5 to 6 seconds
  • More than 6 seconds

Forrester Consulting (2009)

eCommerce Web Site Performance Today, An Updated Look At Consumer Reaction To A Poor Online Shopping Experience. August 17, 2009 prepared for Akamai Technologies, Inc.

This report is directly analogous to the 2006 JupiterResearch report, as JupiterResearch was acquired by Forrester in 2008[7].

Key Finding:
“Forty-seven percent of consumers expect a Web page to load in 2 seconds or less.”[2]

Methodology:
Forrester Consulting conducted a survey of 1,048 online shoppers. Among the questions posed to respondents, the one we are concerned with here is “What are your expectations for how quickly a Web site should load when you are browsing or searching for a product?”.

The options presented were:
  • Less than 1 second
  • 1 second
  • 2 seconds
  • 3 seconds
  • More than 4 seconds

Comparison

Forrester Consulting references the previous 2006 study and attributes the difference in their key findings to increasing access to broadband among US customers. However, the report offers no explanation as to why the expectations of broadband users would increase so dramatically in three years. The 2006 survey reports that 33% of broadband users will not wait more than 4 seconds, while the 2009 survey reports that at least 47% of broadband users will not wait more than 2 seconds.

Comparing similar units, the percentage of broadband users who will not wait more than 2 seconds for a web page to load increases from 12% in 2006 to 47% in 2009. The report suggests no reason for this roughly four-fold increase in only three years.

The 2009 paper also claims that “This methodology was consistent with the 2006 study methodology.” Although the basic format of the survey and how it was conducted remained the same, the options presented to respondents for this question are dramatically different. In 2006, 73.5% of respondents answered with “5 to 6 seconds” or “more than 6 seconds”. Respondents in 2009 were no longer given the option to make this distinction and have all been lumped into the single “More than 4 seconds” category.

Because the 2009 survey presented such a limited range of options, respondents who would naturally have answered with a larger time may have reconsidered their answer when faced with options dramatically different from their initial expectation. Instead of choosing the most appropriate answer based on their initial thought (more than 4 seconds), they may have selected an option closer to the middle. This raises the possibility that either or both of these surveys are subject to Central Tendency Bias or Position Bias.[3]


Experimental Survey Methodology

The purpose of this survey is to determine if the choice of answer structure and the options presented in the 2006 and 2009 surveys impacted the answers given by the respondents. In order to make this determination, this survey presented a single question to respondents that closely matched the question presented in the 2006 and 2009 surveys. The differing factor is that this survey allowed respondents to answer however they wished using a free-form answer field instead of selecting a predefined option.

The primary question presented in this survey is: “When opening a typical webpage, how long (in seconds) will you wait for it to load before feeling frustrated or taking some kind of action? Taking an action may include doing something else while you wait (switching windows/tabs), reloading the page, or giving up and going somewhere else.”

Three additional demographic questions were also included in the survey:
  • Age? - Options: under 18, 18-24, 25-34, 35-44, 45-54, 55-64, 65-74, over 75.
  • Gender? - Freeform entry
  • At what level of proficiency do you use the internet? - Options: I am a web application developer, I am a content creator, I use it for work, I use it for personal use regularly (>3 times/week), I use it for personal use occasionally (<= 3 times/week).

Findings

The results of this survey are based on 78 online responses from Canada and the US. The answers given by respondents ranged from 0.5s to 60s. As shown in Table 1, this survey resulted in responses that were significantly higher and more broadly distributed than the responses to the 2006 and 2009 surveys, as expected. However, we can also see that this survey’s resulting Median and Mode closely match those of the 2006 survey. Compressing all the responses in the freeform survey that exceeded 6 seconds into a single >6s option would have resulted in the same Mode value. In contrast, the 2009 survey portrays a vastly different picture.

Table 1: Survey Average Values Comparison

            Freeform Survey     2006 Survey (Jupiter)     2009 Survey (Forrester)
Median      5.00                5-6s                      3s
Mean        9.82                5.80*                     2.51*
Mode        5.00                >6s                       3s
SD          11.41               1.57*                     1.01*

*Mean and Standard Deviation values for 2006 Survey (Jupiter) and 2009 Survey (Forrester) were calculated using the midpoint value in each option range, and using the highest value + 1s for the highest option. These calculations are not intended to be exact, but are used in this context for comparative purposes only.
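
Spelled out, the starred approximations are frequency-weighted sums over the option midpoints (this reconstructs the calculation described in the note above; it is not the surveys' own method):

    mean ≈ Σ ( f_i × m_i )
    SD ≈ sqrt( Σ f_i × ( m_i − mean )² )

where f_i is the fraction of respondents selecting option i and m_i is the midpoint of that option's range (for the open-ended highest option, its threshold value + 1s).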

Grouping the responses to this survey into the same options presented by the 2006 and 2009 surveys produces a clearer comparison, as shown in Figure 1 and Figure 2.
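
As an illustration of this grouping step, the sketch below bins a set of freeform responses (in seconds) into the 2006 option ranges. The sample values are hypothetical, and placing answers that fall between the labelled ranges (e.g. 2.5s) into the next higher range is an assumption:

import java.util.Arrays;

public class ResponseBinning {

        //2006 (Jupiter) option labels: <1s, 1-2s, 3-4s, 5-6s, >6s
        private static final String[] LABELS = {"<1s", "1-2s", "3-4s", "5-6s", ">6s"};

        public static int[] binResponses(double[] secondsWaited) {
                int[] counts = new int[LABELS.length];
                for(double seconds : secondsWaited) {
                        if(seconds < 1) {
                                counts[0]++;
                        } else if(seconds <= 2) {
                                counts[1]++;
                        } else if(seconds <= 4) {
                                counts[2]++; //assumption: answers between ranges (e.g. 2.5s) fall into the next higher range
                        } else if(seconds <= 6) {
                                counts[3]++;
                        } else {
                                counts[4]++;
                        }
                }
                return counts;
        }

        public static void main(String[] args) {
                //hypothetical sample of freeform answers, not the actual survey data
                double[] sample = {0.5, 2, 3, 5, 5, 5, 6, 8, 10, 60};
                System.out.println(Arrays.toString(LABELS));
                System.out.println(Arrays.toString(binResponses(sample)));
        }
}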


Figure 1: Response Frequency by Option, Freeform vs 2006 (Jupiter) Survey


Although the 2006 survey suffers from compression at the largest interval, which contains nearly half of all responses, the results are a close match to the results of this survey. A slight shift towards faster web page loading time expectations can be seen between 2006, when the JupiterResearch survey was completed, and 2014, when this survey was completed. This shift is evident in the increased percentage of responses in the <1s and 1-2s categories and the corresponding decrease in responses in the 3-4s, 5-6s, and >6s categories.

Applying Student’s t-test[1] to the compressed results shown in Figure 1, using an independent two-sample test for unequal variances, produced a value of P = 0.1650.
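
For reference, the test statistic for the unequal-variance (Welch's) form of the two-sample t-test is:

    t = (mean₁ − mean₂) / sqrt( s₁²/n₁ + s₂²/n₂ )

where meanᵢ, sᵢ², and nᵢ are the sample mean, variance, and size of group i, with the degrees of freedom approximated by the Welch–Satterthwaite equation.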

Figure 2: Response Frequency by Option, Freeform vs 2009 (Forrester) Survey


The 2009 survey results show no such similarity; the distribution of responses follows a vastly different pattern from the responses to both the 2006 survey and this survey. Applying Student’s t-test[1] to the compressed results shown in Figure 2, using an independent two-sample test for unequal variances, produced a value of P = 7.147 × 10^-8.

Conclusions

There is significant agreement between the results of this freeform survey and the 2006 JupiterResearch survey (P > 0.05), which indicates it is unlikely that the structure of the question or the options presented to respondents introduced significant bias in that survey. However, the results of the 2009 Forrester Consulting survey disagree greatly (P < 0.01), which suggests that the 2009 survey was subject to some form of bias, likely imparted by the response options presented to the respondents.

All surveys conducted in an attempt to quantitatively define an emotional response (frustration) are going to produce results that are imprecise and limited by the ability of respondents to accurately self-evaluate. Patience is a volatile thing, fluctuating wildly between different users and within a single user depending on their current state of mind.[4][5]

The freeform survey in particular is also limited by its small sample size; increasing the number of respondents to more than 1,000 would provide a better comparison and strengthen confidence in the conclusions.

Business Applicability of Results

It is important when working within a business context to understand how to apply the results obtained by web page performance surveys when establishing a set of performance guidelines or requirements. It is typically ineffective to set performance standards using the median result – it is not an effective business goal to aim for a standard at which half of your users abandon your web page out of frustration. At the opposite end, setting a standard where every user will be satisfied with the performance of every page may be unrealistic; factors such as connection type, geographic location, and processing time can make such a standard unachievable.

A more typical approach is to use a high percentile of the survey results as a business performance standard, setting a goal to meet the performance level that would satisfy 90% of your users. Figure 3 compares the three surveys and their responses that correspond to several response percentile levels.
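
To make the percentile reading concrete: if each survey answer is treated as that respondent's tolerance threshold, then the slowest standard that still satisfies a target fraction of users (say 90%) is the load time at the complementary percentile (the 10th) of the tolerance distribution. The sketch below is a minimal illustration of that calculation; the response values are hypothetical, and treating a user whose threshold exactly equals the load time as satisfied is an assumption:

import java.util.Arrays;

public class SatisfactionTarget {

        /**
         * Returns the slowest page load time (in seconds) that still satisfies the given
         * fraction of users, where each value in toleranceSeconds is one user's threshold.
         */
        public static double standardFor(double[] toleranceSeconds, double targetSatisfaction) {
                double[] sorted = toleranceSeconds.clone();
                Arrays.sort(sorted);
                //at most (1 - target) of users may have a threshold below the chosen standard
                int index = (int) Math.floor((1.0 - targetSatisfaction) * sorted.length);
                return sorted[Math.min(index, sorted.length - 1)];
        }

        public static void main(String[] args) {
                //hypothetical tolerance responses in seconds, not the actual survey data
                double[] responses = {0.5, 1, 2, 2, 3, 3, 4, 5, 5, 5, 6, 8, 10, 15, 30, 60};
                System.out.println("Standard to satisfy 90% of users: " + standardFor(responses, 0.90) + "s");
        }
}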

Figure 3: Response Time Percentile Comparison


Based on the percentile analysis of survey responses, we can draw conclusions regarding the expected percentage of users who will be satisfied when a specific web page response time standard is achieved.

We can observe that the percentage of users that will be satisfied with a web page performance standard of 1s or less is >=95% in the freeform survey, >=99% in the 2006 (Jupiter) survey, and >=90% in the 2009 (Forrester) survey. This gives us the conclusion that in general at least 90% of all users would be satisfied if a web page performance standard of 1s or less was achieved.

Similarly, a performance standard of 2s or less is >=85% in the freeform survey, >=99% in the 2006 (Jupiter) survey, and >=80% in the 2009 (Forrester) survey. Thus at least 80% of all users would be satisfied if a standard of 2s or less was achieved.

These observations are based on the worst-case scenario: that the survey providing the most aggressive performance targets is the most accurate of the three. This gives a good lower bound for user satisfaction, but perhaps not a good expected level of user satisfaction. If we take each of the surveys as having equal weight, we can average their response time percentile values to determine an expected level of user satisfaction.

Based on the average percentile value, we see that our expected level of user satisfaction with a performance standard of 1s or less is >=99%, and for a standard of 2s or less it is >=90%. Thus given a business case where we aim to achieve at least a 90% rate of user satisfaction with our web site performance, we expect that our web page response times would need to be 2s or less.

References

[1]   Encyclopaedia Britannica (2014), Student's t-test. Available at: http://www.britannica.com/EBchecked/topic/569907/Students-t-test
[2]   Forrester Consulting (2009), eCommerce Web Site Performance Today. Available at: http://www.damcogroup.com/white-papers/ecommerce_website_perf_wp.pdf
[3]   Gingery, Tyson (2009), Survey Research Definitions: Central Tendency Bias, Cvent: Web Surveys, Dec 22, 2009. Available at: http://survey.cvent.com/blog/market-research-design-tips-2/survey-research-definitions-central-tendency-bias
[4]   Gozlan, Marc (2013), A stopwatch on the brain’s perception of time, Guardian Weekly, Jan 1, 2013. Available at: http://www.theguardian.com/science/2013/jan/01/psychology-time-perception-awareness-research
[5]   Hotchkiss, Jon (2013), How Bad Is Our Perception of Time? Very!, Huffington Post – The BLOG, Sep 19, 2013. Available at: http://www.huffingtonpost.com/jon-hotchkiss/how-bad-is-our-perception_b_3955696.html
[6]   JupiterResearch (2006), Retail Web Site Performance. Available at: http://www.akamai.com/dl/reports/Site_Abandonment_Final_Report.pdf
[7]   Kaplan, David (2008), Forrester Buys JupiterResearch for $23 Million, Forbes Magazine, Jul 31, 2008. Available at: https://web.archive.org/web/20080915011602/http://www.forbes.com/technology/2008/07/31/forrester-buys-jupiter-research-tech-cx_pco_0731paidcontent.html
[8]   Shneiderman, Ben (1984), Response Time and Display Rate in Human Performance with Computers, Computing Surveys 16, no. 3 (1984): pages 265-285. Available at: http://dl.acm.org/citation.cfm?id=2517

Appendix A – Freeform Survey Response Distribution and Demographics

Monday, March 31, 2014

RPT Custom Code - Saving Response Data

How to build datapools by extracting generated data from a system using custom code in Rational Performance Tester.

A major challenge that I encounter consistently as a Performance Engineer is priming a system with known, consistent data that I can feed into datapools and use for testing. There are a variety of ways to accomplish this: conversion processes, generation tools, database scripts, restore/upgrade processes, etc. However, a project that I have been working on recently has a variety of issues with these strategies. The database is automatically generated and nearly impossible to decipher, not to mention that it changes with each new code release. The second issue is that the data generation tools are not being kept up to date by the developers, so they are usually broken with each new release. These issues have led me to the point of using my performance testing tool (Rational Performance Tester v8.5.2) as a workaround data priming tool.

This plan has its own challenges: primary identifiers are all system-generated, and the only way I can reliably access the data once it is generated is by using those primary identifiers. Granted, there are ways to obtain this information once it exists in the system, but they involve days if not weeks of effort to extract and cleanse the data into a form that can be used effectively, and would require significant assistance from a developer. A better way would be to process and extract the desired information from the responses themselves as the data is being generated by RPT.

Currently RPT does not have any simple write-back mechanism for references during a test. There may exist a way using references and variables - but if it does, it is well hidden beneath the primary strengths of the tool, which are load generation, data substitution, and response analysis. This led me to consider the question "How would I enhance the tool to let me extract response data?". Since RPT is an Eclipse-based tool and supports the addition of custom code and plug-ins, it should be entirely feasible to write some code to extract whatever I want from it.

So that is what I did - the example and the code presented below are a very simple, bare-bones way of extracting a reference from a page response, passing it into a block of custom code, and writing it to a CSV file in a thread-safe manner that allows you to import the generated CSV back into your project as a datapool source. There are dozens of ways I can think of to use and enhance this code - writing to a database, performing data transformation, correlating reference data with existing datapools, or producing multiple outputs - but this should be sufficient to get you started if you are having trouble bridging the gap from theoretical to practical.

Step 1: Create a Custom Code Class
With your recorded test open, right-click your top-level test element and select "Add --> Custom Code".

Selecting the new Custom Code element shows the element details in the right-side pane, where you can enter a Class name (in the format "package.Class"). If this is the first time you have created a Custom Code class, click the "Generate Code" button to create a basic default class implementation. If you already have the Custom Code class you want to use, skip down to Step 4, which covers adding an existing Custom Code element to your test case.

This will generate a simple Custom Code class that you can start using in your test.

package export; 

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices; 

/** 
 * @author unknown 
 */ 
public class Test implements 
                com.ibm.rational.test.lt.kernel.custom.ICustomCode2 { 

        /** 
         * Instances of this will be created using the no-arg constructor. 
         */ 
        public Test() { 
        } 

        /** 
         * For javadoc of ICustomCode2 and ITestExecutionServices interfaces, select 'Help Contents' in the 
         * Help menu and select 'Extending Rational Performance Tester functionality' -> 'Extending test execution with custom code' 
         */ 
        public String exec(ITestExecutionServices tes, String[] args) { 
                return null; 
        } 


}

To organize my library of custom code, I created a new Performance Test Project called "CustomCodeReference". If you create this project as a Performance Test Project it will automatically include all of the required RPT libraries in its build path. This will also allow me to export the project into a .jar file that I can import into other projects, or simply reuse it as a dependency in other Performance Testing projects as needed.


Step 2: Create a Singleton File Writer
Create a new class file that will be used by our custom code as the singleton file writer to manage file creation, access, and synchronization between threads. In this example I have created the class "export.ExportAdministrator" in my CustomCodeReference project to be my singleton file writer class. A suggested enhancement to this structure would be to create a dedicated Factory class that returns an instance of an interface, allowing for multiple optional implementations such as a CSV file implementation, an XML file implementation, or a database implementation (a sketch of this enhancement appears at the end of this step).

package export;

import java.io.File;
import java.io.FileWriter;
import java.text.DateFormat;
import java.text.SimpleDateFormat;

/**
 * @author grempel
 * @date 2014-03-27
 *
 * Singleton class with factory to manage thread-safe file export
 * Usage:
 *   Call ExportAdministrator.getInstance() to obtain object reference
 *   Use instance write() or writeln() to export to file
 */
public class ExportAdministrator {

        private static ExportAdministrator instance = null;
        private FileWriter fw = null;

        /**
         * Private constructor: use getInstance() so that all threads share a single writer.
         */
        private ExportAdministrator() {
                //Build the timestamped output file and its writer
                DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH-mm-ss");
                File file = new File("C:\\rpt-log\\" + df.format(System.currentTimeMillis()) + " output.csv");
                try {
                        fw = new FileWriter(file);
                } catch (Exception e) {
                        e.printStackTrace();
                }
        }

        /**
         * Retrieve singleton instance of writer.
         */
        public static synchronized ExportAdministrator getInstance() {
                if(instance==null) {
                        instance = new ExportAdministrator();
                }
                return instance;
        }

        /**
         * Write string to file
         */
        public synchronized String write(String str) {
                if(fw!=null) {
                        try {
                                fw.write(str);
                                fw.flush();
                                return "true";
                        } catch (Exception e) {
                                return e.toString();
                        }
                }
                return "failed";
        }

        /**
         * Write string to file and start a new line
         */
        public synchronized String writeln(String str) {
                if(fw!=null) {
                        try {
                                fw.write(str);
                                fw.write("\r\n");
                                fw.flush();
                                return "true";
                        } catch (Exception e) {
                                return e.toString();
                        }
                }
                return "failed";
        }

        /**
         * Clean up open file references
         */
        @Override
        protected void finalize() throws Throwable {
                if(fw!=null) {
                        fw.close();
                }
                super.finalize();
        }

}

This class will generate a timestamped output file in the folder "C:\rpt-log\" on each agent that is running the test.

One note about the usage of this class: if you are using more than one virtual user to generate your data, it is best to compile the entire set of data you want to write into a single string before calling writeln(). If you call write() multiple times, you cannot guarantee the writes will be sequential, as another thread may call write() in the meantime.
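
As a sketch of the dedicated Factory enhancement suggested at the start of this step, the writer could be hidden behind a small interface so that a CSV, XML, or database implementation can be swapped in later. The interface, enum, and factory names below are illustrative assumptions, not part of the article's code:

package export;

/**
 * Sketch only: a minimal writer interface plus factory, so that ExportAdministrator becomes
 * one of several interchangeable implementations.
 */
public class ExportWriterFactory {

        public interface ExportWriter {
                String write(String str);
                String writeln(String str);
        }

        public enum Format { CSV, XML, DATABASE }

        public static synchronized ExportWriter getWriter(Format format) {
                switch (format) {
                case CSV:
                        //Adapt the existing CSV singleton to the interface
                        return new ExportWriter() {
                                public String write(String str) {
                                        return ExportAdministrator.getInstance().write(str);
                                }
                                public String writeln(String str) {
                                        return ExportAdministrator.getInstance().writeln(str);
                                }
                        };
                case XML:
                case DATABASE:
                default:
                        throw new UnsupportedOperationException("Not implemented in this sketch: " + format);
                }
        }
}

The custom code in Step 3 would then call ExportWriterFactory.getWriter(Format.CSV).writeln(...) rather than using ExportAdministrator directly.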

Step 3: Call File Writer from Custom Code
With our file writer (ExportAdministrator) singleton in place, we can now modify our custom code to submit whatever we want to write to that file. In this case I have created our Custom Code class called "export.ExportIdentifier" in my CustomCodeReference project.

This custom code expects two arguments. The first is an output type: a simple text string that lets you group outputs based on the type of data being written, which allows the code to be reused for multiple types of values in a single test. The second is an output value: the value you want to write. In addition to these, the custom code generates a timestamp that is written to the file so that the write operation sequence can be determined.

package export;

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.ITestLogManager;
import com.ibm.rational.test.lt.kernel.services.RPTCondition;

/**
 * @author grempel
 * @date 2014-03-27
 * 
 * Usage:
 *   Add Custom Code reference to a test
 *   Set Class name = /CustomCodeReference/export.ExportIdentifier
 *   Add Arguments
 *     - Text: Value Type String
 *     - Text or Reference: Value String
 */
public class ExportIdentifier implements
                com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

        /**
         * Instances of this will be created using the no-arg constructor.
         */
        public ExportIdentifier() {
        }

        /**
         * For javadoc of ICustomCode2 and ITestExecutionServices interfaces, select 'Help Contents' in the
         * Help menu and select 'Extending Rational Performance Tester functionality' -> 'Extending test execution with custom code'
         * @param String.class[] args - arg0 = value type, arg1 = value to write
         */
        public String exec(ITestExecutionServices tes, String[] args) {
                //Validate arguments
                String type = null;
                String value = null;
                ITestLogManager tlm = tes.getTestLogManager();
                if(args.length<=1) {
                        tlm.reportErrorCondition(RPTCondition.CustomCodeAlert);
                        return null;
                }
                type = args[0];
                value = args[1];
                Long timestamp = System.currentTimeMillis();

                //Generate CSV string to write to file
                String result = timestamp.toString() + "," + type + "," + value;

                //Submit string to writer
                String error = ExportAdministrator.getInstance().writeln(result);

                //Catch errors and return to RPT for validation and handling
                if(error.equalsIgnoreCase("true")) {
                        return result;
                }

                return error;
        }

}

Step 4: Add Custom Code to a Test
To use this custom code in your test, add a Custom Code element to your test (see Step 1 for reference). Instead of generating a new class, we will be specifying our existing class in the Class Name field with the format: "/ProjectName/package.ClassName". In the above example this will be "/CustomCodeReference/export.ExportIdentifier".

Next we will add the arguments to the custom code element details. The first argument is the output type. Click the "Text" button on the right and specify the text to submit as the type. This can be different for each Custom Code element that you add to your test, but it works best as a hard-coded value in order to ensure it remains the same for each iteration.

Then click "Add" and select a data source object or reference to use as the output value. In this example I have set the output type to "TestAccountName" and the output value to the "Username" variable from my datapool. This example is redundant, I already have my usernames in a datapool - normally I would insert the custom code after a page response that contains some data that I want to write to a file, and select a reference instead.

This will now allow me to export any generated data that exists in the response record to an external source.
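
For reference, each execution of the custom code appends one line in the form timestamp,type,value, so the generated file can be imported directly as a CSV datapool. A run of the example above would produce lines like the following (the timestamps and usernames are made up for illustration):

1396274401123,TestAccountName,user001
1396274401309,TestAccountName,user002
1396274401523,TestAccountName,user003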