Monday, July 28, 2014

Web Performance Standards: Finding Value in User Surveys

Studies that purport to establish or define performance standards for web page loading times typically take one of three forms: examinations and measurements of physiological traits, empirical studies based on abandonment, or surveys based on participants’ emotional response. Of these three, surveys are the least likely to produce reliable results, as they are based on participants’ subjective self-assessment of their tolerance levels rather than on precise, concrete, measurable actions. As well, a participant’s tolerance for loading times may vary significantly based on numerous factors such as age, experience, task, time of day, and others[7].

However, the goal for defining performance standards is to establish a level at which the typical user of a web page will be satisfied. Setting a target performance level at a point where 50% of study participants abandon the web page before it completes loading is a poor target for user satisfaction. In order to aim for user satisfaction, the targets that are set must be faster than the typical users’ frustration level – an emotional tipping point that must be reached before a user will decide to abandon a web page. Thus surveys, by necessity, play an important part in understanding how to define an effective performance standard.

This article examines two significant web performance surveys, conducted by JupiterResearch in 2006 and Forrester Consulting in 2009, which attempt to produce generalizations about users’ satisfaction with web page performance and their tolerance thresholds. The rest of this article reviews the methodology used by these two surveys and potential deficiencies in those methodologies, and then describes an experimental survey that compares how participant responses differ when similar survey questions are used but the method for providing a response differs. The results of the survey are then presented and compared. Conclusions are drawn regarding the value and usefulness of the research.

Published Survey-Based Standards

JupiterResearch (2006)

Retail Web Site Performance, Consumer Reaction to a Poor Online Shopping Experience. June 1, 2006 prepared for Akamai Technologies, Inc.

Key Finding:
“Overall, 28 percent of online shoppers will not wait longer than four seconds for a Web site page to load before leaving. Broadband users are even less tolerant of slow rendering. A full one-third of online shoppers with a broadband connection are unwilling to wait more than four seconds (compared with 19 percent of online shoppers with a dial-up connection).”[5]

Methodology:
JupiterResearch conducted a survey of 1,058 online shoppers. Among other questions posed to respondents, the one we are concerned with is “Typically, how long are you willing to wait for a single Web page to load before leaving the Web site? (Select one.)”.

The options presented were:
  • Less than 1 second
  • 1 to 2 seconds
  • 3 to 4 seconds
  • 5 to 6 seconds
  • More than 6 seconds

Forrester Consulting (2009)

eCommerce Web Site Performance Today, An Updated Look At Consumer Reaction To A Poor Online Shopping Experience. August 17, 2009 prepared for Akamai Technologies, Inc.

This report is directly analogous to the 2006 JupiterResearch report, as JupiterResearch was acquired by Forrester in 2008[6].

Key Finding:
“Forty-seven percent of consumers expect a Web page to load in 2 seconds or less.”[1]

Methodology:
Forrester Consulting conducted a survey of 1,048 online shoppers. Among other questions posed to respondents, the one we are concerned with is “What are your expectations for how quickly a Web site should load when you are browsing or searching for a product?”.

The options presented were:
  • Less than 1 second
  • 1 second
  • 2 seconds
  • 3 seconds
  • More than 4 seconds

Comparison

Forrester Consulting references the previous 2006 study and attributes the difference in their key findings to increasing access to broadband among US customers. However, the report fails to offer any supposition as to why the expectations of broadband users would increase so dramatically in three years. The 2006 survey reports that 33% of broadband users will not wait more than 4 seconds, while the 2009 survey reports that at least 47% of broadband users will not wait more than 2 seconds.

Comparing similar units, the percentage of broadband users that will not wait more than 2 seconds for a web page to load increases from 12% in 2006 to 47% in 2009. No reason is suggested by the paper as to why there is a four-fold increase in only 3 years.

The 2009 paper also claims that “This methodology was consistent with the 2006 study methodology.” Although the basic format of the survey and how it was conducted remained the same, the options presented to respondents for this question are dramatically different. In 2006, 73.5% of respondents answered with “5 to 6 seconds” or “more than 6 seconds”. Respondents in 2009 were no longer given the option to make this distinction and were all lumped into the single “more than 4 seconds” category.

Because the 2009 survey presented such a limited range of options, respondents who would naturally have answered with a larger time may have reconsidered their answer when faced with options dramatically different from their initial expectation. Instead of choosing the option closest to their initial thought (more than 4 seconds), they may have selected an option closer to the middle. This raises the possibility that either or both of these surveys may be subject to Central Tendency Bias or Position Bias.[2]


Experimental Survey Methodology

The purpose of this survey is to determine if the choice of answer structure and the options presented in the 2006 and 2009 surveys impacted the answers given by the respondents. In order to make this determination, this survey presented a single question to respondents that closely matched the question presented in the 2006 and 2009 surveys. The differing factor is that this survey allowed respondents to answer however they wished using a free-form answer field instead of selecting a predefined option.

The primary question presented in this survey is: “When opening a typical webpage, how long (in seconds) will you wait for it to load before feeling frustrated or taking some kind of action? Taking an action may include doing something else while you wait (switching windows/tabs), reloading the page, or giving up and going somewhere else.”

Three additional demographic questions were also included in the survey:
  • Age? - Options: under 18, 18-24, 25-34, 35-44, 45-54, 55-64, 65-74, over 75.
  • Gender? - Freeform entry
  • At what level of proficiency do you use the internet? - Options: I am a web application developer, I am a content creator, I use it for work, I use it for personal use regularly (>3 times/week), I use it for personal use occasionally (<= 3 times/week).

Findings

The results of this survey are based on 78 online responses from Canada and the US. The answers given by the respondents ranged from 0.5s to 60s. As shown in Table 1, this survey resulted in responses that were significantly higher and distributed much more broadly than the responses provided to the 2006 and 2009 surveys, as expected. However, we can also see that this survey’s resulting Median and Mode closely match those from the 2006 survey. Compressing all of the freeform responses that exceeded 6 seconds into a single >6s option would have resulted in the same Mode value. In contrast, the 2009 survey portrays a vastly different picture.

Table 1: Survey Average Values Comparison

            Freeform Survey    2006 Survey (Jupiter)    2009 Survey (Forrester)
  Median    5.00               5-6s                     3s
  Mean      9.82               5.80*                    2.51*
  Mode      5.00               >6s                      3s
  SD        11.41              1.57*                    1.01*

*Mean and Standard Deviation values for the 2006 Survey (Jupiter) and 2009 Survey (Forrester) were calculated using the midpoint value of each option range, and the highest value + 1s for the highest option. These calculations are not intended to be exact; they are used in this context for comparative purposes only.
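
To make the starred calculation concrete, here is a minimal sketch of the midpoint approximation in Java; the bucket counts are hypothetical placeholders, not the published survey frequencies:

// Illustrates the midpoint approximation used for the starred values above.
// 2006 option midpoints: "Less than 1 second" -> 0.5, "1 to 2 seconds" -> 1.5,
// "3 to 4 seconds" -> 3.5, "5 to 6 seconds" -> 5.5,
// and the highest option ("More than 6 seconds") -> highest value + 1 = 7.0.
public class MidpointStats {
    public static void main(String[] args) {
        double[] midpoints = {0.5, 1.5, 3.5, 5.5, 7.0}; // seconds
        long[] counts = {10, 60, 226, 300, 462};         // hypothetical frequencies

        long n = 0;
        double sum = 0;
        for (int i = 0; i < midpoints.length; i++) {
            n += counts[i];
            sum += midpoints[i] * counts[i];
        }
        double mean = sum / n;

        double sqDiff = 0;
        for (int i = 0; i < midpoints.length; i++) {
            sqDiff += counts[i] * Math.pow(midpoints[i] - mean, 2);
        }
        double sd = Math.sqrt(sqDiff / n);

        System.out.printf("Approximate mean = %.2f s, SD = %.2f s%n", mean, sd);
    }
}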

Grouping the responses to this survey into the same options presented by the 2006 and 2009 surveys produces a clearer comparison, as shown in Figure 1 and Figure 2.


Figure 1: Response Frequency by Option, Freeform vs 2006 (Jupiter) Survey


Although the 2006 survey suffers from compression at the largest interval, which contains nearly half of all responses, the results are a close match to the results of this survey. A slight shift towards faster web page loading time expectations can be seen between 2006, when the JupiterResearch survey was completed, and 2014, when this survey was completed. An independent two-sample t-test for unequal variances, applied to the compressed results shown in Figure 1, produced a value of P = 0.1650.

Figure 2: Response Frequency by Option, Freeform vs 2009 (Forrester) Survey


The 2009 survey results show no such similarity; the distribution of responses shows a vastly different pattern from the responses to both the 2006 survey and this survey. An independent two-sample t-test for unequal variances, applied to the compressed results shown in Figure 2, produced a value of P = 7.147*10^-8.
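
For reference, a minimal sketch of this kind of test using the Apache Commons Math library; the two arrays are hypothetical stand-ins for per-respondent values after compression into the shared option buckets, not the actual survey data:

import org.apache.commons.math3.stat.inference.TTest;

public class SurveyTTest {
    public static void main(String[] args) {
        // Hypothetical per-respondent wait-time values (seconds) after compressing
        // freeform answers into the other survey's option buckets.
        double[] freeform   = {1.5, 3.5, 3.5, 5.5, 5.5, 5.5, 7.0, 7.0, 7.0, 7.0};
        double[] survey2006 = {1.5, 1.5, 3.5, 3.5, 5.5, 5.5, 7.0, 7.0, 7.0, 7.0};

        // tTest() performs an independent two-sample t-test that does not assume
        // equal variances and returns the two-tailed p-value.
        double p = new TTest().tTest(freeform, survey2006);
        System.out.printf("P = %.4f%n", p);
    }
}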

Conclusions

There is significant agreement between the results of this freeform survey and the 2006 JupiterResearch survey (P > 0.05), which indicates that it is unlikely that significant bias was introduced by the structure of the question or the options presented to respondents in that survey. However, the results of the 2009 Forrester Consulting survey disagree greatly (P < 0.01), which suggests that the 2009 survey is subject to some form of bias, likely imparted by the presentation of response options to the respondents.

All surveys that attempt to quantitatively define an emotional response (frustration) will produce results that are imprecise and limited by the ability of respondents to accurately self-evaluate. Patience is a volatile thing, fluctuating wildly between different users and within a single user depending on their current state of mind.[3][4]

The freeform survey in particular is also limited by its small sample size; increasing the number of respondents to a sample size of >1,000 would provide a better comparison and strengthen confidence in the conclusions.

Usefulness of Survey Results

It is important when working within a business context to understand how to apply the results obtained from web page performance surveys when establishing a set of performance guidelines or requirements. It is typically ineffective to set performance standards using the median result: it is not an effective business goal to aim for a standard where 50% of your users abandon your web page out of frustration. At the opposite end, setting a standard where every user will be satisfied with the performance of every page may be unrealistic; factors such as connection type, geographic location, and processing time can make that unachievable.

A more typical approach is to use a high percentile of the survey results as a business performance standard, setting a goal to meet the performance level that would satisfy 90% of your users. Figure 3 compares the three surveys and their responses that correspond to several response percentile levels.

Figure 3: Response Time Percentile Comparison


Based on the responses provided to these surveys, we could reasonably conclude that if we set a web page performance standard of <1s and achieved it, at least 90% of users would be satisfied with the performance, and on average 99% of users would be satisfied. We could also conclude that a standard of <2s would satisfy at least 80% of users, and on average 90% of users.
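
A minimal sketch of how such a percentile-based target could be derived from raw freeform responses; the sample values here are hypothetical, not the actual survey data:

import java.util.Arrays;

public class PercentileTarget {
    public static void main(String[] args) {
        // Hypothetical freeform responses (seconds of tolerated wait time).
        double[] responses = {0.5, 1, 2, 2, 3, 3, 4, 5, 5, 5, 6, 8, 10, 15, 30, 60};
        Arrays.sort(responses);

        // Using a low percentile of reported tolerances as the target means the
        // remaining respondents reported a tolerance at or above that value.
        System.out.println("Target to satisfy ~90% of users: " + percentile(responses, 10) + " s");
        System.out.println("Target to satisfy ~80% of users: " + percentile(responses, 20) + " s");
    }

    // Nearest-rank percentile on a sorted array.
    static double percentile(double[] sorted, double pct) {
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }
}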

References

[1]   Forrester Consulting (2009), eCommerce Web Site Performance Today. Available at: http://www.damcogroup.com/white-papers/ecommerce_website_perf_wp.pdf
[2]   Gingery, Tyson (2009), Survey Research Definitions: Central Tendency Bias, Cvent: Web Surveys, Dec 22, 2009. Available at: http://survey.cvent.com/blog/market-research-design-tips-2/survey-research-definitions-central-tendency-bias
[3]   Gozlan, Marc (2013), A stopwatch on the brain’s perception of time, Guardian Weekly, Jan 1, 2013. Available at: http://www.theguardian.com/science/2013/jan/01/psychology-time-perception-awareness-research
[4]   Hotchkiss, Jon (2013), How Bad Is Our Perception of Time? Very!, Huffington Post – The BLOG, Sep 19, 2013. Available at: http://www.huffingtonpost.com/jon-hotchkiss/how-bad-is-our-perception_b_3955696.html
[5]   JupiterResearch (2006), Retail Web Site Performance. Available at: http://www.akamai.com/dl/reports/Site_Abandonment_Final_Report.pdf
[6]   Kaplan, David (2008), Forrester Buys JupiterResearch for $23 Million, Forbes Magazine, Jul 31, 2008. Available at: https://web.archive.org/web/20080915011602/http://www.forbes.com/technology/2008/07/31/forrester-buys-jupiter-research-tech-cx_pco_0731paidcontent.html
[7]   Shneiderman, Ben (1984), Response Time and Display Rate in Human Performance with Computers, Computing Surveys 16, no. 3 (1984): pages 265-285. Available at: http://dl.acm.org/citation.cfm?id=2517

Appendix A – Freeform Survey Response Distribution and Demographics


Monday, March 31, 2014

RPT Custom Code - Saving Response Data

How to build datapools by extracting generated data from a system using custom code in Rational Performance Tester.

A major challenge that I encounter consistently as a Performance Engineer is priming a system with known, consistent data that I can feed into datapools and use for testing. There are a variety of ways to accomplish this: conversion processes, generation tools, database scripts, restore/upgrade processes, etc. However, a project that I have been working on recently has a variety of issues with these strategies. The database is automatically generated and nearly impossible to decipher, not to mention that it changes with each new code release. The second issue is that the data generation tools are not being kept up to date by the developers, so they are usually broken with each new release. These issues have led me to the point of using my performance testing tool (Rational Performance Tester v8.5.2) as a workaround data priming tool.

This plan has its own challenges: primary identifiers are all system generated, and the only way I can reliably access the data once it is generated is by using those primary identifiers. Granted, there are ways to obtain this information once it exists in the system, but they involve days, if not weeks, of effort to extract and cleanse the data into a form that can be used effectively, and would require significant assistance from a developer. A better way would be to process and extract the desired information from the responses themselves as they are generated by RPT.

Currently RPT does not have any simple write-back mechanism for references during a test. There may be a way using references and variables, but if so it is well hidden beneath the primary strengths of the tool: load generation, data substitution, and response analysis. This led me to consider the question "How would I enhance the tool to let me extract response data?". Since RPT is an Eclipse-based tool and supports the addition of custom code and plug-ins, it should be entirely feasible to write some code to extract whatever I wanted from it.

So that is what I did. The example and code presented below are a very simple, bare-bones way of extracting a reference from a page response, passing it into a block of custom code, and writing it to a CSV file in a thread-safe manner that will allow you to import the generated CSV back into your project as a datapool source. There are dozens of ways I can think of to enhance this code if you want to write to a database, perform data transformation, correlate reference data with existing datapools, or produce multiple outputs - but this should be sufficient to get you started if you are having trouble bridging the gap from theoretical to practical.

Step 1: Create a Custom Code Class
With your recorded test open, right-click your top-level test element and select "Add --> Custom Code".

Selecting the new Custom Code element will show the Custom Code element details in the right-side pane, which lets you enter a Class name (in the format "package.Class"). If this is the first time you have created a Custom Code class, click the "Generate Code" button to create a basic default class implementation. If you already have the Custom Code class you want to use, skip down to Step 4 for adding an existing Custom Code element to your test case.

This will generate a simple Custom Code class that you can start using in your test.

package export; 

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices; 

/** 
 * @author unknown 
 */ 
public class Test implements 
                com.ibm.rational.test.lt.kernel.custom.ICustomCode2 { 

        /** 
         * Instances of this will be created using the no-arg constructor. 
         */ 
        public Test() { 
        } 

        /** 
         * For javadoc of ICustomCode2 and ITestExecutionServices interfaces, select 'Help Contents' in the 
         * Help menu and select 'Extending Rational Performance Tester functionality' -> 'Extending test execution with custom code' 
         */ 
        public String exec(ITestExecutionServices tes, String[] args) { 
                return null; 
        } 


}

To organize my library of custom code, I created a new Performance Test Project called "CustomCodeReference". If you create this project as a Performance Test Project it will automatically include all of the required RPT libraries in its build path. This will also allow me to export the project into a .jar file that I can import into other projects, or simply reuse it as a dependency in other Performance Testing projects as needed.


Step 2: Create a Singleton File Writer
Create a new class file that will be used by our custom code as the singleton file writer to manage file creation, access, and synchronization between threads. In this example I have created the class "export.ExportAdministrator" in my CustomCodeReference project to be my singleton file writer class. A suggested enhancement to this structure would be to create a dedicated Factory class that returns an instance of an interface to allow for multiple optional implementations such as a CSV file implementation, an XML file implementation, or a database implementation.

package export;

import java.io.File;
import java.io.FileWriter;
import java.text.DateFormat;
import java.text.SimpleDateFormat;

/**
 * @author grempel
 * @date 2014-03-27
 *
 * Singleton class with factory to manage thread-safe file export
 * Usage:
 *   Call ExportAdministrator.getInstance() to obtain object reference
 *   Use instance write() or writeln() to export to file
 */
public class ExportAdministrator {

    private static ExportAdministrator instance = null;
    private FileWriter fw = null;

    // Private constructor enforces singleton access through getInstance()
    private ExportAdministrator() {
        // Build the file writer with a timestamped output file name
        DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH-mm-ss");
        File file = new File("C:\\rpt-log\\" + df.format(System.currentTimeMillis()) + " output.csv");
        try {
            fw = new FileWriter(file);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Retrieve singleton instance of writer.
     */
    public static synchronized ExportAdministrator getInstance() {
        if (instance == null) {
            instance = new ExportAdministrator();
        }
        return instance;
    }

    /**
     * Write string to file
     */
    public synchronized String write(String str) {
        if (fw != null) {
            try {
                fw.write(str);
                fw.flush();
                return "true";
            } catch (Exception e) {
                return e.toString();
            }
        }
        return "failed";
    }

    /**
     * Write string to file and start a new line
     */
    public synchronized String writeln(String str) {
        if (fw != null) {
            try {
                fw.write(str);
                fw.write("\r\n");
                fw.flush();
                return "true";
            } catch (Exception e) {
                return e.toString();
            }
        }
        return "failed";
    }

    /**
     * Clean up open file references
     */
    @Override
    protected void finalize() throws Throwable {
        if (fw != null) {
            fw.close();
        }
        super.finalize();
    }

}

This class will generate a timestamped output file in the folder "C:\rpt-log\" on each agent that is running the test.

One note about the usage of this class: if you are using more than one virtual user to generate your data, it is best to compile the entire set of data you want written to the file into a single string before calling writeln(). If you call write() multiple times, you cannot guarantee that the writes will be sequential, as another thread may have called write() in the meantime. The sketch below illustrates the difference.
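
A minimal sketch of this guidance, with hypothetical variable names standing in for whatever values your test extracts:

// Preferred: compile the whole record into one string and make a single
// writeln() call, so the line cannot be interleaved with output from
// other virtual users.
void exportRecord(String timestamp, String orderId, String accountName) {
    String record = timestamp + "," + orderId + "," + accountName;
    ExportAdministrator.getInstance().writeln(record);

    // Risky with more than one virtual user: another thread may call
    // write() between these calls and split the record across lines.
    // ExportAdministrator.getInstance().write(timestamp + ",");
    // ExportAdministrator.getInstance().write(orderId + ",");
    // ExportAdministrator.getInstance().writeln(accountName);
}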

Step 3: Call File Writer from Custom Code
With our file writer singleton (ExportAdministrator) in place, we can now modify our custom code to submit whatever we want written to that file. In this case I have created a Custom Code class called "export.ExportIdentifier" in my CustomCodeReference project.

This custom code expects two arguments: an output type, a simple text string that lets you group outputs based on the type of data being written (which allows you to reuse the code for multiple types of values in a single test); and an output value, the value you want to write. In addition, the custom code generates a timestamp that is written to the file so that the sequence of write operations can be determined.

package export;

import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;
import com.ibm.rational.test.lt.kernel.services.ITestLogManager;
import com.ibm.rational.test.lt.kernel.services.RPTCondition;

/**
 * @author grempel
 * @date 2014-03-27
 * 
 * Usage:
 *   Add Custom Code reference to a test
 *   Set Class name = /CustomCodeReference/export.ExportIdentifier
 *   Add Arguments
 *     - Text: Value Type String
 *     - Text or Reference: Value String
 */
public class ExportIdentifier implements
        com.ibm.rational.test.lt.kernel.custom.ICustomCode2 {

    /**
     * Instances of this will be created using the no-arg constructor.
     */
    public ExportIdentifier() {
    }

    /**
     * For javadoc of ICustomCode2 and ITestExecutionServices interfaces, select 'Help Contents' in the
     * Help menu and select 'Extending Rational Performance Tester functionality' -> 'Extending test execution with custom code'
     * @param args - arg0 = value type, arg1 = value to write
     */
    public String exec(ITestExecutionServices tes, String[] args) {
        // Validate arguments
        String type = null;
        String value = null;
        ITestLogManager tlm = tes.getTestLogManager();
        if (args.length <= 1) {
            tlm.reportErrorCondition(RPTCondition.CustomCodeAlert);
            return null;
        }
        type = args[0];
        value = args[1];
        Long timestamp = System.currentTimeMillis();

        // Generate CSV string to write to file
        String result = timestamp.toString() + "," + type + "," + value;

        // Submit string to writer
        String error = ExportAdministrator.getInstance().writeln(result);

        // Catch errors and return to RPT for validation and handling
        if (error.equalsIgnoreCase("true")) {
            return result;
        }

        return error;
    }

}

Step 4: Add Custom Code to a Test
To use this custom code in your test, add a Custom Code element to your test (see Step 1 for reference). Instead of generating a new class, we will be specifying our existing class in the Class Name field with the format: "/ProjectName/package.ClassName". In the above example this will be "/CustomCodeReference/export.ExportIdentifier".

Next we will add the arguments to the custom code element details. The first argument is the output type. Click the "Text" button on the right and specify the text to submit as the type. This can be different for each Custom Code element that you add to your test, but it works best as a hard-coded value in order to ensure it remains the same for each iteration.

Then click "Add" and select a data source object or reference to use as the output value. In this example I have set the output type to "TestAccountName" and the output value to the "Username" variable from my datapool. This example is redundant, I already have my usernames in a datapool - normally I would insert the custom code after a page response that contains some data that I want to write to a file, and select a reference instead.

This will now allow me to export any generated data that exists in the response record to an external source.
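
For reference, each line written by ExportIdentifier takes the form timestamp,type,value, so the output for the example above would look something like the following (the timestamp and username are hypothetical):

1396275132485,TestAccountName,jsmith01

This file can then be imported back into the project as a datapool source.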

Hurdles #4 - Apache Pivot - Scroll Panes

Hurdles article number 4 continues my efforts to overcome challenges with Apache Pivot; this time we are looking at controlling the current position of a Scroll Pane in response to an event.

The premise of this article is that I have a Scroll Pane that contains a running log of information that is updated periodically. This log appends rows to a Table Pane that is the Scroll Pane's view. In order to ensure that any new information is immediately visible what I want to do is scroll the Scroll Pane to the bottom of the view.

There are a couple of pre-built methods to help with this. The Scroll Pane comes with setScrollTop(int y) and setScrollLeft(int x) to adjust the viewport position relative to the top and left respectively. It also has scrollAreaToVisible(Bounds bounds) which serves as an auto-scroll function to zoom to a sub-component.

The problem with these methods is that they will not work properly inside an event handler that modifies the view component! Within the event handler that is appending information to the view the following problems become immediately apparent:

  1. The getHeight and getBounds methods on both the Scroll Pane and view components return 0.
  2. scrollAreaToVisible can throw OutOfBounds exceptions.
  3. setScrollTop(y), when given a hard-coded value, will adjust the Scroll Pane, but the scroll bars themselves will not be updated.

Through my investigations into the underlying code I discovered that I could obtain a reference to the Scroll Pane Skin object, which controls the scroll bars, tracks the current and maximum position of the viewport, and handles the rendering of these components. However, many of the objects and methods within the Skin are private and inaccessible without rewriting the entire class, and the ones that are visible have the same problem of returning 0 values for height and bounds.

Eventually I discovered a thread in the Apache Pivot Users forum that held the key to the solution: "2.0.2 How to move scroll bar to bottom of ScrollPane?" The issue is that many of the view attributes are invalidated by a change to the view component, and are only recalculated as part of the painting code itself, after the event handler has already returned.

The solution to the problem is to queue a callback with the ApplicationContext itself, specifically to force the repositioning of the Scroll Pane after all the paint operations have completed. I have included the code snippet below.

In the update event handler:
ApplicationContext.queueCallback(new ScrollPaneCallbackHandler(scrollPaneComponent));

Separate ScrollPaneCallbackHandler class:

private class ScrollPaneCallbackHandler implements Runnable {

    private ScrollPane pane;

    public ScrollPaneCallbackHandler(ScrollPane pane) {
        this.pane = pane;
    }

    /**
     * Resets ScrollTop position to show bottom of view component
     */
    @Override
    public void run() {
        pane.setScrollTop(pane.getView().getHeight() - pane.getHeight());
    }
}
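
For context, here is a minimal sketch of how these pieces might fit together, assuming the log is a TablePane named logTable set as the view of a ScrollPane named logScrollPane; the component names and the appendLogRow helper are hypothetical:

import org.apache.pivot.wtk.ApplicationContext;
import org.apache.pivot.wtk.Label;
import org.apache.pivot.wtk.ScrollPane;
import org.apache.pivot.wtk.TablePane;

// In the window class that owns the log (components assumed to be wired up
// elsewhere, e.g. logScrollPane.setView(logTable) during layout):
private TablePane logTable;        // the Scroll Pane's view
private ScrollPane logScrollPane;  // wraps logTable

// Called from the update event handler whenever a new log entry arrives.
private void appendLogRow(String message) {
    // Append a new row to the Table Pane that serves as the Scroll Pane's view.
    TablePane.Row row = new TablePane.Row();
    row.add(new Label(message));
    logTable.getRows().add(row);

    // Calling setScrollTop() here would not work: the view's height has not been
    // recalculated yet. Queue the callback so the scroll position is corrected
    // after the paint operations have completed.
    ApplicationContext.queueCallback(new ScrollPaneCallbackHandler(logScrollPane));
}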

Wednesday, January 29, 2014

Gridiron Solitaire on Gamers with Jobs

I am posting a link to a podcast: Gamers With Jobs - Conference Call Episode 381 (released today). I received a couple of shout-outs on this episode from my friend Bill Harris, who was on to talk about his just-released-on-Steam game Gridiron Solitaire.

I've been helping Bill for the better part of 3 years, teaching him how to code from scratch, helping him with problems, and answering his questions - so getting to see him actually release his brainchild to the public is a very proud moment.

More information can be found at Bill and Eli Productions and on the Steam Gridiron Solitaire Product Page.

Wednesday, January 15, 2014

Gotcha #4 - WPF - Restore Events and Media Elements

Today's article comes courtesy of assisting Bill Harris (Dubious Quality) with his work on GridIron Solitaire. The original problem came from identifying a bug in which sound effects would not play when a system returned from sleep mode if the game had been left open.

Some basic details on the application: GridIron Solitaire uses Windows Presentation Foundation (WPF) for its UI framework, it was developed in Visual Basic 2010, and sound effects are played using the System.Windows.Controls.MediaElement class.

The problem in this case was that sound effects that had been initialized prior to the system entering sleep mode did not continue playing after the system returned to an active state. Sounds initialized after the system returned to an active state continued to play fine. When first presented with this problem it did not seem to be a terribly difficult one to solve, nor an uncommon one to encounter. There are many games released by major studios that have difficulty handling system sleep and restore, and even minimization or alt-tab window switching. My most recent experience with these problems is Civilization V, in which I have encountered alt-tab artifacts (the game remains visible in the background), as well as restore-from-minimization (texture corruption) and restore-from-sleep (fatal crash) problems.

The first line of investigation that we followed was the possibility that the MediaElement was failing to play after a restore, so we constructed a plan to catch the MediaElement.MediaFailed event and reset the MediaElement in that instance. On the positive side, this approach solved an unrelated application crash caused by a missing media file, but it was soon determined that the MediaFailed event was not being fired when the sounds failed to play after a system restore.

PowerModeChanged Event Solution #1
Further research revealed that a MediaElement's Source property becomes invalidated during the sleep/restore cycle, and that resetting the Source property and restarting the MediaElement resolves the issue. One of the suggestions was to catch the PowerModeChanged Event and check for the PowerModes.Suspend and PowerModes.Resume states; any suspend/resume code needed by your application can be performed in this block. The resulting code catches the PowerModeChanged Event and, on Resume, resets every MediaElement Source property to its correct value.

Private Sub SystemEvents_PowerModeChanged(ByVal sender As Object, ByVal e As PowerModeChangedEventArgs)
    Select Case e.Mode
        Case PowerModes.Resume
            Reinitialize_Sounds()

        Case PowerModes.StatusChange


        Case PowerModes.Suspend

            BackgroundEffectALoopPlayer.Pause()
            BackgroundEffectBPlayer.Pause()
    End Select
End Sub

Private Sub Reinitialize_Sounds()

    SoundEffectA.Source = Nothing
    SoundEffectA.Source = New Uri("Resources/SoundEffectA.mp3", UriKind.Relative)
    '...etc...
End Sub

The effect of this code seemed satisfactory at first. Although it slowed down the application's restore from a suspended state, this wasn't a particular problem, as additional processing time when restoring from a suspended state is normally accepted and expected. A minor issue was that ongoing sounds would not restart until the next time the code required them to be played, which was easily resolved by programmatically restarting ongoing sounds within the restore code.

However, it was only observed later - and generally on lower-end systems - that in some cases a few sound effects were not being restarted after restore. An additional confounding factor was that the failing sound effects were not consistent: an apparently random selection of 2 or 3 would not resume, and the effect was not reproducible in debug mode.

Application_Activated Event Solution #2
Considering the possibility that the PowerModeChanged Event was not firing correctly, or that some other conflict involving the MediaElement objects was occurring after the PowerModeChanged Event, the proposed alternative was to move the restore code into an Application_Activated Event handler. The Application_Activated Event fires on a different but related, less specific set of conditions than the PowerModeChanged Event. The theory was that the systems where the continuing sound failure was observed might be tablets and netbooks, which may have affected which events were being fired.

Once the restore code was transferred and redeployed, the same issues were observed occurring in precisely the same circumstances as the PowerModeChanged Event solution.

Brute-force Solution #3

In an attempt to find any way past the problem, Bill attempted to brute-force a solution by reloading every sound's Source property every time the sound was activated. Although the performance of this approach was going to be unacceptable - the lag time for loading a sound every time it is played would be noticeable to the player - as a debugging approach it was reasonable.

The result: slow, but successful. Despite every sound effect suffering from noticeable lag, after a suspend/restore cycle every sound effect returned and played correctly on every test system configuration.

Problem Identification
It was at this point that the cause of the problem was identified. The process that restores an application from a suspended state causes MediaElement sources to be invalidated. The problem with using either the PowerModeChanged Event or the Application_Activated Event to set the Source property is that the event handlers operate on a different thread than the application restore process, and these threads enter into a race condition.

In the first two solutions, on slower systems a few of the lighter-weight sound effects were being loaded into their MediaElement objects before the application restore process that invalidates the MediaElement sources had completed. As a result, some MediaElement sources were invalidated after they had already been corrected, causing a few of the sound effects to fail in an inconsistent and unpredictable manner.

On-demand Resource Loading Solution #4
The final solution is to enforce on-demand resource loading and caching whenever a MediaElement is required to play a sound effect, moving the responsibility for checking the existence of a media source from the pre-loader to the code that plays the MediaElement itself.

As a result, the PowerModeChanged Event handler is used to nullify (set to Nothing) all MediaElement sources when a suspend event occurs. Then, every time a MediaElement is about to be played, the code first checks that its Source is not null, and reloads the Source before playing if it does not exist.

Private Sub SystemEvents_PowerModeChanged(ByVal sender As Object, ByVal e As PowerModeChangedEventArgs)
    Select Case e.Mode
        Case PowerModes.Resume

        Case PowerModes.StatusChange

        Case PowerModes.Suspend

            SoundEffectA.Source = Nothing
            SoundEffectB.Source = Nothing
            '...etc...
    End Select
End Sub

'In sound effect Play code:
If SoundEffectA.Source Is Nothing Then
    SoundEffectA.Source = New Uri("Resources/SoundEffectA.mp3", UriKind.Relative)
End If
SoundEffectA.Play()

The final effect is that the sound effect failures have been eliminated. A slight lag occurs the first time each sound effect is played immediately after a system restore, but performance returns to normal once each sound effect has been re-cached.