Tuesday, October 16, 2012

Gotcha: Teradata Union Queries

Encountered an interesting "gotcha" today involving Teradata v13.1 and how it handles data types in UNION queries. We discovered it in Cognos Report Studio v10.1.1, but it proved to be an issue with the generated SQL itself, not anything native to Cognos.

The problem has to do with using hard-coded string values within a UNION statement. If the string literal appears in the first branch of a UNION, Teradata will truncate any column combined with that string via the UNION to the length of the string.

Here is my example situation:

Table foo:
foo_id INTEGER Primary Key
foo_name VARCHAR(15)

Table bar:
bar_id INTEGER Primary Key
name VARCHAR(100)

SELECT 'a-string' AS name, foo_id FROM foo
UNION
SELECT name, bar_id FROM bar

Teradata will auto-detect the data type of 'a-string' as VARCHAR(8) and will then cast bar.name as VARCHAR(8), which truncates anything beyond the first eight characters. So the value 'Hello World!' within the column bar.name will display as 'Hello Wo' in the above query.

This truncation does not take place if we replace 'a-string' with the column foo_name, which is a VARCHAR(15). If we use foo_name instead, Teradata will correctly detect the larger data type from the two sides of the query (VARCHAR(15) and VARCHAR(100)) and will cast both columns as VARCHAR(100), preventing truncation.
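For illustration, the non-truncating variant described above would look like this, using the same example tables:

```sql
-- With a column instead of a string literal in the first branch,
-- Teradata widens both sides to the larger type, VARCHAR(100),
-- so no values are truncated.
SELECT foo_name AS name, foo_id FROM foo
UNION
SELECT name, bar_id FROM bar
```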

To work around this issue we must explicitly cast the hard-coded string 'a-string' to a data type long enough to hold any data combined with it. In this case CAST('a-string' AS VARCHAR(100)) is sufficient to resolve the truncation issue.
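Applied to the example query, the workaround looks like this:

```sql
-- Explicitly cast the literal so the UNION's result column is
-- declared wide enough for values from either branch.
SELECT CAST('a-string' AS VARCHAR(100)) AS name, foo_id FROM foo
UNION
SELECT name, bar_id FROM bar
```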

Tuesday, October 02, 2012

Performance Testing a Web Framework Application

Performance testing a web application that is built on a web framework can pose significant challenges, whether it uses Java Servlets, Struts, Rails, or others. These challenges are present regardless of the performance testing tool; in our case we used Rational Performance Tester, but they are equally evident in JMeter and other load generation tools.
We will define a “normal” web application as one that does not rely on any web framework, uses JSP or ASP pages, maintains a limited amount of session information, and is generally stateless. Other attributes of a normal web application include:
  • Each page has a unique URL and serves a specific purpose or function within the web application.
  • Changing the page request attributes will modify the dataset, operations, or view of the requested URL, but will not change the underlying purpose or function of the page.
  • Stateful operations are performed by chaining a series of multiple page URLs together, each serving an appropriate purpose. Out of workflow requests (requesting a different URL) are not blocked and simply terminate the current workflow.
The operation of a normal web application differs from how web framework applications are constructed. Web framework applications generally contain a significant amount of session information and have stateful workflows that limit the options for performing out-of-workflow requests. Other differing attributes include:
  • A single URL encapsulates many different (usually related) purposes and functions.
  • Stateful operations consist of a series of requests that are chained together using a combination of request attributes and stateful information stored in the session.
  • Monolithic applications often appear under a single URL.
  • Changing or breaking out of predetermined workflows may be strictly controlled and prevented by disallowing any unexpected state changes.
As a result of these differences there are some key considerations when attempting to performance test a web framework application.
  1. Workflows within a web framework may be inflexible. Simple errors may cause cascading problems that invalidate the remainder of a test, because the test may be unable to escape from a workflow once the correct sequence is breached by a page timeout or by a validation error caused by an incorrect datapool.
  2. Out of flow, looping, and branching tests can be difficult or impossible to record because the request to initiate a loop or branch may change. There may not be a consistent “start of workflow” page request that can be made to initiate or restart a workflow.
  3. Error conditions must be planned for and carefully handled since an error state may negate all other requests until the error is handled correctly.
  4. Minor changes to the application may force entire test suites to be rewritten due to the inability to record, insert, and chain new parameters or pages into an existing test.
  5. Similar workflows with different data may require individual test scripts due to changes in screen components.
To make your performance testing of a web framework application as successful as possible, here are some strategies I have adopted to reduce the time spent rewriting test scripts for each new code deployment, and to reduce the number of page errors during a test run.
  1. PLAN your tests before recording! Understand which functions you want to hit with each test and ensure you have scripts that are focused with as few extraneous requests as possible.
  2. Create many short, focused tests instead of long all-encompassing tests to reduce the potential for errors and for error chaining.
  3. Separate read-only tests from data-entry tests and pre-populate data whenever possible to avoid long “setup” sequences.
  4. Avoid test dependency, where later functions or tests heavily depend on earlier parts to configure the environment or data. Either preconfigure your environment and datasets, or rely on the test itself to perform only the critical data entry prior to testing its assigned function.
  5. Chain multiple, small, independent tests in a schedule to cover desired functionality. Branching and looping should be done at the schedule level whenever possible to avoid needing to record branches and loops within individual tests.
  6. If you do need to include datapools that cause variation in functionality, record the largest, most inclusive workflow and dataset, and then populate your datapool with values that result in restricted subsets of the recorded workflow.
  7. Establish easily accessible, non-blockable base points in your test. These base points or pages must be able to short-circuit functionality that has become error locked and should be returned to as often as possible in your recording to allow “reset” to occur and reduce the impact of error chaining. Using base points allows you to more easily include other constructs such as loops, conditionals, or branches in your recording.