Thursday, November 10, 2011

Welcome to Performance Testing Blog

Hi, I am Sudhakar. I currently work as a Performance Test Engineer at an MNC in Hyderabad.

I provide training sessions (online and regular classes) on performance testing with the LoadRunner and Rational Performance Tester (RPT) tools, covering real-time processes.

If anybody is interested in learning performance testing tools, you can reach me at sudhakar6r@gmail.com.

Siebel Correlation with custom code in LoadRunner

The Siebel "star array" correlation LoadRunner generates (via the flCorrelationCallbackParseStarArray callback) is used for correlating Siebel field values separated by the "*" token. A sample format of the server response is given below; these field values need to be passed to subsequent requests.

"1*N9*1-18506217*GEOFFTH4*Open10*Unassigned3*OEM12*Hardware-OEM14*Administrative1*01*010*Symptom X18*MSE-Perf7*GEOFFTH1*112*Geoff Thomas1*10*6* Normal19*10/16/2002 03:40:127*Sev - 47*1-13NY5"

The trailing digit(s) of each field denote the length of the next field value. Because of this, the left and right boundaries are dynamic and difficult to correlate with standard boundaries.

The "flCorrelationCallbackParseStarArray" function provided with LoadRunner (shown below) has some limitations and does not work in all cases, and it is tough to debug errors without its source code:

web_reg_save_param("WCSParam96",
"LB/IC=`ValueArray`",
"RB/IC=`",
"Ord=10",
"Search=Body",
"RelFrameId=1",
"AutoCorrelationFunction=flCorrelationCallbackParseStarArray",
LAST);

The sample code below, which I created, can be used to parse the response for correlation as an alternative to LoadRunner's automatic correlation function.

vuser_init()
{

char str[1024];
char separators[] = "*";
char *token;
char arrValues[50][20];
char arrFinalValues[50][20];
int i;
int j;
int length_old;
int length_new;
char newlength[3]; /* two length digits plus the null terminator */
char actualValue[20];

/****************** Sample Text format****************************** */

strcpy(str, "1*N9*1-18506217*GEOFFTH4*Open10*Unassigned3*OEM12*Hardware-OEM14*Administrative1*01*010*Symptom X18*MSE-Perf7*GEOFFTH1*112*Geoff Thomas1*10*3 - Normal19*10/16/2002 03:40:127*Sev - 47*1-13NY5");

lr_output_message("%s",str);

/***** The following code will be used to split the text into strings based on the token *******/
token = (char *)strtok(str, separators); /* Get the first token */
if(!token) {
lr_output_message("No tokens found in string!");
return( -1 );
}

i=0;
while( token != NULL ) { /* While valid tokens are returned */
strcpy(arrValues[i],token);
lr_output_message("Initial array values is %s",arrValues[i]);
token = (char *)strtok(NULL, separators); /* Get the next token */
i++;
}

/*******************************************************************/
/*************** Remove trailing length characters *****************/

for (j = 0; j < i; j++)
{
if (j==0) {
strcpy(arrFinalValues[j],arrValues[j]);
length_old=strlen(arrValues[j]);
}
else{
length_new=strlen(arrValues[j]);
strncpy(arrFinalValues[j], arrValues[j], length_old);
arrFinalValues[j][length_old] = '\0'; /* strncpy does not null-terminate here */
if(length_new>length_old+1){
sprintf(newlength,"%c%c",arrValues[j][length_old],arrValues[j][length_old+1]);
length_old=atoi(newlength);
}
else{
sprintf(newlength,"%c",arrValues[j][length_old]);
length_old=atoi(newlength);
}//End of Else
}//End of Out Else
}//End of For

/* Final Data in the Array are */
for (j = 0; j < i; j++)
{
lr_output_message("Values after removing tail charecters %s",arrFinalValues[j]);
}
return 0;
}
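
If you want to drive the parsing logic above from a live server response instead of the hard-coded sample string, a minimal sketch is shown below. The boundaries, ordinal, and parameter names (StarArrayRaw, RowId) are placeholders for illustration, not values from a real Siebel recording.

/* Sketch only: boundaries and parameter names are placeholders. */
web_reg_save_param("StarArrayRaw",
"LB=`ValueArray`",
"RB=`",
"Ord=1",
"Search=Body",
LAST);

/* ... the request that returns the star array goes here ... */

/* Copy the captured buffer into str (declared as in vuser_init() above)
   and reuse the same tokenizing and trailing-length-trimming loops. */
strcpy(str, lr_eval_string("{StarArrayRaw}"));

/* After parsing, expose the field needed by the next request as a parameter. */
lr_save_string(arrFinalValues[2], "RowId");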

How to determine whether two performance runs are statistically different

Performance test results are good examples of a normal distribution. A normal distribution of data means that most of the values in a data set are close to the "average," while relatively few tend to one extreme or the other.

According to statisticians, two results are considered statistically different if the difference between them is unlikely to have occurred by chance. We can be 99% sure that the averages from two runs are really different only when one average is more than 2.57 standard deviations away from the other.


The same principle can be applied to performance tests to determine whether the results obtained are statistically different.


For example, suppose the average response time for transaction A is 1 sec during the first run with a standard deviation of 0.2, and in the second run the average response time for the same transaction A is 1.2 sec. The difference between the two response times is 1.2 - 1.0 = 0.2, which is only one standard deviation away from the first average (the standard deviation observed during the first run is 0.2). Only if the difference were 2.57 standard deviations away (so that we are 99% sure) would the results be statistically different.

 
Conversely, we can calculate the statistical limits using the formula below.
If R1 is the average response time and SD is the standard deviation for a performance run, then the average response time for the second run should not exceed R1 + (SD * 2.33) and should not be less than R1 - (SD * 2.33), where 2.33 is the one-sided 99% limit. For the above example, the response time for the second run should not exceed 1 + (0.2 * 2.33) = 1.466 on the positive side of the average and should not fall below 1 - (0.2 * 2.33) = 0.534 on the negative side.
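
As a quick illustration, the check can be reduced to a few lines of LoadRunner-style C. This is only a sketch using the example figures from this post (1.0 sec average, 0.2 SD, 1.2 sec second-run average), not real test data.

Action()
{
    double r1 = 1.0;   /* average response time, run 1 (secs)  */
    double sd = 0.2;   /* standard deviation observed in run 1 */
    double r2 = 1.2;   /* average response time, run 2 (secs)  */
    double diff;
    double deviations;

    diff = (r2 > r1) ? (r2 - r1) : (r1 - r2);  /* absolute difference */
    deviations = diff / sd;                    /* how many SDs apart  */

    if (deviations > 2.57)
        lr_output_message("Runs are statistically different at the 99 percent level (%.2f SDs apart)", deviations);
    else
        lr_output_message("Difference of %.2f SDs could be due to chance", deviations);

    return 0;
}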

Working with LoadRunner Sybase Ctlib protocol

The goal of this document is to bring together the necessary information to help users who are involved in scripting client/server database protocols using LoadRunner.


The document can be used as a basis for scripting, enhancing and debugging any of the following protocols:

  • Sybase CTlib
  • Sybase DBlib
  • Informix
  • MS SQL Server
  • Oracle 2-Tier
  • ODBC
  • DB2 CLI
  • ERP/CRM Siebel Vuser scripts


This document is based on my experiences using LoadRunner with the Sybase CTlib protocol.

Click here to download the document

End to End Performance Test Approach - Part 2

During test planning, gather the performance test objectives, estimate resources and project timelines, review the architecture with the individual team members, and determine the types of tests required for the application.

Capture the project-specific information for each project in the test plan as per the following template. This information helps us correlate the performance test results with the system configuration, since results will vary as the configuration changes.


Project Name: XYZ
Application Background: XYZ is an online web-based application used by customers to place orders for various products. The system is currently being upgraded from Oracle 10g to Oracle 11i.
Type of Project: Siebel web application / SAP GUI
Application Technology: .NET, IIS web server, C#, VB
Hardware Platform and OS: App server - Windows XP, 2 GB RAM, 4 CPUs; DB server - Windows XP SP2, 3 GB RAM, 10 CPUs
Database: Oracle
Third-party tools: (none listed)


Create a workload model that covers the scenarios identified for performance testing, along with the SLAs and the user load. The number of transactions and the number of concurrent users are derived from the volumetric analysis; a pacing sketch follows the table below.


S. No. | Transaction/Script | Online/Batch | No. of Concurrent Users | Response Time | No. of Txns
1      | Scenario 1         | O            | 9                       | < 10 secs     | 12
2      | Scenario 2         | O            | 4                       | < 1 sec       | 12
3      | Scenario 3         | O            | 15                      | < 2 secs      | 143
4      | Scenario 4         | O            | 4                       | < 13 secs     | 20
5      | Scenario 5         | O            | 2                       | < 4 secs      | 20
6      | Scenario 6         | B            | 3                       | < 5 secs      | 3
7      | Scenario 7         | B/O          | 1                       | < 2 secs      | 4
8      | Scenario 8         | O            | 1                       | < 4 secs      | 5
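
Once the table is filled in, the per-user pacing for each scenario can be derived from it. The sketch below (LoadRunner-style C) uses the Scenario 1 figures from the table; the one-hour steady-state window is an assumption for illustration only.

Action()
{
    double test_duration = 3600.0;  /* assumed steady-state window (secs)  */
    double users         = 9.0;     /* concurrent users for Scenario 1     */
    double transactions  = 12.0;    /* transactions needed in that window  */
    double iterations_per_user;
    double pacing;

    iterations_per_user = transactions / users;    /* ~1.33 iterations per user    */
    pacing = test_duration / iterations_per_user;  /* gap between iteration starts */

    lr_output_message("Scenario 1 pacing: one iteration every %.0f seconds per user", pacing);
    return 0;
}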


Identify the different types of tests required for testing the application based on the requirement analysis.
In the load test, measure server response times to verify that the application can sustain the expected maximum number of concurrent users and the expected maximum size of the database.
In the stress test, measure server response times at varying loads, starting from low load (a low number of concurrent users) through medium load (the average number of concurrent users) to high load (the expected maximum number of concurrent users, and beyond until unacceptable response times are experienced), to validate the application's stability.
In the endurance test, run the application for longer durations at half the average system load to detect possible memory leaks in the system.
A detailed test plan can be laid out using the information captured during the requirements gathering phase, then shared with the development/business teams for their inputs and final approval. The test plan should include the following (but is not limited to):
  • Scope
  • Test Approach
  • Test Objectives
  • Test Environment setup and requirements
  • Types of tests
  • Transaction mix
  • Workload Scenario
  • Identify Monitors
  • Scheduling ( Testing sequence , Test cycles)
  • Data setup (Data required by the Test tool, not the Application data)


Test Design & Execution
During the test design phase, validate the existing scripts and develop new ones based on the workload model. Validate and update the data required for the test environment identified during test planning, and analyze script failures with the intent of finding their root cause so that scripts can be debugged effectively. Any application-related errors found during script validation should be collected and shared with the development team.
During execution, first validate that the scripts point to the correct environment and that the performance metrics to be captured are properly configured in the environment. Also validate that the load generator machines are up and working.
Each script should be run individually several times to validate that the script has been developed correctly. These tests may reveal performance problems that will need to be addressed.
A mixed load test can then be carried out for the identified scenarios, consisting of all transactions, online and batch, according to the workload mix discussed earlier. The load tests have to be run multiple times to ensure that the testing process is repeatable, and all performance metrics must be configured in the load testing tool prior to the start of the test.

Result Analysis and Reporting
Focus on analysis, monitoring, identifying bottlenecks and providing recommendations, thus delivering an end-to-end performance solution for the complete application.
Send test reports covering the results of the various tests, with conclusions based on those results and consolidated data that supports them, along with analysis, comparisons, and details of how the results were obtained.
At the end of each performance test run, a report should be produced. The test report should present comprehensive data collected from various sources in a single document.
For each of the test cases, the following response times should be reported: arithmetic mean, standard deviation, 90th percentile, and other percentiles as necessary. In addition, for each test case report the total number of transactions executed, the time period over which the transactions were executed, the number of errors, and the number of retries.
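
For reference, these statistics can be computed with a few lines of standalone C (outside LoadRunner). The sample response-time values below are illustrative only, and the 90th percentile uses a simple nearest-rank method.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double rt[] = {0.8, 1.1, 0.9, 1.4, 1.0, 2.1, 0.7, 1.2, 1.0, 1.3};
    int n = sizeof(rt) / sizeof(rt[0]);
    double sum = 0.0, sq = 0.0, mean, sd, tmp;
    int i, j, idx;

    /* arithmetic mean */
    for (i = 0; i < n; i++)
        sum += rt[i];
    mean = sum / n;

    /* standard deviation (population form) */
    for (i = 0; i < n; i++)
        sq += (rt[i] - mean) * (rt[i] - mean);
    sd = sqrt(sq / n);

    /* sort ascending, then take the nearest-rank 90th percentile */
    for (i = 0; i < n - 1; i++)
        for (j = i + 1; j < n; j++)
            if (rt[j] < rt[i]) { tmp = rt[i]; rt[i] = rt[j]; rt[j] = tmp; }
    idx = (int)ceil(0.9 * n) - 1;

    printf("Mean %.2f s, StdDev %.2f s, 90th percentile %.2f s\n", mean, sd, rt[idx]);
    return 0;
}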

Collect a comprehensive set of system data and tabulate it in the test report for each run. The data collected should include CPU utilization, memory utilization (system-wide and per process), and DB statistics.

End to End Performance Test Approach - Part 1

The purpose of this post is to show an end-to-end approach for implementing performance testing. It covers the different phases of performance testing and the approach to follow for a successful performance test.

Requirement Gathering:


During the requirement analysis phase, it is important to assess and understand the nature of the application and the environment in which testing and monitoring will be performed. In addition, identify the resource requirements and plan accordingly. The existing non-functional requirements (NFRs) should be discussed and understood with the design and development team with respect to SLAs, the number of concurrent users, and volumetric information. Wherever specific information is lacking, it should be discussed with the design and development team to reach a mutual agreement and definition.

The performance team should also analyze system volumetrics over a specified period of time in production to identify load patterns, workload behavior, peak user load, and so on.


Identify the peak period during the volumetric analysis, along with the transaction arrival rate and the user concurrency to simulate in the test environment. Create a workload model for the peak load with the transaction mix and the user load based on that analysis; this will also help in identifying the types of tests required for testing the application. Below is a sample template of the workload model for deriving the peak load.


Workload Model

Setting LoadRunner Header File Path

LoadRunner automatically compiles all the header files present in the LoadRunner\include directory.


However, this requires the header files to be copied into the LoadRunner\include directory and updated there after every change.



You can change the LoadRunner properties to include your own header files folder instead.

Please follow the steps below one by one:

1. Browse to the C:\Program Files\Mercury\LoadRunner\dat folder and locate the mdrv.dat file

2. Create a backup of the file in the same directory before making changes

3. Open the mdrv.dat file and search for [lrun_api] text in the file

4. You will see the following section:

[lrun_api]
ExtPriorityType=internal
WINNT_EXT_LIBS=lrun50.dll
WIN95_EXT_LIBS=lrun50.dll
LINUX_EXT_LIBS=libLrun50.so
SOLARIS_EXT_LIBS=libLrun50.so
HPUX_EXT_LIBS=libLrun50.sl
AIX_EXT_LIBS=libLrun50.so
LibCfgFunc=LrunApi_configure
UtilityExt=ParamEngine,Transaction,vusr_log,faserver,run_time_context
ExtIncludeFiles=lrun.h
ActiveScriptItems=Message:Mercury.Lrvb.LrMessage.1,Timing:Mercury.Lrvb.LrTiming2.1,Transaction:Mercury.Lrvb.LrTransaction2.1
ExtMessageQueue=0
SecurityRequirementsFiles=AllowedFunctions.asl
SecurityMode=On


5. Now add a new ExtCmdLine line to this section as follows:


[lrun_api]
ExtPriorityType=internal
WINNT_EXT_LIBS=lrun50.dll
WIN95_EXT_LIBS=lrun50.dll
LINUX_EXT_LIBS=libLrun50.so
SOLARIS_EXT_LIBS=libLrun50.so
HPUX_EXT_LIBS=libLrun50.sl
AIX_EXT_LIBS=libLrun50.so
LibCfgFunc=LrunApi_configure
UtilityExt=ParamEngine,Transaction,vusr_log,faserver,run_time_context
ExtIncludeFiles=lrun.h
ActiveScriptItems=Message:Mercury.Lrvb.LrMessage.1,Timing:Mercury.Lrvb.LrTiming2.1,Transaction:Mercury.Lrvb.LrTransaction2.1
ExtCmdLine=-compile_flags C:\testscripts\ExternalHeaderFilePath
ExtMessageQueue=0
SecurityRequirementsFiles=AllowedFunctions.asl
SecurityMode=On

6. After making the changes, save the mdrv.dat file.

7. Take a sample script and execute it once to verify the change.
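
To illustrate why the extra compile flag is useful, here is a hedged sketch: suppose a helper header (my_helpers.h is a hypothetical name) is kept in C:\testscripts\ExternalHeaderFilePath. With the ExtCmdLine entry above, a Vuser script could include it directly instead of copying it into the LoadRunner\include folder.

/* Hypothetical file: C:\testscripts\ExternalHeaderFilePath\my_helpers.h */
int log_step(char *step_name)
{
    lr_output_message("Executing step: %s", step_name);
    return 0;
}

/* In the Vuser script, the header can now be included directly: */
#include "my_helpers.h"

Action()
{
    log_step("Login");   /* helper picked up via the new include path */
    return 0;
}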

How to convert LoadRunner Siebel-Web Protocol scripts to Web Protocol

1. Open the LoadRunner .usr file in Notepad

2. Check for the following lines in the file:

[General]
Type=Multi
AdditionalTypes=Siebel_Web
ActiveTypes=Siebel_Web
GenerateTypes=Siebel_Web




3. Replace Siebel_Web with QTWeb as follows:

[General]
Type=Multi
AdditionalTypes=QTWeb
ActiveTypes=QTWeb
GenerateTypes=QTWeb
RecordedProtocols=QTWeb



The Controller no longer requires a separate license for Siebel-Web; a Web Vuser license is sufficient to execute the scripts.

Performance Assessment Methodology

The approach below will help you if you are planning to act as a consultant for a performance testing engagement and provide recommendations to your customer.

The attached diagram details the assessment approach for the following topics:

  • Study the existing test processes
  • Study Architecture
  • Gather Business Information
  • Define test Model
  • Define tests 

Performance Assessment Methodology

Generic Questions to RFP on Performance Testing

I have listed some generic questions which can be asked of a potential customer in response to an RFP on performance testing:

  • Will you provide any performance data that has been gathered on the existing application? If you don’t have the data on hand, will you help make it available?
  • How many Performance contractors will be involved in the project team?
  • Are there any metrics for success or expectations for future volume? Does the organization have a performance testing technology that they regularly use? Does the Application already have service levels and performance levels defined?
  • What is the total number of users that will access the application?
  • Description of your existing software systems, e.g., Web server, database, application server, network operating system, messaging, monitoring, network management, etc.
  • Description of in-house IT staff, including their areas of expertise, be it networking, systems management, development, etc.
  • Does the application provide high availability? What is considered high availability?
  • How is load balancing performed?
  • Does the application have processes in place to monitor CPU usage, file system usage, memory usage, and network performance?
  • What tools does your organization have?
  • How many, and what percentage of, applications/projects are performance tested with a tool (e.g., LoadRunner) before they are launched into production?

User Defined Template feature in LoadRunner 9.5

When testing an application, we often need to script several business processes, and to do so we usually reuse the same parameter files, runtime settings, boilerplate, etc. In addition, we may want to reuse certain functions in each of the scripts we create. Until now, to do this we had to create a new script and then import and copy into it all the necessary files and settings from existing scripts. With VuGen 9.5 we can avoid this by using a script template.

Create a script with the boilerplate, parameter files, runtime settings, etc. that are common to all the business processes, and then save it as a template. To do this, go to the File menu - User Defined Template - Save as Template in VuGen.

Once the template is created, apply it by going to the File menu - User Defined Template - Create Script from Template. A new script will be created with all the base data from the template, and we can then start recording our business process on top of it.
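
As an example of what such a template might hold, here is a minimal sketch of boilerplate that could be saved once and reused for every business process. The transaction and parameter names (BP_Step_01, {UserName}) are placeholders I have chosen for illustration, not part of VuGen itself.

Action()
{
    lr_start_transaction("BP_Step_01");

    /* The recorded business step goes here. {UserName} is a shared
       parameter defined once in the template's parameter list. */
    lr_output_message("Running as user: %s", lr_eval_string("{UserName}"));

    lr_end_transaction("BP_Step_01", LR_AUTO);

    lr_think_time(5);
    return 0;
}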

How to reduce the amount of threads per process in LoadRunner

In LoadRunner 9.x, please perform the following steps:

1. Go to the dat/protocols/QTWeb.lrp file under the LoadRunner installation folder.

2. By default, in the Web protocol the number of threads each mdrv process can spawn is 50. Change this to a lower value (e.g., 15) by adding/changing the following under the [VuGen] section (add the entry if it is not already present):


[VuGen]
MaxThreadPerProcess=15

3. Re-run the scenario.
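
For a rough sense of the effect, the number of mdrv processes a load generator spawns can be estimated from the Vuser count and this setting. The sketch below is plain arithmetic; the Vuser count is an assumed figure, and it assumes all Vusers run as threads.

#include <stdio.h>

int main(void)
{
    int vusers = 100;              /* Vusers assigned to one load generator (assumed) */
    int max_threads_per_proc = 15; /* the value set in QTWeb.lrp above                */

    /* each mdrv process hosts at most max_threads_per_proc Vuser threads */
    int mdrv_processes = (vusers + max_threads_per_proc - 1) / max_threads_per_proc;

    printf("~%d mdrv processes for %d Vusers\n", mdrv_processes, vusers);
    return 0;
}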

Approach to Oracle Apps Performance testing

Oracle Apps Performance Testing

During one of my engagements, I was asked to come up with a strategy for performance testing Oracle E-Business Suite. The client was a major producer and leader in power transmission drives, components and bearings.

The client's Oracle E-Business Suite is a complete set of business applications that enables the organization to efficiently manage customer interactions, manufacture products, ship orders, collect payments, and more.

Oracle Applications architecture is a framework for multi-tiered, distributed computing that supports Oracle Applications products. In this model, various services are distributed among multiple levels, or tiers. The tiers that compose Oracle Applications are the database tier, which manages the Oracle database; the application tier, which manages Oracle Applications and other tools; and the desktop tier, which provides the user interface display. Only the presentation layer of Oracle Applications is on the desktop tier in the form of a plug-in to a standard Web browser.

Coming up with a performance strategy was a challenge for me because of the complex architecture, and it was the first engagement of its kind in my organization to provide any kind of support for Oracle E-Business Suite.

Test Approach

My approach was to identify the configuration for performance testing in the test environment; to do that, the current production configuration was analyzed.

In production, the application and database tiers reside on two separate Solaris boxes; 6 instances share the application tier and 7 instances share the DB tier. All the instances on the production boxes are Oracle Applications instances for different companies (independent entities) in the organization, all the hardware resources in each box (CPU, memory, and I/O) are shared across the instances, and the transactions performed in the different instances are independent of each other.

A test environment very similar to production was considered for performance testing, but its application tier was shared by 14 instances and its database tier by 16 instances, which was higher than production.

The first recommendation I gave the infrastructure team was to map the number of instances in the test environment to match production.

I put forward two approaches to the client for the performance test, along with their risks.

Approach 1

In the test environment, create the same number of instances as production in both the app and DB tiers. Then analyze the workload model of each instance in both tiers, capture the important transactions for all instances, create scripts to replicate them, and execute those transactions in the background so that they utilize the hardware resources in the test environment to a comparable extent. Then simulate the workload model for our instance and capture and publish the performance metrics.

Risks

• Extremely difficult to understand the workload model for all the instances in production

• Time-consuming and costly

• Discussions required with multiple stakeholders

Approach 2

The second approach was to ignore the other instances and their transactions running on the server and instead determine, through analysis of the existing production servers, the maximum amount of hardware resources the client's instance can consume in the test environment without system utilization reaching the identified critical level. Based on that analysis, the CPU and memory available in the test environment should be constrained, and the infrastructure team should help dedicate the CPUs and memory for the client's instance in the test environment for the performance test.

The performance test should then be carried out for the identified transactions in the workload model, and the performance metrics captured and published.

Risks

• Performance issues related to multiple instances running concurrently may not be found

Conclusion

The above two approaches were discussed along with their pros and cons. The first approach was ideal, but given the timeframe available, simulating all the noise from the other instances was difficult to implement and required a lot of coordination with multiple stakeholders. After discussions with the infrastructure team, it was agreed to follow the second approach for the performance test.