Thursday, November 10, 2011

How to convert LoadRunner Siebel-Web Protocol scripts to Web Protocol

1. Open the LoadRunner .usr file in a text editor such as Notepad.

2. Locate the following lines in the [General] section:

[General]
Type=Multi
AdditionalTypes=Siebel_Web
ActiveTypes=Siebel_Web
GenerateTypes=Siebel_Web

3. Replace every occurrence of Siebel_Web with QTWeb, and add a RecordedProtocols entry:

[General]
Type=Multi
AdditionalTypes=QTWeb
ActiveTypes=QTWeb
GenerateTypes=QTWeb
RecordedProtocols=QTWeb
The Controller no longer requires a separate Siebel-Web license; a Web Vuser license is sufficient to execute the converted scripts.
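When many scripts need converting, the same edit can be automated. A minimal Python sketch of the idea (the path in the example is hypothetical; make a backup copy of the script folder first, since the edit is done in place):

```python
def convert_usr(usr_path):
    """Rewrite the protocol entries in a LoadRunner .usr file
    from Siebel_Web to QTWeb (Web protocol)."""
    with open(usr_path, "r") as f:
        text = f.read()

    # The protocol name appears in several [General] keys:
    # AdditionalTypes, ActiveTypes, GenerateTypes, RecordedProtocols.
    text = text.replace("Siebel_Web", "QTWeb")

    with open(usr_path, "w") as f:
        f.write(text)

# Example (hypothetical path):
# convert_usr(r"C:\Scripts\MySiebelScript\MySiebelScript.usr")
```

A plain string replace is enough here because Siebel_Web only appears as a protocol name in the .usr file.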

Performance Assessment Methodology

The approach below will help if you are planning to act as a consultant for a performance testing engagement and provide recommendations to your customer.

The attached diagram details the assessment approach for the following topics:

  • Study the existing test processes
  • Study the architecture
  • Gather business information
  • Define the test model
  • Define the tests

[Diagram: Performance Assessment Methodology]

Generic Questions to RFP on Performance Testing

Below are some generic questions that can be asked of a potential customer in response to an RFP on performance testing:

  • Will you provide any performance data that has been gathered on the existing application? If you don’t have the data on hand, will you help make it available?
  • How many Performance contractors will be involved in the project team?
  • Are there any metrics for success, or expectations for future volume?
  • Does the organization have a performance testing technology that it regularly uses?
  • Does the application already have service levels and performance targets defined?
  • What is the total number of users that will access the application?
  • Description of your existing software systems, e.g., Web server, database, application server, network operating system, messaging, monitoring, network management, etc.
  • Description of in-house IT staff, including their areas of expertise, be it networking, systems management, development, etc.
  • Does the application need to provide high availability? What is considered high availability?
  • How is load balancing performed?
  • Does the application have processes in place to monitor CPU usage, file system usage, memory usage, and network performance?
  • What tools does your organization have?
  • How many, and what percentage of, applications/projects are performance tested with a tool such as LoadRunner before they are launched into production?

User Defined Template feature in LoadRunner 9.5

When testing an application we usually need to cover several business processes, reusing the same parameter files, run-time settings, boilerplate templates, and so on. We may also want to reuse certain functions in each script we create. Until now, doing this meant creating a new script and then importing and copying into it all the necessary files and settings from existing scripts. With VuGen 9.5 we can avoid this by using a script template.

Create a script with the boilerplate, parameter files, run-time settings, and so on that are common to all the business processes, and then save it as a template: in VuGen, go to File – User Defined Template – Save as Template.

Once the template is created, apply it via File – User Defined Template – Create Script from Template. A new script is created with all the base data defined in the template, and you can start recording your business process on top of it.

How to reduce the amount of threads per process in LoadRunner

In LoadRunner 9.x, perform the following steps:

1. Open the QTWeb.lrp file under the dat/protocols directory of the LoadRunner installation.

2. By default, the mdrv process can spawn 50 threads for the Web protocol. Change this to a lower value (e.g., 15) by adding or changing the following entry:


[VuGen]
MaxThreadPerProcess=15   ; add this entry if it is not already present

3. Re-run the scenario.
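If the change has to be applied on several load generators, the edit can be scripted. A minimal Python sketch, assuming QTWeb.lrp is a plain INI-style text file (the function name and path handling are illustrative, not part of LoadRunner):

```python
def set_max_threads(lrp_path, value=15):
    """Add or update MaxThreadPerProcess in the [VuGen] section
    of an INI-style .lrp file, in place."""
    with open(lrp_path) as f:
        lines = f.readlines()

    out, in_vugen, done = [], False, False
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("["):
            # Leaving [VuGen] without having found the key: insert it.
            if in_vugen and not done:
                out.append("MaxThreadPerProcess=%d\n" % value)
                done = True
            in_vugen = stripped.lower() == "[vugen]"
        elif in_vugen and stripped.lower().startswith("maxthreadperprocess"):
            line = "MaxThreadPerProcess=%d\n" % value
            done = True
        out.append(line)

    if not done:
        # No [VuGen] section at all, or the section ended at EOF.
        if not in_vugen:
            out.append("[VuGen]\n")
        out.append("MaxThreadPerProcess=%d\n" % value)

    with open(lrp_path, "w") as f:
        f.writelines(out)
```

The line-based approach deliberately avoids configparser, which would reorder keys and strip comments from the file.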

Approach to Oracle Apps Performance testing


During one of my engagements, I was asked to come up with a strategy for performance testing Oracle E-Business Suite. The client was a major producer and market leader in power transmission drives, components, and bearings.

The client's Oracle E-Business Suite is a complete set of business applications that enables the organization to efficiently manage customer interactions, manufacture products, ship orders, collect payments, and more.

Oracle Applications architecture is a framework for multi-tiered, distributed computing that supports Oracle Applications products. In this model, various services are distributed among multiple levels, or tiers. The tiers that compose Oracle Applications are the database tier, which manages the Oracle database; the application tier, which manages Oracle Applications and other tools; and the desktop tier, which provides the user interface display. Only the presentation layer of Oracle Applications is on the desktop tier in the form of a plug-in to a standard Web browser.

Coming up with a performance strategy was a challenge because of the complex architecture, and this was the first engagement of its kind in my organization to provide any support for Oracle E-Business Suite.

Test Approach

My approach was to identify the configuration for performance testing in the test environment; to do that, the current production configuration was analyzed.

In production, the application and database tiers reside on two separate Solaris boxes; six instances share the application tier and seven share the database tier. All instances on the production boxes are Oracle Applications instances for different companies (independent entities) within the organization. Hardware resources such as CPU, memory, and I/O are shared across all instances, while the transactions performed in the instances are independent of each other.

A test environment very similar to production was considered for performance testing, but its application tier was shared by 14 instances and its database tier by 16 instances, both higher than in production.

My first recommendation to the infrastructure team was to match the number of instances in the test environment to production.

I put forward two approaches for the performance test to the client, along with their risks.

Approach 1

In the test environment, create the same number of instances as production in both the application and database tiers. Analyze the workload model of every instance in both tiers, capture the important transactions for all instances, and create scripts to replicate them. Execute those transactions in the background so that they consume hardware resources in the test environment to a comparable extent. Then simulate the workload model for our instance, and capture and publish the performance metrics.

Risks

• Extremely difficult to understand the workload model for all the instances in production

• Time consuming and costly

• Discussions required with multiple stakeholders

Approach 2

The second approach was to ignore the other instances and their transactions running on the server, and instead dedicate to the client's instance the maximum amount of hardware resources it can consume in the test environment, based on a thorough analysis of the existing production servers, such that system utilization does not reach the identified critical level. Based on this analysis, CPU and memory availability in the test environment should be constrained, and the infrastructure team should help dedicate CPUs and memory to the client's instance for the performance test.

The performance test should then be carried out for the transactions identified in the workload model, and the performance metrics captured and published.

Risks

• Real-world performance issues related to multiple instances may not be found

Conclusion

The two approaches were discussed along with their pros and cons. The first approach was ideal, but given the available timeframe, simulating all the noise from the other instances was difficult to implement and required a lot of coordination with multiple stakeholders. After discussions with the infrastructure team, it was agreed to follow the second approach for the performance test.

Understanding Snapshot Attribute in LoadRunner

A snapshot is a graphical representation of the current step. It is an attribute of functions such as web_url and web_custom_request in LoadRunner.

A Sample request is shown below

web_custom_request("Sample_Request",
    "URL=http://… /Service",
    "Method=POST",
    "Resource=0",
    "RecContentType=text/xml",
    "Mode=HTML",
    "Snapshot=t1.inf",    // Snapshot attribute is present
    "EncType=text/xml; charset=utf-8",
    "Body=",
    LAST);

When working in Tree view, VuGen displays the snapshot of the selected step in the right pane, as shown below. The snapshot shows the client window after the step was executed.

[Screenshot: LoadRunner snapshot in the VuGen right pane]

VuGen captures a base snapshot during recording and another one during replay. You compare the Record and Replay snapshots to determine the dynamic values that need to be correlated in order to run the script.

If the Snapshot attribute is not included in the request, the snapshot image file will not be generated during replay, as in the following request:

web_custom_request("Sample_Request",
    "URL=http://… /Service",
    "Method=POST",
    "Resource=0",
    "RecContentType=text/xml",
    "Mode=HTML",
    "EncType=text/xml; charset=utf-8",
    "Body=",
    LAST);

[Screenshot: replay with no snapshot generated]