Friday, April 13, 2012

What is meant by concurrency testing?


INTRODUCTION TO CONCURRENT TESTING

1. Concurrency testing employs many users at a time and is therefore considered a type of multi-user testing.

2. Concurrency testing was devised to determine what happens when many users access the same database records, modules, or application code.

3. Beyond this, concurrency testing also identifies issues such as single-threaded code sections, deadlocks, and locking/semaphore problems, and it is effective in measuring the extent of these issues.

4. While carrying out concurrency testing, several users perform the same kind of task on the same application at the same time.

5. The application thus has to serve many users at a time.

TOOLS FOR PERFORMING CONCURRENT TESTING
Several tools are available for performing concurrency testing, but we would recommend LoadRunner, since it helps you create concurrency at exactly the point you wish.

- All you need to do is create a test scenario after you have recorded and refined your scripts using the Virtual User Generator.

- You can decide how many users you want your scripts to simulate.

- You input the number of users in the Controller component of this testing tool.

- There are several ways to add users, such as gradual ramp-up, stepped, or spike.

- You can choose for yourself which way you want the users to be added.

- This is the most effective way to create concurrency.

- Apart from multi-user testing, concurrency testing also covers one application running on top of another.
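The idea of releasing many users at the same instant (a rendezvous point, in LoadRunner terms) can be sketched in plain Java with a CyclicBarrier. This is an illustrative sketch, not LoadRunner API; the class and method names are ours:

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class RendezvousSketch {

    // Start n "virtual users" and release them all at the same moment.
    // Returns how many users performed the task after the rendezvous.
    static int runConcurrentUsers(int n) throws InterruptedException {
        CyclicBarrier rendezvous = new CyclicBarrier(n);
        AtomicInteger completed = new AtomicInteger();

        Thread[] users = new Thread[n];
        for (int i = 0; i < n; i++) {
            users[i] = new Thread(() -> {
                try {
                    rendezvous.await();          // every user blocks here...
                    completed.incrementAndGet(); // ...then all fire together
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            users[i].start();
        }
        for (Thread t : users) {
            t.join();
        }
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runConcurrentUsers(10)); // prints 10
    }
}
```

The barrier guarantees that no user starts the task until all of them have arrived, which is exactly the concurrency point the Controller ramp-up options aim to create.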

LEVELS OF CONCURRENCY TESTING
Two levels of concurrency testing have been defined so far; they are discussed below:

# 1st Level:
- This involves concurrency testing of one application being executed on top of another application.

- We can illustrate this with a simple example: you receive an incoming call while playing a game on your cell phone, and the game goes into a paused state. This is the simplest example we can give.

# 2nd Level:
- This involves concurrency testing of one application being executed on top of two other applications.

- This can be illustrated with a similar example: you receive an incoming call while playing a game, and while you are on the call you receive an SMS. The game is paused.

- Apart from testing the multi-user capacity of the software, concurrency testing is also responsible for finding bugs such as deadlock, livelock, data races, and data corruption.

- These bugs usually occur when parallel processing is implemented in the application.

- If you use realistic user scenarios in addition to test scenarios, your concurrency testing will yield much better results.

- Some applications make use of more than one module, where some modules support parallel processing and others are sequential.

- Identifying the type of each module can help you write effective test cases against them, which in turn will be reflected in your testing.
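To make the data-race bug mentioned above concrete, here is a minimal, self-contained Java sketch (the class and method names are ours): several threads increment a shared counter without synchronization, so the read-modify-write is not atomic and updates can be lost.

```java
public class DataRaceSketch {

    static int counter = 0;  // shared state, deliberately unsynchronized

    // Run several threads that each bump the counter many times.
    // Returns the final counter value, which may be short of the expected total.
    static int runUnsafe(int threads, int incrementsPerThread) throws InterruptedException {
        counter = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    counter++; // read-modify-write: not atomic, so increments can be lost
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            t.join();
        }
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // Expected total is 4 x 100000 = 400000, but lost updates
        // typically leave the counter short of that.
        System.out.println(runUnsafe(4, 100_000));
    }
}
```

Replacing the plain `int` with `java.util.concurrent.atomic.AtomicInteger` (or synchronizing the increment) removes the race; concurrency testing is how such latent bugs are flushed out before production.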

This is the internet era: almost everyone, all over the world, uses the web and web applications. Such a huge number of users requires great load-handling capacity from the servers. If the servers are not able to take the load, they often slip into deadlock.

Concurrency testing is a way to ensure that such situations do not occur, which is why it is important. Performance is the aspect tested most in concurrency testing.

The number of users accessing the application at a time translates into the number of hits per second. With the growing number of internet users worldwide, more and more reliability is needed on the web, and concurrency testing is one measure that has helped a great deal in raising those standards.

Thursday, March 1, 2012

LoadRunner Analysis: Hints ‘n’ Tips.


The LoadRunner Analysis tool can either be a godsend or the devil’s daughter. I think most Performance Analysts have a love-hate relationship with the Analysis tool…and refer to it as some sort of necessary evil.

With a little bit of patience, the Analysis tool is actually quite powerful and can be used to produce some fantastic graphs and information for your reports.
Everyone seems to learn their own tricks, if you have any more of your own, please submit these as comments.

In no particular order:-
  • Setting the Granularity.
    The granularity affects the graph smoothing, or number of data points. A high granularity value will reduce clutter and improve readability, but it may hide events such as spikes. The trick is to adjust the granularity to maximize readability while still accurately representing any important events. Caution, however: for larger performance tests, setting the granularity too low may cause the Analysis tool to hang for a few minutes.
  • Annotations – Comments and Arrows.
    Make use of comments and arrows to annotate important events on the graph. In addition, these are useful for labeling data lines…particularly when there are lots of lines and you want to highlight just one or two of them.
  • Focus on just a few transactions.
    Performance tests can sometimes have hundreds of transactions, which can generate some very busy graphs. Decide on 3-5 transactions that have the highest business value or are of particular interest…focusing your analysis on just a few transactions is more valuable to the readers of your report.
  • Set your Y-Axis to be the same for all graphs.
    Graphs should be easily comparable and “tell the same story”. By default the Y-Axis is set to Automatic, which sets the minimum and maximum scale to match your data. Go to Display Options > Advanced > Axis tab, and change the Minimum and Maximum values from “Auto” to something sensible. Percentage graphs should always range from 0% to 100%.
    As a general rule, your Y-Axis scale should always start at zero and your ideal or SLA level should be approximately one-third up the scale. People automatically assume response times in the lower third of your graph are “good” and those in the upper two-thirds are “poor”.
  • Tell a Story
    All graphs you present via the Analysis tool should tell a story. Performance Testing is there to answer the question of whether the application will scale to the load level required in production…and as a performance analyst you should produce graphs that provide evidence of either the success or failure of this performance testing.
  • Don’t Take Screenshots with the Print Screen button
    Screenshots look bad; instead use Edit > Copy To Clipboard > Graph. Alternatively, you can export graphs as image files: go to Display Options > Advanced > Export tab. By exporting in Metafile format you will avoid any blocky pixelation, which makes for great-looking presentations or printed reports. I also recommend setting the border background colour to White for a nicer look (Display Options > Advanced > Chart > Panel > Background > Color).
  • Filter on Peak Load
    When analysing a peak load test, make use of a global filter to limit your analysis to the time that was spent AT PEAK LOAD, so that your response times do not include values measured during ramp up.
  • Percentile Graph Analysis
    Look at each response time in the percentile graph to see if there are any odd “step” patterns. The lines in a percentile graph should always be smooth; a “step” pattern could indicate abnormal behaviour.
  • Import External Data
    Remember that you can import data from external monitors/text files (Tools > External Monitors > Import Data).
  • Merging graphs
    This is useful, but don’t put too many values on the graph or it will be hard to interpret. Combining multiple data types can be confusing; use annotations.
  • Edit the Transaction Summary in Excel or Word.
    The Transaction Summary can be easily copied and pasted into either Word or Excel, which allows it to be easily edited: for example, removing columns and reducing the number of decimal places displayed.
  • Only use Complete Data
    After initially opening a LoadRunner result, only summary data is available. In the lower-left corner, you will notice the generation of Complete Data. You should always wait until this background process completes before editing graphs.
  • Remove the data point markers
    Use Display Options > Graph Type to quickly remove all the data point markers from your line graph. This makes it look a lot less cluttered.
  • Make use of Templates
    This feature (Tools > Templates) allows you to quickly apply the same graphs and formatting from one Analysis scenario to another. This is great for comparing results; however, always double-check filters after applying a template to ensure you are not filtering out any important data.
I hope you all find this bag of tricks useful. I know I’ve learnt a thing or two by putting it together. ;-)

Scripting RMI with LoadRunner VuGen 9.5x and 11


Java Remote Method Invocation (RMI) is a Java application programming interface that supports an object-oriented form of remote procedure call (RPC). In this article I show you two ways to create performance test scripts with LoadRunner for RMI-driven applications.
As RMI is a Java-specific protocol, either the Java Vuser or the Java Record Replay protocol must be used in the Virtual User Generator.
I demonstrate two approaches:
  1. Code RMI calls manually – using Java VUser script type
  2. Record the network traffic of the RMI application’s client and then play it back – using the Java Record Replay script type
Note: The respective LoadRunner licenses are necessary to run the created tests in the Controller.

The test application

I created a small RMI client-server test application using the Java Spring Framework. For simplicity, the RMI server application is packaged as a WAR and can be run in a Java web container (e.g. Tomcat). The client has been packaged into a single JAR file for easy execution. This JAR file will be reused in VuGen; it contains the RMI interface of the application under test and the necessary Spring Framework APIs for easy RMI access.
If you would like to try the test application, here are the necessary tools:
  • Java Development Kit (e.g. 1.6)
  • Apache Maven for build and dependency management
  • Apache Tomcat to execute the RMI test server application
The test application can be downloaded here: spring-rmi-poc-test-app.zip

Using Java VUser script type (using Loadrunner Virtual User Generator 9.5x)

  1. Create a new Java VUser script
  2. Use the File > Add Files to Script… menu to add the test client JAR to the script. Adding JAR files this way ensures that the contained classes are placed on the classpath and are therefore available in the script.
  3. Code the RMI call manually, as you would in any other Java program. In our example we use the Spring Framework to facilitate that.
JAR file added to Java virtual user script
/*
 * LoadRunner Java script. (Build: 3020)
 *
 * Script Description:
 *
 */
import lrapi.lr;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;
import rmitest.intf.AccountService;

public class Actions
{

  RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();

  public int init() throws Throwable {
    // TODO update here the name of the server and the port:
    proxy.setServiceUrl("rmi://localhost:1199/AccountService");
    proxy.setServiceInterface(AccountService.class);
    proxy.afterPropertiesSet();

    return 0;
  }//end of init

  public int action() throws Throwable {
    try {
      lr.start_transaction("getAccounts");

      Object obj = proxy.getObject();
      AccountService service = (AccountService) obj;
      service.getAccounts("akarmi");

      lr.end_transaction("getAccounts", lr.AUTO);
    } catch (Exception e) {
      lr.end_transaction("getAccounts", lr.FAIL);
    }
    return 0;
  }//end of action

  public int end() throws Throwable {
    return 0;
  }//end of end

}
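For reference, the AccountService interface used above might look something like the following sketch. The real interface ships inside the client JAR, so the return type and parameter name here are assumptions for illustration only:

```java
import java.util.List;

// Hypothetical sketch of the rmitest.intf.AccountService remote interface.
// With Spring's RmiProxyFactoryBean the service interface does not need to
// extend java.rmi.Remote or declare RemoteException on its methods; Spring
// wraps the plain interface in an RMI invoker behind the scenes.
public interface AccountService {
    List<String> getAccounts(String userName);
}
```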

Using Java Record replay script type

(This script type is available in VuGen 9.5; however, it only worked well for me in LoadRunner 11.)
This approach is useful if it is possible to run the RMI client application while LoadRunner captures its traffic. For complex or platform- or environment-dependent applications, however, this is usually not the case.
To start the example RMI client, the following batch file was created. It is specified in the Start Recording dialog by setting Application type to “Executable/Batch”. I set the working directory to point to the directory of the JAR file.
%JAVA_HOME%\bin\java -jar spring-rmi-jar-with-dependencies.jar
Ensure that the RMI protocol is selected in the Recording Options.
RMI is selected as a recorded protocol
The client application should now start, and the RMI traffic will be recorded and transformed into Java code.
Start recording dialog to capture RMI client application’s traffic
Here is a sample generated raw code:


/*
 * Code generated by RMI support (Build: _build_number_) - Vugen [JAPATA] for LoadRunner Version 11.0.0
 * Session was recorded on: Thu Dec 10 14:34:35 2011
 *
 * Using VM version : Executable
 * Script Author    : Sudhakar
 * Script Description:
 *
 */

import lrapi.lr;
import java.util.Properties;

public class Actions
{
  // Public function : init
  public int init() throws Throwable {
    // Set system properties...
    _properties1 = System.getProperties();
    _properties1.put("sun.desktop", "windows");
    _properties1.put("sun.jnu.encoding", "Cp1251");
    _properties1.put("sun.management.compiler", "HotSpot Client Compiler");
    _properties1.put("sun.java.launcher", "SUN_STANDARD");
    System.setProperties(_properties1);

    // Installing RMISecurityManager
    if (System.getSecurityManager() == null)
      System.setSecurityManager(new java.rmi.RMISecurityManager());
    return 0;
  }//end of init

  // Public function : action
  public int action() throws Throwable {
    // Lookup a remote object...
    _rmiinvocationhandler1 = (org.springframework.remoting.rmi.RmiInvocationHandler)java.rmi.Naming.lookup("rmi://localhost:1199/AccountService");
    _string1 = "org.springframework.remoting.support.RemoteInvocation __CURRENT_OBJECT = {" +
        "java.lang.Object arguments[] = {" +
        "java.lang.Object arguments[0] = #akarmi#" +
        "}" +
        "java.util.Map attributes = _NULL_" +
        "java.lang.String methodName = #getAccounts#" +
        "java.lang.Class parameterTypes[] = {" +
        "java.lang.Class parameterTypes[0] = {}" +
        "}" +
        "}";

    _remoteinvocation1 = (org.springframework.remoting.support.RemoteInvocation)lr.deserialize(_string1, 0);  // RMIComponent
    _object1 = _rmiinvocationhandler1.invoke((org.springframework.remoting.support.RemoteInvocation)_remoteinvocation1);
    return 0;
  }//end of action

  // Public function : end
  public int end() throws Throwable {
    return 0;
  }//end of end

  // Variable section
  java.lang.Object[] _object_array1;
  java.util.Properties _properties1;
  org.springframework.remoting.rmi.RmiInvocationHandler _rmiinvocationhandler1;
  org.springframework.remoting.support.RemoteInvocation _remoteinvocation1;
  java.lang.String _string1;
  java.lang.Object[] _object_array2;
  java.lang.Object _object1;
}


Thursday, February 16, 2012

Agent less performance monitoring using SNMP

There are many performance monitoring tools on the market, from freeware to licensed products. All these tools can be divided into agentless and agent-based tools.
                   
Agent-based tools are very insightful in terms of performance monitoring but costly to implement; CA Wily Introscope is a good example of agent-based monitoring. Agents/probes play a major part in collecting performance metrics from the systems under test. They have to be configured manually, but they are very insightful because they collect metrics from the lowest level possible, depending on how they are configured. This type of monitoring is very useful, as you can track each and every request through any module/function/component where an agent is placed. The downside is that the configured agents add overhead to the system: the more agents and configured metrics, the greater the overhead.
              
Most agentless monitoring tools use SNMP (Simple Network Management Protocol) to track performance metrics for applications. Any SNMP-compliant system can be monitored this way. To monitor a system, you activate the SNMP service on that system, select the performance metrics to be monitored, and set the frequency at which they are polled. Once these basic settings are done, you are ready to go. HP SiteScope is an example of a successful agentless monitoring tool.
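As an illustration, on a Linux host running Net-SNMP, "activating the SNMP service" typically boils down to a few lines of snmpd.conf. The community string, subnet, and contact details below are placeholder assumptions, not values from this article:

```
# /etc/snmp/snmpd.conf (illustrative fragment)
agentAddress  udp:161                 # listen on the standard SNMP port
rocommunity   public 10.0.0.0/24     # read-only access for the monitoring subnet
sysLocation   Performance lab
sysContact    perf-team@example.com
```

The monitoring tool (e.g. HP SiteScope) then polls the exposed OIDs at the configured frequency; no agent is installed on the target beyond the standard SNMP daemon.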

In brief:-
Agent based monitoring
Pros: more insight, monitoring at module/function/component level, more metrics can be configured.
Cons: requires the skill set to configure all the agents; adds overhead to the system.

Agent less monitoring
Pros: fast to implement, no special skill set required, all high-level metrics can be monitored.
Cons: uses more network bandwidth; cannot be customized; available metrics are limited.

Thursday, January 5, 2012

Calculating Concurrency from Performance Test Results


So you are on a performance test engagement and your boss asks how many people are concurrently executing a certain transaction, such as buying a book or doing a search. What he wants is a measure of active concurrency: how many people are performing a certain transaction. This should not be confused with passive concurrency, such as how many people are logged in. Before we go any further, let's clarify that in this example a transaction is a request to the test system and the response back; it does not include any think time. Now, before you start getting out the virtual terminal server and incrementing counters at the start of the transaction and decrementing them at the end, know that there is an easier way.

You can work this all out from your performance test results, without the need for code, using a mathematical formula (it's very simple, so don't panic) called Little's Law, published by John Little in 1961.
Little's Law allows us to relate the mean number of items in the system (in our case, concurrent users) to the mean time spent in the system (the response time), as follows:

Number of Items in the system = Arrival Rate x Response Time

There is one rule to remember before you use Little's Law: you must make sure the system is balanced. That is, the arrival rate into the system is the same as the exit rate.
I will begin with a non-computer example: the "Black Horse Pub" has a mean arrival rate of 5 customers per hour, who stay for half an hour on average. Using Little's Law we can calculate the mean number of customers in the pub as Arrival Rate x Response Time = 5 x 0.5 = 2.5 customers.
To apply Little's Law to a performance test, we must first make sure that we are taking measurements while the system under test is balanced. Remember, in a balanced system the rate of work entering the system matches the rate of work leaving it. For a typical load testing tool, this is after the ramp-up period, when the number of virtual users remains constant, response times have stabilized, and the transactions-per-second graph is level. To capture this period of time in LoadRunner, for example, you would select the time period in the Summary report filter or under Tools > Options.
So record the average response time for the transaction of interest and the number of times per second the transaction is executed.

From the example above, the response time is 43.613 seconds. The arrival rate is the number of transactions executed divided by the duration; the duration in this example was a 10-minute period, as can be confirmed from the LoadRunner summary below.


This gives you an arrival rate of 2.005, calculated by taking the count of 1203 and dividing it by the duration of 600 seconds.

So the mean number of concurrent users waiting for a search to return is 2.005 x 43.613 = 87.44.
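The arithmetic can be checked with a few lines of Java (a sketch; the class and method names are ours, and the numbers are the ones from the examples above):

```java
public class LittlesLaw {

    // Little's Law: items in the system = arrival rate x response time.
    // count/durationSeconds gives the arrival rate in transactions per second.
    static double concurrency(double count, double durationSeconds, double responseTimeSeconds) {
        double arrivalRate = count / durationSeconds;
        return arrivalRate * responseTimeSeconds;
    }

    public static void main(String[] args) {
        // Pub example: 5 customers per hour staying 0.5 hour each.
        System.out.println(concurrency(5, 1, 0.5));          // prints 2.5

        // Load test example: 1203 transactions in 600 s, 43.613 s response time.
        System.out.printf("%.2f%n", concurrency(1203, 600, 43.613)); // prints 87.44
    }
}
```

Note that the same helper works for both examples because Little's Law is unit-agnostic: as long as the count/duration and response time use the same time unit, the result is a dimensionless number of items in the system.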
There you go: from your performance test results you can easily calculate the concurrency for a particular transaction.