Wednesday, July 18, 2012

" Failed to open the new Vuser. Check that the TMP directory is not full or write protected".


If you face an issue like "Failed to open the new Vuser. Check that the TMP directory is not full or write protected" while scripting, here is the resolution.

Clear all the temp folders in Windows (including the location where LoadRunner saves its temp files), then log off and log back in to your system. It should work fine after that.

Tuesday, July 10, 2012

How do you determine the number of concurrent users you will need in your test?


It's quite a common question and sometimes a very difficult one to answer -
How do you estimate how many virtual users you should simulate in a performance test?
However, if you have the following information, you can make an estimate.

You need to know:

a: How many visits your site gets, or how many visits you're expecting, per day (choose a relevant day - e.g. if you're a business-to-business web site choose a weekday; if you're a retail site perhaps a weekend day would be more appropriate)

b: The average time a user stays on the site

c: The ratio between a busy period and a slack period

d: How many minutes in a day (or at least how many minutes you consider the site to be active in a day)

You can then apply the following rule:
(a x b x c) / d = concurrent users

For example: you have a site which sees 30,000 visits each day, the average time a user spends on the site is 10 minutes and busy periods are 10 times busier than slack periods. This means that the concurrent user rate is (30,000 x 10 x 10) / 1440 = 2083 concurrent users.

I won't claim to have come up with this formula myself; it's from Performance Planning and Deployment with Content Management Server, a paper about Microsoft's Content Management Server product. The formula applies to any web site, of course, not just sites hosted on that platform.
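
Since LoadRunner scripts are written in C, here is a minimal C sketch of the same back-of-the-envelope calculation, using the figures from the worked example above (the variable names are my own):

#include <stdio.h>

int main(void)
{
    /* Figures from the worked example above; substitute your own. */
    double visits_per_day = 30000.0;  /* a: visits per day            */
    double avg_visit_mins = 10.0;     /* b: average minutes per visit */
    double peak_ratio     = 10.0;     /* c: busy vs. slack ratio      */
    double active_mins    = 1440.0;   /* d: active minutes per day    */

    /* (a x b x c) / d */
    double concurrent_users = (visits_per_day * avg_visit_mins * peak_ratio)
                              / active_mins;

    printf("Estimated concurrent users: %.0f\n", concurrent_users);  /* ~2083 */
    return 0;
}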

Why do performance tests fail?


Performance testing sometimes fails to stop performance problems from occurring in the production environment. Below is a list of some of the reasons why performance testing can fail to spot these problems. Hopefully, it will serve as a reminder of things to check next time you have to write a performance test plan. However, we must remember that, like all testing, performance testing is about reducing the risk of failure and can never prove 100% that there will be no production performance problems. Indeed, it may be more cost effective for some problems to occur in production than to be found in test, though your customer may not fully appreciate this approach.

So here is my list:

1) Ignoring the client processing time. Performance test tools are designed to test the performance of the back-end servers by emulating the network traffic coming from clients. They do not consider the delay introduced by the client itself, such as rendering and script execution.

2) Ignoring the WAN. Again, test labs often inject the load across a LAN, ignoring any network delays outside the data center. This is a particular problem for applications that are chatty in their network traffic.

3) Load test scripts that do not check for valid responses. Performance testing is not functional testing, but it is important that the test scripts you write check that they are receiving correct responses back. The classic problem has been tools that just check that a valid HTTP code is returned; the trouble is that a "We are busy" page can come back with the same valid code as the normal page (see the response-check sketch after this list).

4) Poor workload modeling. If we cannot estimate the user workload correctly, the load test will never be right. You might do a great job testing for 10,000 users, but that is no real help if 20,000 users arrive on day one. Don't underestimate the need to get a good workload model.

5) Assuming perfect users. Alas, users are not perfect: they make mistakes, cancel orders before committing and forget to log off. This leads to a very different workload than if all the users were perfect, putting a different load on the environment (one way to model this is sketched after this list).

6) Bad test environments. A test environment should be as representative of the production environment as possible. I have seen failures particularly when the test environment has been undersized, but also where it has not been configured in a similar fashion to production.

7) Neglecting go-live+10-days performance issues. Performance testing typically focuses on testing the peak hour and a soak test. What is difficult to do in a performance test is to represent how the system will behave after several days of operation. Systems can grind to a halt as logs build up and nobody has got round to running the clean-up scripts, or transactions slow down as SQL cannot cope with the increased number of rows in tables.

8) Unexpected user behavior. Very difficult to mitigate this one, as it is unexpected! However, in many cases a lack of end-user training has resulted in users doing the unexpected, like the car-part salesman who didn't know how to use the system, so each time he ran a wildcard search to return the complete parts catalog and then scrolled through it to find the part manually. That caused a killer performance issue.

9) Lack of statistical rigor. You don't need to be a statistical guru to run a performance test, but you should at least run the test long enough, and enough times, to be confident that the results are repeatable.

10) Poor test data. Like the test environment, the test data should be as representative as possible. Logging in all the virtual users with the same user ID may put a different load on the system than if each had their own user ID (a parameterization sketch also follows this list).
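
To illustrate point 3, here is a minimal sketch of a content check in a LoadRunner (VuGen) web script. The URL and the search text "Order Confirmation" are placeholders of my own; the point is simply that the step fails whenever the expected text is missing from the response, rather than trusting the HTTP code alone.

Action()
{
    /* Register a text check before the request: the step fails if the
       expected text is not found in the server's response. */
    web_reg_find("Text=Order Confirmation",
                 "Fail=NotFound",
                 LAST);

    web_url("confirm_order",
            "URL=http://www.example.com/order/confirm",
            "Resource=0",
            "Mode=HTML",
            LAST);

    return 0;
}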
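
For point 5, one rough way to make virtual users less than perfect is to randomise their behaviour inside the script. This is only a sketch: the 20% abandonment rate is an illustrative assumption, not a recommendation, and in a real script you would seed the random number generator (for example in vuser_init) and drive the rate from your workload model.

Action()
{
    int roll = rand() % 100;   /* 0-99 */

    if (roll < 20) {
        /* Assume roughly one in five users abandons the basket before committing. */
        lr_output_message("User abandons the basket without ordering");
        return 0;              /* skip the commit steps on this iteration */
    }

    /* ...normal commit-order steps would go here... */
    lr_output_message("User completes the order");
    return 0;
}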
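
And for point 10, a sketch of logging in with parameterised credentials rather than one hard-coded account. {UserID} and {Password} are assumed to be VuGen parameters backed by a data file with one row per virtual user; the login URL and field names are placeholders for illustration.

Action()
{
    /* Each virtual user logs in with its own credentials taken from the
       parameter file, instead of every Vuser sharing the same account. */
    web_submit_data("login",
                    "Action=http://www.example.com/login",
                    "Method=POST",
                    "Mode=HTML",
                    ITEMDATA,
                    "Name=username", "Value={UserID}",   ENDITEM,
                    "Name=password", "Value={Password}", ENDITEM,
                    LAST);

    return 0;
}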

Tuesday, July 3, 2012

What is the need to execute Network Sensitivity Tests?


The three principal reasons for executing network sensitivity tests are as follows:
 
- Determine the impact of a WAN link on response time.
- Determine the capacity of a system based on a given WAN link.
- Determine the impact on the system under test when it is under a dirty communications load.

Executing performance and load tests for network sensitivity analysis requires the test system to be configured to emulate a WAN. Once a WAN link has been configured, the performance and load tests conducted become network sensitivity tests.

There are two ways of configuring such tests:

- Use a simulated WAN and inject appropriate background traffic

This can be achieved by putting back-to-back routers between a load generator and the system under test. The routers can be configured to allow the required level of bandwidth, and instead of connecting to a real WAN, they connect directly to each other.
When back-to-back routers are configured as part of a test, they will basically limit the bandwidth. If the test is to be realistic, then additional traffic will need to be applied to the routers. This can be achieved by a web server at one end of the link serving pages and another load generator generating requests. It is important that the mix of traffic is realistic.

For example, a few continuous file transfers may impact response time in a different way to a large number of small transmissions. By forcing extra traffic over the simulated WAN link, the latency will increase and some packet loss may even occur. While this is much more realistic than testing over a high-speed LAN, it does not take into account many features of a congested WAN, such as out-of-sequence packets.

- Use the WAN emulation facility within LoadRunner

The WAN emulation facility within LoadRunner supports a variety of WAN scenarios. Each load generator can be assigned a number of WAN emulation parameters, such as error rates and latency. WAN parameters can be set individually, or WAN link types can be selected from a list of pre-set configurations.

It is important to ensure that measured response times incorporate the impact of WAN effects both for an individual session, as part of a performance test, and under load, as part of a load test, because a system under a WAN-affected load may have to work much harder than a system doing the same actions over a clean communications link.