Tuesday, July 10, 2012

Why do performance tests fail?


Why does performance testing sometimes fail to stop performance problems from occurring in the production environment? Below is a list of some of the reasons why performance testing can fail to spot these problems. Hopefully it will serve as a reminder of things to check the next time you have to write a performance test plan. However, we must remember that, like all testing, performance testing is about reducing the risk of failure; it can never prove 100% that there will be no production performance problems. Indeed, it may be more cost effective for some problems to occur in production than to catch them during test, though your customer may not fully appreciate this approach.

So here is my list:

1) Ignoring client processing time. Performance test tools are designed to test the performance of the back-end servers by emulating the network traffic coming from clients. They do not account for the delay introduced by the client itself, such as page rendering and script execution.

2) Ignoring the WAN. Test labs often inject the load across a LAN, ignoring any network delays outside the data center. This is a particular problem for applications that are chatty on the network, because each extra round trip adds the full WAN latency to the response time; see the sketch below.
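
As a rough illustration, here is a minimal back-of-the-envelope model in Python (the round-trip count, server time and latencies are made-up figures, not measurements) showing how the same chatty transaction behaves over a LAN versus a WAN:

```python
# Rough model: response time = server time + round trips * round-trip latency.
# Ignores bandwidth, loss and congestion; all figures are illustrative assumptions.

def estimated_response_time(server_time_s, round_trips, rtt_s):
    return server_time_s + round_trips * rtt_s

SERVER_TIME_S = 0.5    # seconds of pure server processing (assumed)
ROUND_TRIPS = 40       # a "chatty" transaction making many small requests (assumed)

for label, rtt_s in [("LAN, 0.5 ms RTT", 0.0005), ("WAN, 80 ms RTT", 0.080)]:
    total = estimated_response_time(SERVER_TIME_S, ROUND_TRIPS, rtt_s)
    print(f"{label}: ~{total:.2f} s")   # ~0.52 s on the LAN, ~3.70 s on the WAN
```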

3) Load test scripts that do not check for valid responses. Performance testing is not functional testing, but it is important that the test scripts you write check that they are receiving correct responses back. The classic problem has been tools that only check that a valid HTTP status code is returned; the trouble with this is that the “We are busy” page comes back with the same valid code as the normal page.
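
As a minimal sketch of the idea (in Python with the requests library rather than a particular load test tool; the URL and marker text are made up for illustration), a script can assert on the page content as well as the status code:

```python
import requests

# Hypothetical URL and expected text, purely for illustration.
URL = "https://example.com/account/summary"
EXPECTED_TEXT = "Account summary"

response = requests.get(URL, timeout=10)

# A 200 alone is not enough: a "We are busy" page can also return 200.
if response.status_code != 200:
    raise AssertionError(f"Unexpected status code: {response.status_code}")
if EXPECTED_TEXT not in response.text:
    raise AssertionError("Response did not contain the expected page content")
```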

4) Poor workload modeling. If we cannot estimate the user workload correctly, the load test will never be right. You might do a great job of testing for 10,000 users, but that is no real help if 20,000 users arrive on day one. Don't underestimate the need for a good workload model; a rough sizing sketch follows.
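
One simple sanity check when building a workload model is Little's Law: concurrent users ≈ arrival rate × (response time + think time). A minimal sketch with assumed figures:

```python
# Little's Law: N = X * (R + Z)
# N = concurrent users, X = arrival rate, R = response time, Z = think time.
# All figures below are assumptions for illustration only.

transactions_per_hour = 36_000                    # business forecast (assumed)
arrival_rate = transactions_per_hour / 3600.0     # = 10 transactions per second
response_time = 2.0                               # seconds per transaction (assumed)
think_time = 28.0                                 # seconds between transactions (assumed)

concurrent_users = arrival_rate * (response_time + think_time)
print(f"Approximate concurrent users needed: {concurrent_users:.0f}")  # ~300
```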

5) Assuming perfect users. Alas, users are not perfect: they make mistakes, cancel orders before committing and forget to log off. This leads to a very different workload than if all the users were perfect, and puts a different load on the environment.

6) Bad test environments. A test environment should be as representative of the production environment as possible. I have seen failures particularly when the test environment has been undersized, but also where it has not been configured in a similar fashion to production.

7) Neglecting go-live+10 days performance issues. Performance testing typically focuses on the peak hour and a soak test. What is difficult to do in a performance test is to represent how the system will behave after several days of operation. Systems can grind to a halt as logs build up and nobody has got round to running the clean-up scripts, or transactions slow down as the database cannot cope with the growing number of rows in its tables.

8) Unexpected user behavior. This is very difficult to mitigate because it is, by definition, unexpected! However, in many cases a lack of end-user training has resulted in users doing the unexpected, like the car parts salesman who didn't know how to use the system, so ran a wildcard search to return the complete parts catalog and then scrolled through it to find the part manually each time. That caused a killer performance issue.

9) Lack of statistical rigor. You don't need to be a statistics guru to run a performance test, but you should at least run the test long enough, and enough times, to be confident that the results are repeatable; see the sketch below.
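
As a minimal illustration (the response times below are invented), comparing summary statistics across repeated runs is a cheap way to check whether results are stable:

```python
import statistics

# Invented response times (seconds) from three repeated test runs.
runs = {
    "run 1": [1.9, 2.1, 2.0, 2.3, 2.2, 2.0, 2.4, 2.1],
    "run 2": [2.0, 2.2, 2.1, 2.2, 2.3, 2.1, 2.0, 2.2],
    "run 3": [2.1, 2.0, 2.2, 2.4, 2.1, 2.2, 2.3, 2.1],
}

for name, samples in runs.items():
    p90 = statistics.quantiles(samples, n=10)[8]   # 90th percentile
    print(f"{name}: mean={statistics.mean(samples):.2f}s "
          f"median={statistics.median(samples):.2f}s p90={p90:.2f}s")
```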

10) Poor test data. Like the test environment, the test data should be as representative as possible. Logging all the virtual users in with the same user ID may put a very different load on the system than if each had its own user ID; a simple parameterization sketch follows.
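
Most load test tools support data parameterization; here is a minimal sketch of the idea in plain Python (the file name and column names are assumptions), giving each virtual user its own credentials rather than a shared account:

```python
import csv
import itertools

# Hypothetical credentials file with one row per test account:
# user_id,password
# vuser001,secret1
# vuser002,secret2
with open("test_users.csv", newline="") as f:
    accounts = list(csv.DictReader(f))

# Hand a different account to each virtual user, cycling through the pool if needed.
account_pool = itertools.cycle(accounts)

def credentials_for_next_vuser():
    account = next(account_pool)
    return account["user_id"], account["password"]
```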

Tuesday, July 3, 2012

What is the need to execute Network Sensitivity Tests?


The three principal reasons for executing network sensitivity tests are as follows:
 
- Determine the impact of a WAN link on response time.
- Determine the capacity of a system based on a given WAN link.
- Determine the impact on the system under test when it is under a dirty (lossy, high-latency) communications load.

Executing performance and load tests to analyze network sensitivity requires the test system to be configured to emulate a WAN. Once a WAN link has been configured, the performance and load tests conducted across it become network sensitivity tests.

There are two ways of configuring such tests:

- Use a simulated WAN and inject appropriate background traffic

This can be achieved by putting back-to-back routers between a load generator and the system under test. The routers can be configured to allow the required level of bandwidth and, instead of connecting to a real WAN, they connect directly to each other.

When back-to-back routers are configured as part of a test, they essentially limit the bandwidth. If the test is to be realistic, then additional traffic will need to be applied across the routers. This can be achieved by a web server at one end of the link serving pages and another load generator generating requests. It is important that the mix of traffic is realistic.

For example, a few continuous file transfers may impact response time in a different way to a large number of small transmissions. By forcing extra traffic over the simulated WAN link, the latency will increase and some packet loss may even occur. While this is much more realistic than testing over a high-speed LAN, it does not take into account many features of a congested WAN, such as out-of-sequence packets.

- Use the WAN emulation facility within LoadRunner

The WAN emulation facility within LoadRunner supports a variety of WAN scenarios. Each load generator can be assigned a number of WAN emulation parameters, such as error rates and latency. WAN parameters can be set individually, or WAN link types can be selected from a list of pre-set configurations.

It is important to ensure that measured response times incorporate the impact of WAN effects both for an individual session, as part of a performance test, and under load, as part of a load test, because a system under WAN-affected load may have to work much harder than a system doing the same actions over a clean communications link.

Thursday, May 17, 2012

Difference between TCP and UDP


   

There are two main types of Internet Protocol (IP) transport traffic, and they have very different uses (a minimal socket sketch follows the list).
  1. TCP (Transmission Control Protocol). TCP is a connection-oriented protocol: a connection is established from client to server, and from then on data can be sent along that connection.
  • Reliable - when you send a message along a TCP socket, you know it will get there unless the connection fails completely. If part of it gets lost along the way, it is retransmitted. This means complete integrity; data doesn't get corrupted.
  • Ordered - if you send two messages along a connection, one after the other, you know the first message will arrive first. You don't have to worry about data arriving in the wrong order.
  • Heavyweight - when the low-level parts of the TCP "stream" arrive in the wrong order, resend requests have to be sent and the out-of-sequence parts have to be put back together, so a bit of work is needed to piece everything together.
  • Streaming - data is read as a "stream", with nothing distinguishing where one packet ends and another begins. There may be multiple packets per read call.
  • Examples: the World Wide Web (Apache, TCP port 80), e-mail (Postfix MTA, SMTP, TCP port 25), File Transfer Protocol (FTP, port 21) and Secure Shell (OpenSSH, port 22).
2. UDP (User Datagram Protocol). A simpler, message-based, connectionless protocol. With UDP you send individual messages (datagrams) across the network.
  • Unreliable - when you send a message, you don't know whether it will get there; it could be lost on the way.
  • Not ordered - if you send two messages out, you don't know what order they will arrive in.
  • Lightweight - no ordering of messages, no tracking of connections, etc. It's just fire and forget! This means it's a lot quicker, and the network card / OS have to do very little work to translate the data back from the packets.
  • Datagrams - packets are sent individually and are guaranteed to be whole if they arrive at all. One packet per read call.
  • Examples: Domain Name System (DNS, UDP port 53), streaming media applications such as IPTV or movies, Voice over IP (VoIP), Trivial File Transfer Protocol (TFTP) and online multiplayer games.
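
As a minimal sketch of the difference in code (the loopback address is arbitrary and the ports are chosen by the OS), here is a UDP datagram exchange next to a TCP connection using Python's standard socket module:

```python
import socket

# UDP: connectionless, one whole datagram per send/receive.
udp_server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_server.bind(("127.0.0.1", 0))                 # let the OS pick a free port
udp_port = udp_server.getsockname()[1]

udp_client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_client.sendto(b"hello over UDP", ("127.0.0.1", udp_port))
data, addr = udp_server.recvfrom(2048)            # whole datagram or nothing
print("UDP received:", data)

# TCP: a connection is established first, then data flows as a byte stream.
tcp_server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_server.bind(("127.0.0.1", 0))
tcp_server.listen(1)
tcp_port = tcp_server.getsockname()[1]

tcp_client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_client.connect(("127.0.0.1", tcp_port))
conn, _ = tcp_server.accept()
tcp_client.sendall(b"hello over TCP")
print("TCP received:", conn.recv(2048))           # stream: may need repeated reads
```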

Cookies and their types? Where are cookies used?


A cookie, or HTTP cookie, can be defined as a small piece of state information that an origin website sends to the user's browser and that the browser sends back to the origin site on later requests.
An HTTP cookie is known by many other names, such as web cookie or browser cookie.

The state information exchanged between the origin site and the user's browser is used for:

- Authentication
- Identification of a user's session
- User preferences
- Contents of the shopping cart

In other words, HTTP cookies are used for any purpose that can be accomplished by storing small amounts of text data on the user's computer.

Characteristics and Uses of Cookies
- The main characteristic of cookies is that they are plain data, not executable code, and thus cannot carry any kind of virus or worm.

- Malware cannot be installed on the host system through a cookie itself, so to this extent they are safe.

- However, cookies can be used effectively by spyware to track the browsing activities of users.

- This is a major privacy concern and has prompted European and US lawmakers to take action in the past few years.

- Cookies are also relatively easy to steal and are thus often misused by attackers.

- An attacker who steals a cookie can use it to gain access to the victim's web account.

- Cookies were first used to solve the problem of implementing a shopping cart.

- Initially, cookies were developed for the Netscape browser.

- They were used to check whether a visitor had been to the site before.

- Cookie support was later added to Internet Explorer and other browsers.

- The concept of cookies was not widely known to the public at that time.

The term “HTTP cookie” came into existence in 1994; it is derived from “magic cookie”.

What are Magic Cookies?
- A magic cookie was a packet of data that a program received and sent back to the program on the other side without altering its contents.

- Magic cookies had long been used in computing systems, and were introduced into web communications by Lou Montulli in June 1994.


The formal specification of cookies is still evolving. To date, many types of cookie have been developed; they are discussed below:

Session cookie:
- This cookie has a lifetime equal to the user's session on the website.
- These cookies are automatically deleted when the session ends.

Persistent cookie:
- These cookies persist even after the session has ended.
- If a persistent cookie has its maximum age set to one year, then until that year is over the browser will send the cookie to the server every time the website is visited.
- These are also called tracking cookies.

Secure cookie:
- These cookies are only sent by the browser when it accesses the server over an HTTPS connection.
- This ensures that the cookie is always encrypted while in transit.
- This helps prevent cookie theft by eavesdropping.

HTTP only cookie:
- This type of cookie is supported by virtually all modern browsers.
- An HttpOnly cookie is only transmitted as part of HTTP (or HTTPS) requests.
- It cannot be accessed by non-HTTP means such as client-side scripts, which limits the impact of cross-site scripting (a short sketch of setting these attributes follows the list of cookie types).

Third party cookie:
- First-party cookies are set by the same domain, or a sub-domain, of the site shown in the browser's address bar.
- Third-party cookies, by contrast, are set by domains other than the one shown in the address bar, typically through embedded content.

Super cookie:
- A cookie set with a public suffix as its domain, such as .co.uk or .com, which would apply to far more sites than intended.

Zombie cookie:
- This cookie is automatically recreated after it has been deleted.
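
As a minimal sketch of the Secure and HttpOnly attributes mentioned above, here is how a Set-Cookie header could be built with Python's standard http.cookies module (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie, purely for illustration.
cookie = SimpleCookie()
cookie["sessionid"] = "abc123"
cookie["sessionid"]["secure"] = True      # only sent over HTTPS
cookie["sessionid"]["httponly"] = True    # not readable by client-side scripts
cookie["sessionid"]["max-age"] = 3600     # persists for one hour

# Produces a header like:
# Set-Cookie: sessionid=abc123; HttpOnly; Max-Age=3600; Secure
print(cookie.output())
```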

Friday, April 13, 2012

What is meant by concurrency testing?


INTRODUCTION TO CONCURRENCY TESTING

1. Concurrency testing employs many users at a time and is therefore considered a type of multi-user testing.

2. Concurrency testing was devised to determine what happens when many users access the same database records, modules or application code at the same time.

3. Beyond this, concurrency testing also identifies problems such as single-threaded code, deadlocking and contention on locks and semaphores, and it is effective in measuring the extent of these issues.

4. While carrying out concurrency testing, several users perform the same kind of task on the same application, and at the same time.

5. The application therefore has to serve many users at once.

TOOLS FOR PERFORMING CONCURRENCY TESTING
Several tools are available for performing concurrency testing, but we would recommend LoadRunner, since it helps you create concurrency at exactly the point you wish.

- All you need to do is create a test scenario after you have recorded and enhanced your scripts using the Virtual User Generator.

- You then decide how many users you want your scripts to run with.

- The number of users is entered in the Controller component of the tool.

- There are several ways to add users, such as a gradual ramp-up, a stepped increase or a spike.

- You can choose for yourself how you want the users to be added.

- This is the most effective way to create concurrency at a chosen point; a small sketch of the underlying idea follows this list.

- Apart from multi-user testing, concurrency testing also covers one application running on top of another (see the levels below).
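
Outside of any particular tool, the underlying idea of releasing many users at the same instant can be sketched in a few lines of Python (the URL is a placeholder and the user count is arbitrary):

```python
import threading
import urllib.request

USERS = 20                                   # arbitrary number of virtual users
URL = "https://example.com/checkout"         # placeholder target, not a real system
start_together = threading.Barrier(USERS)    # all threads wait here, then go at once

def virtual_user(user_id: int) -> None:
    start_together.wait()                    # this creates the concurrency point
    try:
        with urllib.request.urlopen(URL, timeout=10) as response:
            print(f"user {user_id}: HTTP {response.status}")
    except Exception as exc:                 # placeholder URL, so just report errors
        print(f"user {user_id}: {exc}")

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```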

LEVELS OF CONCURRENCY TESTING
Two levels of concurrency testing are commonly described, and they are discussed below:

# 1st Level:
- This involves concurrency testing of one application being executed on top of another application.

- We can illustrate this with a simple example: you receive an incoming call while playing a game on your mobile phone, and the game goes into a paused state. This is the simplest example we can give.

# 2nd Level:
- This involves concurrency testing of one application being executed on top of two other applications.

- This can be illustrated with a similar example: you receive an incoming call while playing a game, and while you are on the call you also receive an SMS. The game remains paused.

- Apart from testing the multi-user capacity of the software, concurrency testing is also responsible for finding bugs such as deadlock, livelock, data races and data corruption; a small data-race sketch follows this list.

- These bugs usually occur when parallel processing is implemented in the application.

- If you also use realistic user scenarios, not just synthetic test scenarios, your concurrency testing will yield much better results.

- Some applications are made up of several modules, some of which support parallel processing while others are sequential.

- Identifying which kind each module is helps you write effective test cases against them, which in turn will be reflected in your testing.
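
As a minimal illustration of the kind of bug mentioned above, this Python sketch shows a data race: two threads perform an unprotected read-modify-write on a shared counter, so updates are usually lost (the sleep simply makes the interleaving easy to reproduce; it is not part of any real workload):

```python
import threading
import time

counter = 0  # shared state with no lock protecting it

def unsafe_increment(times: int) -> None:
    global counter
    for _ in range(times):
        current = counter          # read
        time.sleep(0)              # give the other thread a chance to interleave
        counter = current + 1      # write back a possibly stale value

threads = [threading.Thread(target=unsafe_increment, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Expected 2000, got {counter}")  # usually less than 2000: lost updates
```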

This is the internet era, and almost everyone around the world is using the web and web applications. Such a huge number of users requires servers with a great deal of load-handling capacity. If the servers cannot take the load, they can easily end up deadlocked.

Concurrency testing is a way to ensure that such situations do not occur, and that is why it is important. Performance is the aspect tested most in concurrency testing.

The number of users on the application at any one time is related to, but not the same as, the number of hits per second: think time and pacing determine how one maps onto the other. With the growing number of internet users worldwide, more and more resilience and security are needed on the web, and concurrency testing is one measure that has helped to a great extent in improving these standards.