How to be a good tester?

It’s every tester’s question: how do you become a good tester? Apart from technical knowledge and testing skills, a tester needs certain personal skills that help build good rapport within the testing team.

What are the abilities and skills that make someone a good tester? I was reading Dave Whalen’s article “Ugly Baby Syndrome!” and found it very interesting. Dave compares software developers to parents who deliver a baby (the software) through countless efforts. Naturally, the product managers, architects, and developers spend countless hours developing the application for the customer. Then they show it to us (the testers) and ask: “How is the baby (application)?” And testers often have to tell them that they have an ugly baby (an application with bugs!).

Testers don’t want to tell them they have an ugly baby, but unfortunately it’s our job. The trick is to convey the message to the developers effectively, without hurting them. How can this be done? That is the skill of a good tester!

Here are the tips stated by Dave for handling such a delicate situation:

Be honest and responsive:
Tell developers what your plans are for attacking their application.

Be open and available:
If a developer asks you to have a look at his application before the release, politely give feedback on it and report any extra effort needed. Don’t log bugs for these notes.

Let them review your tests:
If you have designed or written test cases from the requirement specifications, show the developers those test cases. Let them know your work, since you are going to critique theirs!

Use the bug tracker:
Some testers have a habit of reporting each and every issue publicly. This attitude hurts developers. If you have logged a bug, let the bug-tracking system report it to the respective developers and managers. Also, don’t rely on the bug tracker every time; talk to developers personally about what you logged and why you logged it.

Finally some good personal points:

Don’t take it personally:
Do the job of a messenger. You will often be the closest target, so build a thick skin!

Be prepared:
A good message to end with: be prepared for everything! The worst may not have happened to you yet, but it can happen at any moment in your career, so be ready to face it.

Stress-Testing Process

Stress test your application by subjecting it to very high loads that are beyond the capacity of the application, while denying it the resources required to process that load. For example, you can deploy your application on a server that is running a processor-intensive application already. In this way, your application is immediately starved of processor resources and must compete with the other application for CPU cycles.
The goal of stress testing is to unearth application bugs that surface only under high
load conditions. These bugs can include:
● Synchronization issues
● Race conditions
● Memory leaks
● Loss of data during network congestion
Unlike load testing, where you have a list of prioritized scenarios, with stress testing you identify a particular scenario that needs to be stress tested. There may be more than one scenario or there may be a combination of scenarios that you can stress test during a particular test run to reproduce a potential problem. You can also stress test a single Web page or even a single item, such as a stored procedure or class.
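As a rough illustration of this idea, a stress run for a single scenario can be as simple as firing far more concurrent requests at one URL than the deployment is sized for and counting errors and timeouts. The sketch below uses only Python’s standard library; the URL, worker count, and duration are hypothetical values you would replace with your own scenario and limits.

# Minimal stress-test sketch: hammer one scenario with more concurrent
# requests than the server is sized for and count errors/timeouts.
# The URL, worker count, and duration below are hypothetical values.
import threading, time, urllib.request

TARGET_URL = "http://testserver/app/checkout"   # scenario under stress (assumed)
WORKERS = 200                                   # deliberately beyond rated capacity
DURATION_SECONDS = 300

errors = 0
requests_sent = 0
lock = threading.Lock()

def worker(stop_at):
    global errors, requests_sent
    while time.time() < stop_at:
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
        except Exception:
            with lock:
                errors += 1
        with lock:
            requests_sent += 1

stop_at = time.time() + DURATION_SECONDS
threads = [threading.Thread(target=worker, args=(stop_at,)) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"sent={requests_sent} errors={errors}")

Running such a script on hardware that is already starved of CPU (as described above) is one way to provoke the synchronization, race-condition, and memory-leak bugs that only appear under load.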

Testing .NET Application Performance

Performance Testing

Performance testing is the process of identifying how an application responds to a specified set of conditions and input. Multiple individual performance test scenarios (suites, cases, scripts) are often needed to cover all of the conditions and/or input of interest. For testing purposes, if possible, the application should be hosted on a hardware infrastructure that is representative of the live environment. By examining your application’s behavior under simulated load conditions, you identify whether your application is trending toward or away from its defined performance objectives.

Goals of Performance Testing

The main goal of performance testing is to identify how well your application performs in relation to your performance objectives. Some of the other goals of performance testing include the following:

● Identify bottlenecks and their causes.

● Optimize and tune the platform configuration (both the hardware and software) for maximum performance.

● Verify the reliability of your application under stress.

You may not be able to identify all the characteristics by running a single type of performance test. The following are some of the application characteristics that performance testing helps you identify:

● Response time.

● Throughput.

● Maximum concurrent users supported. For a definition of concurrent users, see “Testing Considerations,” later in this chapter.

● Resource utilization in terms of the amount of CPU, RAM, network I/O, and disk I/O resources your application consumes during the test.

● Behavior under various workload patterns including normal load conditions, excessive load conditions, and conditions in between.

● Application breaking point. The breaking point is the condition at which the application stops responding to requests. Symptoms of reaching the breaking point include 503 errors with a “Server Too Busy” message, and errors in the application event log indicating that the ASP.NET worker process recycled because of potential deadlocks.

● Symptoms and causes of application failure under stress conditions.

● Weak points in your application.

● What is required to support a projected increase in load. For example, an increase in the number of users, amount of data, or application activity might cause an increase in load.

Performance Objectives

Most of the performance tests depend on a set of predefined, documented, and agreed-upon performance objectives. Knowing the objectives from the beginning helps make the testing process more efficient. You can evaluate your application’s performance by comparing it with your performance objectives.

You may also run tests that are exploratory in nature, to learn more about the system without having a specific performance objective. Even these eventually serve as input to the tests that evaluate performance against performance objectives.

Performance objectives often include the following:

● Response time or latency

● Throughput

● Resource utilization (CPU, network I/O, disk I/O, and memory)

● Workload
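One lightweight way to keep these objectives testable is to record them as machine-readable thresholds that a test harness can compare measured results against. The following sketch is illustrative only; the metric names and values are assumptions, not recommendations.

# Hypothetical performance objectives expressed as checkable thresholds.
OBJECTIVES = {
    "response_time_p95_seconds": 2.0,   # 95% of requests answered within 2 s
    "throughput_rps": 100,              # at least 100 requests per second
    "cpu_utilization_max": 0.75,        # CPU stays below 75%
    "memory_working_set_mb": 512,       # worker process stays under 512 MB
}

def evaluate(measured: dict) -> list:
    """Return the objectives that a measured test run failed to meet."""
    failures = []
    if measured["response_time_p95_seconds"] > OBJECTIVES["response_time_p95_seconds"]:
        failures.append("response time")
    if measured["throughput_rps"] < OBJECTIVES["throughput_rps"]:
        failures.append("throughput")
    if measured["cpu_utilization_max"] > OBJECTIVES["cpu_utilization_max"]:
        failures.append("CPU utilization")
    if measured["memory_working_set_mb"] > OBJECTIVES["memory_working_set_mb"]:
        failures.append("memory")
    return failures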

Response Time or Latency

Response time is the amount of time taken to respond to a request. You can measure response time at the server or client as follows:

● Latency measured at the server. This is the time taken by the server to complete the execution of a request. It does not include the client-to-server latency, which includes additional time for the request and response to cross the network.

● Latency measured at the client. The latency measured at the client includes the request queue time, plus the time taken by the server to complete the execution of the request, plus the network latency. You can measure this latency in various ways.

Two common approaches are measuring the time taken for the first byte of the response to reach the client (time to first byte, TTFB) or the time taken for the last byte of the response to reach the client (time to last byte, TTLB). Generally, you should test this using various network bandwidths between the client and the server.

By measuring latency, you can gauge whether your application takes too long to respond to client requests.
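As a client-side illustration, TTFB and TTLB can be approximated by timing the first and last bytes of the response stream. The sketch below is a minimal example using Python’s standard library; the URL is hypothetical, and a real harness would also account for DNS lookup, connection setup, and the varying network bandwidths mentioned above.

# Rough client-side measurement of time to first byte (TTFB) and
# time to last byte (TTLB) for a single request. The URL is hypothetical.
import time, urllib.request

def measure_latency(url: str):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read(1)                      # wait for the first byte
        ttfb = time.perf_counter() - start
        while response.read(64 * 1024):       # drain the rest of the body
            pass
        ttlb = time.perf_counter() - start
    return ttfb, ttlb

ttfb, ttlb = measure_latency("http://testserver/app/default.aspx")
print(f"TTFB={ttfb:.3f}s TTLB={ttlb:.3f}s")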

Throughput

Throughput is the number of requests that can be served by your application per unit time. It can vary depending upon the load (number of users) and the type of user activity applied to the server. For example, downloading files requires higher throughput than browsing text-based Web pages. Throughput is usually measured in terms of requests per second. There are other units for measurement, such as transactions per second or orders per second.
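In other words, throughput is simply the number of completed requests divided by the length of the measurement window, as in this small sketch (the figures are made up for illustration).

# Throughput as requests served per unit time (here: requests per second).
def throughput_rps(completed_requests: int, elapsed_seconds: float) -> float:
    return completed_requests / elapsed_seconds

# e.g. 18,000 requests completed in a 5-minute (300 s) run -> 60 requests/second
print(throughput_rps(18_000, 300.0))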

Resource Utilization

Identify resource utilization costs in terms of server and network resources.

The primary resources are:

● CPU

● Memory

● Disk I/O

● Network I/O

You can identify the resource cost on a per operation basis. Operations might include browsing a product catalog, adding items to a shopping cart, or placing an order. You can measure resource costs for a given user load, or you can average resource costs when the application is tested using a given workload profile. A workload profile consists of an aggregate mix of users performing various operations. For example, for a load of 200 concurrent users (as defined below), the profile might indicate that 20 percent of users perform order placement, 30 percent add items to a shopping cart, while 50 percent browse the product catalog. This helps you identify and optimize areas that consume an unusually large proportion of server resources and response time.
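To make the example concrete, a 20/30/50 mix like the one above can be expressed directly when configuring a load test. The sketch below simply distributes a 200-user load across the operations in that profile; the operation names are illustrative.

# Distribute a 200-user load across the example workload profile
# (20% place orders, 30% add to cart, 50% browse the catalog).
WORKLOAD_PROFILE = {
    "place_order": 0.20,
    "add_to_cart": 0.30,
    "browse_catalog": 0.50,
}

def users_per_operation(total_users: int) -> dict:
    return {op: round(total_users * share) for op, share in WORKLOAD_PROFILE.items()}

print(users_per_operation(200))   # {'place_order': 40, 'add_to_cart': 60, 'browse_catalog': 100}

Measuring CPU, memory, disk I/O, and network I/O while each group runs its operations lets you attribute resource costs to specific operations in the profile.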

Workload

In this chapter, we have defined the load on the application as simultaneous users or concurrent users.

Simultaneous users have active connections to the same Web site, whereas concurrent users hit the site at exactly the same moment. Concurrent access is likely to occur at infrequent intervals. Your site may have 100 to 150 concurrent users but 1,000 to 1,500 simultaneous users.

When load testing your application, you can simulate simultaneous users by including a random think time in your script, so that not all user threads from the load generator fire requests at the same moment. This is useful for simulating real-world situations, as shown in the sketch below.
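A random think time is simply a randomized pause between one virtual user’s requests. Here is a minimal sketch of a single virtual-user loop; the URL and the 3-to-10-second range are assumptions, not recommendations.

# One virtual user's loop with a random "think time" between requests,
# so the simulated users do not all fire at the same moment.
import random, time, urllib.request

def virtual_user(url: str, iterations: int):
    for _ in range(iterations):
        urllib.request.urlopen(url, timeout=30).read()
        time.sleep(random.uniform(3.0, 10.0))   # think time: 3-10 s (assumed range)

# Example: virtual_user("http://testserver/app/catalog.aspx", iterations=50)

A load generator would run many such loops in parallel threads or processes to build up the desired number of simultaneous users.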

Common Automation Mistakes

Watch out for these common errors when writing test code:

  • Hard-coded paths Tests often need external files during test execution. The quickest and
    simplest method to point the test to a network share or other location is to embed the path in the
    source file. Unfortunately, paths can change and servers can be reconfigured or retired. It is a
    much better practice to store information about support files in the TCM or automation
    database.
  • Complexity The goal for test code must be to write the
    simplest code possible to test the feature sufficiently.
  • Difficult debugging When a failure occurs, debugging should be a quick and painless
    procedure—not a multihour time investment for the tester. Insufficient logging is a key
    contributor to making debugging difficult. When a test fails, it is a good practice to log why the
    test failed. “Streaming test failed: buffer size expected 2048, actual size 1024” is a much better
    result than “Streaming test failed: bad buffer size” or simply “Streaming test failed.” With good
    logging information, failures can be reported and fixed without ever needing to touch a
    debugger (see the logging sketch after this list).
  • False positives A tester investigates a failure and discovers that the product code is fine, but a
    bug in her test caused the test to report a failure result. The opposite of this, a false negative, is
    much worse—a test incorrectly reports a passing result. When analyzing test results, testers
    examine failures, not passing tests. Unless a test with a false negative repeats in another test or
    is caught by an internal user during normal usage, the consequences of false negatives are bugs
    in the hands of the consumer.
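To make the logging advice in the “Difficult debugging” item concrete, the sketch below shows a check that reports the expected and actual values whenever it fails, so most failures can be diagnosed from the log alone. The buffer sizes and the get-size hook are hypothetical.

# Hedged sketch of a test check that logs enough detail to diagnose a
# failure without a debugger. The expected size and the caller that
# supplies actual_size are stand-ins for the feature under test.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("streaming_test")

EXPECTED_BUFFER_SIZE = 2048   # assumed expected value for illustration

def check_buffer_size(actual_size: int) -> bool:
    if actual_size != EXPECTED_BUFFER_SIZE:
        # Log both values, not just "bad buffer size".
        log.error("Streaming test failed: buffer size expected %d, actual size %d",
                  EXPECTED_BUFFER_SIZE, actual_size)
        return False
    log.info("Streaming test passed: buffer size %d", actual_size)
    return True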