How to be a good tester?

Every tester asks this question: how can I be a good tester? Apart from technical knowledge and testing skills, a tester should have some personal-level skills that help build a good rapport within the testing team.

What are the abilities and skills that make someone a good tester? Well, I was reading Dave Whalen’s article “Ugly Baby Syndrome!” and found it very interesting. Dave compares software developers to parents who deliver a baby (the software) with countless efforts. Naturally, the product managers, architects, and developers spend countless hours developing the application for the customer. Then they show it to us (the testers) and ask: “How is the baby (application)?” And testers often have to tell them that they have an ugly baby (an application with bugs!).

Testers don’t want to tell them they have an ugly baby, but unfortunately it’s our job. A good tester conveys the message to the developers effectively, without hurting them. How can this be done? Well, that is the skill of a good tester!

Here are the tips stated by Dave for handling such a delicate situation:

Be honest and responsive:
Tell developers what your plans are for attacking their application.

Be open and available:
If a developer asks you to look at his application before the release, politely give feedback on it and report any extra effort needed. Don’t log bugs for these notes.

Let them review your tests:
If you have designed or written test cases from the requirement specifications, show those test cases to the developers. Let them know your work, since you are going to critique theirs!

Use the bug tracker:
Some testers have a habit of reporting each and every issue publicly. This attitude hurts developers. Once you have logged a bug, let the bug tracking system report it to the respective developers and managers. Also, don’t rely on the bug tracker alone every time; talk to developers personally about what you logged and why.

Finally some good personal points:

Don’t take it personally:
You are doing the job of a messenger, and you will often be an easy target. So build a thick skin!

Be prepared:
One good message to end with: be prepared for everything! The worst may not have happened yet, but it can happen at any moment in your career, so be ready to face it.


Stress-Testing Process

Stress test your application by subjecting it to very high loads that are beyond the capacity of the application, while denying it the resources required to process that load. For example, you can deploy your application on a server that is running a processor-intensive application already. In this way, your application is immediately starved of processor resources and must compete with the other application for CPU cycles.
The goal of stress testing is to unearth application bugs that surface only under high
load conditions. These bugs can include:
● Synchronization issues
● Race conditions
● Memory leaks
● Loss of data during network congestion
Unlike load testing, where you have a list of prioritized scenarios, with stress testing you identify a particular scenario that needs to be stress tested. There may be more than one scenario or there may be a combination of scenarios that you can stress test during a particular test run to reproduce a potential problem. You can also stress test a single Web page or even a single item, such as a stored procedure or class.
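To make this concrete, the following is a minimal stress-driver sketch in Java; the endpoint, thread count, and duration are illustrative assumptions, and a real stress run would use a load tool with far higher, sustained loads:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class StressDriver {
        public static void main(String[] args) throws Exception {
            // Deliberately oversubscribe: far more worker threads than CPU cores,
            // each hammering the scenario under test in a tight loop.
            ExecutorService pool = Executors.newFixedThreadPool(200);
            for (int i = 0; i < 200; i++) {
                pool.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            // Hypothetical endpoint for the scenario being stressed.
                            HttpURLConnection conn = (HttpURLConnection)
                                new URL("http://localhost:8080/checkout").openConnection();
                            conn.getResponseCode();      // read the status, ignore the body
                            conn.disconnect();
                        } catch (Exception e) {
                            // Failures are expected under stress; log and continue.
                            System.err.println("Request failed: " + e.getMessage());
                        }
                    }
                });
            }
            Thread.sleep(60_000);                        // one-minute stress window
            pool.shutdownNow();
        }
    }

While this runs, you would monitor the server for the bug classes listed above (synchronization issues, race conditions, and memory leaks), not just for failed requests.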

Testing .NET Application Performance

Performance Testing

Performance testing is the process of identifying how an application responds to a specified set of conditions and input. Multiple individual performance test scenarios (suites, cases, scripts) are often needed to cover all of the conditions and/or input of interest. For testing purposes, if possible, the application should be hosted on a hardware infrastructure that is representative of the live environment. By examining your application’s behavior under simulated load conditions, you identify whether your application is trending toward or away from its defined performance objectives.

Goals of Performance Testing

The main goal of performance testing is to identify how well your application performs in relation to your performance objectives. Some of the other goals of performance testing include the following:

● Identify bottlenecks and their causes.

● Optimize and tune the platform configuration (both the hardware and software) for maximum performance.

● Verify the reliability of your application under stress.

You may not be able to identify all the characteristics by running a single type of performance test. The following are some of the application characteristics that performance testing helps you identify:

● Response time.

● Throughput.

● Maximum concurrent users supported. For a definition of concurrent users, see “Testing Considerations,” later in this chapter.

● Resource utilization in terms of the amount of CPU, RAM, network I/O, and disk I/O resources your application consumes during the test.

● Behavior under various workload patterns including normal load conditions, excessive load conditions, and conditions in between.

● Application breaking point. The breaking point is the condition at which the application stops responding to requests. Symptoms of the breaking point include HTTP 503 errors with a “Server Too Busy” message, and errors in the application event log indicating that the ASP.NET worker process recycled because of potential deadlocks.

● Symptoms and causes of application failure under stress conditions.

● Weak points in your application.

● What is required to support a projected increase in load. For example, an increase in the number of users, amount of data, or application activity might cause an increase in load.

Performance Objectives

Most of the performance tests depend on a set of predefined, documented, and agreed-upon performance objectives. Knowing the objectives from the beginning helps make the testing process more efficient. You can evaluate your application’s performance by comparing it with your performance objectives.

You may also run exploratory tests to learn more about the system without any predefined performance objective, but even these eventually serve as input to the tests that evaluate performance against objectives.

Performance objectives often include the following:

● Response time or latency

● Throughput

● Resource utilization (CPU, network I/O, disk I/O, and memory)

● Workload

Response Time or Latency

Response time is the amount of time taken to respond to a request. You can measure response time at the server or client as follows:

Latency measured at the server. This is the time taken by the server to complete the execution of a request. This does not include the client-to-server latency, which includes additional time for the request and response to cross the network.

Latency measured at the client. The latency measured at the client includes the request queue, plus the time taken by the server to complete the execution of the request and the network latency. You can measure the latency in various ways.

Two common approaches are to measure the time taken by the first byte of the response to reach the client (time to first byte, TTFB) or the time taken by the last byte of the response to reach the client (time to last byte, TTLB). Generally, you should test this using various network bandwidths between the client and the server.

By measuring latency, you can gauge whether your application takes too long to respond to client requests.
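As a rough illustration, client-side TTFB and TTLB can be approximated as in the sketch below; the URL is a placeholder, and real performance tests measure this per request with a load tool:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class LatencyProbe {
        public static void main(String[] args) throws Exception {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/catalog").openConnection();
            try (InputStream in = conn.getInputStream()) {
                in.read();                               // blocks until the first byte arrives
                long ttfb = System.nanoTime() - start;   // time to first byte
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { }           // drain the response
                long ttlb = System.nanoTime() - start;   // time to last byte
                System.out.printf("TTFB %.1f ms, TTLB %.1f ms%n",
                                  ttfb / 1e6, ttlb / 1e6);
            }
        }
    }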

Throughput

Throughput is the number of requests that can be served by your application per unit time. It can vary depending upon the load (number of users) and the type of user activity applied to the server. For example, downloading files requires higher throughput than browsing text-based Web pages. Throughput is usually measured in terms of requests per second. There are other units for measurement, such as transactions per second or orders per second.
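For example, a crude throughput measurement simply counts completed requests over a fixed window, as in this sketch (sendRequest is a hypothetical stand-in for one scripted request):

    public class ThroughputProbe {
        public static void main(String[] args) {
            long windowMs = 60_000;                      // one-minute measurement window
            long deadline = System.currentTimeMillis() + windowMs;
            long completed = 0;
            while (System.currentTimeMillis() < deadline) {
                sendRequest();                           // hypothetical: issue one request
                completed++;
            }
            System.out.printf("Throughput: %.1f requests/second%n",
                              completed / (windowMs / 1000.0));
        }

        // Stand-in for a real HTTP call; see the latency sketch above.
        private static void sendRequest() { }
    }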

Resource Utilization

Identify resource utilization costs in terms of server and network resources.

The primary resources are:

● CPU

● Memory

● Disk I/O

● Network I/O

You can identify the resource cost on a per operation basis. Operations might include browsing a product catalog, adding items to a shopping cart, or placing an order. You can measure resource costs for a given user load, or you can average resource costs when the application is tested using a given workload profile. A workload profile consists of an aggregate mix of users performing various operations. For example, for a load of 200 concurrent users (as defined below), the profile might indicate that 20 percent of users perform order placement, 30 percent add items to a shopping cart, while 50 percent browse the product catalog. This helps you identify and optimize areas that consume an unusually large proportion of server resources and response time.
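A workload profile like the one above might be encoded in a load script roughly as follows; the operation names and percentages are taken from the example, and the dispatch itself is a sketch:

    import java.util.Random;

    public class WorkloadProfile {
        private static final Random rng = new Random();

        // 20 percent place orders, 30 percent add items to a cart, and
        // 50 percent browse the catalog, matching the example mix above.
        static String nextOperation() {
            int p = rng.nextInt(100);
            if (p < 20) return "placeOrder";
            if (p < 50) return "addToCart";
            return "browseCatalog";
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10; i++) {
                System.out.println(nextOperation());     // one operation per user iteration
            }
        }
    }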

Workload

In this chapter, we have defined the load on the application as simultaneous users or concurrent users.

Simultaneous users have active connections to the same Web site, whereas concurrent users hit the site at exactly the same moment. Concurrent access is likely to occur at infrequent intervals. Your site may have 100 to 150 concurrent users but 1,000 to 1,500 simultaneous users.

When load testing your application, you can simulate simultaneous users by including a random think time in your script, such that not all the user threads from the load generator fire requests at the same moment. This is useful for simulating real-world situations.
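In a load script, a random think time is usually just a randomized pause between each user thread’s requests, as in this sketch (sendRequest again stands in for one scripted request):

    import java.util.Random;

    public class VirtualUser implements Runnable {
        private final Random rng = new Random();

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                sendRequest();                           // hypothetical scripted request
                try {
                    // Random 2-10 second think time, so user threads
                    // do not all fire requests at the same moment.
                    Thread.sleep(2000 + rng.nextInt(8000));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // stop when the run ends
                }
            }
        }

        private void sendRequest() { }                   // stand-in for a real request
    }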

Common Automation Mistakes

Watch out for these common errors when writing test code:

  • Hard-coded paths. Tests often need external files during test execution. The quickest and
    simplest method of pointing the test to a network share or other location is to embed the path
    in the source file. Unfortunately, paths can change and servers can be reconfigured or retired.
    It is a much better practice to store information about support files in the test case manager
    (TCM) or automation database.
  • Complexity. The goal for test code must be to write the simplest code possible that tests the
    feature sufficiently.
  • Difficult debugging. When a failure occurs, debugging should be a quick and painless
    procedure—not a multihour time investment for the tester. Insufficient logging is a key
    contributor to making debugging difficult. When a test fails, it is a good practice to log why the
    test failed. “Streaming test failed: buffer size expected 2048, actual size 1024” is a much better
    result than “Streaming test failed: bad buffer size” or simply “Streaming test failed.” With good
    logging information, failures can be reported and fixed without ever needing to touch a
    debugger.
  • False positives. A tester investigates a failure and discovers that the product code is fine, but a
    bug in her test caused the test to report a failure result. The opposite of this, a false negative, is
    much worse—a test incorrectly reports a passing result. When analyzing test results, testers
    examine failures, not passing tests. Unless a test with a false negative repeats in another test or
    is caught by an internal user during normal usage, the consequences of false negatives are bugs
    in the hands of the consumer.
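As a sketch of the logging advice above, a small assertion helper can record the expected and actual values whenever a check fails; the names here are illustrative, not from any particular framework:

    public final class TestLog {
        private TestLog() { }

        // Logs a descriptive failure ("expected X, actual Y") instead of a bare
        // "test failed", so most failures can be diagnosed from the log alone.
        public static boolean checkEquals(String check, long expected, long actual) {
            if (expected != actual) {
                System.err.printf("%s failed: expected %d, actual %d%n",
                                  check, expected, actual);
                return false;
            }
            return true;
        }
    }

A call such as TestLog.checkEquals("Streaming test: buffer size", 2048, bufferSize) produces exactly the kind of failure message recommended above.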

Binary Search Tree Validity

Write a function to determine whether a given binary tree of distinct integers is a
valid binary search tree. Assume that each node contains a pointer to its left child, a
pointer to its right child, and an integer, but not a pointer to its parent. You may use
any language you like.
Good Answer: Note that it’s not enough to write a recursive function that just checks
if the left and right nodes of each node are less than and greater than the current
node (and calls that recursively). You need to make sure that all the nodes of the
subtree starting at your current node are within the valid range of values allowed by
the current node’s ancestors. Therefore you can solve this recursively by writing a
helper function that accepts a current node, the smallest allowed value, and the
largest allowed value for that subtree. An example of this is the following (in Java):

    class Node {
        Node left;
        Node right;
        int value;
    }

    // Recursively verify that every node's value lies within the range
    // (min, max) imposed by its ancestors.
    boolean isValid(Node root) {
        return isValidHelper(root, Integer.MIN_VALUE, Integer.MAX_VALUE);
    }

    boolean isValidHelper(Node curr, int min, int max) {
        if (curr == null)
            return true;
        if (curr.value < min || curr.value > max)
            return false;
        // Left descendants must stay below curr.value, right ones above it.
        return isValidHelper(curr.left, min, curr.value)
            && isValidHelper(curr.right, curr.value, max);
    }

The running time of this algorithm is O(n).

Test Case Methodologies

EP = Equivalence Partitioning. As an example, if you have a range of valid values, like 1-10, you would choose to test one valid value (say 7), and one invalid value (like 0).

BVA = Boundary Value Analysis. If you take the example above, you would test the minimum and maximum boundaries (1 and 10), and test beyond both boundaries (0 and 11). Boundary Value Analysis can be applied to a field, record, file, or anything with a stated or implied limit of some kind.
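For the 1-10 range above, the combined EP and BVA picks can be expressed as a small test sketch; isValidQuantity is a hypothetical function under test:

    public class QuantityRangeTest {
        // Hypothetical function under test: valid quantities are 1-10 inclusive.
        static boolean isValidQuantity(int q) {
            return q >= 1 && q <= 10;
        }

        public static void main(String[] args) {
            int[] shouldPass = {1, 7, 10};   // both boundaries plus an EP representative
            int[] shouldFail = {0, 11};      // just outside each boundary
            for (int q : shouldPass) {
                System.out.println(q + " -> " + isValidQuantity(q) + " (expected true)");
            }
            for (int q : shouldFail) {
                System.out.println(q + " -> " + isValidQuantity(q) + " (expected false)");
            }
        }
    }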

CE = Cause/effect. This is normally the input of a combination of conditions (a cause) to yield a single system result or transformation (an effect). For example, you might want to test the ability to add a customer using a particular screen. This may involve entering multiple fields, such as name, address, and phone number, followed by pressing the “add” button. This is the “cause” portion of the equation. Once you press the “add” button, the system returns a customer number and adds the customer to the database. This is the “effect”.

EG = Error guessing. This is when the test analyst uses their knowledge of the system and ability to interpret specifications to “guess” at what type of input might yield an error. For example, perhaps the spec says “the user must enter a code”. The test analyst will think “what if I don’t enter a code?”, “what if I enter the wrong code?”, and so on. This is error guessing.

ECP = Equivalence Class Partitioning (another name for equivalence partitioning, above) – a software testing technique that involves identifying a small set of representative input values that invoke as many different input conditions as possible.

Test Strategy Vs Test Plan

Test Strategy :
A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques, and tools to be used. A test strategy should ideally be organization-wide, applicable to all of the organization’s software developments. The application of a test strategy to a software development project should be detailed in the project’s software quality plan.
The next stage of test design, which is the first stage within a software development project, is the development of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment.
Components in the Test Strategy are as follows:
1. Scope and objective
2. Business issues
3. Roles and responsibilities
4. Communication and status reporting
5. Test deliverables
6. Test approach
7. Test automation and tools
8. Testing measurements and metrics
9. Risks and mitigation
10. Defect reporting and tracking
11. Change and configuration management
12. Training plan
Test Plan :
A test plan describes the approach, the features to be tested, the testers assigned, and whatever else you plan for your project. A test plan is usually prepared by a manager or team lead, but not exclusively; it depends on what the test plan is intended for. Some companies define a test plan as what most would consider a test case, meaning that it covers one part of the functionality validation.
A test plan may be project wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing:
• An Acceptance Test Plan, describing the plan for acceptance testing of the software. This would usually be published as a separate document, but might be published with the system test plan as a single document.
• A System Test Plan, describing the plan for system integration and testing. This would also usually be published as a separate document, but might be published with the acceptance test plan.
• A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
• Unit Test Plan(s), describing the plans for testing of individual units of software. These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfils the requirements or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this means the Requirements Specification.
The test plan is a frozen document developed from the SRS (Software Requirements Specification). After completion of testing-team formation and risk analysis, the test lead prepares the test plan document in terms of what to test, how to test, who will test, and when to test. There is one master test plan consisting of the reviewed project test plan and phase test plans, so the discussion here is generally about the project test plan.
Components are as follows:
1. Test Plan id
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Testing tasks
8. Suspension criteria
9. Feature pass/fail criteria
10. Test environment (Entry criteria, Exit criteria)
11. Test deliverables
12. Staff and training needs
13. Responsibilities
14. Schedule
15. Risk and mitigation
16. Approvals
Conclusion: the test plan is the document that deals with the when, what, and who of the project, while the test strategy is the document that deals with how to do the project.
Why does software have bugs?
1. Miscommunication or no communication – about the specifics of what an application should or shouldn’t do (the application’s requirements).
2. Software complexity – the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
3. Programming errors – programmers “can” make mistakes.
4. Changing requirements – A redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
5. Time pressures – scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
6. Poorly documented code – it’s tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
7. Software development tools – various tools often introduce their own bugs or are poorly documented, resulting in added bugs.

Methods of Black box Testing

Graph Based Testing Methods:
Every application is built up of objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified, and test cases are written accordingly to discover errors.

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases that cover the likely error-prone paths of the application.

Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

● Extends equivalence partitioning
● Test both sides of each boundary
● Look at output boundaries for test cases too
● Test min, min-1, max, max+1, and typical values

BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
Advantages of Boundary Value Analysis
1. Robustness testing – Boundary Value Analysis plus values that go beyond the limits
2. Tests Min-1, Min, Min+1, Nom, Max-1, Max, Max+1 (see the sketch below)
3. Forces attention to exception handling
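The seven robustness values per variable listed in point 2 can be generated mechanically, as in this sketch:

    import java.util.Arrays;

    public class BvaValues {
        // Robustness-testing values for one variable with the given range and
        // a nominal (typical) value: min-1, min, min+1, nom, max-1, max, max+1.
        static int[] robustnessValues(int min, int max, int nom) {
            return new int[] { min - 1, min, min + 1, nom, max - 1, max, max + 1 };
        }

        public static void main(String[] args) {
            // For a 1-10 range with a nominal value of 5:
            System.out.println(Arrays.toString(robustnessValues(1, 10, 5)));
            // prints [0, 1, 2, 5, 9, 10, 11]
        }
    }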

Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed, ordered values, i.e., those with clear boundaries.

Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.

Comparison Testing:
In this method, different independent versions of the same software are compared to each other during testing.