How to be a good tester?

It’s every tester’s question: how to be a good tester? Apart from technical knowledge and testing skills, a tester needs some personal skills that help build a good rapport in the testing team.

What are the abilities and skills that make someone a good tester? Well, I was reading Dave Whalen’s article “Ugly Baby Syndrome!” and found it very interesting. Dave compared software developers to parents who deliver a baby (the software) with countless efforts. Naturally, the product managers, architects, and developers spend countless hours developing the application for the customer. Then they show it to us (testers) and ask: “How is the baby (application)?” And testers often have to tell them that they have an ugly baby (an application with bugs!).

Testers don’t want to tell them that they have an ugly baby, but unfortunately it’s our job. So a tester has to convey the message to the developers effectively, without hurting them. How can this be done? Well, that is the skill of a good tester!

Here are the tips stated by Dave for handling such a delicate situation:

Be honest and responsive:
Tell developers how you plan to attack their application.

Be open and available:
If any dev asks you to have a look at the application he developed before the release, then politely give feedback on it and report any extra effort needed. Don’t log bugs for these notes.

Let them review your tests:
If you have designed or written test cases from the requirement specifications, then show them those test cases. Let them know your stuff, as you are going to critique the developers’ work!

Use the bug tracker:
Some testers have a habit of reporting each and every issue publicly. This attitude hurts the developers. So once you have logged a bug, let the bug-tracking system report it to the respective developers and managers. Also, don’t rely on the bug tracker every time; talk to the developers personally about what you logged and why.

Finally, some good personal points:

Don’t take it personally:
You are doing the job of a messenger, and you could often be a close target. So build a thick skin!

Be prepared:
A good message in the end: be prepared for everything! Even if the worst hasn’t happened till now, it can happen at any moment in your career. So be ready to face it.

Stress-Testing Process

Stress test your application by subjecting it to very high loads that are beyond the capacity of the application, while denying it the resources required to process that load. For example, you can deploy your application on a server that is running a processor-intensive application already. In this way, your application is immediately starved of processor resources and must compete with the other application for CPU cycles.
The goal of stress testing is to unearth application bugs that surface only under high
load conditions. These bugs can include:
● Synchronization issues
● Race conditions
● Memory leaks
● Loss of data during network congestion
Unlike load testing, where you have a list of prioritized scenarios, with stress testing you identify a particular scenario that needs to be stress tested. There may be more than one scenario or there may be a combination of scenarios that you can stress test during a particular test run to reproduce a potential problem. You can also stress test a single Web page or even a single item, such as a stored procedure or class.
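
As a minimal sketch of the idea (not a production load tool), the following C# fragment starves the application of CPU with competing busy-loop threads while firing a burst of concurrent requests at a single scenario; the URL and the load figures are placeholders:

    // Stress sketch: compete for CPU while firing a burst of requests.
    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    class StressSketch
    {
        static async Task Main()
        {
            var http = new HttpClient();

            // Starve the application of CPU with competing busy-loop threads.
            for (int i = 0; i < Environment.ProcessorCount; i++)
                new Thread(() => { while (true) { } }) { IsBackground = true }.Start();

            // Fire a burst of concurrent requests beyond the expected capacity.
            var tasks = new Task<HttpResponseMessage>[500];
            for (int i = 0; i < tasks.Length; i++)
                tasks[i] = http.GetAsync("http://localhost/app/checkout");

            var responses = await Task.WhenAll(tasks);
            int failures = 0;
            foreach (var r in responses)
                if (!r.IsSuccessStatusCode) failures++;

            Console.WriteLine($"Failures under stress: {failures}/{tasks.Length}");
        }
    }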

Testing .NET Application Performance

Performance Testing

Performance testing is the process of identifying how an application responds to a specified set of conditions and input. Multiple individual performance test scenarios (suites, cases, scripts) are often needed to cover all of the conditions and/or input of interest. For testing purposes, if possible, the application should be hosted on a hardware infrastructure that is representative of the live environment. By examining your application’s behavior under simulated load conditions, you identify whether your application is trending toward or away from its defined performance objectives.

Goals of Performance Testing

The main goal of performance testing is to identify how well your application performs in relation to your performance objectives. Some of the other goals of performance testing include the following:

● Identify bottlenecks and their causes.

● Optimize and tune the platform configuration (both the hardware and software) for maximum performance.

● Verify the reliability of your application under stress.

You may not be able to identify all the characteristics by running a single type of performance test. The following are some of the application characteristics that performance testing helps you identify:

● Response time.

● Throughput.

● Maximum concurrent users supported. For a definition of concurrent users, see “Testing Considerations,” later in this chapter.

● Resource utilization in terms of the amount of CPU, RAM, network I/O, and disk I/O resources your application consumes during the test.

● Behavior under various workload patterns including normal load conditions, excessive load conditions, and conditions in between.

● Application breaking point. The application breaking point is the condition under which the application stops responding to requests. Some of the symptoms of the breaking point include 503 errors with a “Server Too Busy” message, and errors in the application event log that indicate that the ASP.NET worker process recycled because of potential deadlocks.

● Symptoms and causes of application failure under stress conditions.

● Weak points in your application.

● What is required to support a projected increase in load. For example, an increase in the number of users, amount of data, or application activity might cause an increase in load.

Performance Objectives

Most of the performance tests depend on a set of predefined, documented, and agreed-upon performance objectives. Knowing the objectives from the beginning helps make the testing process more efficient. You can evaluate your application’s performance by comparing it with your performance objectives.

You may run tests that are exploratory in nature to know more about the system without having any performance objective. But even these eventually serve as input to the tests that are conducted for evaluating performance against performance objectives.

Performance objectives often include the following:

● Response time or latency

● Throughput

● Resource utilization (CPU, network I/O, disk I/O, and memory)

● Workload

Response Time or Latency

Response time is the amount of time taken to respond to a request. You can measure response time at the server or client as follows:

Latency measured at the server. This is the time taken by the server to complete the execution of a request. This does not include the client-to-server latency, which includes additional time for the request and response to cross the network.

Latency measured at the client. The latency measured at the client includes the request queue, plus the time taken by the server to complete the execution of the request and the network latency. You can measure the latency in various ways.

Two common approaches are measuring the time taken by the first byte of the response to reach the client (time to first byte, TTFB), and the time taken by the last byte of the response to reach the client (time to last byte, TTLB). Generally, you should test this using various network bandwidths between the client and the server.

By measuring latency, you can gauge whether your application takes too long to respond to client requests.
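
As a rough illustration, TTFB and TTLB can be approximated at the client with a stopwatch around a streamed HTTP read. This is only a sketch with a placeholder URL, not a calibrated measurement harness:

    // Measures time to first byte (TTFB) and time to last byte (TTLB).
    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LatencySketch
    {
        static async Task Main()
        {
            var http = new HttpClient();
            var sw = Stopwatch.StartNew();

            // ResponseHeadersRead returns as soon as the headers arrive, so the
            // first read below approximates the first byte of the body.
            var response = await http.GetAsync("http://localhost/app/page",
                                               HttpCompletionOption.ResponseHeadersRead);
            using (var stream = await response.Content.ReadAsStreamAsync())
            {
                var buffer = new byte[8192];
                int n = await stream.ReadAsync(buffer, 0, buffer.Length);
                TimeSpan ttfb = sw.Elapsed;              // first byte received

                while (n > 0)                            // drain the rest of the body
                    n = await stream.ReadAsync(buffer, 0, buffer.Length);
                TimeSpan ttlb = sw.Elapsed;              // last byte received

                Console.WriteLine($"TTFB: {ttfb.TotalMilliseconds:F0} ms, " +
                                  $"TTLB: {ttlb.TotalMilliseconds:F0} ms");
            }
        }
    }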

Throughput

Throughput is the number of requests that can be served by your application per unit time. It can vary depending upon the load (number of users) and the type of user activity applied to the server. For example, downloading files requires higher throughput than browsing text-based Web pages. Throughput is usually measured in terms of requests per second. There are other units for measurement, such as transactions per second or orders per second.

Resource Utilization

Identify resource utilization costs in terms of server and network resources.

The primary resources are:

● CPU

● Memory

● Disk I/O

● Network I/O

You can identify the resource cost on a per operation basis. Operations might include browsing a product catalog, adding items to a shopping cart, or placing an order. You can measure resource costs for a given user load, or you can average resource costs when the application is tested using a given workload profile. A workload profile consists of an aggregate mix of users performing various operations. For example, for a load of 200 concurrent users (as defined below), the profile might indicate that 20 percent of users perform order placement, 30 percent add items to a shopping cart, while 50 percent browse the product catalog. This helps you identify and optimize areas that consume an unusually large proportion of server resources and response time.
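
A workload profile like the 20/30/50 mix above can be simulated with a simple weighted random choice per virtual user; the operation names below are hypothetical:

    // Picks an operation per virtual user according to a 20/30/50 profile.
    using System;

    class WorkloadMixSketch
    {
        static readonly Random Rand = new Random();

        static string NextOperation()
        {
            int roll = Rand.Next(100);           // 0..99
            if (roll < 20) return "PlaceOrder";  // 20% of users
            if (roll < 50) return "AddToCart";   // next 30% of users
            return "BrowseCatalog";              // remaining 50% of users
        }

        static void Main()
        {
            for (int user = 0; user < 10; user++)
                Console.WriteLine($"User {user}: {NextOperation()}");
        }
    }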

Workload

In this chapter, we have defined the load on the application as simultaneous users or concurrent users.

Simultaneous users have active connections to the same Web site, whereas concurrent users hit the site at exactly the same moment. Concurrent access is likely to occur at infrequent intervals. Your site may have 100 to 150 concurrent users but 1,000 to 1,500 simultaneous users.

When load testing your application, you can simulate simultaneous users by including a random think time in your script such that not all the user threads from the load generator are firing requests at the same moment. This is useful to simulate real-world situations.
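
Here is a minimal sketch of that think-time idea, assuming a placeholder URL and 50 virtual users, each pausing a random 1-5 seconds between requests:

    // Virtual users with random think time between requests.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ThinkTimeSketch
    {
        static readonly HttpClient Http = new HttpClient();
        static readonly Random Rand = new Random();

        static async Task RunUser(int id)
        {
            for (int i = 0; i < 5; i++)
            {
                await Http.GetAsync("http://localhost/app/page");
                int thinkMs;
                lock (Rand) thinkMs = Rand.Next(1000, 5000); // 1-5 s "think time"
                await Task.Delay(thinkMs);                   // users do not fire in lockstep
            }
        }

        static async Task Main()
        {
            var users = new Task[50];
            for (int u = 0; u < users.Length; u++)
                users[u] = RunUser(u);
            await Task.WhenAll(users);
        }
    }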

Common Automation Mistakes

Watch out for these common errors when writing test code:

  • Hard-coded paths Tests often need external files during test execution. The quickest and
    simplest method to point the test to a network share or other location is to embed the path in the
    source file. Unfortunately, paths can change and servers can be reconfigured or retired. It is a
    much better practice to store information about support files in the TCM or automation
    database.
  • Complexity The goal for test code must be to write the
    simplest code possible to test the feature sufficiently.
  • Difficult debugging When a failure occurs, debugging should be a quick and painless
    procedure—not a multihour time investment for the tester. Insufficient logging is a key
    contributor to making debugging difficult. When a test fails, it is a good practice to log why the
    test failed. “Streaming test failed: buffer size expected 2048, actual size 1024” is a much better
    result than “Streaming test failed: bad buffer size” or simply “Streaming test failed.” With good
    logging information, failures can be reported and fixed without ever needing to touch a
    debugger (see the logging sketch after this list).
  • False positives A tester investigates a failure and discovers that the product code is fine, but a
    bug in her test caused the test to report a failure result. The opposite of this, a false negative, is
    much worse—a test incorrectly reports a passing result. When analyzing test results, testers
    examine failures, not passing tests. Unless a test with a false negative repeats in another test or
    is caught by an internal user during normal usage, the consequences of false negatives are bugs
    in the hands of the consumer.
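
As a sketch of the logging advice in the “Difficult debugging” item (the buffer-size check and the Log helper are hypothetical names):

    // Logs expected vs. actual values so a failure can be diagnosed
    // without ever attaching a debugger.
    using System;

    class LoggingSketch
    {
        static bool VerifyBufferSize(int expected, int actual)
        {
            if (actual == expected)
            {
                Log($"Streaming test passed: buffer size {actual}");
                return true;
            }
            Log($"Streaming test failed: buffer size expected {expected}, actual size {actual}");
            return false;
        }

        static void Log(string message) =>
            Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} {message}");

        static void Main() => VerifyBufferSize(2048, 1024);
    }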

Test Case Methodologies

EP = Equivalence Partitioning. As an example, if you have a range of valid values, like 1-10, you would choose to test one valid value (say 7), and one invalid value (like 0).

BVA = Boundary Value Analysis. If you take the example above, you would test the minimum and maximum boundaries (1 and 10), and test beyond both boundaries (0 and 11). Boundary Value Analysis can be applied to a field, record, file, or anything with a stated or implied limit of some kind.
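
As a small sketch, the EP and BVA values for the 1-10 range above can be generated in code; the accept/reject rule here is a stand-in for the real system under test:

    // Derives EP and BVA test values for a valid range of 1-10.
    using System;
    using System.Collections.Generic;

    class RangeTestValues
    {
        static IEnumerable<int> Values(int min, int max)
        {
            yield return (min + max) / 2; // EP: one representative valid value
            yield return min - 1;         // BVA: just below the lower boundary
            yield return min;             // BVA: lower boundary
            yield return max;             // BVA: upper boundary
            yield return max + 1;         // BVA: just above the upper boundary
        }

        static void Main()
        {
            foreach (int v in Values(1, 10))
                Console.WriteLine($"{v} -> expected {(v >= 1 && v <= 10 ? "accepted" : "rejected")}");
        }
    }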

CE = Cause/effect. This is normally input of a combination of conditions (cause) in order to yield a single system result or transformation (effect). For example, you might want to test the ability to add a customer using a particular screen. This may involve entering multiple fields, such as name, address, and phone number, followed by pressing the “add” button. This is the “cause” portion of the equation. Once you press the “add” button, the system will return a customer number and add the customer to the database. This is the “effect”.

EG = Error guessing. This is when the test analyst uses their knowledge of the system and ability to interpret specifications to “guess” at what type of input might yield an error. For example, perhaps the spec says “the user must enter a code”. The test analyst will think “what if I don’t enter a code?”, “what if I enter the wrong code?”, and so on. This is error guessing.

ECP = Equivalence Class Partitioning – A software testing technique that involves identifying a small set of representative input values that invoke as many different input conditions as possible.

Test Strategy vs. Test Plan

Test Strategy:
A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, being applicable to all of the organization’s software developments. The application of a test strategy to a software development project should be detailed in the project’s software quality plan.
The next stage of test design, which is the first stage within a software development project, is the development of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment.
Components in the Test Strategy are as follows:
1. Scope and objective
2. Business issues
3. Roles and responsibilities
4. Communication and status reporting
5. Test deliverables
6. Test approach
7. Test automation and tools
8. Testing measurements and metrics
9. Risks and mitigation
10. Defect reporting and tracking
11. Change and configuration management
12. Training plan
Test Plan:
A Test Plan describes the approach, the features to be tested, the testers assigned, and whatever else you plan for your project. A Test Plan is usually prepared by a manager or team lead. That is true, but not exclusively; it depends on what the test plan is intended for. Some companies have defined a test plan as being what most would consider a test case, meaning that it is for one part of the functionality validation.
A test plan may be project wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing:
• An Acceptance Test Plan, describing the plan for acceptance testing of the software. This would usually be published as a separate document, but might be published with the system test plan as a single document.
• A System Test Plan, describing the plan for system integration and testing. This would also usually be published as a separate document, but might be published with the acceptance test plan.
• A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
• Unit Test Plan(s), describing the plans for testing of individual units of software. These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfils the requirements or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this means the Requirements Specification.
The test plan is the frozen document developed from the SRS (Software Requirements Specification). After completion of testing-team formation and risk analysis, the Test Lead prepares the test plan document in terms of what to test, how to test, who will test, and when to test. There is one Master Test Plan that consists of the reviewed Project Test Plan and the Phase Test Plans, so the general talk is about the Project Test Plan.
Components are as follows:
1. Test Plan id
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Testing tasks
8. Suspension criteria
9. Feature pass/fail criteria
10. Test environment (Entry criteria, Exit criteria)
11. Test deliverable
12. Staff and training needs
13. Responsibilities
14. Schedule
15. Risk and mitigation
16. Approvals
Conclusion: the Test Plan is the document which deals with the when, what, and who of the project, while the Test Strategy is the document which deals with how to do the project. If I am wrong anywhere, kindly give feedback.

Why does software have bugs?
1. Miscommunication or no communication – failure to understand the application requirements.
2. Software complexity – the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
3. Programming errors – programmers “can” make mistakes.
4. Changing requirements – A redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
5. Time pressures – scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
6. Poorly documented code – it’s tough to maintain and modify code that is badly written or poorly documented, which results in bugs.
7. Software development tools – various tools often introduce their own bugs or are poorly documented, resulting in added bugs.

Methods of Black box Testing

Graph Based Testing Methods:
Each and every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover the errors.

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases that cover the error-prone paths of the application.

Boundary Value Analysis:
Many systems have a tendency to fail on boundaries, so testing the boundary values of the application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

• Extends equivalence partitioning
• Test both sides of each boundary
• Look at output boundaries for test cases too
• Test min, min-1, max, max+1, and typical values

BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
Advantages of Boundary Value Analysis
1. Robustness Testing – Boundary Value Analysis plus values that go beyond the limits
2. Min – 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling

Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed boundaries.
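
To make the 4n + 1 formula above concrete, here is a sketch that enumerates the classic BVA cases for n variables, holding every variable at its nominal value except one at a boundary (the example ranges are made up):

    // Enumerates the 4n + 1 classic BVA cases for n variables.
    using System;
    using System.Collections.Generic;

    class BvaCases
    {
        static IEnumerable<int[]> Generate((int Min, int Nom, int Max)[] vars)
        {
            int n = vars.Length;
            var nominal = new int[n];
            for (int i = 0; i < n; i++) nominal[i] = vars[i].Nom;
            yield return (int[])nominal.Clone();              // the "+1" all-nominal case

            for (int i = 0; i < n; i++)
                foreach (int b in new[] { vars[i].Min, vars[i].Min + 1,
                                          vars[i].Max - 1, vars[i].Max })
                {
                    var c = (int[])nominal.Clone();
                    c[i] = b;                                 // 4 boundary cases per variable
                    yield return c;
                }
        }

        static void Main()
        {
            var vars = new[] { (Min: 1, Nom: 5, Max: 10), (Min: 0, Nom: 50, Max: 100) };
            foreach (var c in Generate(vars))                 // n = 2 -> 9 cases
                Console.WriteLine(string.Join(", ", c));
        }
    }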

Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.

Comparison Testing:
In this method, different independent versions of the same software are compared to each other for testing.

Mobile game test cases

1. Check for background music and sound effects
– ON/OFF sound & background music check
– Put the device into sleep mode and check
– Receive the call and check
– Verify if sound effects are in sync with action
– ON/OFF device sound(native sound) and check
– Check for vibration effect if present

2. UI
– Check in Landscape/Portrait mode
– Check for animation, movement of character, graphics, Zoom In/Out (all gestures) etc
– There should not be any clipping
– Test when one object overlaps with another
– Verify if loading indicator is displayed wherever required
– Character should not move out of the screen/specified area
– Test enabled and disabled states of images/icons/buttons etc
– Check for screen title
– Check for message title, message description, label (should be appropriate)
– Check scrolling
– Font
– Check other objects too (e.g., if it’s a car race, you need to look at the road, people, and other objects like buildings etc)

3. Performance (important)
– Check the loading time of the game
– Make sure that no action takes considerable time; game flow should be fast

4. Score
– score calculation
– Verify leaderboards General/All time/Weekly/local etc
– Check the score registration functionality
– Check the score format (e.g., whether comma separators are required; digit grouping differs by locale, such as millions vs. thousands)
– Check that level completion syncs with the score

5. Time Out
– check for time out
– Perform actions when the time-out is about to happen

6. Multitasking
– Switch between different apps while playing the game; check sound, score, UI, time-out etc

7. Pause
– Check if the game is paused when a call is received, while multitasking, or in sleep mode

8. Save Settings
– Turn the device off and on, and check if settings are saved
– Log out/in and check the same
– The user should not lose his game in the above conditions

9. User profile
– Put all types of images in the player profile and check
– Put special characters, numbers, and spaces in the username and check
– Password should be masked

10. Push notifications

11. Chat feature
– Check the profile images
– max limit of chat description
– Enter empty string, special character and check
– For an opponent, there should be a notification that he has received a message

12. Functionality
– Check game area, game logic
– play till last level
– get the cheat codes from development team and check all the levels
– Check for the features that will be unlocked level-wise
– Check for bonus score
– Check the score hike when level gets increased
– Check for multi-tap action (example in a car race we hold accelerator and left/right turn button simultaneously)
– Menu options
– Different game modes/location

13. Help & About Screen
– It’s a must
– Should be in easily understandable format
– free from spelling mistakes
– URL should be hyperlinked (depends)

14. Multiplayer game
– Session expiry check
– login/log out
– Registration (Sign Up)
– Verify account (receive verification mail)
– login with registered but not verified account (without clicking verification link)
– Forgot password checks (many cases here)
– Game flow
– Check for WIN/lost/Draw
– Check user statistics graph 

– Challenge/Decline challenge/receive challenge
– Check for forfeit
– Check that when it is player 2’s turn, player 1 is not able to perform actions (and should not be able to forfeit either)
– Check for pass turn
– Check for time-out (for one player)
– Check the score for both the players till game ends

15. Memory leak
– Check the game when device memory is low

16. N/w check
– Check for n/w messages if the n/w is not present
– Check what happens when the n/w is not present and the user plays a move (whether the score is submitted for that move etc)

17. Check for localization (support for different languages)

18. Check for time format
– Change the device time, format etc

19. Size
– Users won’t like it if your game takes up a lot of device space, so keep one eye on the game file size

20. Device, OS
– Check on supported screen sizes and OS versions

21. Depends on platform
– Sometimes we need to check as per OS guidelines as well. For example, in WP7 we need to check with both backgrounds (light/dark).

22. Check Share options
– Post score via mail/FB/Twitter
– Check the posted/sent messages in FB/Twitter/Mail. Check links are hyperlinked and the application icon is displayed in the post (depends)
– If the Twitter integration is manual (a custom UI developed by the developer), check what happens when you enter more than 140 chars (as the Twitter limit is 140)

Choosing Test Data

In system testing, test data should cover the possible values of each parameter based on the requirements. Since testing every value is impractical, a few values should be chosen from each equivalence class. An equivalence class is a set of values that should all be treated the same.

Ideally, test cases that check error conditions are written separately from the functional test cases and should have steps to verify the error messages and logs. Realistically, if error test cases are not yet written, it is OK for testers to check for error conditions when performing normal functional test cases. It should be clear which test data, if any, is expected to trigger errors.

Example equivalence classes:

Strings

  • empty string
  • String consisting solely of white space
  • String with leading or trailing white space
  • syntactically legal: short and long values
  • syntactically legal: semantically legal and illegal values
  • syntactically illegal value: illegal characters or combinations
  • Make sure to test special characters such as #, “, ‘, &, and <
  • Make sure to test “Foreign” characters typed on international keyboards
Numbers

  • empty string, if possible
  • 0
  • in range positive, small and large
  • in range negative, small and large
  • out of range positive
  • out of range negative
  • with leading zeros
  • syntactically invalid (e.g., includes letters)
Identifiers

  • empty string
  • syntactically legal value
  • syntactically legal: reference to existing ID, invalid reference
  • syntactically illegal value
Radio buttons

  • one item checked
  • nothing checked, if possible
Checkboxes

  • checked
  • unchecked
Drop down menus

  • select each item in turn
Scrolling Lists

  • select no item, if possible
  • select each item in turn
  • select combinations of items, if possible
  • select all items, if possible
File upload

  • blank
  • 0 byte file
  • long file
  • short file name
  • long file name
  • syntactically illegal file name, if possible (e.g., “File With Spaces.tar.gz”)
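
As a sketch, the string classes above can drive a single data-driven test; Validate() below is a placeholder for whatever field is actually under test:

    // Drives one check through several string equivalence classes.
    using System;

    class StringClassSketch
    {
        // Placeholder rule: non-blank, no leading/trailing white space.
        static bool Validate(string input) =>
            !string.IsNullOrWhiteSpace(input) && input.Trim() == input;

        static void Main()
        {
            (string Name, string Value, bool Expected)[] cases =
            {
                ("empty string",        "",            false),
                ("only white space",    "   ",         false),
                ("leading white space", " abc",        false),
                ("short legal value",   "abc",         true),
                ("special characters",  "a#\"'&<b",    true),
                ("foreign characters",  "héllo wörld", true),
            };

            foreach (var c in cases)
            {
                bool actual = Validate(c.Value);
                string verdict = actual == c.Expected ? "PASS" : "FAIL";
                Console.WriteLine($"{c.Name}: expected {c.Expected}, actual {actual} -> {verdict}");
            }
        }
    }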

Sample Security Test Cases For A Shopping Cart Application

Functional Tests
    * Customer Order File
      * Ensure that ‘orders.txt’ file permissions are as restrictive as possible. If these permissions are loosely defined then file this as a severity 1 security issue.
      * Ensure that sensitive data within the ‘orders.txt’ file is encrypted using a known strong algorithm. This is a severity 1 security issue.
    * Customer Data Stored in a SQL Database
      * Ensure that sensitive data within the SQL Database is encrypted using a known strong algorithm. This is a severity 1 security issue.
    * Registration Form
      * For each user input perform common security related input validation tests. See The Web Application Security Consortium’s Threat Classification for a list of common input vulnerability types. For each input perform each vulnerability type. The severity level of a vulnerability will be determined by the vulnerability type, and probability.
      * (If SQL is Used) Perform both standard SQL Injection, and Blind SQL Injection tests as outlined by http://www.spidynamics.com/whitepapers/Blind_SQLInjection.pdf and http://www.securiteam.com/securityreviews/5DP0N1P76E.html. If SQL Injection is present file this as a severity 1 issue.
    * Login
      * For each user input perform common security related input validation tests. See The Web Application Security Consortium’s Threat Classification for a list of common input vulnerability types. For each input perform each vulnerability type. The severity level of a vulnerability will be determined by the vulnerability type, and probability.
      * (If SQL is Used) Perform both standard SQL Injection, and Blind SQL Injection tests as outlined by http://www.spidynamics.com/whitepapers/Blind_SQLInjection.pdf and http://www.securiteam.com/securityreviews/5DP0N1P76E.html. If SQL Injection is present file this as a severity 1 issue.
    * Buying Items
      * Ensure that the user is unable to modify the price for a given item. Ensure that the price is not exposed in a web form, cookie, query string, or POST data. If the price is exposed through one of these vectors ensure that if changed, the application detects the modification on the server side and refuses to sell the item for anything other than the stated price.
      * For each user input perform common security related input validation tests. See The Web Application Security Consortium’s Threat Classification for a list of common input vulnerability types. For each input perform each vulnerability type.
    * Search Engine
      * For each user input perform common security related input validation tests. See The Web Application Security Consortium’s Threat Classification for a list of common input vulnerability types. For each input perform each vulnerability type.
      * (If user text is echoed back) Test for cross-site scripting vulnerabilities. If discovered, file a severity 2 issue.
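
Below is a minimal sketch of such an input-validation probe: it posts two common attack strings to a hypothetical search form and flags responses that echo the payload back unencoded. A real assessment should follow the Threat Classification referenced above:

    // Posts attack strings and flags responses that reflect them verbatim.
    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class InputProbeSketch
    {
        static async Task Main()
        {
            var http = new HttpClient();
            string[] payloads =
            {
                "<script>alert(1)</script>",   // reflected XSS probe
                "' OR '1'='1",                 // naive SQL injection probe
            };

            foreach (var p in payloads)
            {
                var form = new FormUrlEncodedContent(
                    new Dictionary<string, string> { ["q"] = p });
                var response = await http.PostAsync("http://localhost/app/search", form);
                string body = await response.Content.ReadAsStringAsync();

                if (body.Contains(p))
                    Console.WriteLine($"Possible unencoded echo, investigate: {p}");
            }
        }
    }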

Web Testing: Complete guide on testing web applications

Here we will see some more details on web application testing with web testing test cases. Let me tell you one thing: I always like to share practical knowledge, which can be useful to readers in their careers. This is quite a long article, so sit back and relax to get the most out of it.

Let’s have first web testing checklist.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

1) Functionality Testing:

Test for – all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookie testing.

Check all the links:

  • Test the outgoing links from all the pages of the specific domain under test.
  • Test all internal links.
  • Test links jumping on the same pages.
  • Test links used to send the email to admin or other users from web pages.
  • Test to check if there are any orphan pages.
  • Lastly in link checking, check for broken links in all above-mentioned links.

Test forms in all pages:
Forms are an integral part of any web site. Forms are used to get information from users and to keep interaction with them. So what should be checked on these forms?

  • First check all the validations on each field.
  • Check for the default values of fields.
  • Wrong inputs to the fields in the forms.
  • Options to create, delete, view, or modify forms, if any.

Let’s take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate signup steps. Each signup step is different but dependent on the other steps, so the signup flow should get executed correctly. There are different field validations, like email IDs and user financial info validations. All these validations should get checked in manual or automated web testing.

Cookies testing:
Cookies are small files stored on the user machine. These are basically used to maintain the session, mainly login sessions. Test the application by enabling or disabling the cookies in your browser options. Test if the cookies are encrypted before being written to the user machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check for login sessions and user stats after the session ends. Check the effect on application security by deleting the cookies. (I will soon write a separate article on cookie testing.)
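
One related check that is easy to automate is inspecting the Set-Cookie headers for missing security attributes. This sketch (placeholder URL) checks cookie attributes only, not encryption of the cookie value itself:

    // Dumps Set-Cookie headers and flags missing Secure/HttpOnly attributes.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class CookieCheckSketch
    {
        static async Task Main()
        {
            var handler = new HttpClientHandler { UseCookies = false }; // keep raw headers
            var http = new HttpClient(handler);
            var response = await http.GetAsync("http://localhost/app/login");

            if (response.Headers.TryGetValues("Set-Cookie", out var cookies))
                foreach (var c in cookies)
                {
                    Console.WriteLine("Cookie: " + c);
                    if (c.IndexOf("HttpOnly", StringComparison.OrdinalIgnoreCase) < 0)
                        Console.WriteLine("  -> missing HttpOnly");
                    if (c.IndexOf("Secure", StringComparison.OrdinalIgnoreCase) < 0)
                        Console.WriteLine("  -> missing Secure");
                }
        }
    }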

Validate your HTML/CSS:
If you are optimizing your site for search engines then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check if the site is crawlable by different search engines.

Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete, or modify the forms or do any DB-related functionality.
Check if all the database queries are executing correctly, and that data is retrieved and updated correctly. More on database testing could be load on the DB; we will address this in web load or performance testing below.

2) Usability Testing:

Test for navigation:
Navigation means how the user surfs the web pages: different controls like buttons and boxes, and how the user uses the links on the pages to surf different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check if the provided instructions are correct, i.e. whether they satisfy their purpose.
The main menu should be provided on each page. It should be consistent.

Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow some standards that are used for web page and content building. These are commonly accepted standards, like the ones I mentioned above about annoying colors, fonts, frames etc.
Content should be meaningful. All the anchor text links should work properly. Images should be placed properly with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of this during UI testing.

Other information for user help:
Like search option, sitemap, help files etc. The sitemap should be present with all the links in the web site and a proper tree view of navigation. Check all the links on the sitemap.
A “search in the site” option will help users find the content pages they are looking for easily and quickly. These are all optional items, and if present they should be validated.

3) Interface Testing:
The main interfaces are:
Web server and application server interface
Application server and Database server interface.

Check if all the interactions between these servers are executed properly and errors are handled properly. If the database or web server returns an error message for any query by the application server, then the application server should catch it and display these error messages appropriately to users. Check what happens if the user interrupts any transaction in between. Check what happens if the connection to the web server is reset in between.

4) Compatibility Testing:
Compatibility of your web site is a very important testing aspect. Here are the compatibility tests to be executed:

  • Browser compatibility
  • Operating system compatibility
  • Mobile browsing
  • Printing options

Browser compatibility:
In my web-testing career I have experienced this as the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, performing security checks or validations, then give more stress to browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari, and Opera, with different versions.

OS compatibility:
Some functionality in your web application may not be compatible with all operating systems. All new technologies used in web development, like graphic designs and interface calls (different APIs), may not be available on all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac, Linux, and Solaris, with different OS flavors.

Mobile browsing:
This is the age of new technology, so mobile browsing will rock in the future. Test your web pages on mobile browsers. Compatibility issues may exist on mobile.

Printing options:
If you are providing page-printing options then make sure fonts, page alignment, and page graphics get printed properly. Pages should fit the paper size or the size mentioned in the printing option.

5) Performance testing:
The web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing

Test application performance on different internet connection speeds.
In web load testing, test whether many users are accessing or requesting the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.

Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress, and to check how the system reacts to the stress and how it recovers from crashes.
Stress is generally applied to input fields, and login and sign-up areas.

In web performance testing, web site functionality on different operating systems and different hardware platforms is checked for software and hardware memory leakage errors.

6) Security Testing:

Following are some test cases for web security testing:

  • Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open (see the sketch after this list).
  • If you are logged in using a username and password and browsing internal pages, then try changing URL options directly. E.g., if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view others’ stats.
  • Try some invalid inputs in input fields like login username, password, and input text boxes. Check the system’s reaction to all invalid inputs.
  • Web directories or files should not be accessible directly unless given download option.
  • Test the CAPTCHA against automated script logins.
  • Test if SSL is used for security measures. If used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
  • All transactions, error messages, and security breach attempts should get logged in log files somewhere on the web server.
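
Here is a sketch of the first check in this list: request an internal page with no login session and verify that the server refuses or redirects (the URL is a placeholder):

    // Requests an internal page without a session; expects 401/403/redirect.
    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AuthBypassSketch
    {
        static async Task Main()
        {
            var handler = new HttpClientHandler { AllowAutoRedirect = false };
            var http = new HttpClient(handler);

            var response = await http.GetAsync("http://localhost/app/admin/stats?siteID=123");

            bool blocked = response.StatusCode == HttpStatusCode.Unauthorized
                        || response.StatusCode == HttpStatusCode.Forbidden
                        || response.StatusCode == HttpStatusCode.Redirect; // to a login page

            Console.WriteLine(blocked
                ? "PASS: internal page not served without login"
                : $"FAIL: got {(int)response.StatusCode} without a session");
        }
    }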

Testing a Web Method Using Sockets

Problem
You want to test a Web method in a Web service by calling the method using sockets.
Design
First, construct a SOAP message to send to the Web method. Second, instantiate a Socket object and connect to the remote server that hosts the Web service. Third, construct a header that contains HTTP information. Fourth, send the header plus SOAP message using the Socket.Send() method. Fifth, receive the SOAP response using Socket.Receive() in a while loop. Sixth, analyze the SOAP response for the expected value(s).

            Console.WriteLine("Calling Web Method GetTitles() using sockets");
            string input = "testing";
            string soapMessage = "<?xml version=\"1.0\" encoding=\"utf-8\"?>";
            soapMessage += "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchemainstance\"";
            soapMessage += " xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"";
            soapMessage += " xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">";
            soapMessage += "<soap:Body>";
            soapMessage += "<GetTitles xmlns=\"http://tempuri.org/\">";
            soapMessage += "<filter>" + input + "</filter>";
            soapMessage += "</GetTitles>";
            soapMessage += "</soap:Body>";
            soapMessage += "</soap:Envelope>";
            Console.WriteLine("SOAP message is: \n");
            Console.WriteLine(soapMessage);
            string host = "localhost";
            string webService = "/TestAuto/Ch8/TheWebService/BookSearch.asmx";
            string webMethod = "GetTitles";
            IPHostEntry iphe = Dns.Resolve(host); // obsolete in .NET 2.0+; prefer Dns.GetHostEntry(host)
            IPAddress[] addList = iphe.AddressList; // addList[0] == 127.0.0.1
            EndPoint ep = new IPEndPoint(addList[0], 80); // ep = 127.0.0.1:80
            Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            socket.Connect(ep);
            if (socket.Connected)
                Console.WriteLine("\nConnected to " + ep.ToString());
            else
                Console.WriteLine("\nError: socket not connected");
            string header = "POST " + webService + " HTTP/1.1\r\n";
            header += "Host: " + host + "\r\n";
            header += "Content-Type: text/xml; charset=utf-8\r\n";
            header += "Content-Length: " + soapMessage.Length.ToString() + "\r\n";
            header += "Connection: close\r\n";
            header += "SOAPAction: \"http://tempuri.org/" + webMethod + "\"\r\n\r\n";
            Console.Write("Header is: \n" + header);
            string sendAsString = header + soapMessage;
            byte[] sendAsBytes = Encoding.ASCII.GetBytes(sendAsString);
            int numBytesSent = socket.Send(sendAsBytes, sendAsBytes.Length,   SocketFlags.None);
            Console.WriteLine("Sending = " + numBytesSent + " bytes\n");
            byte[] receiveBufferAsBytes = new byte[512];
            string receiveAsString = "";
            string entireReceive = "";
            int numBytesReceived = 0;
            while ((numBytesReceived = socket.Receive(receiveBufferAsBytes, 512,
            SocketFlags.None)) > 0)
            {
                receiveAsString = Encoding.ASCII.GetString(receiveBufferAsBytes, 0,
                numBytesReceived);
                entireReceive += receiveAsString;
            }
            Console.WriteLine("\nThe SOAP response is " + entireReceive);
            Console.WriteLine("\nDetermining pass/fail");
            if (entireReceive.IndexOf("002") >= 0 &&  // expected values in the SOAP response
            entireReceive.IndexOf("004") >= 0 &&
            entireReceive.IndexOf("005") >= 0)
                Console.WriteLine("\nPass");
            else
                Console.WriteLine("\nFail");


Let’s say you are working for a company that manufactures soda machines. Your job is to test the machine. How would you do that? Walk me through your test cases.

I would write up some test plans, but I may need to rewrite them depending on the answers to the following questions.

  1. What country is the machine going to be used in? US?
  2. Does it dispense cans or bottles or paper cup?
  3. Does the machine accept credit cards or cell phone payments?
  4. Does it have buttons or a touch screen?

Test Plans
Test that the machine turns on after plugging it in, and that it turns off.
Display issues (the soda machine should have the following components displayed):

  • dispense button
  • coin return button
  • dispenser
  • coin return dispenser
  • coin slot
  • bill slot
  • message display
  • Clicking on coin buttons should deposit appropriate amount into soda machine.
  • Each coin deposited (It should increase the total amount deposited by the appropriate amount).
  • Clicking on a dollar bill to deposit one dollar into the machine.
  • Clicking dispense button without enough money deposited.
  • Clicking dispense button with enough money deposited should dispense a pop.
  • Clicking the dispense button with more money than required to buy a pop should dispense the pop and return any money over the amount required to buy it.
  • Inserting a counterfeit coin or bill (it should be rejected and returned immediately).
  • Inserting money then pressing the coin return (total amount inserted should be returned).
  • Coins returned by the coin return (for example, 10 coins deposited should yield 10 coins returned).
  • Inserting one dollar and pressing dispense button with pop already in the dispenser – user should be prompted to remove pop from the dispenser before the machine dispenses another pop.
  • When the storage rack is empty – pressing the dispense button with appropriate amount of money inserted will not dispense a pop.
  • Inserted money when storage rack is empty (The money should be returned to user).
  • Entering more items than the storage rack can hold (User will be prompted that there are too many items).
  • Inserting one dollar and pressing the dispense button.
  • Clicking dispense button with no money inserted (user should be prompted for more money and no pop should be dispensed).
  • Generate a power surge of varying intensities and verify that the vending machine can handle it within specification.
  • Feed bills of all denominations and make product selections as quickly as possible and verify that soda is dispensed and the correct change is returned.
  • Feed change and make product selections as quickly as possible and verify that soda is dispensed and the correct change is returned.
  • Use a machine to press the buttons while periodically raising the pressure until failure and verify that their failure thresholds meet specification.
  • Use a machine to press the buttons with normal pressure but as rapidly as possible over several weeks and then place product in the machine and see if the buttons still function reliably.
  • Verify performance while tilting the vending machine, rocking the vending machine, and hitting the vending machine and verify it functions properly.

What is in a bug report?

A bug report should have:

  • A detailed description of what part of the product is defective. (Description)
  • Detailed data identifying the product, version, module, build, and other information to help identify exactly where the error can be found. (Environment, Version number, Feature area)
  • An evaluation of the severity of the problem. (Severity)
  • Customer impact descriptions include how the bug affects the user and how the problem will affect customer scenarios and requirements. (Customer Impact)
  • How to reproduce the bug, with detailed steps. (Reproduction steps)
  • The expected behavior versus the actual behavior.
  • Attached data files/logs, test codes/scripts, or other things necessary to reproduce the bug.
  • Other tracking information: assignment, bug status, and resolution.

You found a bug and the Developer you are working with does not want to fix it—you think it is important. What do you do?

I think one of the very best ways I can report a bug is by showing it to the Developer. I will stand them in front of my computer, fire up their software, and demonstrate the thing that goes wrong. Once they can see the problem happening, they can usually take it from there and start trying to fix it.

Also, I will clarify the “customer impact description” for the bug and send it to the developer and stakeholders to discuss whether we should fix it or not. The customer impact description includes how the bug affects the user and how the problem will affect customer scenarios and requirements. Items to consider when writing a customer impact description include the following:

· Determine the customer scenarios and requirements that the bug affects.

· Determine the frequency or likelihood of the customer encountering the issue.

Simulation of Browser Caching during load tests

In a VS load test that contains Web tests, the load test attempts to simulate the caching behavior of the browser. Here are some notes on how that is done:

  • There is a property on each request in a Web test named “Cache Control” in the Web test editor (and named “Cache” on the WebTestRequest object in the API used by coded Web tests).
  • When the Cache Control property on a request in the Web test is false, the request is always issued.
  • When the Cache Control property is true, the VS load test runtime code attempts to emulate the Internet Explorer caching behavior (with the “Automatically” setting). This includes reading and following the HTTP cache control directives.
  • The Cache Control property is automatically set to true for all dependent requests (typically for images, style sheets, etc embedded on the page).
  • In a load test, the browser caching behavior is simulated separately for each user running in the load test.
  • When a virtual user in a load test completes a Web test and a new Web test session is started to keep the user load at the same level, sometimes the load test simulates a “new user” with a clean cache, and sometimes it simulates a return user that has items cached from a previous session. This is determined by the “Percentage of New Users” property on the Scenario in the load test. The default for “Percentage of New Users” is 0.

Important Note: When running a Web test by itself (outside of the load test), the Cache Control property is automatically set to false for all dependent requests so they are always fetched; this is so that they can be displayed in the browser pane of the Web test results viewer without broken images.
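
For reference, here is a fragment of a coded Web test showing the Cache property mentioned above; the URLs are placeholders, and the fragment assumes using System.Collections.Generic and Microsoft.VisualStudio.TestTools.WebTesting:

    // Inside a class derived from WebTest:
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        WebTestRequest page = new WebTestRequest("http://localhost/app/default.aspx");
        page.Cache = true;    // emulate browser caching rules for this request
        yield return page;

        WebTestRequest report = new WebTestRequest("http://localhost/app/report.aspx");
        report.Cache = false; // always issue this request
        yield return report;
    }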

Performance tests

Performance tests are focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations.

  • How many users can the application handle before “bad stuff” happens?
  • How much data can my database/file server handle?
  • Are the network components adequate?

Stress test

These tests are all about determining under what conditions an application will fail, how it will fail, and what indicators can be monitored to warn of an impending failure.

What are the benefits?

  • Determining if data can be corrupted by over stressing the system
  • Estimating how far beyond the target load an application can go before causing failures and errors in addition to slowness
  • Establishing application monitoring triggers to warn of impending failures
  • Ensuring that security holes are not opened up by stressful conditions.
  • Determining the side effects of common hardware or supporting application failures.

What risks does it address?

  • What happens if we underestimated the peak load?
  • What kind of failures should we plan for?
  • What indicators should we be looking for to intervene prior to failure?

Endurance test

A performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.

What are the benefits?

  • Slow memory leaks
  • Insufficient file storage capacity
  • Performance degradation as a result of an increase in stored data
  • Overnight, automatic virus definition updates on a server causing performance degradation

What risks does it address?

  • Will performance be consistent over time?
  • Are there slow growing problems that we haven’t detected?
  • Is there external interference that we didn’t account for?

Spike test

A performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.

What are the benefits?

  • Memory leaks
  • Disk I/O (thrashing)
  • Slow return to steady state

What risks does it address?

  • What happens if we underestimated the peak load?
  • What kind of failures should we plan for?
  • What indicators should we be looking for to intervene prior to failure?

Capacity testing

Capacity testing is related to stress testing. It determines your server’s ultimate failure point. You perform capacity testing in conjunction with capacity planning.

You use capacity planning to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads you need to know how many additional resources (such as CPU, RAM, disk space, or network bandwidth) are necessary to support future usage levels.

Capacity testing helps you identify a scaling strategy to determine whether you should scale up or scale out.

What are the benefits?

  • Provide actual data to the capacity planners to validate or enhance their models and/or predictions.
  • Conduct various tests to compare capacity planning models and/or predictions.
  • Determine current usage and capacity of existing system to aid in capacity planning.
  • Provide usage and capacity trends of existing system to aid in capacity planning.

What risks does it address?

  • Validate that capacity planning models represent reality.
  • Ensure capacity planning remains in sync with actual system usage and growth patterns.

Testing question (No.7)

Q: Why are there so many software bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

  • There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.
  • Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications.
  • Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
  • As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes then require redesign of the software and rescheduling of resources, some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.

Q: Do automated testing tools make testing easier?
A: Yes and no.
For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile.
A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret.
If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and comparing them to the logged results in order to check the effects of the change.
One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts.
Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

Q: What makes a good test engineer?
A: Good test engineers have a “test to break” attitude. We, good test engineers, take the point of view of the customer, have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers and an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers’ point of view and reduces the learning curve in automated test tool programming.

Q: What is a test plan?
A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q: What is a test case?
A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a…

· Test case identifier;

· Test case name;

· Objective;

· Test conditions/setup;

· Input data requirements/steps, and

· Expected results.

Q: How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…

· Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

· Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

· It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.

· Test scenarios are executed through the use of test procedures or scripts.

· Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

· Test procedures or scripts include the specific data that will be used for testing the process or transaction.

· Test procedures or scripts may cover multiple test scenarios.

· Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.

Q: How do you create a test plan/design? (Cont’d…)

  • Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
  • Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
  • A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.

Inputs for this process:

  • Approved Test Strategy Document.
  • Test tools, or automated test tools, if applicable.
  • Previously developed scripts, if applicable.
  • Test documentation problems uncovered as a result of testing.
  • A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code, and software complexity data.

Testing question (No.6)

Q: What is a test scenario?
A: The terms “test scenario” and “test case” are often used synonymously.
Test scenarios are test cases, or test scripts, and the sequence in which they are to be executed.
Test scenarios are test cases that ensure that business process flows are tested from end to end.
Test scenarios are independent tests, or a series of tests that follow each other, where each of them is dependent upon the output of the previous one.
Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures.
Test scenarios are designed to represent both typical and unusual situations that may occur in the application.
Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios.
It is the test team that, with assistance of developers and clients, develops test scenarios for integration and system testing.
Test scenarios are executed through the use of test procedures or scripts.
Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
Test procedures or scripts may cover multiple test scenarios.

Q: What is verification?
A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walk-throughs and inspection meetings.

Q: What is validation?
A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

Q: What is a walk-through?
A: A walk-through is an informal meeting for evaluation or informational purposes. A walk-through is also a process at an abstract level: it is the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walk-throughs is to ensure the code fits its purpose.
Walk-throughs also offer opportunities to assess an individual’s or team’s competency.

Q: What is good code?
A: Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Q: What is good design?
A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software functionality that can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.

Q: What is software life cycle?
A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.