Let me share some questions and answers drawn from my software testing experience!
Q: What is system testing?
A: System testing is black box testing performed by the test team. At the start of system testing, the complete system is configured in a controlled environment.
The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed.
System testing simulates real-life scenarios in a test environment and tests all functions of the system that are required in real life.
System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.
Q: What is integration testing?
A: Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable, based on client input.
Q: What is stochastic testing?
A: Stochastic testing is the same as "monkey testing"; "stochastic testing" is simply a more technical-sounding name for the same process.
Stochastic testing is black box testing, random testing, performed by automated testing tools. Stochastic testing is a series of random tests over time.
The software under test typically passes the individual tests, but our goal is to see if it can pass a large series of the individual tests.
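As a sketch, stochastic testing can be as simple as firing a large series of random inputs at the code and checking an invariant each time. The function under test (`absolute_value`) and its oracle below are hypothetical, minimal examples:

```python
import random

def absolute_value(x):
    """The (hypothetical) function under test."""
    return x if x >= 0 else -x

def stochastic_test(runs=1000, seed=42):
    """Apply a large series of random tests; pass only if every one passes."""
    rng = random.Random(seed)  # seeded so the series is reproducible
    for _ in range(runs):
        x = rng.randint(-10**6, 10**6)
        result = absolute_value(x)
        # Oracle: the result must be non-negative and match the input's magnitude.
        if result < 0 or result != abs(x):
            return False
    return True

assert stochastic_test()  # the software passes the whole series of random tests
```

In real projects this role is played by automated tools rather than a hand-rolled loop, but the principle is the same: any single random test is easy to pass; the value lies in surviving the whole series.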
Q: What is regression testing?
A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
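To sketch the idea, a baseline of inputs paired with expected results from a known-good release can be replayed against the software under test; `discount` and the baseline data here are hypothetical:

```python
def discount(price, rate):
    """The (hypothetical) function under test: apply a percentage discount."""
    return round(price * (1 - rate / 100), 2)

# Baseline: inputs paired with expected results captured from a known-good release.
BASELINE = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((200.0, 25), 150.0),
]

def run_regression(baseline):
    """Replay the baseline and collect every discrepancy for investigation."""
    discrepancies = []
    for (price, rate), expected in baseline:
        actual = discount(price, rate)
        if actual != expected:
            discrepancies.append((price, rate, expected, actual))
    return discrepancies

assert run_regression(BASELINE) == []  # no discrepancies: the software remains intact
```

Any non-empty discrepancy list would be highlighted and accounted for before testing proceeds to the next level.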
Q: What is mutation testing?
A: In mutation testing, we create mutant versions of the software and try to make them fail, thus demonstrating the adequacy of our test cases.
When we create a set of mutant software, each mutant differs from the original software by exactly one mutation, i.e. a single syntax change made to one of its program statements, so each mutant contains only a single fault.
By applying our test cases to both the original software and the mutant software, we evaluate whether our test cases are adequate.
Our test cases are inadequate if the original software and all mutant software generate the same output.
Our test cases are adequate if they detect faults, or if at least one mutant generates a different output than the original software.
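The idea can be sketched in a few lines; `max_of` and its mutant are hypothetical examples, with the mutant differing from the original by a single syntax change:

```python
def max_of(a, b):
    """Original software."""
    return a if a > b else b

def max_of_mutant(a, b):
    """Mutant: one single syntax change ('>' replaced with '<')."""
    return a if a < b else b

def is_adequate(test_cases):
    """A test suite is adequate if at least one case 'kills' the mutant,
    i.e. makes the mutant's output differ from the original's."""
    return any(max_of(a, b) != max_of_mutant(a, b) for a, b in test_cases)

# This case cannot tell the original and the mutant apart (both return 5)...
assert not is_adequate([(5, 5)])
# ...but this one kills the mutant, so the suite is adequate.
assert is_adequate([(5, 5), (7, 3)])
```

In practice a mutation testing tool generates many mutants automatically and reports the fraction your suite kills.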
Q: What does a test case template look like?
A: Software test cases are documents that describe inputs, actions, or events and their expected results, in order to determine if all features of an application are working correctly.
A software test case template is, for example, a 6-column table, where column 1 is the "Test case ID number", column 2 is the "Test case name", column 3 is the "Test objective", column 4 is the "Test conditions/setup", column 5 is the "Input data requirements/steps", and column 6 is the "Expected results".
All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier to find what they want. Lastly, with standards and templates, information is less likely to be accidentally omitted from a document.
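As an illustration, the 6-column template above could be represented in code as a simple record; the field values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of the 6-column test case template described above."""
    case_id: str          # column 1: Test case ID number
    name: str             # column 2: Test case name
    objective: str        # column 3: Test objective
    conditions: str       # column 4: Test conditions/setup
    steps: str            # column 5: Input data requirements/steps
    expected_result: str  # column 6: Expected results

tc = TestCase(
    case_id="TC-001",
    name="Login with valid credentials",
    objective="Verify a registered user can log in",
    conditions="Application deployed; test account exists",
    steps="Open login page; enter user/password; submit",
    expected_result="User is redirected to the home page",
)
print(tc.case_id, "-", tc.name)
```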
Q: What is the difference between system testing and integration testing?
A: System testing is higher-level testing, and integration testing is lower-level testing. Integration testing is completed first, not system testing. In other words, upon completion of integration testing, system testing starts, and not vice versa.
For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components.
For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real-life scenarios.
The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.
The purpose of system testing, on the other hand, is to validate an application’s accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.
Q: How do you perform integration testing?
A: First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.
Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable, based on client input.
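A minimal sketch of the idea, with two hypothetical components (`parse_order` and `price_order`) and a test case written expressly to exercise the interface between them:

```python
# Two hypothetical components whose interface we want to exercise.
def parse_order(raw):
    """Component A: parse a raw order string into a (sku, quantity) pair."""
    sku, qty = raw.split(":")
    return sku, int(qty)

def price_order(sku, quantity, catalog):
    """Component B: price an order against a catalog."""
    return catalog[sku] * quantity

def test_order_interface():
    """Integration test: drive component A's output straight into component B."""
    catalog = {"WIDGET": 2.50}
    sku, qty = parse_order("WIDGET:4")   # exercise the interface between A and B
    total = price_order(sku, qty, catalog)
    assert total == 10.0                  # actual and expected results in line

test_order_interface()
```

Each component may pass its own unit tests in isolation; the integration test catches mismatches in how they talk to each other, e.g. a quantity left as a string.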
Q: What is monkey testing?
A: "Monkey testing" is random testing performed by automated testing tools. These automated testing tools are considered "monkeys", if they work at random.
We call them "monkeys" because it is widely believed, if we allow six monkeys to pound on six typewriters at random, for a million years, they will recreate all the works of Isaac Asimov.
There are "smart monkeys" and "dumb monkeys".
"Smart monkeys" are valuable for load and stress testing, and will find a significant number of bugs, but they’re also very expensive to develop.
"Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and crashes, i.e. the bugs you least want to have in your software product.
"Monkey testing" can be valuable, but they should not be your only testing.
Q: What is smoke testing?
A: Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. Smoke testing is sometimes described as ad hoc testing, i.e. testing without a formal test plan.
With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing.
Sometimes, if testing occurs very early or very late in the software development cycle, this can be the only kind of testing that can be performed.
Smoke tests are, by definition, not exhaustive, but, over time, you can increase your coverage of smoke testing.
A common practice at Microsoft, and some other software companies, is the daily build and smoke test process. This means, every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested.
Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale.
Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke testing should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly.
Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against any errors in development and future problems during integration.
At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
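As a sketch, a daily-build smoke test can start as a short list of major-problem checks against the fresh build; the build-report dictionary and its keys below are hypothetical:

```python
def smoke_test(build_report):
    """A minimal daily-build smoke test: not exhaustive, but it should
    expose any major problem before deeper testing begins."""
    checks = {
        "starts": build_report.get("starts", False),
        "main_screen": build_report.get("main_screen", False),
        "saves_file": build_report.get("saves_file", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return failed  # empty list -> stable enough for more thorough testing

# A hypothetical report from today's daily build:
build = {"starts": True, "main_screen": True, "saves_file": True}
assert smoke_test(build) == []
```

As the system grows, more checks are appended to the list, which is how a few-second smoke test expands toward 30 minutes or more.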
Q: What is structural testing?
A: Structural testing is also known as clear box testing or glass box testing. Structural testing is a way to test software with knowledge of the internal workings of the code being tested.
Structural testing is white box testing, not black box testing, since black boxes are considered opaque and do not permit visibility into the code.
Q: What is grey box testing?
A: Grey box testing is a software testing technique that uses a combination of black box testing and white box testing. Grey box testing is not black box testing, because the tester does know some of the internal workings of the software under test.
In grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the grey box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.
Grey box testing is a powerful idea. The concept is simple: if one knows something about how the product works on the inside, one can test it better, even from the outside.
Grey box testing is not to be confused with white box testing, i.e. a testing approach that attempts to cover the internals of the product in detail. Grey box testing is a test strategy based partly on internals.
The testing approach is known as grey box testing when one has some knowledge, but not full knowledge, of the internals of the product one is testing.
In grey box testing, just as in black box testing, you test from the outside of the product, but you make better-informed testing choices because you know how the underlying software components operate and interact.
Q: When do you choose automated testing?
A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile.
Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that continuously updating the scripts becomes a very time-consuming task.
Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.
Q: What’s the difference between priority and severity?
A: The simple answer is, "Priority is about scheduling, and severity is about standards."
The complex answer is, "Priority means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles, e.g. a severe code of behavior."
Q: What’s the difference between efficient and effective?
A: "Efficient" means having a high ratio of output to input; working or producing with a minimum of waste. For example, "An efficient test engineer wastes no time", or "An efficient engine saves gas".
"Effective", on the other hand, means producing, or capable of producing, an intended result, or having a striking effect. For example, "For automated testing, WinRunner is more effective than an oscilloscope", or "For rapid long-distance transportation, the jet engine is more effective than a witch’s broomstick".