In load testing, the application is tested against heavy loads or inputs, such as the traffic a web site receives, in order to find out at what point the web site/application fails or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.
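For illustration, here is a minimal load-test sketch in Python. The endpoint URL, the load level of 50 concurrent requests and the 5-second timeout are hypothetical placeholders, not part of any real setup:

```python
# Minimal load-test sketch: drive a predefined number of concurrent
# requests at an endpoint and report failures and response times.
# The URL below is a hypothetical placeholder, not a real service.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # hypothetical endpoint
LOAD_LEVEL = 50                       # predefined number of concurrent requests

def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=LOAD_LEVEL) as pool:
    results = list(pool.map(one_request, range(LOAD_LEVEL)))

failures = sum(1 for ok, _ in results if not ok)
slowest = max(t for _, t in results)
print(f"{failures} failures out of {LOAD_LEVEL}, slowest response {slowest:.2f}s")
```

Raising LOAD_LEVEL run after run shows the point at which failures appear or response times degrade.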
Tuesday, 21 April 2015
What is Exploratory Testing
This testing is similar to ad-hoc testing and is done in order to learn/explore the application. It is known as ET for short.
Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. At least unconsciously, testers perform exploratory testing at one time or another, yet it doesn’t get much respect in our field. It can be considered “scientific thinking” in real time.
What is Usability Testing in Software testing and quality?
This testing is also called ‘Testing for User-Friendliness’. It is done when the user interface of the application is an important consideration and needs to be tailored to a specific type of user.
Usability testing is the process of working with end-users directly and indirectly to assess how the user perceives a software package and how they interact with it. This process will uncover areas of difficulty for users as well as areas of strength.
The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability. This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long users take to complete the dialog, plus counters to determine how often certain conditions occur (e.g. error messages, help messages). Often this involves trivial modifications to existing software, but it can result in a tremendous return on investment.
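As an illustration of computer-supported feedback, here is a minimal Python sketch. The InstrumentedDialog class and its events are hypothetical stand-ins for a real dialog, with a timer and counters of the kind described above:

```python
# Sketch of "computer-supported feedback": time how long a user spends
# in a dialog and count how often error/help messages appear.
import time
from collections import Counter

class InstrumentedDialog:
    def __init__(self):
        self.events = Counter()
        self._opened_at = None

    def open(self):
        self._opened_at = time.perf_counter()   # timer on the dialog

    def show_error(self, message):
        self.events["error_message"] += 1       # counter for error conditions

    def show_help(self, topic):
        self.events["help_message"] += 1        # counter for help requests

    def close(self):
        elapsed = time.perf_counter() - self._opened_at
        print(f"dialog open for {elapsed:.1f}s, events: {dict(self.events)}")

dialog = InstrumentedDialog()
dialog.open()
dialog.show_error("invalid date")
dialog.show_help("date format")
dialog.close()
```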
Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting changes so that in the future, similar situations can be handled with ease.
What is Smoke Testing in Software testing and quality?
This type of testing is also called sanity testing, although there are some differences between smoke and sanity testing. It is done in order to check whether the application is ready for further major testing and works properly without failing, at least up to the minimum expected level. The name comes from hardware: a test of new or repaired equipment by turning it on. If it smokes… guess what… it doesn’t work! The term also refers to testing the basic functions of software, and was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine whether there were any leaks.
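A minimal smoke-test sketch using Python's standard unittest module might look like the following; start_app is a hypothetical stand-in for booting the application under test:

```python
# Minimal smoke-test sketch: "turn it on and see if it smokes" before
# committing to deeper testing. start_app is a hypothetical stand-in.
import unittest

def start_app():
    # hypothetical: boot the application under test
    return {"status": "up"}

class SmokeTest(unittest.TestCase):
    def test_application_starts(self):
        app = start_app()
        self.assertEqual(app["status"], "up")

    def test_core_screen_loads(self):
        # in a real suite this would hit the login page or home screen
        self.assertIsNotNone(start_app())

if __name__ == "__main__":
    unittest.main()
```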
What is Volume Testing in software testing and quality?
Volume testing checks the efficiency of the application: a huge amount of data is processed through the application under test in order to check the extreme limits of the system.
Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or systems performing database updates and/or data retrieval.
Volume testing will seek to verify the physical and logical limits to a system’s capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization’s business processing.
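A minimal volume-test sketch might look like this; process_batch and the projected capacity of one million records are hypothetical stand-ins for the system under test and the organization's projected business volume:

```python
# Volume-test sketch: push a large batch of records through a
# processing routine and check it survives the projected capacity.
def process_batch(records):
    # trivial placeholder for the real processing under test
    return sum(r["amount"] for r in records)

PROJECTED_CAPACITY = 1_000_000   # assumed business volume per run

records = [{"id": i, "amount": 1} for i in range(PROJECTED_CAPACITY)]
total = process_batch(records)
assert total == PROJECTED_CAPACITY, "system lost records under volume"
print(f"processed {PROJECTED_CAPACITY:,} records successfully")
```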
What is Domain Testing in Software testing and quality?
Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent; then you test a representative from each subset. This type of testing is also known as equivalence testing or boundary analysis.
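A minimal sketch of this idea: for a hypothetical input field accepting ages 18 to 60, we test one representative per equivalence class plus the boundary values, instead of every possible integer:

```python
# Domain-testing sketch: partition the input space and test one
# representative per subset plus the boundary values.
def accepts_age(age):
    return 18 <= age <= 60    # hypothetical validation rule under test

partitions = {
    "below range (invalid)": 10,
    "inside range (valid)": 35,
    "above range (invalid)": 70,
}
boundaries = [17, 18, 60, 61]   # classic boundary-value picks

for name, value in partitions.items():
    print(name, value, "->", accepts_age(value))
for value in boundaries:
    print("boundary", value, "->", accepts_age(value))
```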
What is Scenario Testing in software testing and quality?
Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program, and easy to evaluate for the tester. They provide meaningful combinations of functions and variables rather than the more artificial combinations you get with domain testing or combinatorial test design.
This test finds issues in our software against practical usage, with the end users creating the scenarios. Consider an example to get a better idea: suppose we have developed billing software for a shop. We have completed a lot of testing, there are no bugs in the code, and all features are working. While discussing the product with our customer, he describes a scenario: “I have entered and processed a bill for one order, and then my customer wants to change the quantity of material he purchased. I need to issue it as the same bill.” We try this scenario in our software and find that it cannot edit a generated bill, because there is no option for that, so we need to add that facility too. This is only a general example. In simple words, scenario testing is testing against practical situations, and those stories can be given by end customers.
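The billing scenario above could be expressed as an automated check roughly like this; the Bill class is a hypothetical stand-in for the shop's billing software:

```python
# Scenario-test sketch for the billing example: process a bill, then
# change the quantity on the same bill. Bill is a hypothetical stand-in.
class Bill:
    def __init__(self, bill_no, quantity, unit_price):
        self.bill_no = bill_no
        self.quantity = quantity
        self.unit_price = unit_price
        self.processed = False

    def process(self):
        self.processed = True

    def edit_quantity(self, new_quantity):
        # the scenario revealed the real software lacked this facility
        self.quantity = new_quantity

    def total(self):
        return self.quantity * self.unit_price

bill = Bill(bill_no=101, quantity=2, unit_price=50)
bill.process()
bill.edit_quantity(3)            # customer changes the order
assert bill.bill_no == 101       # still the same bill
assert bill.total() == 150
print("scenario passed: a processed bill can be edited in place")
```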
What is Regression Testing
Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.
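A minimal sketch of a traditional regression suite; discount is a hypothetical function that was just modified, and the expected values were recorded before the change:

```python
# Regression-test sketch: the same saved tests are re-run after every
# change to catch behavior that used to work and now breaks.
def discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# The regression suite: expected results recorded before the change.
REGRESSION_TESTS = [
    ((100, 10), 90.0),
    ((100, 0), 100.0),
    ((59.99, 25), 44.99),
]

for args, expected in REGRESSION_TESTS:
    actual = discount(*args)
    status = "PASS" if actual == expected else "FAIL"
    print(f"discount{args} = {actual} (expected {expected}) {status}")
```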
What is User Acceptance Testing
In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to. In software development, user acceptance testing (UAT) – also called beta testing, application testing, and end user testing – is a phase of software development in which the software is tested in the “real world” by the intended audience.
User Acceptance Testing can be done by in-house testing in which volunteers or paid test subjects use the software or, more typically for widely-distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers who make final changes before releasing the software commercially.
What is ALPHA TESTING in software testing and quality?
In this type of testing, the users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.
Alpha testing is done before beta testing and after acceptance testing. Mostly it is done in-house by members of the development and QA teams. In simple words, it is testing by the development team just before launching the live beta version of the software.
What is Beta Testing in software testing and quality?
In this type of testing, the software is distributed as a beta version to users, who test the application at their own sites. As the users explore the software, if any exception/defect occurs it is reported to the developers. Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company.
The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
Waterfall Model
The Waterfall Model is the most common method used in software development. It is said to be a waterfall method because progress flows steadily downwards from step to step, like a waterfall.
The main phases or steps in the waterfall method are:
Conception,
Initiation,
Analysis,
Design,
Construction,
Testing,
Production/Implementation,
Maintenance.
The waterfall method actually originated in the manufacturing and construction industries. Since no software methodologies existed at that time, it was carried over to software development and testing. The main highlight of this method is that one can go to the next step of development only after completing the ongoing step.
Also, developers can go back only one step, that is, to the immediately previous phase. In this method, each phase of the development activity is followed by verification and validation activities. The following are the steps involved in the waterfall method; you can move on to the next step only when you finish the present one. The phases or steps are:
Software requirement specification
System and software design
Implementation (coding and unit testing)
Integration
Testing and validation
Operation or installation
Maintenance
What is Black Box Testing
Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the “legal” inputs and what the expected outputs should be, but not how the program actually arrives at those outputs.
It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used: “Test groups are sometimes called professional idiots…people who are good at designing incorrect data.” [1] Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written. The opposite of this is glass box testing, where test data are derived from direct examination of the code to be tested. For glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product.
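A minimal black-box sketch: the tester works only from the specification of leap years (divisible by 4, except century years not divisible by 400) and the legal inputs and expected outputs, here using Python's calendar.isleap as the implementation under test:

```python
# Black-box sketch: test cases are derived from the specification
# alone, never from reading the code inside the function.
import calendar   # stands in for the implementation under test

spec_cases = [
    (2000, True),    # century year divisible by 400
    (1900, False),   # century year not divisible by 400
    (2024, True),    # ordinary leap year
    (2023, False),   # ordinary non-leap year
]

for year, expected in spec_cases:
    assert calendar.isleap(year) == expected, f"failed for {year}"
print("all specification-derived cases pass")
```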
Role of a tester in Defect Prevention
“What is the role of a tester in Defect Prevention and Defect Detection?” In this post we will discuss the role of a tester in these phases: how testers can prevent more defects in the Defect Prevention phase, and how testers can detect more bugs in the Defect Detection phase.
Role of a tester in defect prevention and defect detection.
Defect prevention – In defect prevention, developers play an important role. In this phase developers carry out activities like code reviews/static code analysis, unit testing, etc. Testers are also involved in defect prevention by reviewing specification documents. Studying a specification document is an art.
While studying specification documents, testers come across various queries, and many times those queries lead to the requirement document being changed/updated.
Developers often neglect primary ambiguities in specification documents in order to complete the project; or they fail to identify them when they see them. Those ambiguities are then built into the code and represent a bug when compared to the end-user's needs. This is how testers help in defect prevention.
2014 Winners For Software Testing and Quality
The Borland European Software Testing Award
Home Office Technology – Test Design & Consultancy Services
---------------------------------------------------------------------------------------
Lifetime Achievement Award
Bob Bartlett
---------------------------------------------------------------------------------------
The Cigniti Technologies Best Agile Project
EPAM Systems (winner)
- Mindfire Solutions
- Black Pepper Software
- AkBank
- Cognizant Technology Solutions
----------------------------------------------------------------------------------------
The Neotys Best Mobile Project
Waitrose in partnership with Cognizant Technology Solutions (winner)
- Centrica in partnership with Cognizant Technology Solutions
- Virgin Media in Partnership with Accenture
- Lloyds Banking Group in partnership with Cognizant Technology Solutions
- Proxama
------------------------------------------------------------------------------------------
Best Test Automation Project
TIBCO Jaspersoft (winner)
- Original Software
- Infuse IT powered by useMangoTM
- BD Medication Workflow Solutions (Becton Dickinson Austria GmbH)
- HCL Technologies Ltd
- Lloyds Banking Group in partnership with Cognizant Technologies Solutions
---------------------------------------------------------------------------------------------
The Sogeti Green Testing Team Of The Year
Tech Mahindra (winner)
- Sage UK
- Banking Testing team of Cognizant Technology Solutions
-----------------------------------------------------------------------------------------------
Graduate Tester Of The Year
Kieran Hunter, Cognizant Technology Solutions (winner)
- Paul Foy, Sogeti UK
- Karthik Kannan, Tata Consultancy Services
- Stacey Ballance, Wincor Nixdorf
- Prabhdeep Bhopal, Sopra
------------------------------------------------------------------------------------------------
Best Overall Testing Project – Finance Sector
Barclays (winner)
- Brickendon Consulting
- Xbosoft
- Credit Suisse in partnership with Cognizant Technology Solutions
- Infosys Limited
- IFDS - Oval Project
------------------------------------------------------------------------------------------------
Leading Vendor
Tata Consultancy Services (winner)
- Cognizant Technology Solutions with Credit Suisse
- Neotys
-------------------------------------------------------------------------------------------------
The Sage Most Innovative Project
Proxama (winner)
King (highly commended)
- Philips in partnership with Tech Mahindra
- Allianz Insurance and ACIS
- British Gas
--------------------------------------------------------------------------------------------------
The Maveric Systems Best Overall Project
Aditi Technologies (winner)
- Barclays
- Leading Global Reinsurer in partnership with Cognizant Technology Solutions
- British Gas
- HISCOX in partnership with Cognizant Technology Solutions
Why is Software Testing Necessary?
Software Testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but some of them are expensive or dangerous. We need to check everything and anything we produce because things can always go wrong – humans make mistakes all the time.
Since we assume that our work may have mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it, and thus may not notice the flaws in what we have done.
Ideally, we should get someone else to check our work because another person is more likely to spot the flaws.
There are several reasons which clearly tell us why Software Testing is important and what major things we should consider while testing any product or application.
Software testing is very important because of the following reasons:
Software testing is really required to point out the defects and errors that were made during the development phases.
It’s essential since it ensures the customer’s trust in and satisfaction with the application.
It is very important to ensure the quality of the product. A quality product delivered to the customers helps in gaining their confidence.
Testing is necessary in order to deliver a high-quality product or software application to the customers, one which requires lower maintenance cost and hence gives more accurate, consistent and reliable results.
Testing is required for effective performance of the software application or product.
It’s important to ensure that the application does not result in any failures, because failures can be very expensive in the future or in later stages of development.
It’s required to stay in business.
What are software testing objectives and purpose in software quality?
Software Testing has different goals and objectives. The major objectives of Software testing are as follows:
Finding defects which may get created by the programmer while developing the software.
Gaining confidence in and providing information about the level of quality.
To prevent defects.
To make sure that the end result meets the business and user requirements.
To ensure that it satisfies the BRS that is Business Requirement Specification and SRS that is System Requirement Specifications.
To gain the confidence of the customers by providing them a quality product.
Software testing helps in finalizing the software application or product against the business and user requirements. It is very important to have good test coverage in order to test the application completely and make sure that it performs well and as per the specifications.
While determining coverage, the test cases should be designed well, with maximum possibility of finding errors or bugs. The test cases should be very effective. This objective can be measured by the number of defects reported per test case: the higher the number of defects reported, the more effective the test cases are.
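The measure described above is easy to compute; the figures below are made up for illustration:

```python
# Sketch of the effectiveness measure: defects reported per test case
# executed. The counts are illustrative, not real project data.
defects_reported = 45
test_cases_executed = 300

effectiveness = defects_reported / test_cases_executed
print(f"defect detection rate: {effectiveness:.2f} defects per test case")
```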
Once delivery is made to the end users or customers, they should be able to operate it without any complaints. In order to make this happen, the tester should know how the customers are going to use the product and accordingly write test scenarios and design test cases. This will help a lot in fulfilling all the customer’s requirements.
Software testing makes sure that testing is done properly and hence the system is ready for use. Good coverage means that testing covers various areas: functionality of the application; compatibility with the OS, hardware and different types of browsers; performance testing to test the performance of the application; and load testing to make sure the system is reliable, does not crash, and has no blocking issues. It also determines that the application can be deployed easily to the machine without any resistance, so the application is easy to install, learn and use.
What is a Failure in software testing?
If, under a certain environment and situation, defects in the application or product get executed, then the system will produce wrong results, causing a failure.
Not all defects result in failures, some may stay inactive in the code and we may never notice them. Example: Defects in dead code will never result in failures.
It is not just defects that give rise to failures. Failures can also be caused by other reasons, such as:
Environmental conditions: a radiation burst, a strong magnetic or electronic field, or pollution could cause faults in hardware or firmware. Those faults might prevent or change the execution of software.
Failures may also arise because of human error in interacting with the software, perhaps a wrong input value being entered or an output being misinterpreted.
Finally, failures may also be caused by someone deliberately trying to cause a failure in the system.
Difference between Error, Defect and Failure in software testing:
Error: A mistake made by a programmer is known as an ‘Error’. This could happen because of the following reasons:
- Because of some confusion in understanding the functionality of the software
- Because of some miscalculation of the values
- Because of misinterpretation of any value, etc.
Defect: A bug introduced by a programmer inside the code is known as a defect. This can happen because of programming mistakes.
Failure: If, under certain circumstances, these defects get executed by the tester during testing, then the result is a failure, known as a software failure.
Few points that are important to know:
When a tester is executing a test, he/she may observe some difference in the behavior of the feature or functionality, but this may not be because of a failure. It may happen because wrong test data was entered, because the tester is not aware of the feature or functionality, or because of a bad environment. Because of these reasons, incidents are reported; such a report is known as an incident report. A condition or situation which requires further analysis or clarification is known as an incident. To deal with incidents, the programmer needs to analyze whether the incident occurred because of a failure or not.
It is not only through the software that defects or bugs are introduced into the product. To understand this further, let’s take an example: a bug or defect can also be introduced by a business analyst. Defects present in specifications, like the requirements specification and design specifications, can be detected during reviews. A defect or bug caught during a review cannot result in a failure, because the software has not yet been executed.
These defects or bugs are reported not to blame the developers or any people, but to judge the quality of the product. The quality of the product is of utmost importance. To gain the confidence of the customers, it’s very important to deliver a quality product on time.
From where do defects and failures in software testing arise?
Defects and failures basically arise from:
Errors in the specification, design and implementation of the software and system
Errors in use of the system
Environmental conditions
Intentional damage
Potential consequences of earlier errors
Errors in the specification and design of the software:
A specification is basically a written document which describes the functional and non-functional aspects of the software using prose and pictures. There is no need to have code in order to test specifications; without code we can test the specifications. About 55% of all the bugs present in a product are due to mistakes present in the specification. Hence testing the specifications can save a lot of time and cost in the future, or in later stages of the product.
Errors in use of the system:
Errors in use of the system or product or application may arise because of the following reasons:
- Inadequate knowledge of the product or software on the tester’s side. The tester may not be aware of the functionalities of the product, and hence defects or failures may arise while testing it.
- Lack of understanding of the functionalities by the developer. It may also happen that developers have not understood the functionalities of the product or application properly, so the feature they develop based on their understanding may not match the specifications. This may result in a defect or failure.
Environmental conditions:
Because of a wrong setup of the testing environment, testers may report defects or failures. As per recent surveys, it has been observed that about 40% of testers’ time is consumed by environment issues, which has a great impact on quality and productivity. Hence proper test environments are required for quality and on-time delivery of the product to the customers.
Intentional damage:
The defects and failures reported by testers while testing the product or application may arise because of intentional damage.
Potential consequences of earlier errors:
Errors found in the earlier stages of development reduce our cost of production; hence it’s very important to find errors at an early stage. This can be done by reviewing the specification documents or by walkthroughs. The downward flow of a defect increases the cost of production.
When will defects in software testing arise?
Software defects arise because of the following reasons:
- The person using the software application or product may not have enough knowledge of the product.
- Maybe the software is used in the wrong way which leads to the defects or failures.
- The developers may have coded incorrectly and there can be defects present in the design.
- Incorrect setup of the testing environments.
To know when defects in software testing arise, let us take a small example with a diagram as given below.
We can see that Requirement 1 is implemented correctly – we understood the customer’s requirement, designed correctly to meet that requirement, built correctly to meet the design, and so deliver that requirement with the right attributes: functionally, it does what it is supposed to do and it also has the right non-functional attributes, so it is fast enough, easy to understand and so on.
[Diagram: types of errors and defects, and when defects arise]
With the other requirements, errors have been made at different stages. Requirement 2 is fine until the software is coded, when we make some mistakes and introduce defects. Probably, these are easily spotted and corrected during testing, because we can see the product does not meet its design specification.
The defects introduced in Requirement 3 are harder to deal with; we built exactly what we were told to but unfortunately the designer made some mistakes so there are defects in the design. Unless we check against the requirements definition, we will not spot those defects during testing. When we do notice them they will be hard to fix because design changes will be required.
The defects in Requirement 4 were introduced during the definition of the requirements; the product has been designed and built to meet that flawed requirements definition. If we test that the product meets its requirements and design, it will pass its tests but may be rejected by the user or customer. Defects reported by the customer in acceptance testing or live use can be very costly. Unfortunately, requirements and design defects are not rare; assessments of thousands of projects have shown that defects introduced during requirements and design make up close to half of the total number of defects.
What is the cost of defects in software testing and quality?
If the error is made and the consequent defect is detected in the requirements phase then it is relatively cheap to fix it.
Similarly if an error is made and the consequent defect is found in the design phase then the design can be corrected and reissued with relatively little expense.
The same applies to the construction phase. If, however, a defect is introduced in the requirement specification and it is not detected until acceptance testing, or even once the system has been implemented, then it will be much more expensive to fix. This is because rework will be needed in the specification and design before changes can be made in construction; because one defect in the requirements may well propagate into several places in the design and code; and because all the testing work done up to that point will need to be repeated in order to reach the confidence level in the software that we require.
It is quite often the case that defects detected at a very late stage, depending on how serious they are, are not corrected because the cost of doing so is too expensive.
The cost of defects can be measured by the impact of the defects and by when we find them. The earlier a defect is found, the lower its cost. For example, if an error is found in the requirement specifications, it is fairly cheap to fix: the requirement specification can be corrected and re-issued. In the same way, when a defect or error is found in the design, the design can be corrected and re-issued. But if the error is not caught in the specifications and is not found until user acceptance, then the cost to fix those errors or defects will be far higher.
What is the difference between Severity and Priority in software testing and quality?
There are two key attributes of defects in software testing. They are:
1) Severity
2) Priority
What is the difference between Severity and Priority?
1) Severity:
It is the extent to which the defect can affect the software; in other words, it defines the impact that a given defect has on the system. For example: if an application or web page crashes when a remote link is clicked, then clicking the remote link is rare for a user, but the impact of the application crashing is severe. So the severity is high but the priority is low.
Severity can be of following types:
Critical: The defect that results in the termination of the complete system or one or more component of the system and causes extensive corruption of the data. The failed function is unusable and there is no acceptable alternative method to achieve the required results then the severity will be stated as critical.
Major: The defect that results in the termination of the complete system or one or more component of the system and causes extensive corruption of the data. The failed function is unusable but there exists an acceptable alternative method to achieve the required results then the severity will be stated as major.
Moderate: The defect that does not result in the termination, but causes the system to produce incorrect, incomplete or inconsistent results then the severity will be stated as moderate.
Minor: The defect that does not result in the termination and does not damage the usability of the system and the desired results can be easily obtained by working around the defects then the severity is stated as minor.
Cosmetic: If the defect is related to the enhancement of the system, where the changes are related to the look and feel of the application, then the severity is stated as cosmetic.
2) Priority:
Priority defines the order in which we should resolve a defect. Should we fix it now, or can it wait? This priority status is set by the tester to the developer mentioning the time frame to fix the defect. If high priority is mentioned then the developer has to fix it at the earliest. The priority status is set based on the customer requirements. For example: If the company name is misspelled in the home page of the website, then the priority is high and severity is low to fix it.
Priority can be of following types:
Low: The defect is an irritant which should be repaired, but repair can be deferred until after more serious defects have been fixed.
Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
High: The defect must be resolved as soon as possible because the defect is affecting the application or the product severely. The system cannot be used until the repair has been done.
A few very important scenarios related to severity and priority, which are often asked about during interviews:
High Priority & High Severity: An error which occurs in the basic functionality of the application and does not allow the user to use the system. (E.g. in a site maintaining student details, if saving a record fails, this is a high priority and high severity bug.)
High Priority & Low Severity: A spelling mistake on the cover page, heading or title of an application.
High Severity & Low Priority: An error which occurs in functionality of the application (for which there is no workaround) and does not allow the user to use the system, but which is triggered by a link rarely clicked by the end user.
Low Priority and Low Severity: Any cosmetic or spelling issue within a paragraph or report (not on the cover page, heading or title).
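The scenarios above suggest modeling severity and priority as independent attributes of a defect. A minimal Python sketch, with illustrative names and values rather than any standard scale:

```python
# Sketch of severity and priority as separate attributes of a defect,
# matching the examples above: the crash is severe but low priority,
# the misspelled name is cosmetic but high priority.
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    COSMETIC = 1
    MINOR = 2
    MODERATE = 3
    MAJOR = 4
    CRITICAL = 5

class Priority(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Defect:
    title: str
    severity: Severity
    priority: Priority

backlog = [
    Defect("App crashes on a rarely used link", Severity.CRITICAL, Priority.LOW),
    Defect("Company name misspelled on home page", Severity.COSMETIC, Priority.HIGH),
]

# The fix order is driven by priority, not severity.
for d in sorted(backlog, key=lambda d: d.priority, reverse=True):
    print(f"[{d.priority.name}/{d.severity.name}] {d.title}")
```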
What are the principles of software testing?
There are seven principles of testing. They are as follows:
1) Testing shows presence of defects: Testing can show the defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly we cannot say that the product is 100% defect free. Testing always reduces the number of undiscovered defects remaining in the software but even if no defects are found, it is not a proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs and preconditions, is not possible. So, instead of doing exhaustive testing, we can use risks and priorities to focus testing efforts. For example: if one screen of an application has 15 input fields, each having 5 possible values, then testing all the valid combinations would need 30,517,578,125 (5^15) tests, a figure checked in the short sketch after this list. It is very unlikely that the project timescales would allow for this number of tests. So, assessing and managing risk is one of the most important activities and a key reason for testing in any project.
3) Early testing: In the software development life cycle testing activities should start as early as possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules contains most of the defects discovered during pre-release testing or shows the most operational failures.
5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer be able to find any new bugs. To overcome this “Pesticide Paradox”, it is really very important to review the test cases regularly and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
6) Testing is context dependent: Testing is basically context dependent; different kinds of software are tested differently. For example, safety-critical software is tested differently from an e-commerce site.
7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user’s needs and expectations, then finding and fixing defects does not help.
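As promised in principle 2, a quick check of the 5^15 arithmetic:

```python
# 15 input fields with 5 possible values each: every combination.
combinations = 5 ** 15
print(combinations)   # 30517578125, matching the figure quoted above
```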
Sunday, 19 April 2015
What is the fundamental test process in manual software testing?
Testing is a process rather than a single activity. This process starts from test planning then designing test cases, preparing for execution and evaluating status till the test closure. So, we can divide the activities within the fundamental test process into the following basic steps:
1) Planning and Control
2) Analysis and Design
3) Implementation and Execution
4) Evaluating exit criteria and Reporting
5) Test Closure activities
1) Planning and Control:
Test planning has the following major tasks:
i. To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy. (Test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform PM, testers and developers about some key issues of the testing process. This includes the testing objectives, method of testing, total time and resources required for the project and the testing environments.).
iv. To determine the required test resources like people, test environments, PCs, etc.
v. To schedule test analysis and design tasks, test implementation, execution and evaluation.
vi. To determine the Exit criteria we need to set criteria such as Coverage criteria. (Coverage criteria are the percentage of statements in the software that must be executed during testing. This will help us track whether we are completing test activities correctly. They will show us which tasks and checks we must complete for a particular level of testing before we can say that testing is finished.)
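A minimal sketch of such a coverage-based exit criterion; the statement counts and the 85% threshold are illustrative:

```python
# Sketch of a coverage exit criterion: compare the percentage of
# statements executed during testing against a planned threshold.
statements_total = 1200
statements_executed = 1080
required_coverage = 85.0     # exit criterion set during test planning

coverage = 100.0 * statements_executed / statements_total
print(f"statement coverage: {coverage:.1f}% (required {required_coverage}%)")
print("exit criterion met" if coverage >= required_coverage else "keep testing")
```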
Test control has the following major tasks:
i. To measure and analyze the results of reviews and testing.
ii. To monitor and document progress, test coverage and exit criteria.
iii. To provide information on testing.
iv. To initiate corrective actions.
v. To make decisions.
2) Analysis and Design:
Test analysis and Test Design has the following major tasks:
i. To review the test basis. (The test basis is the information we need in order to start the test analysis and create our own test cases. Basically it’s a documentation on which test cases are based, such as requirements, design specifications, product risk analysis, architecture and interfaces. We can use the test basis documents to understand what the system should do once built.)
ii. To identify test conditions.
iii. To design the tests.
iv. To evaluate testability of the requirements and system.
v. To design the test environment set-up and identify any required infrastructure and tools.
3) Implementation and Execution:
During test implementation and execution, we turn the test conditions into test cases and procedures and other testware, such as scripts for automation, the test environment and any other test infrastructure. (A test case is a set of conditions under which a tester determines whether an application is working correctly or not.)
(Testware is a term for all the utilities that serve in combination for testing software, like scripts, the test environment and any other test infrastructure, kept for later reuse.)
Test implementation has the following major tasks:
i. To develop and prioritize our test cases by using techniques and create test data for those tests. (In order to test a software application you need to enter some data for testing most of the features. Any such specifically identified data which is used in tests is known as test data.)
We also write some instructions for carrying out the tests which is known as test procedures.
We may also need to automate some tests using test harness and automated tests scripts. (A test harness is a collection of software and test data for testing a program unit by running it under different conditions and monitoring its behavior and outputs.)
ii. To create test suites from the test cases for efficient test execution.
(A test suite is a collection of test cases that are used to test a software program to show that it has some specified set of behaviours. A test suite often contains, for each collection of test cases, detailed instructions and information on the system configuration to be used during testing. Test suites are used to group similar test cases together.)
iii. To implement and verify the environment.
Test execution has the following major tasks:
i. To execute test suites and individual test cases following the test procedures.
ii. To re-execute the tests that previously failed in order to confirm a fix. This is known as confirmation testing or re-testing.
iii. To log the outcome of test execution and record the identities and versions of the software under test. The test log is used for the audit trail. (A test log is nothing but a record of which test cases were executed, in what order, who executed them, and the status of each test case (pass/fail). These descriptions are documented and called a test log.)
iv. To compare actual results with expected results.
v. Where there are differences between actual and expected results, to report discrepancies as incidents.
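A minimal sketch of grouping test cases into a suite, running it and logging the outcome, using Python's standard unittest module; the login checks are placeholders:

```python
# Sketch of test execution: group test cases into a suite, run it,
# and keep a simple log of outcomes for the audit trail.
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_password_accepted(self):
        self.assertTrue(len("secret") >= 6)     # placeholder check

    def test_empty_password_rejected(self):
        self.assertFalse(len("") >= 6)

suite = unittest.TestSuite()
suite.addTest(LoginTests("test_valid_password_accepted"))
suite.addTest(LoginTests("test_empty_password_rejected"))

result = unittest.TestResult()
suite.run(result)
print(f"ran {result.testsRun} tests, failures {len(result.failures)}")
```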
4) Evaluating Exit criteria and Reporting:
Based on the risk assessment of the project, we set criteria for each test level against which we measure whether “enough testing” has been done. These criteria vary from project to project and are known as exit criteria.
Exit criteria come into the picture when:
– Most test cases have been executed, with a certain pass percentage.
– The bug rate falls below a certain level.
– The deadlines have been reached.
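A minimal sketch of evaluating such exit criteria against a test log; the thresholds and counts are illustrative:

```python
# Sketch of an exit-criteria check: execution rate and the rate of
# newly found bugs, compared against planned thresholds.
executed, total_cases = 480, 500
new_bugs_per_week = 3

execution_rate = 100.0 * executed / total_cases
criteria_met = execution_rate >= 95.0 and new_bugs_per_week <= 5
print(f"execution rate {execution_rate:.0f}%, new bugs/week {new_bugs_per_week}")
print("exit criteria met" if criteria_met else "continue testing")
```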
Evaluating exit criteria has the following major tasks:
i. To check the test logs against the exit criteria specified in test planning.
ii. To assess if more tests are needed or if the exit criteria specified should be changed.
iii. To write a test summary report for stakeholders.
5) Test Closure activities:
Test closure activities are done when the software is delivered. Testing can also be closed for other reasons, like:
When all the information needed for testing has been gathered.
When a project is cancelled.
When some target is achieved.
When a maintenance release or update is done.
Test closure activities have the following major tasks:
i. To check which planned deliverables are actually delivered and to ensure that all incident reports have been resolved.
ii. To finalize and archive testware such as scripts, test environments, etc. for later reuse.
iii. To hand over the testware to the maintenance organization, which will support the software.
iv. To evaluate how the testing went and learn lessons for future releases and projects.
What is the Psychology of testing? The balance between self-testing and independent testing.
In this section we will discuss:
The comparison of the mindset of the tester and the developer.
The balance between self-testing and independent testing.
The need for clear and courteous communication and feedback on defects between tester and developer.
Comparison of the mindset of the tester and developer:
The testing and reviewing of the applications are different from the analysing and developing of it. By this we mean to say that if we are building or developing applications we are working positively to solve the problems during the development process and to make the product according to the user specification. However while testing or reviewing a product we are looking for the defects or failures in the product. Thus building the software requires a different mindset from testing the software.
The balance between self-testing and independent testing:
The comparison made of the mindset of the tester and the developer in the above article is just to compare two different perspectives. It does not mean that the tester cannot be the programmer, or that the programmer cannot be the tester, although they often are separate roles. In fact, programmers are testers too: they always test the components they build. While testing their own code they find many problems, so programmers, architects and developers always test their own code before giving it to anyone. However, we all know that it is difficult to find our own mistakes. So programmers, architects and business analysts depend on others to help test their work. This other person might be another developer from the same team, or a testing specialist or professional tester. Giving applications to testing specialists or professional testers allows an independent test of the system.
This degree of independence avoids author bias and is often more effective at finding defects and failures.
There are several levels of independence in software testing, listed here from the lowest level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, like another programmer.
iii. Tests by a person from a different group, such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.
Clear and courteous communication and feedback on defects between tester and developer:
We all make mistakes, and we sometimes get annoyed, upset or depressed when someone points them out. So when, as testers, we run a test that is good from our viewpoint because it finds defects and failures in the software, we need to be very careful about how we react to and report those defects and failures to the programmers. We are pleased because we found a good bug, but how will the requirement analyst, the designer, the developer, the project manager and the customer react?
The people who build the application may react defensively and take this reported defect as personal criticism.
The project manager may be annoyed with everyone for holding up the project.
The customer may lose confidence in the product because he can see defects.
Because testing can be seen as a destructive activity, we need to take care to report our defects and failures as objectively and politely as possible.
What is independent testing? Its benefits and risks
The degree of independence avoids author bias and is often more effective at finding defects and failures.
There are several levels of independence, listed here from the lowest level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, like another programmer.
iii. Tests by a person from a different group, such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.
When we think about how independent the test team is, it is really very important to understand that independence is not an either/or condition, but a range:
At one end of the range lies the absence of independence, where the programmer performs testing within the programming team.
Moving toward independence, we find an integrated tester or group of testers working alongside the programmers, but still within and reporting to the development manager.
Then moving little bit more towards independence we might find a team of testers who are independent and outside the development team, but reporting to project management.
Near the other end of the continuum lies complete independence. We might see a separate test team reporting into the organization at a point equal to the development or project team. We might find specialists in the business domain (such as users of the system), specialists in technology (such as database experts), and specialists in testing (such as security testers, certification testers, or test automation experts) in a separate test team, as part of a larger independent test team, or as part of a contract, outsourced test team.
Benefits of independent testing:
An independent tester can repeatedly find more, other, and different defects than a tester working within a programming team, or a tester who is by profession a programmer.
While business analysts, marketing staff, designers, and programmers bring their own assumptions to the specification and implementation of the item under test, an independent tester brings a different set of assumptions to testing and to reviews, which often helps in exposing hidden defects and problems.
An independent tester who reports to senior management can report his results honestly and without any concern for reprisal that might result from pointing out problems in coworkers’ or, worse yet, the manager’s work.
An independent test team often has a separate budget, which helps ensure the proper level of money is spent on tester training, testing tools, test equipment, etc.
In addition, in some organizations, testers in an independent test team may find it easier to have a career path that leads up into more senior roles in testing.
Risks of independence and integrated testing:
There is a possibility that the testers and the test team can get isolated. This can take the form of interpersonal isolation from the programmers, the designers, and the project team itself, or it can take the form of isolation from the broader view of quality and the business objectives (e.g., obsessive focus on defects, often accompanied by a refusal to accept business prioritization of defects).
This can lead to communication problems, feelings of unfriendliness and hostility, a lack of identification with and support for the project goals, spontaneous blame festivals and political backstabbing.
Even well-integrated test teams can suffer problems. Other project stakeholders might come to see the independent test team – rightly or wrongly – as a bottleneck and a source of delay. Some programmers give up their responsibility for quality, saying, ‘Well, we have this test team now, so why do I need to unit test my code?’
What is Capability Maturity Model (CMM)? What are CMM Levels? in software testing
Capability Maturity Model is a bench-mark for measuring the maturity of an organization’s software process. It is a methodology used to develop and refine an organization’s software development process. CMM can be used to assess an organization against a scale of five process maturity levels based on certain Key Process Areas (KPA). It describes the maturity of the company based upon the project the company is dealing with and the clients. Each level ranks the organization according to its standardization of processes in the subject area being assessed.
A maturity model provides:
A place to start
The benefit of a community’s prior experiences
A common language and a shared vision
A framework for prioritizing actions
A way to define what improvement means for your organization
In CMMI models with a staged representation, there are five maturity levels designated by the numbers 1 through 5 as shown below:
Initial
Managed
Defined
Quantitatively Managed
Optimizing
[Diagram: CMM levels and the characteristics of each maturity level]
Maturity levels consist of a predefined set of process areas. The maturity levels are measured by the achievement of the specific and generic goals that apply to each predefined set of process areas. The following sections describe the characteristics of each maturity level in detail.
Maturity Level 1 – Initial: Company has no standard process for software development. Nor does it have a project-tracking system that enables developers to predict costs or finish dates with any accuracy.
In detail we can describe it as given below:
At maturity level 1, processes are usually ad hoc and chaotic.
The organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work, but the company has no standard process for software development, nor a project-tracking system that would let developers predict costs or finish dates with any accuracy.
Maturity level 1 organizations are characterized by a tendency to overcommit, to abandon processes in a time of crisis, and to be unable to repeat their past successes.
Maturity Level 2 – Managed: Company has installed basic software management processes and controls. But there is no consistency or coordination among different groups.
In detail we can describe it as given below:
At maturity level 2, an organization has achieved all the specific and generic goals of the maturity level 2 process areas. In other words, the projects of the organization have ensured that requirements are managed and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The status of the work products and the delivery of services are visible to management at defined points.
Commitments are established among relevant stakeholders and are revised as needed. Work products are reviewed with stakeholders and are controlled.
The work products and services satisfy their specified requirements, standards, and objectives.
Maturity Level 3 – Defined: Company has pulled together a standard set of processes and controls for the entire organization so that developers can move between projects more easily and customers can begin to get consistency from different groups.
In detail we can describe it as given below:
At maturity level 3, an organization has achieved all the specific and generic goals.
At maturity level 3, processes are well characterized and understood, and are described in standards, procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit (a short sketch after this list illustrates the tailoring idea).
The organization’s set of standard processes includes the processes addressed at maturity level 2 and maturity level 3. As a result, the processes that are performed across the organization are consistent except for the differences allowed by the tailoring guidelines.
Another critical distinction is that at maturity level 3, processes are typically described in more detail and more rigorously than at maturity level 2.
At maturity level 3, processes are managed more proactively using an understanding of the interrelationships of the process activities and detailed measures of the process, its work products, and its services.
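As a rough illustration of the tailoring idea mentioned above, here is a minimal Python sketch with entirely hypothetical process names and values: each project starts from the organization’s standard process, and only the overrides permitted by the tailoring guidelines are applied.

```python
# Minimal sketch of level-3 tailoring; all names and values are hypothetical.
ORG_STANDARD_PROCESS = {
    "peer_review": "required",
    "unit_test_coverage": 0.80,
    "release_cadence_weeks": 4,
}

def tailor(standard, project_overrides, allowed):
    """Apply only the overrides that the tailoring guidelines permit."""
    tailored = dict(standard)
    for key, value in project_overrides.items():
        if key in allowed:        # tailoring guideline check
            tailored[key] = value
        # disallowed overrides are ignored, keeping processes consistent
    return tailored

# A web project may ship more often, but it cannot waive peer review.
project_process = tailor(
    ORG_STANDARD_PROCESS,
    {"release_cadence_weeks": 2, "peer_review": "optional"},
    allowed={"release_cadence_weeks"},
)
print(project_process)   # peer_review stays "required"; cadence becomes 2
```

The design point, in line with the text above, is that projects differ only where the guidelines allow, so processes stay consistent across the organization.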
Maturity Level 4 – Quantitatively Managed: In addition to implementing standard processes, company has installed systems to measure the quality of those processes across all projects.
In detail we can describe it as given below:
At maturity level 4, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
At maturity level 4, sub-processes are selected that contribute significantly to overall process performance. These selected sub-processes are controlled using statistical and other quantitative techniques.
Quantitative objectives for quality and process performance are established and used as criteria in managing processes. Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers. Quality and process performance are understood in statistical terms and are managed throughout the life of the processes.
For these processes, detailed measures of process performance are collected and statistically analyzed. Special causes of process variation are identified and, where appropriate, the sources of special causes are corrected to prevent future occurrences.
Quality and process performance measures are incorporated into the organization’s measurement repository to support fact-based decision making in the future.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.
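As a rough illustration of the statistical techniques level 4 relies on, the following minimal Python sketch (with invented defect counts) computes Shewhart-style control limits from a historical baseline and flags a new build that falls outside them as a possible special cause of variation.

```python
# Minimal sketch of a control chart; all defect counts are invented.
import statistics

baseline = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9]   # historical, stable builds
recent = [11, 27, 10]                               # new builds to monitor

mean = statistics.mean(baseline)                    # 10.5
sigma = statistics.stdev(baseline)                  # about 1.58
upper = mean + 3 * sigma                            # control limits come from
lower = mean - 3 * sigma                            # the in-control baseline

for build, count in enumerate(recent, start=1):
    if not (lower <= count <= upper):
        # A point outside the limits signals a special cause to investigate.
        print(f"recent build {build}: {count} defects, possible special cause")
```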
Maturity Level 5 – Optimizing: Company has accomplished all of the above and can now begin to see patterns in performance over time, so it can tweak its processes in order to improve productivity and reduce defects in software development across the entire organization.
In detail we can describe it as given below:
At maturity level 5, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.
Processes are continually improved based on a quantitative understanding of the common causes of variation inherent in processes.
Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements.
Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement.
The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization’s set of standard processes are targets of measurable improvement activities.
Optimizing processes that are agile and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization.
The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. Improvement of the processes is inherently part of everybody’s role, resulting in a cycle of continual improvement.
A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical predictability) to achieve the established quantitative process-improvement objectives.
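The level 4 versus level 5 distinction can also be pictured with a small sketch. In the hedged Python example below, where every number is invented, a process that is stable but misses a quantitative objective is contrasted with one whose mean has been shifted by a process improvement.

```python
# Minimal sketch of the distinction above; every number here is invented.
import statistics

objective = 8.0                          # quantitative improvement objective
before = [12, 11, 13, 10, 12, 11]        # stable (predictable) but too high
after = [7, 8, 6, 9, 7, 8]               # process changed to shift the mean

for label, sample in (("before", before), ("after", after)):
    mean = statistics.mean(sample)
    if mean <= objective:
        print(f"{label}: mean={mean:.1f}, meets the objective")
    else:
        print(f"{label}: mean={mean:.1f}, predictable but insufficient")
```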
Agile model – advantages, disadvantages and when to use it?
Agile development model is also a type of incremental model. Software is developed in incremental, rapid cycles. This results in small incremental releases, with each release building on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. It is used for time-critical applications. Extreme Programming (XP) is currently one of the best-known agile development life cycle models.
Diagram of Agile model:
Agile model in Software testing
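One concrete way to picture "each release building on previous functionality" while being thoroughly tested is a regression suite that grows with every increment and is re-run in full on each release. The sketch below uses Python’s unittest; the login and reset_password functions are hypothetical stand-ins for real features.

```python
# Minimal sketch; login and reset_password are hypothetical feature stubs.
import unittest

def login(user, password):               # delivered in increment 1
    return user == "alice" and password == "secret"

def reset_password(user):                # increment 2 builds on increment 1
    return f"reset link sent to {user}"

class RegressionSuite(unittest.TestCase):
    # Tests from increment 1 stay in the suite for every later release.
    def test_login(self):
        self.assertTrue(login("alice", "secret"))
        self.assertFalse(login("alice", "wrong"))

    # Increment 2 adds tests without removing the earlier ones.
    def test_reset_password(self):
        self.assertIn("alice", reset_password("alice"))

if __name__ == "__main__":
    unittest.main()                      # the full suite runs on every release
```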
Advantages of Agile model:
Customer satisfaction by rapid, continuous delivery of useful software.
People and interactions are emphasized rather than process and tools. Customers, developers and testers constantly interact with each other.
Working software is delivered frequently (weeks rather than months).
Face-to-face conversation is the best form of communication.
Close, daily cooperation between business people and developers.
Continuous attention to technical excellence and good design.
Regular adaptation to changing circumstances.
Even late changes in requirements are welcomed.
Disadvantages of Agile model:
In case of some software deliverables, especially the large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
There is lack of emphasis on necessary designing and documentation.
The project can easily get taken off track if the customer representative is not clear about the final outcome they want.
Only senior programmers are capable of taking the kind of decisions required during the development process. Hence it has no place for newbie programmers unless they are combined with experienced resources.
When to use Agile model:
When new changes need to be implemented. The freedom agile gives to change is very important. New changes can be implemented at very little cost because of the frequency of new increments that are produced.
To implement a new feature, the developers need to lose only a few days’ work, or even only hours, to roll back and implement it.
Unlike the waterfall model, in the agile model very limited planning is required to get started with the project. Agile assumes that end users’ needs are ever changing in a dynamic business and IT world. Changes can be discussed and features can be added or removed based on feedback. This effectively gives the customer the finished system they want or need.
System developers and stakeholders alike find they also get more freedom of time and options than if the software were developed in a more rigid, sequential way. Having options gives them the ability to leave important decisions until more or better data, or even entire hosting programs, are available, meaning the project can continue to move forward without fear of reaching a sudden standstill.
Spiral model- advantages, disadvantages and when to use it in software industry
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Planning Phase: Requirements are gathered during the planning phase, such as the ‘BRS’ (Business Requirement Specification) and the ‘SRS’ (System Requirement Specification).
Risk Analysis: In the risk analysis phase, a process is undertaken to identify risk and alternate solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found during the risk analysis then alternate solutions are suggested and implemented.
Engineering Phase: In this phase software is developed, along with testing at the end of the phase. Hence in this phase the development and testing is done.
Evaluation phase: This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
Diagram of Spiral model:
Spiral model
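The repetition of the four phases can be sketched as a simple loop. The following toy Python model is illustrative only: the risk threshold, the acceptance rule, and all of the function bodies are invented stand-ins for the real activities in each phase.

```python
# Toy sketch only: risk scores, thresholds, and the acceptance rule are invented.
import random

def plan(spiral):
    return {"spiral": spiral}            # Planning: gather BRS/SRS requirements

def assess_risk(requirements):
    return random.random()               # Risk Analysis: stand-in risk score

def choose_alternative(requirements):
    requirements["alternative"] = True   # adopt an alternate solution
    return requirements

def build_and_test(requirements):
    pass                                 # Engineering: develop, then test

def customer_accepts(spiral):
    return spiral >= 3                   # Evaluation: toy acceptance rule

def run_spirals(max_spirals=5):
    for spiral in range(1, max_spirals + 1):
        requirements = plan(spiral)                  # Planning phase
        if assess_risk(requirements) > 0.7:          # Risk Analysis phase
            requirements = choose_alternative(requirements)
        build_and_test(requirements)                 # Engineering phase
        if customer_accepts(spiral):                 # Evaluation phase
            print(f"accepted after spiral {spiral}")
            return
    print("stopped without acceptance")

run_spirals()
```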
Advantages of Spiral model:
High amount of risk analysis; hence, avoidance of risk is enhanced.
Good for large and mission-critical projects.
Strong approval and documentation control.
Additional Functionality can be added at a later date.
Software is produced early in the software life cycle.
Disadvantages of Spiral model:
Can be a costly model to use.
Risk analysis requires highly specific expertise.
Project’s success is highly dependent on the risk analysis phase.
Doesn’t work well for smaller projects.
When to use Spiral model:
When cost and risk evaluation is important
For medium to high-risk projects
When long-term project commitment is unwise because of potential changes to economic priorities
Users are unsure of their needs
Requirements are complex
New product line
Significant changes are expected (research and exploration)