Overview of Software Testing Process
The quality assurance and software testing process is as essential to software development as writing the code itself. Too often, however, the software QA process is an afterthought, carried out at the last minute, just before the planned release of the project.
There are many schools of thought on quality assurance and software testing best practices. Planning your QA process in advance is critical, because the cost of improper testing, in rework, delays, and production defects, can far exceed the cost of testing well.
You know it’s essential, so let’s dive right into our recipe for success. Because real quality comes from quality-minded people, we take a different angle on QA, one that departs from traditional practice.
These best QA practices will change the way you integrate testing into your development process. Let’s take a look at the agile methodology to help you improve speed, scale, and coverage in the software testing process.
Software Testing Best Practices
Adopt the ‘Test Early, Test Often’ Principle in Software Testing
We have all heard the phrase “test early and test often” as it applies to QA in software development. To achieve optimal results, software quality assurance activities should be planned from the very start of development. By building in time for testing throughout the entire development process, you can set a realistic time frame and deadline without sacrificing the quality of the product.
Typically, testing begins once the coding phase is complete. For excellent quality in an agile methodology, however, you need to shift your focus and start monitoring and testing right from the start of the project. Detecting bugs earlier not only saves time and money, but also helps maintain a good relationship with the development team, accelerates delivery of the product, and allows for greater test coverage. The shift-left approach is built on exactly this basic testing principle: test early.
Continuous testing, meanwhile, is the practice of executing tests as part of the delivery pipeline, so that feedback on potential software bugs in a release arrives as soon as possible.
Combining These Concepts: Shift-Left Continuous Testing
To combine shift-left testing and continuous testing effectively, we must first understand why we are connecting them and what the goal is.
Essentially, the goal is to automate application testing as much as possible, maximize test coverage to the best of our ability, and perform testing as early in the development pipeline as possible.
Why Shift-Left Continuous Testing Matters
- Early Bug Detection Ensures Better Code and Product Quality
- Massive Time and Effort Saved
- Enhanced Test Coverage
- Reduced Costs Involved in Development and Testing
Concluding Shift-Left Continuous Testing
Shift-left testing and continuous testing are two useful strategies that can help your organization truly adopt the philosophies of DevOps. Used together, they catch bugs early and validate the code often, reducing the time and effort required to fix problems in the application and increasing the quality of the product being released.
Test Coverage and Code Coverage in Software Testing
Today, many QA engineers talk a lot about “test coverage,” which gives a good overview of the quality of the product. To achieve real quality, however, both code coverage and test coverage must be analyzed. For example, even if you reach 100 percent test coverage, you should still aim for at least 90 percent code coverage to ensure the best results.
Test coverage is often confused with code coverage. Although both metrics are used to evaluate the quality of the application code, code coverage describes what percentage of the application code is exercised when a user interacts with the application, while test coverage measures whether each business requirement is tested at least once, and is a QA team activity.
How to Attain More Test Coverage in Less Time
Testers usually work on a tight schedule and must focus primarily on ensuring maximum coverage within the time available. A few methods to achieve this are described below –
Using Automation Tools
One modern software testing practice that any company or testing group can adopt is the use of the right automation tool. There are a number of tools on the market today that make life easier for testers; the right tool for the application must be identified.
Maintain Proper Checklist
Maintaining a proper checklist for each feature in a given module can help to achieve efficient test coverage.
Prioritize Requirements
Prioritizing requirements is essential to achieving maximum test coverage in less time. Sort the given requirements into simple, medium, and complex priority levels so that testers can focus their effort accordingly. The most attention should go to the new requirements going live in the next release.
Track Impacts and Fixes
Identifying impacted areas in the initial builds, and eliminating those impacts early, helps achieve high coverage in the upcoming builds. The test manager should keep track of all impacts and fixes in the current build and ensure that QA receives a clean build with effective fixes.
Test Coverage Metrics
- Code coverage = (No. of lines of code exercised by the test suites)/(total no. of lines of code)*100
- Requirement coverage = [(Total no. of requirements) – (Total no. of missed requirements)]/(Total no. of requirements)*100
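As a quick illustration, the two formulas above can be computed directly; the line and requirement counts below are made-up numbers for the example:

```python
def code_coverage(lines_exercised, total_lines):
    """Code coverage = lines exercised by the test suites / total lines * 100."""
    return lines_exercised * 100 / total_lines

def requirement_coverage(total_requirements, missed_requirements):
    """Requirement coverage = (total - missed) / total * 100."""
    return (total_requirements - missed_requirements) * 100 / total_requirements

# Example: 4,200 of 5,000 lines exercised; 2 of 40 requirements missed.
print(code_coverage(4200, 5000))      # 84.0
print(requirement_coverage(40, 2))    # 95.0
```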
Test Coverage Best Practices
- Sort business requirements/modules according to their criticality, frequency of use, and complexity of workflows.
- Develop a traceability matrix for the modules/requirements.
- Use test coverage as a measure for “untested paths” instead of “false sense of security.”
- Develop automated suites using the frameworks integrated with code coverage utilities.
- Measure code coverage for each release and plan to improve it with every subsequent release.
- Utilize metrics such as ‘defect density,’ ‘feature-wise defect distribution,’ and ‘defect removal efficiency’ as a guide to improving coverage in subsequent releases.
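Two of the metrics named in the last bullet have simple standard definitions, sketched below with hypothetical numbers; defect density is usually expressed per thousand lines of code (KLOC), and defect removal efficiency is the percentage of all defects caught before release:

```python
def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_removal_efficiency(found_before_release, found_after_release):
    """Percentage of total defects that were caught before release."""
    total = found_before_release + found_after_release
    return found_before_release * 100 / total

# Hypothetical release: 30 defects in a 15 KLOC codebase; 90 caught
# pre-release, 10 reported by users afterwards.
print(defect_density(30, 15))              # 2.0 defects per KLOC
print(defect_removal_efficiency(90, 10))   # 90.0
```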
Key Benefits of Test Coverage Awareness for a Tester
- Primarily helps to prioritize testing tasks.
- Helps to achieve 100% requirement coverage.
- Useful in determining the EXIT criteria.
- Impact analysis becomes easier.
- A test lead can also prepare a clear test closure report.
Concluding Test and Code Coverage
Test coverage does not end with the points above; there are many other factors to consider.
It is not always true that testing more produces better results. Testing more without a planned strategy simply consumes time without guaranteeing quality.
With a more structured approach, 100% requirement coverage, and effective testing methods, you will not compromise the quality of the product.
Automate Testing Where it Makes Sense
It can be challenging to determine the ideal level of test automation your team should be striving for during the DevOps transition. But with releases happening significantly more frequently, the volume of testing must also increase significantly. That’s where automated testing comes into play.
However, teams should not simply aim to automate as much as possible. Instead, the goal should be to lean on the team’s more tenured testers to develop a test automation strategy that maximizes resources and eliminates the need for testers to perform repetitive test runs manually. By eliminating rote tasks, automation lets testers spend their time more strategically, both planning and executing automation strategies and performing the exploratory testing that automation cannot replace. Because testing resources are limited, one of the first considerations in launching a test automation project is where to focus the effort: which test cases will give the highest return on the time and effort invested?
What to Automate
To get the best return on your efforts, focus your automation strategy on test cases that meet one or more of the following criteria –
Regression tests in Software Testing
A regression test is one that the system passed in a previous development cycle. Re-running your regression tests in subsequent release cycles ensures that a new release does not reintroduce an old defect or introduce a new one. Because regression tests are run so often, they should be at the top of your automation priority list.
Tests for stable features
Automating tests for unstable features can result in significant maintenance effort. To avoid this, test such functionality manually for as long as it is actively being developed.
Use risk analysis to determine which of the features carry the highest failure costs, and focus on automating these tests. Then, add these tests to your regression suite.
Data-driven tests
Any test that is repeated is a good candidate for automation, and data-driven tests are chief among them. Instead of manually entering multiple combinations of username and password, or email address and payment type, to validate entry fields, let an automated test do that for you.
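A data-driven test like the login example can be sketched with the standard library's `unittest` module; the `is_valid_login` function and its rules are hypothetical stand-ins for the real validation logic:

```python
import unittest

def is_valid_login(username, password):
    """Hypothetical validator standing in for the real login check."""
    return bool(username) and len(password) >= 8

class DataDrivenLoginTest(unittest.TestCase):
    # Each row is one data combination: (username, password, expected result).
    CASES = [
        ("alice", "s3cretpass", True),
        ("alice", "short", False),   # password too short
        ("", "s3cretpass", False),   # missing username
        ("", "", False),
    ]

    def test_login_combinations(self):
        # One test method iterates over every data row via subTest.
        for username, password, expected in self.CASES:
            with self.subTest(username=username, password=password):
                self.assertEqual(is_valid_login(username, password), expected)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DataDrivenLoginTest))
print(result.wasSuccessful())  # True
```

Adding a new combination is then a one-line change to the data table rather than a new test.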
Load tests
Load tests are essentially a variation on data-driven tests, where the aim is to test the system’s response to simulated demand. Combine a data-driven test case with a tool that can run the tests in parallel, or distribute them across a grid, to simulate the desired load.
Smoke tests
Depending on the size of your regression test suite, it may not make sense to execute the entire suite for each new build of the system. Smoke tests are a subset of regression tests that check that you have a good build before spending time and effort on further testing. Smoke testing typically verifies that the application opens, allows login, and performs other essential functions. Include smoke tests in your Continuous Integration (CI) process and trigger them automatically with each new build of the system.
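One simple way to run only a smoke subset is to select tests by a naming convention; the test bodies below are trivial placeholders, and the `test_smoke` prefix is just an illustrative convention:

```python
import unittest

class CheckoutTests(unittest.TestCase):
    def test_smoke_app_starts(self):
        self.assertTrue(True)  # stand-in for "application opens"

    def test_smoke_login(self):
        self.assertTrue(True)  # stand-in for "login succeeds"

    def test_full_refund_workflow(self):
        self.assertTrue(True)  # deeper regression test, skipped in smoke runs

def smoke_suite(test_case):
    """Build a suite containing only the tests named as smoke tests."""
    suite = unittest.TestSuite()
    for name in unittest.defaultTestLoader.getTestCaseNames(test_case):
        if name.startswith("test_smoke"):
            suite.addTest(test_case(name))
    return suite

print(smoke_suite(CheckoutTests).countTestCases())  # 2
```

A CI job can run this reduced suite on every build and reserve the full regression suite for nightly runs.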
Cross-device tests
Mobile applications must perform well across a wide range of screen sizes, resolutions, and OS versions. According to Software Testing News, in 2018 a new manual testing lab needed almost 50 devices to provide 80 percent coverage of the possible combinations. Automating cross-device tests reduces testing costs and saves significant time.
Cross-browser tests
Cross-browser testing ensures that a web application performs consistently regardless of the browser version used to access it. In general, it is not necessary to run your entire test suite against every combination of browser and device; instead, focus on the high-risk features and the most popular browser versions currently in use. Chrome is currently the leading browser on both desktop and mobile, and the second most popular tablet browser behind Safari. It therefore makes sense to run your entire test suite against Chrome, and then run your high-risk test cases against Safari, Firefox, Internet Explorer, and Microsoft Edge.
What You Shouldn’t Automate
There are certain types of tests where automation may not be possible or advised. This includes any test in which the time and effort required to automate the test exceed the potential savings. Plan to perform these test types manually.
Tests with unpredictable results
Automate a test only when its results are objective and easy to measure. A login process, for example, is an excellent choice for automation because it is clear what should happen when a valid username and password are entered, or when an invalid username or password is entered. If a test case does not have clear pass/fail criteria, it is better performed manually by a tester.
Single-use tests
Automating a single-use test may take longer than executing that test manually once. Note that “single-use tests” do not include tests that become part of a regression suite or that are data-driven.
Unstable features
Testing unstable features manually is best. Invest the effort in automation once the feature has reached a stable point in development.
Features that resist automation
Some features are designed to resist automation, such as CAPTCHAs on web forms. Instead of trying to automate the CAPTCHA, it is better to disable it in your test environment, or to have developers create an entry point into the application that bypasses the CAPTCHA for testing purposes. If neither is possible, another solution is to have a tester complete the CAPTCHA manually and run the automated test afterwards: include logic in the test that pauses until the tester completes the CAPTCHA, and resumes once a successful login is returned.
Native O/S features on mobile devices
Particularly on Apple iOS, non-instrumented native system applications are difficult to automate due to built-in security.
Concluding Test Automation
To make sure you achieve your automation goals, concentrate on the right test cases for automation. And be sure to build in time for exploratory and UX/usability testing; by their nature, these types of tests cannot and should not be automated.
Write Testable Requirements in Software Testing
A testable requirement describes the behavior of an application in such a way that tests can be developed to determine whether the condition has been met or not. To be testable, a requirement should be clear, complete, and measurable, without any ambiguity.
Assume, for example, that you plan to test a web shopping application and have the following requirement: “Easy-to-use search for available inventory.” Testing this requirement as written requires making assumptions about what is meant by ambiguous terms such as “easy-to-use” and “available inventory.” To make criteria more testable, clarify vague wording such as “fast,” “intuitive,” or “user-friendly.”
Requirements should not contain implementation details such as “the search box will be located at the top right corner of the screen,” but they should otherwise be measurable and complete. Take the following example for a web shopping platform: “When at least one matching item is found, display up to 20 matching inventory items in a list, sorted according to the user’s preference settings.”
This requirement leads to the creation of boundary test cases, such as no matching items, 1 or 2 matching items, and 20 and 21 matching items. However, the requirement describes more than one function. It would be better practice to divide it into three separate elements, as shown below –
- When at least one matching item is found, display up to 20 matching inventory items
- Display the search results in a list according to the user’s preference settings
- Display the search results in the sort order given by the user’s preference settings
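The boundary cases that fall out of the first element can be checked directly; `items_to_display` and `MAX_DISPLAYED` are hypothetical names standing in for the real implementation:

```python
MAX_DISPLAYED = 20  # from the requirement: "display up to 20 matching items"

def items_to_display(matches):
    """Return the subset of matching items that the results list should show."""
    return matches[:MAX_DISPLAYED]

# Boundary cases derived from the requirement: 0, 1, 20, and 21 matches.
assert items_to_display([]) == []
assert len(items_to_display(["mug"])) == 1
assert len(items_to_display(list(range(20)))) == 20
assert len(items_to_display(list(range(21)))) == 20  # capped at the limit
```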
Approaches to Requirements
There are a number of methods for writing requirements, from traditional requirements documents to more agile approaches such as user stories, TDD, ATDD, and BDD.
User stories
A user story is a requirement written as an objective that is meaningful to end users. User stories are brief and often follow the format: As a [user role], I want/need to [feature] so that [goal]. For example: “As a customer looking for a product, I want to see the available products in a list so that I can compare them.”
As the name implies, writing requirements as user stories keeps the focus on the user or customer. User stories by themselves, however, do not contain enough information to be tested. Each user story should include acceptance criteria so that the team knows when the story is “done.”
Test-driven development (TDD)
Requirements in TDD are written as unit tests. Unit testing is sometimes called developer testing, because developers perform it. However, testers have an essential role to play in a TDD approach: they can work with developers to create better unit tests, applying techniques such as boundary value analysis, risk analysis, and equivalence partitioning, and can help ensure that the necessary integration and workflow testing occurs. TDD tests are typically written with a tool such as JUnit or VBUnit and form an essential part of the application’s documentation.
Acceptance test-driven development (ATDD)
In ATDD, user stories and their acceptance criteria become the tests used to show the customer that the application works as intended. Acceptance tests are typically written in collaboration by a team of three that includes a user representative, a developer, and a tester. To ensure that the tests are understandable by everyone on the team, they are written in business-domain terms rather than technical terms.
The workflow in ATDD is similar to TDD: first a user story is written, followed by its acceptance tests. The user story is then implemented, and the team repeats the acceptance tests to confirm that they pass. Finally, any necessary refactoring is done. A team can practice both TDD and ATDD simultaneously.
Behavior-driven development (BDD)
One way to increase the clarity of requirements is to write them as concrete examples rather than in abstract terms. BDD is similar to ATDD, but it uses a specific syntax called Gherkin. In BDD, user stories are supplemented with examples to create “scenarios.” The scenarios for a feature are collected in a feature file that can be used as an executable specification.
BDD scenarios are written using the GIVEN-WHEN-THEN syntax.
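A Gherkin scenario maps GIVEN to setup, WHEN to the action, and THEN to the expected outcome. The same structure can be sketched in a plain Python test even without a BDD framework; the catalog and search behavior below are made up for the example:

```python
def test_search_shows_matching_products():
    # GIVEN a catalog containing two products that match "mug"
    catalog = ["travel mug", "coffee mug", "teapot"]

    # WHEN the customer searches for "mug"
    results = [item for item in catalog if "mug" in item]

    # THEN both matching products are listed
    assert results == ["travel mug", "coffee mug"]

test_search_shows_matching_products()
```

With a Gherkin tool, each of these comment lines would instead be a step in a feature file bound to a step definition.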
Tools for Testable Requirements
No special tools are needed to create testable requirements; they can be documented in word-processing files or even on note cards. Tools can, however, make the process more efficient. ATDD tests can be captured and automated using a tool such as FitNesse, and for BDD there are a variety of language-specific tools that let you write requirements in the Gherkin (GIVEN-WHEN-THEN) syntax and then prepare them for automation.
Concluding Testable Requirements
The most important practice is to include testers and user representatives in the definition of requirements early in the process. While testable requirements make it much easier to automate your tests, the real aim is to ensure that the entire team shares a clear understanding of the requirements.
Overview of End to End Testing
End-to-end testing examines the real-world scenarios of an application from start to finish, touching as many functional areas as possible. End-to-end tests validate the workflows of an application from the end user’s perspective, which makes them highly valued by management and by customers. This testing is usually performed last in the testing process, after lower-level unit testing, integration testing, and system testing. Despite their value, automated end-to-end tests can be complicated to build, fragile, and challenging to maintain. Consequently, a common approach is to plan a smaller number of E2E tests than unit and integration tests.
E2E testing is carried out in as realistic an environment as possible, including the use of back-end services and external interfaces such as the network, databases, and third-party services. As a result, E2E testing can identify issues, such as real-world timing and communication problems, that may be missed when units and integrations are tested in isolation.
End to End Testing Example
Assume you are testing a web shopping platform that relies on a third party to validate payment details. Such an application might contain E2E tests like the following –
- The user logs in to the application, searches for an item, puts the item in the cart, selects payment and shipping details, checks out, and logs off.
- The user logs in, searches for an existing order that has already shipped, reviews the tracking information, receives a detailed update on the delivery of the order, and logs off.
- The user logs in, searches for an existing order that has shipped, requests a return of the order, receives a shipping label to return the item, and logs off.
- The user logs in, opens his or her account information, adds a new payment type, receives verification that the payment type is valid, and logs off.
These tests exercise third-party services such as payment verification and shipment tracking, as well as one or more databases covering customer information, inventory, orders, and more.
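The first workflow above can be sketched as an automated happy-path test. Everything here is a hypothetical stand-in: the `Shop` class models the application under test, and `StubPaymentGateway` replaces the real third-party payment service so the sketch runs without external dependencies:

```python
class StubPaymentGateway:
    """Pretend third-party service: cards starting with '4' are 'valid'."""
    def verify(self, card_number):
        return card_number.startswith("4")

class Shop:
    def __init__(self, gateway):
        self.gateway = gateway
        self.cart = []

    def search(self, inventory, term):
        return [item for item in inventory if term in item]

    def add_to_cart(self, item):
        self.cart.append(item)

    def checkout(self, card_number):
        if not self.gateway.verify(card_number):
            raise ValueError("payment declined")
        ordered, self.cart = self.cart, []
        return ordered

# The E2E flow: search, add to cart, pay, check out.
shop = Shop(StubPaymentGateway())
found = shop.search(["red shirt", "blue shirt"], "red")
shop.add_to_cart(found[0])
order = shop.checkout("4111111111111111")
print(order)      # ['red shirt']
print(shop.cart)  # [] -- the cart is emptied after checkout
```

In a real E2E test the gateway would not be stubbed; the point of E2E testing is to exercise the genuine third-party interfaces.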
Best Practices for End to End Testing
A typical E2E test can be complicated, with multiple steps that are time-consuming to perform manually. This complexity can also make E2E tests challenging to automate and slow to execute. The following practices help manage the costs of automated E2E testing while preserving its benefits.
Keep an end-user perspective
Design E2E test cases from the end user’s perspective, with a focus on the features of the application rather than its implementation. When available, use documents such as user stories, acceptance tests, and BDD scenarios to capture the user’s perspective.
Limit exception testing
Focus end-to-end tests on high-use “happy path” or “golden path” cases that capture typical usage scenarios, such as those listed for the web shopping platform above. Use lower-level unit and integration tests to check sad-path exceptions, such as a user trying to order more of an item than is currently in inventory or returning an item after the allowable return date.
Apply risk analysis
Given the relative cost of performing E2E tests manually or automating them, concentrate on the high-risk features of your application. To determine whether a feature is high risk, consider both the likelihood of a failure and the potential impact it would have on end users. A risk assessment matrix is a useful tool for identifying these risks.
Separate test logic from your UI element definitions
To make your automated end-to-end tests more stable, separate the test logic from your UI element definitions. Use an object repository, or the page object pattern when testing web applications with Selenium, to prevent your test logic from interacting directly with the user interface. This makes your tests less likely to fail because of changes in the UI structure.
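The page object idea can be sketched as follows. The `StubDriver` is a stand-in so the example runs without a browser; with Selenium, the driver calls would be real `find_element` lookups, but the key point, keeping every locator inside the page class, is the same:

```python
class StubDriver:
    """Pretend browser: maps element locators to the text on the page."""
    def __init__(self, elements):
        self.elements = elements

    def find_element(self, locator):
        return self.elements[locator]

class LoginPage:
    # All UI element definitions live here, in one place.
    USERNAME_FIELD = "id:username"
    PASSWORD_FIELD = "id:password"
    ERROR_BANNER = "id:error"

    def __init__(self, driver):
        self.driver = driver

    def error_message(self):
        return self.driver.find_element(self.ERROR_BANNER)

# Test logic never mentions locators directly, so a change to the UI
# structure touches only the LoginPage class, not every test.
driver = StubDriver({"id:error": "Invalid password"})
page = LoginPage(driver)
print(page.error_message())  # Invalid password
```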
Test in the right order
When a single unit test fails, it is relatively easy to figure out where the defect occurred. As tests grow in complexity and touch more components of an application, however, the increase in potential points of failure makes failures harder to debug. Running your unit and integration tests first lets you detect errors while they are still relatively easy to resolve. Then, during E2E testing, run your critical smoke tests first, followed by sanity checks and other high-risk test cases.
Manage your test environment
Make your test environment as efficient and consistent as possible. Document the test environment requirements and communicate them to system administrators and anyone else involved in setting up the environment. Include in your documentation how you will handle updates to operating systems, browsers, and other components, so that the test environment stays as similar as possible to the production environment. One solution is to use an image backup of the production environment for testing purposes.
Choose the right devices
For mobile devices, focus physical testing on the most popular versions of iOS and Android devices, and use simulators and emulators to cover less popular versions. Test both WiFi and cell-carrier connections at different speeds, from 3G to LTE.
Handle waits for UI elements appropriately
Do not allow an end-to-end test to fail unnecessarily while waiting for a page to load or a UI element to appear on the screen. Instead, add an explicit wait that polls for the UI element for a specified period and fails the test only when that period is exceeded. The wait time should be at least as long as it usually takes for the element to appear, but not much longer: excessively long waits can mask a real problem with the application, its interfaces, or the environment, and they also slow down the overall execution of your E2E suite. In general, set a waiting time just a little longer than the usual time it takes for the UI element to appear.
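With Selenium this is what `WebDriverWait` provides; the same polling idea can be sketched framework-free with a small helper, where `wait_for` and its parameters are hypothetical names for the example:

```python
import time

def wait_for(condition, timeout=5.0, poll_interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("UI element did not appear within %.1fs" % timeout)

# Example: an "element" that becomes available after a 0.3s delay,
# found well before the 2-second timeout expires.
appeared_at = time.monotonic() + 0.3
element = wait_for(lambda: time.monotonic() >= appeared_at, timeout=2.0)
print(element)  # True
```

The timeout is the cap on how long the test tolerates a slow page; the poll interval keeps the check cheap between attempts.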
Optimize your setup/teardown process
Use setup and teardown processes so that your test environment is always ready to go. Create your test data with automated scripts and then clean it up afterwards, so that the test environment is restored to its initial state, ready for the next round of testing.
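The pattern maps directly onto `setUp`/`tearDown` in the standard library's `unittest`; here an in-memory SQLite database stands in for the test environment, with the table and data invented for the example:

```python
import sqlite3
import unittest

class OrderQueryTest(unittest.TestCase):
    """Each test gets a fresh database seeded by setUp and discarded by
    tearDown, so tests never leak state into one another."""

    def setUp(self):
        # Create and seed the test data automatically before every test.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
        self.db.execute("INSERT INTO orders VALUES (1, 'shipped')")

    def tearDown(self):
        # Restore the environment to its initial state after every test.
        self.db.close()

    def test_shipped_orders(self):
        rows = self.db.execute(
            "SELECT id FROM orders WHERE status = 'shipped'").fetchall()
        self.assertEqual(rows, [(1,)])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OrderQueryTest))
print(result.wasSuccessful())  # True
```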
Concluding End to End Testing
It’s essential to plan manual testing and exploratory testing as part of your end-to-end testing, to address difficult-to-automate aspects such as usability and user experience. Also ensure that you have a complete and well-balanced set of tests, including automated performance and load testing, which will be covered in the final article in this series.
Bug Prevention in Software Testing
QA engineers are trained to catch bugs, but a resourceful QA engineer also considers how to prevent them. Traditionally, QA starts testing at the UI level, but an agile approach begins with unit and functional testing and then moves up to UI-level testing. This prevents bugs from slipping into higher levels of development, where they can cause significant problems and likely delivery delays.
In software testing teams, with the growing trend towards ‘shift-left’ testing, the focus is increasingly on bug prevention rather than detection. QA teams today participate in activities such as requirement analysis and provide test cases before development starts (TDD), which ideally helps organizations prevent most bugs, saving time, effort, and cost across the development cycle.
A Software Testing Approach
Enabling the best quality assurance practices will change the way you integrate testing into the development process and also help automate your deployment process. To facilitate a continuous testing approach, we advise adopting the practices described throughout this article.