- Disaster Recovery
- Enable better utilization of hardware assets
- Create demonstration environments for new applications
- Compartmentalize applications for security
- Legacy application support
How can virtualization improve QA?
I am going to talk about using virtualization in an information technology (IT) environment for a specific goal: creating an automated environment that enables test-driven development and automated quality assurance of software builds, and so facilitates the automated movement of software into a production environment.
This goal is specifically meant to speed up the testing process for in-house-developed applications, traditionally web-based. While speeding up testing, we want to avoid creating a need for additional quality assurance (QA) staff. Every tool we implement should maintain a strong balance of automation and human interaction, without increasing the time commitment of individual staff or their teams. By properly balancing automated development and manual testing, we can ensure development teams scale to support a growing number of deployed versions in the field and a constantly growing list of features.
In addition to ensuring a scalable model for growing a QA environment, we should ensure that any QA process enables testing of the entire software stack, including libraries, OS patches, third-party software, application servers and associated databases. This testing should be isolated from any underlying hardware to ensure applications are portable between various hardware platforms. This isolation and integrated testing will be completed by creating a new VM as part of each software build. Each of these VMs will contain all necessary applications, services, libraries, third-party applications and data to complete the testing process. This generation of a single VM for each software build will ensure that testing is completed on the entire stack, and all results are easily reproducible.
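The one-VM-per-build idea above can be sketched in a few lines of Python. This is a minimal sketch only: the `vm_definition` function and its component fields are illustrative assumptions, not part of any particular hypervisor's API.

```python
import hashlib
import json

def vm_definition(build_id, commit, components):
    """Compose a per-build VM definition.  The components dict lists
    everything the build needs -- application version, libraries,
    third-party software, database -- so the VM is self-contained."""
    payload = json.dumps({"commit": commit, "components": components},
                         sort_keys=True)
    # A content hash ties the VM name to exactly this set of inputs,
    # making results reproducible and traceable to a specific build.
    digest = hashlib.sha256(payload.encode()).hexdigest()[:12]
    return {"name": "qa-%s-%s" % (build_id, digest),
            "commit": commit,
            "components": components}

vm = vm_definition("1042", "9f3ac21",
                   {"app": "1.4.0", "libfoo": "2.1", "postgres": "13.4"})
print(vm["name"])
```

Because the name is derived from the build's full contents, two VMs with the same name are guaranteed to have been built from identical inputs.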
Tools necessary for success
The key to ensuring a successful QA environment that can dynamically test new software builds and automatically move them between environments is ensuring that the correct tools are in place to provide staff the appropriate level of visibility into the automated process, without causing an excessive work load to be added to the developers and software testing teams.
Here are the most common tools I see a QA environment needing:
- Bug/defect/feature tracking.
- Reporting for patches applied to the source tree and the source of the patches.
- Association of patches and the bug/defect corrected.
- Automated testing framework.
- Association of testing results with specific bug/defect reports.
- Reporting capabilities for number of defects/bugs per line of code.
- Reporting capabilities for number of bugs/defects per developer.
- Tool to show what features and bug fixes will be available in each given release, and the progress towards version completion.
- Tool to show time necessary to complete each new feature, and time necessary to correct reported bugs/defects.
- Tagging of builds with unique identifiers that associate a build with a list of included features, corrected bugs and patches.
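As a sketch of that last item, build tagging could look like the following Python. The `BuildTag` class and its label format are hypothetical, chosen only to show the association between a build identifier and the features, bugs, and patches it contains:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BuildTag:
    """Associates one build with the features, corrected bugs, and
    patches it contains, so anyone can see exactly what a given
    build identifier includes."""
    build_id: str
    features: List[str] = field(default_factory=list)
    fixed_bugs: List[str] = field(default_factory=list)
    patches: List[str] = field(default_factory=list)

    def label(self):
        # Human-readable unique identifier summarizing the contents.
        return "build-%s+%df/%db" % (self.build_id,
                                     len(self.features),
                                     len(self.fixed_bugs))

tag = BuildTag("1042",
               features=["FEAT-88", "FEAT-91"],
               fixed_bugs=["BUG-307"],
               patches=["patch-20231104.diff"])
print(tag.label())  # build-1042+2f/1b
```

In practice this record would be emitted by the build system and stored next to the bug/defect tracker entries it references.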
In my opinion, most development teams can complete their testing activities using a system of five separate environments, each with specific purposes and goals. Larger development teams may well have many more than this, but smaller teams should seldom have fewer, as that creates the potential for different stages of testing to overlap in unpredictable ways. I envision the following environments and associated usage patterns, in the order they would be used to complete a final application build:
- Sandbox – New library testing, manual builds, developing unit tests.
- Development – Ensure error-free builds, verify that library versions match, and confirm the test database structure is correct.
- Quality – Test the application against unit tests, test application response time, and test data-level integrity.
- User Acceptance Testing (UAT) – Test user input, covering both correct and invalid data handling; test application response time; test interactions with outside applications and tools. Limited testing by knowledgeable end users of the application.
- Production – Customer facing application implemented in a way to meet all required Service Level Agreements (SLAs).
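The promotion order of these five environments can be captured as a simple lookup. This is an illustrative sketch; the environment names mirror the list above, and the `next_environment` helper is an assumption, not an existing tool:

```python
from typing import Optional

# Promotion order, from least to most controlled environment.
PIPELINE = ["sandbox", "development", "quality", "uat", "production"]

def next_environment(current: str) -> Optional[str]:
    """Return the environment a passing build moves to next, or None
    once the build has reached production."""
    i = PIPELINE.index(current)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None

print(next_environment("quality"))  # uat
```

Encoding the order in one place keeps the automated promotion logic from ever skipping a stage.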
Now that we have defined our tools for a consistent, reproducible build process, let's compartmentalize that process within a virtual machine (VM) for complete control and reproducibility. The goal of compartmentalizing is to completely remove external influences from testing results; these influences can include varying hardware platforms, inconsistent library versions, and updated data models.
The process to safely compartmentalize an application and test it within a VM has three steps:
First – Define a clear process for moving a build from one environment to the next in the testing process. This process should include testing for errors and warnings as part of the build, defining acceptable pass and fail ranges for all unit tests, and defining the performance benchmarks the application must meet at each stage of testing. This step also includes defining any manual or management approvals required for moving software builds from one testing environment to another.
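A gate of this kind might be sketched as follows. The result fields and threshold values are illustrative assumptions, not recommended limits; each team would set its own ranges:

```python
def gate_passes(results, min_pass_rate=0.98, max_p95_ms=250.0):
    """Decide whether a build may advance to the next environment.
    A build fails the gate on any build error, a unit-test pass rate
    below the threshold, or a 95th-percentile response time above
    the performance benchmark."""
    pass_rate = results["passed"] / results["total"]
    return (results["build_errors"] == 0
            and pass_rate >= min_pass_rate
            and results["p95_latency_ms"] <= max_p95_ms)

sample = {"build_errors": 0, "passed": 495, "total": 500,
          "p95_latency_ms": 180.0}
print(gate_passes(sample))  # True
```

A build that fails the gate stays where it is; any manual or management approvals would sit on top of this automated check.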
Second – After defining the process for properly testing each component, we must clearly define what is part of a build and what components are external. This will assist us in developing our testing matrix for versions of our application, any outside data models and applications, and all associated libraries. This step also includes properly defining the boundary between the automated testing process and the testing that will require manual review and intervention.
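An internal/external split could be recorded in a simple manifest like this sketch; the component names and versions are made up for illustration:

```python
# Illustrative build manifest: what ships inside the build versus
# what the build depends on externally.  The split drives both the
# testing matrix and the automated/manual testing boundary.
MANIFEST = {
    "internal": {"app": "1.4.0", "config": "prod-defaults"},
    "external": {"libfoo": "2.1", "postgres": "13.4", "crm-api": "v2"},
}

def automated_test_scope(manifest):
    """Internal components are fully covered by automated tests;
    external ones appear in the compatibility matrix and may need
    manual review when their versions change."""
    return sorted(manifest["internal"])

print(automated_test_scope(MANIFEST))  # ['app', 'config']
```

When an external component changes version, every cell of the testing matrix that pairs it with a supported application version needs to be revisited.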
Third – The final step is developing and testing the process for moving builds from one environment to another, while ensuring that no changes are made between testing phases and that all builds are archived in a way that allows them to be referenced at a later date if necessary. This step ensures both that testing from one stage remains valid in the next, and that any testing failures and findings are archived for future analysis and review.
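The archive step can be sketched as a copy plus a recorded checksum; `archive_build` is a hypothetical helper, not an existing tool, and the demo file stands in for a real VM image:

```python
import hashlib
import pathlib
import shutil
import tempfile

def archive_build(image, archive_dir):
    """Copy a build image into the archive unchanged, and record its
    SHA-256 checksum alongside it so the exact artifact can be
    verified if it is ever pulled back for review or rollback."""
    archive_dir = pathlib.Path(archive_dir)
    archive_dir.mkdir(parents=True, exist_ok=True)
    image = pathlib.Path(image)
    digest = hashlib.sha256(image.read_bytes()).hexdigest()
    dest = archive_dir / ("%s-%s%s" % (image.stem, digest[:12], image.suffix))
    shutil.copy2(image, dest)
    dest.with_suffix(".sha256").write_text(digest + "\n")
    return dest

# Demo with a throwaway file standing in for a real VM image.
work = pathlib.Path(tempfile.mkdtemp())
image = work / "build-1042.img"
image.write_bytes(b"placeholder image contents")
archived = archive_build(image, work / "archive")
print(archived.name)
```

Storing the checksum next to the image makes it possible to prove, months later, that an archived build is byte-for-byte what was tested.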
By utilizing VMs to compartmentalize applications during the development and testing cycle, you not only gain isolated, reproducible environments, you can also easily roll back to archived versions of the production application if a deployment fails. VMs provide a way to test an entire application stack, including outside software, libraries, and data models, in an integrated way that ensures stability when deployed in a production environment. As the technology around VMs continues to evolve, developers will only gain more capabilities: richer VM snapshots, the ability to roll back in time within a VM, and better performance modeling and characterization.
I did not discuss any specific hypervisors because this is meant to be a discussion of the business objectives of a testing environment. Most hypervisors on the market can be automated to support this kind of testing and movement between environments, and most also have associated tools for quickly cloning test environments for analysis, performance monitoring, or simple archiving. When reviewing your environment and choosing a hypervisor for your development environment, these tools can provide invaluable capabilities to development and testing staff, as well as the system administrators responsible for production environments.