This document’s identifier is OPD-MTP-1.0.1. The identifier changes with any significant revision to the document.
This document describes the master test plan for the Deis open source PaaS software platform. It is a living document, continually updated to reflect the current practice and goals of software quality assurance for Deis.
Deis has unit, functional, integration, and acceptance tests that cover essential product functionality. The Deis project relies on contributors to act responsibly by exercising these tests themselves and by including tests when proposing changes. Deis is tested and validated by a 24/7 continuous integration platform, supplemented by intelligent manual testing.
Within the scope of this master test plan are these items:
deis CLI hosted at AWS S3
deisctl CLI hosted at AWS S3
At a high level, the overall features of the platform that are tested are:
While these features are effectively covered by ad-hoc testing and by existing customer usage, they are not yet specifically tested as part of this test plan.
These features are not included in the test plan currently due to resource limitations. Future test automation will move these features into the “to be tested” section.
Deis’ test plan relies on extensive test automation, supplemented by spot testing by responsible developers. Continuous integration tests ensure that the platform functions correctly and that regressions are not introduced; focused manual testing is also relied upon when acceptance testing a product release.
Developers are expected to run locally the same tests that continuous integration will run for them, specifically the test-integration.sh script. This script executes documentation tests, unit tests, and functional tests, and then runs an overall acceptance test against a Vagrant cluster.
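The suite ordering described above can be sketched as a small shell script. This is an illustration of the sequence only; the `run_suite` body is a placeholder and is not the real contents of test-integration.sh.

```shell
#!/usr/bin/env bash
# Sketch of the suite ordering test-integration.sh follows per this plan.
# run_suite is a placeholder; the real script invokes each suite's commands.
set -euo pipefail

run_suite() {
  # In the real script this would execute the named suite and fail fast
  # on any error, stopping the run before later suites start.
  echo "running $1 tests"
}

for suite in documentation unit functional acceptance; do
  run_suite "$suite"
done
echo "all suites passed"
```

Because `set -e` is in effect, a failure in any suite aborts the run immediately, which matches the plan's rule that every test must pass with no optional or weighted failures.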
As changes are incorporated into Deis and the team plans a product release, maintainers begin acceptance tests against other cloud providers, following the instructions exactly as provided to users. When these platform-specific tests have passed, a final validation test occurs in continuous integration against a tagged codebase. If it succeeds, the release has passed.
Nightly jobs run a subset of test-integration tests against the released CLI installers and the current codebase to detect regressions in product behavior.
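A nightly job like this is typically driven by a scheduler. As a hedged illustration, a cron entry for such a job might look like the following; the script path and log location are hypothetical names for this sketch, not files in the Deis codebase:

```shell
# Hypothetical crontab entry: run the nightly regression subset at 02:00 UTC.
# /opt/ci/run-nightly-subset.sh and the log path are illustrative assumptions.
0 2 * * * /opt/ci/run-nightly-subset.sh >> /var/log/deis-nightly.log 2>&1
```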
Integration testing has passed when no failures occurred in the test-integration.sh script as run by continuous integration. (Each proposed change also requires review and approval by two project maintainers before it is merged into the codebase; see Merge Approval.)
Acceptance testing has passed when no failures occurred in the test-latest.sh script as run by continuous integration in the “test-latest” job, and when maintainers have completed spot testing successfully, as defined by local release criteria.
Suspension of automated testing occurs when any failure arises in any part of the test suite. There are no optional or weighted failures: everything must pass. Suspension of manual testing occurs when a failure arises that makes further testing unpredictable or of limited value.
Resumption of testing occurs when a test failure has been addressed and fixed such that it is reasonable to assume tests may pass again.
For acceptance testing, test deliverables are generated by a maintainer who starts the CLI and test-master jobs when appropriate.
Testing requires a Linux or Mac OS X host capable of running VirtualBox and Vagrant with good network connectivity. Specific environmental needs are outlined in the setup-node.sh script, which should be kept up-to-date with current needs.
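A quick pre-flight check can confirm the host meets these requirements before a test run. This is a sketch, not part of setup-node.sh; the tool list passed in is an assumption drawn from this plan (VirtualBox's `VBoxManage` and `vagrant`).

```shell
#!/usr/bin/env bash
# Sketch: verify that required tools are on PATH before running tests locally.
# The tool names checked are assumptions from this plan, not from setup-node.sh.
check_tools() {
  local missing=0 tool
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing prerequisite: $tool"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "environment ready"
  else
    echo "environment not ready"
  fi
}

# Typical invocation before running the integration tests:
check_tools VBoxManage vagrant
```

This does not replace setup-node.sh, which remains the authoritative statement of environmental needs; it only catches the most common gaps early.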
A maintainer designated as “QA Lead” for an acceptance test process is responsible for starting the appropriate CI jobs. The QA Lead also oversees manual testing activities executed by others.
For a patch or minor release, the QA Lead may decide not to execute all aspects of acceptance testing.
The QA Lead may also execute clerical tasks associated with a release as described in the Release Checklist documentation.
Because this document describes an ongoing, evolving test plan, there is no fixed project schedule to address, only a repeatable process.
Deis releases early and often. The consequences of a failure in the test process described here are a delay to an expected release date and the restart of the test process once the failure has been addressed.
Automated tests do not yet extend to all cloud providers, and it is possible that manual testing could miss something. We will address this by adding test flavors for AWS and other providers soon.
Resources are limited, and contention between development needs and testing needs has the potential to slow down the quality assurance process.
The Deis maintainer team as a whole approves this document through our normal pull request and merge approval process. Comments and additions will be made as pull requests against this documentation.