
QA/Test

B1

Manage and conduct quality assurance and testing activities (e.g. requirements testing, code review, regression testing, user acceptance).

Improvement Planning

Practices-Outcomes-Metrics (POM)

Representative POMs are described for QA/Test at each level of maturity.

2 Basic
  • Practice
    Develop test scenarios for IT solutions post-development, automating in some instances (a brief automation sketch follows this block).
    Outcome
    Test cases are developed after the IT solutions are implemented, and the data generated may undergo only limited review.
    Metrics
    • % of solutions with test scenarios developed post-release.
    • % of automated unit and acceptance test scenarios.
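To make the first practice concrete, below is a minimal sketch of automating one such post-development scenario with pytest. The `calculate_invoice_total` function, its behaviour, and the figures are hypothetical stand-ins, not part of the framework.

```python
# Hypothetical post-development check: the solution already exists, and a
# scenario observed after release is captured as a repeatable automated test.
import pytest

def calculate_invoice_total(items, tax_rate):
    # Stand-in for the deployed solution under test (assumed behaviour).
    subtotal = sum(price * qty for price, qty in items)
    return round(subtotal * (1 + tax_rate), 2)

def test_invoice_total_with_tax():
    # Scenario derived post-release: two line items at a 20% tax rate.
    items = [(10.00, 2), (5.50, 1)]
    assert calculate_invoice_total(items, tax_rate=0.20) == pytest.approx(30.60)
```

Each scenario automated this way counts toward the "% of automated unit and acceptance test scenarios" metric above.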
  • Practice
    Address quality issues for high-stakes solutions.
    Outcome
    Quality may not be consistent for all solutions.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
  • Practice
    Use informal user acceptance testing to gather input and feedback on testing and validation from some stakeholders (one reading of the acceptance-rate metric is sketched after this block).
    Outcome
    Many needs are met by informal user acceptance testing.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
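The framework does not define a formula for "User acceptance rate (%)". One plausible reading, the share of user-acceptance evaluations signed off without rework, is sketched below; both the formula and the figures are assumptions.

```python
# Hypothetical computation of "User acceptance rate (%)": the share of
# user-acceptance evaluations that stakeholders signed off without rework.
def user_acceptance_rate(accepted: int, evaluated: int) -> float:
    if evaluated == 0:
        return 0.0
    return 100.0 * accepted / evaluated

print(user_acceptance_rate(accepted=42, evaluated=50))  # 84.0
```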
  • Practice
    Implement some automation of regression/integration tests and a common set of tools to support testing, such as debugging tools (a recorded-regression sketch follows this block).
    Outcomes
    • Manual tests are replaced by automation and a common set of tools.
    • Tests are recorded and repeatable.
    Metrics
    • % of test effort allocated to manual testing.
    • % of manual testing that is exploratory.
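As a sketch of the "recorded and repeatable" outcome, the example below replaces one manual regression check with an automated one whose runs are appended to a log. The `discount_price` function, the scenario, and the log format are illustrative assumptions.

```python
# Recorded, repeatable regression check: the scenario is captured in code and
# every run is appended to a log so results can be reviewed later.
import json, time, unittest

def discount_price(price, pct):
    # Stand-in for the solution under test (assumed behaviour).
    return round(price * (1 - pct / 100), 2)

class RegressionSuite(unittest.TestCase):
    def test_past_defect_stays_fixed(self):
        # Scenario captured after a previous pricing incident (hypothetical).
        self.assertEqual(discount_price(100.0, 15), 85.0)

if __name__ == "__main__":
    program = unittest.main(exit=False)
    with open("regression_log.jsonl", "a") as log:  # record the run
        log.write(json.dumps({"timestamp": time.time(),
                              "tests_run": program.result.testsRun,
                              "failures": len(program.result.failures)}) + "\n")
```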
3 Intermediate
  • Practice
    Define unit and acceptance tests before solutions are constructed, with input and review from stakeholders (a test-first sketch follows this block).
    Outcomes
    • Efficient and effective unit and acceptance test processes, defined with stakeholder input and review, are available by the time solutions are constructed.
    • Tests may include non-functional aspects such as performance, compliance, and security, for both in-house and third-party components.
    Metrics
    • % of automated unit and acceptance test scenarios.
    • % of stakeholders involved in defining unit and acceptance tests.
    • Test execution speed (capacity to meet CI/CD requirements).
    • % of high-risk requirements covered by automated testing.
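A test-first sketch of this practice appears below: the checks are written before the solution exists, so they fail until it is built. The `orders.parse_order` function, the acceptance criterion, and the 50 ms budget are all assumptions.

```python
# Test-first sketch: the acceptance and performance checks below are written
# before orders.parse_order exists, so they fail until it is constructed.
import time
from orders import parse_order  # hypothetical module, not yet implemented

def test_parse_order_acceptance():
    # Functional criterion agreed with stakeholders (assumed wording).
    assert parse_order("SKU-1,2;SKU-9,1") == {"SKU-1": 2, "SKU-9": 1}

def test_parse_order_performance_budget():
    # Non-functional criterion: parse 1,000 lines within a 50 ms budget
    # (the threshold is an assumption, not a framework value).
    payload = ";".join(f"SKU-{i},1" for i in range(1000))
    start = time.perf_counter()
    parse_order(payload)
    assert time.perf_counter() - start < 0.05
```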
  • Practice
    Measure code quality using static code analysis tools (a build-gating sketch follows this block).
    Outcomes
    • The quality of designs is good.
    • Fault slip-through levels are low.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
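One concrete shape this practice can take is gating a build on analyzer findings. The sketch below shells out to flake8, used here purely as an example analyzer; the source path and the zero-findings threshold are assumptions.

```python
# Gate a build on static-analysis findings. flake8 is one example analyzer;
# the source path and max_findings threshold are assumed, not framework values.
import subprocess
import sys

def static_analysis_gate(path: str = "src/", max_findings: int = 0) -> bool:
    proc = subprocess.run(["flake8", path], capture_output=True, text=True)
    findings = [line for line in proc.stdout.splitlines() if line.strip()]
    for finding in findings:
        print(finding)  # one finding per line, as flake8 reports them
    return len(findings) <= max_findings

if __name__ == "__main__":
    sys.exit(0 if static_analysis_gate() else 1)
```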
  • Practice
    Define and agree acceptance criteria with users (an executable-criterion sketch follows this block).
    Outcome
    Rich user feedback is captured, and work is regarded as done only if it passes acceptance tests.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
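Below is a sketch of an agreed acceptance criterion expressed as an executable check. The criterion wording and `myapp.login` are hypothetical.

```python
# An agreed acceptance criterion expressed as an executable check.
# Assumed criterion wording: "A registered user who enters the correct
# password lands on their dashboard." myapp.login is hypothetical.
from myapp import login

def test_registered_user_lands_on_dashboard():
    # Given a registered user, when they log in with the correct password...
    session = login(user="alice@example.test", password="correct-horse")
    # ...then they land on the dashboard; the work counts as done only if
    # this check passes.
    assert session.landing_page == "dashboard"
```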
  • Practices
    • Ensure that tests are routinely automated, including many non-functional tests.
    • Record and trace results (a traceability sketch follows this block).
    Outcomes
    • Automated test suites run efficiently to provide results rapidly.
    • Tests are traceable back to their requirements.
    Metrics
    • % of test effort allocated to manual testing.
    • % of manual testing that is exploratory.
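One lightweight way to make tests traceable to requirements is sketched below: a plain decorator records which requirement each test covers, and a report maps requirements to tests. The REQ-* identifiers and the two checks are illustrative assumptions.

```python
# Requirement traceability sketch: each automated test declares the
# requirement it covers, so results trace back to requirements.
TRACE: dict[str, list[str]] = {}

def covers(req_id: str):
    """Decorator linking a test function to a requirement ID."""
    def mark(fn):
        TRACE.setdefault(req_id, []).append(fn.__name__)
        return fn
    return mark

@covers("REQ-101")
def test_discount_never_negative():
    assert max(0.0, 100 * (1 - 1.5)) == 0.0

@covers("REQ-102")
def test_export_has_csv_header():
    assert "id,total\n".startswith("id,")

if __name__ == "__main__":
    for req, tests in sorted(TRACE.items()):  # the trace report
        print(f"{req}: {', '.join(tests)}")
```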
4 Advanced
  • Practice
    Routinely review and update tests to maintain comprehensive coverage, and include editorial aspects such as plain-language and graphics criteria in the test scope (a coverage-metric sketch follows this block).
    Outcomes
    • Test processes are efficient and effective, including editorial and graphical considerations.
    • Well-defined test data management strategies enable realistic scenario testing.
    Metrics
    • % of automated test scenarios including editorial and graphical considerations.
    • % of tests reviewed for improvement per annum.
    • Test execution speed (capacity to meet CI/CD requirements).
    • % of high-risk requirements covered by automated testing.
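One way the high-risk coverage metric might be computed from a requirement register and a trace matrix (such as the one sketched under the Intermediate level) is shown below; the risk tags and figures are assumptions.

```python
# Hypothetical computation of "% of high-risk requirements covered by
# automated testing" from a requirement register and a trace matrix.
HIGH_RISK = {"REQ-101", "REQ-205", "REQ-317"}  # assumed risk tags
COVERED = {"REQ-101", "REQ-102", "REQ-317"}    # from the trace matrix

def high_risk_coverage(high_risk: set, covered: set) -> float:
    if not high_risk:
        return 100.0
    return 100.0 * len(high_risk & covered) / len(high_risk)

print(f"{high_risk_coverage(HIGH_RISK, COVERED):.1f}%")  # 66.7%
```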
  • Practice
    Ensure that test code undergoes review and evaluation.
    Outcome
    Test code is robust and extensible.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
  • Practice
    Continue to refine test processes to reflect user input.
    Outcome
    Tests are more robust, reflecting rich user feedback.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
  • Practices
    • Ensure all tests are routinely automated, and provide rapid results that are traceable.
    • Extend automation to all aspects of the solution including, for example, readability analysis (a readability-check sketch follows this block).
    Outcomes
    • Pervasive automated test suites consistently provide results rapidly.
    • Tests are fully traceable back to their requirements.
    Metrics
    • % of test effort allocated to manual testing.
    • % of manual testing that is exploratory.
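A crude sketch of automating a plain-language check on user-facing text appears below. The words-per-sentence heuristic and the 20-word threshold are assumptions; a real pipeline might use an established readability library instead.

```python
# Automated readability check for user-facing text, using a crude
# words-per-sentence heuristic; the 20-word threshold is an assumption.
import re

def avg_sentence_length(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return len(text.split()) / max(1, len(sentences))

def test_error_message_is_plain_language():
    msg = "Your session expired. Please sign in again to continue."
    assert avg_sentence_length(msg) <= 20
```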
5 Optimized
  • Practice
    Continually review and optimize tests using innovative methods as they become available (a test-data anonymization sketch follows this block).
    Outcome
    Continuously refreshed representative test scenarios using anonymized real-life test data improve automation and test coverage.
    Metrics
    • % of automated test scenarios including editorial and graphical considerations.
    • % of tests reviewed for improvement per annum.
    • Test execution speed (capacity to meet CI/CD requirements).
    • % of high-risk requirements covered by automated testing.
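The outcome above mentions anonymized real-life test data; one minimal sketch of refreshing a fixture from a production-like record is shown below. The field names and masking rules are illustrative, and any real strategy must follow the organization's data-protection obligations.

```python
# Refreshing test fixtures from production-like records with anonymization.
import hashlib

def anonymize(record: dict) -> dict:
    digest = hashlib.sha256(record["email"].encode()).hexdigest()[:8]
    return {
        "customer_id": f"cust-{digest}",         # stable pseudonym
        "email": f"user-{digest}@example.test",  # masked address
        "order_total": record["order_total"],    # non-identifying, kept real
    }

print(anonymize({"email": "alice@corp.example", "order_total": 129.95}))
```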
  • Practice
    Ensure frequent graduated releases.
    Outcome
    More frequent, smaller releases allow deeper feedback from active use and shorten the feedback loop, producing quicker responses and excellent solution quality.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
  • Practice
    Carry out post-deployment observation and interaction sessions to ensure processes and solutions are working effectively.
    Outcome
    Test processes are working seamlessly to ensure a high-quality solution.
    Metrics
    • # of design-related incidents.
    • User acceptance rate (%).
  • Practice
    Experiment with innovative methods and solutions to improve the level of automation and efficiency of tests on an ongoing basis (a metrics sketch follows this block).
    Outcome
    There is optimal use of automation at all times.
    Metrics
    • % of test effort allocated to manual testing.
    • % of manual testing that is exploratory.
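For reference, the two automation metrics above might be computed from logged test effort along these lines; the log format and figures are illustrative assumptions.

```python
# Hypothetical computation of the two automation metrics from effort logs.
effort_log = [
    {"kind": "automated", "hours": 34.0},
    {"kind": "manual-scripted", "hours": 4.0},
    {"kind": "manual-exploratory", "hours": 2.0},
]

total = sum(e["hours"] for e in effort_log)
manual = sum(e["hours"] for e in effort_log if e["kind"].startswith("manual"))
exploratory = sum(e["hours"] for e in effort_log
                  if e["kind"] == "manual-exploratory")

print(f"% of effort on manual testing: {100 * manual / total:.1f}%")       # 15.0%
print(f"% of manual that is exploratory: {100 * exploratory / manual:.1f}%")  # 33.3%
```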