'If it ain't broke, don't fix it' - Bert Lance

Unfortunately, this phrase does not apply to software.  With constant changes in security, software features, regulatory requirements, and the world itself, a system that is not updated will eventually break.  When it breaks, it can cause significant downtime and cost real money.  Perhaps a more accurate phrase would be...'fix it before it breaks!'

After upgrading a system, especially a complex one, how do we know if the system still works correctly?  In other words, did fixing it before it broke actually break it after all?

To answer this question we need to test the system after the upgrade.  The better the tests, the less likely we will run into problems as the new system goes into production.  A Testing Plan is needed.

Why a Testing Plan?

A Testing Plan is a documented sequence of tests, or test cases, designed to test a system effectively and efficiently after an upgrade.  The plan is created before you enter the heat of battle.  Long before you snapshot your VM server, copy databases, and deactivate old licenses, the Testing Plan must be written.  A well-written Testing Plan will:

  • Facilitate a 'go/no-go' decision - if the upgrade breaks the system, you will want to know ASAP so you can roll back to the backup / VM snapshot.
  • Find problems that need to be fixed - alert the Information Technology and Support organizations of issues and problems before production starts so they have time to react.
  • Identify changes - the impact of changed or new features can be evaluated.  Any updates to documentation and training that affect standard operating procedures (SOPs) can be captured before going live.

Creating a Testing Plan

The Testing Plan should include:

  1. System tests - to make sure key components are working (services are running, database is accessible, etc.)
  2. General tests - of basic system components.  For an enterprise labeling system, this might include:
    • Is the browser interface working?
    • Can basic modules open and close?
    • Are label design files accessible and do they render correctly?
    • Can users access the system?
    • Do the correct number of printers and/or user licenses appear?
  3. New functionality tests - review the software's Release Notes.  Make sure any new features or functionality that could impact usage now or change usage in the future are tested.
  4. Use Case tests - focus on how the software is most likely to be used in production.  Simulate printing labels based on these use cases.  Find out what your end users will see before they do!
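The system tests in step 1 are easy to automate so they can be rerun immediately after the upgrade.  Below is a minimal sketch in Python using only the standard library; the check names, the ':memory:' database stand-in, and the commented-out service host/port are illustrative assumptions, not references to any particular labeling product.

```python
import sqlite3
import socket

def database_accessible(conninfo=":memory:"):
    """System test: can we open the database and run a trivial query?
    ':memory:' stands in for the real database path (hypothetical)."""
    try:
        with sqlite3.connect(conninfo) as conn:
            conn.execute("SELECT 1")
        return True
    except sqlite3.Error:
        return False

def service_listening(host, port, timeout=2.0):
    """System test: is a key service accepting TCP connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_system_tests(checks):
    """Run each named check; return (all_passed, per-check results)."""
    results = {name: check() for name, check in checks}
    return all(results.values()), results

if __name__ == "__main__":
    ok, results = run_system_tests([
        ("database accessible", database_accessible),
        # ("label service up", lambda: service_listening("labelsrv", 9100)),
    ])
    print("GO" if ok else "NO-GO", results)
```

A script like this gives an immediate first signal for the go/no-go decision before anyone sits down with the full Testing Plan.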

Each test should include:

  • Case # - create a unique reference to the test to make it easier to identify, discuss, and analyze after testing.
  • Instructions - how to perform the test - the more specific, the more valuable the test.
  • Expected results - what does 'success' look like?
  • Pass? - leave a space for the testing user to confirm if the test passed.
  • Notes - highlight any additional issues such as changes needed to SOPs, etc.
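Whether the plan lives in a spreadsheet or a script, each test carries the same five fields.  One lightweight way to capture them is as a small record type; the sketch below uses Python dataclasses, and the case IDs and sample tests are made-up examples, not taken from any real plan.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    case_id: str                   # Case # - unique reference for discussion
    instructions: str              # how to perform the test, step by step
    expected: str                  # what 'success' looks like
    passed: Optional[bool] = None  # filled in by the testing user; None = not yet run
    notes: str = ""                # SOP changes, documentation updates, etc.

cases = [
    TestCase("GEN-01", "Open the browser interface and log in.",
             "Login page renders; user reaches the main screen."),
    TestCase("UC-01", "Print a standard shipping label to printer A.",
             "Label prints with correct barcode and layout."),
]

# After testing, record results and summarize for the go/no-go decision.
cases[0].passed = True
cases[1].passed = True
go = all(c.passed for c in cases)
print("GO" if go else "NO-GO")
```

Keeping results in a structured form like this makes it easy to count passes, list open issues, and archive the outcome alongside the upgrade record.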
