Scoring
The Control Rating will be derived from the Operational Effectiveness and the Design Effectiveness of a Control by using the RAG Grid.
The Design Effectiveness will be set manually on the Control Record itself.
The Operational Effectiveness will be based on the percentage average of the Test Ratings completed within the previous month (unless a check has been provided within the current month).
The Test Rating will be made up of the aggregated Team Scores.
The Team Scores will be made up of the aggregated Test Instance Scores (a test may be performed as frequently as twice daily, while scoring is measured on a monthly basis).
The Test Instance score is calculated from the elements within it, applying the following rules in order:
- One or more elements not completed and the due date expires = Test Instance set to Expired
- One or more elements set to Fail = Test Instance set to Fail
- All elements set to Could Not Complete = Test Instance set to Could Not Complete
- All elements set to Pass, or a mix of Pass and Could Not Complete = Test Instance set to Pass
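The ordered rules above can be sketched as a single function. The status strings and the function name are assumptions; the rule ordering follows the list directly.

```python
from datetime import date

# Hypothetical element statuses; "Incomplete" stands in for an element
# that has not yet been answered. Names are assumptions, not from the spec.
PASS, FAIL, CNC, INCOMPLETE = "Pass", "Fail", "Could Not Complete", "Incomplete"

def score_test_instance(elements, due_date, today):
    """Apply the four rules in order and return the Test Instance score."""
    # Rule 1: any element not completed and the due date has passed -> Expired
    if any(e == INCOMPLETE for e in elements) and today > due_date:
        return "Expired"
    # Rule 2: any element set to Fail -> Fail
    if any(e == FAIL for e in elements):
        return "Fail"
    # Rule 3: every element Could Not Complete -> Could Not Complete
    if all(e == CNC for e in elements):
        return "Could Not Complete"
    # Rule 4: all Pass, or a mix of Pass and Could Not Complete -> Pass
    return "Pass"
```

Because the rules are checked in order, a Fail takes precedence over Could Not Complete, and Expired takes precedence over everything once the due date has passed.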
Elements within an activity will be answered by the appropriate user. The available responses are:
- Pass (the activity was completed)
- Fail (the activity was not completed)
- Could not complete (for some reason, the activity could not be carried out – when this is selected a Rationale will be mandatory)
- Expired (cannot be selected – this is automatically allocated if the due date expires)
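The response rules above (user-selectable values only, with a mandatory Rationale for Could Not Complete) could be enforced at capture time roughly as follows. The class and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# User-selectable responses; "Expired" is deliberately excluded because it
# is allocated automatically by the system when the due date passes.
SELECTABLE_RESPONSES = {"Pass", "Fail", "Could Not Complete"}

@dataclass
class ElementResponse:
    """A single element's answer; names here are assumptions, not the spec's."""
    response: str
    rationale: Optional[str] = None

    def __post_init__(self):
        if self.response not in SELECTABLE_RESPONSES:
            raise ValueError(f"{self.response!r} cannot be selected by a user")
        # Rationale is mandatory when Could Not Complete is chosen
        if self.response == "Could Not Complete" and not self.rationale:
            raise ValueError("A Rationale is mandatory for Could Not Complete")
```

Rejecting "Expired" at the input layer keeps it a purely system-assigned state, matching the final bullet above.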