This is the test suite for version 1.8.0 of ULS.
A number of administrative tasks and entries are required to set up the necessary settings for all tests. This also serves as a test of the ULS administration itself.
A number of well-defined basic values covering all supported data types are sent to the ULS, and their correct transmission and presentation is checked through a manual basic analysis.
Boundary tests are made to verify the correct processing of extreme inputs, e.g. very small and very large values.
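As a rough sketch of what such boundary inputs might look like (the send_to_uls() helper and the chosen type limits are illustrative assumptions, not the real ULS interface):

<code python>
# Hypothetical sketch: generate boundary values per data type and hand them
# to a stand-in sender. send_to_uls() is an assumption, not the real ULS API.
import sys

BOUNDARY_VALUES = {
    "integer": [0, 1, -1, 2**31 - 1, -(2**31)],          # typical 32-bit limits
    "float":   [0.0, sys.float_info.min, sys.float_info.max, -sys.float_info.max],
    "string":  ["", "x", "x" * 4096],                     # empty, minimal, very long
}

def send_to_uls(value_type, value):
    # Placeholder for the real transmission; here we only print what would be sent.
    print(f"sending {value_type}: {value!r}")

for value_type, values in BOUNDARY_VALUES.items():
    for value in values:
        send_to_uls(value_type, value)
</code>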
The general tests are based on values derived from the current number of seconds. These values are specifically converted and calculated to cover the most common scenarios. In combination with threshold definitions, they are used to test the firing of notifications for threshold violations. The notification message bodies contain embedded placeholders, both static and dynamically calculated, which are replaced when the notification is generated.
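A minimal sketch of how such placeholder replacement could work, assuming a simple $name placeholder syntax (the placeholder names and the message format are illustrative assumptions):

<code python>
# Hypothetical sketch of expanding a notification body with embedded
# placeholders; the placeholder names and syntax are assumptions.
from string import Template
import time

body = Template("Threshold exceeded on $host: value $value is above limit $limit (at $timestamp)")

# Static placeholders come from the threshold definition; dynamic ones are
# calculated at firing time.
message = body.substitute(
    host="server01",                                   # static
    value=42.7, limit=40.0,                            # measured vs. configured
    timestamp=time.strftime("%Y-%m-%d %H:%M:%S"),      # dynamically calculated
)
print(message)
</code>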
The derived values are also used to check the aggregation of values.
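A sketch of how values might be derived from the current number of seconds and then aggregated, so that the expected aggregates can be recomputed during analysis (the concrete derivations are illustrative assumptions):

<code python>
# Hypothetical sketch: derive deterministic test values from the current
# number of seconds and aggregate them. The derivations are illustrative.
import time

seconds = int(time.time())
derived = [
    seconds % 60,        # cycles 0..59
    seconds % 3600,      # cycles within the hour
    seconds / 1000.0,    # a scaled float value
]

aggregates = {
    "min": min(derived),
    "max": max(derived),
    "avg": sum(derived) / len(derived),
}
print(derived, aggregates)
</code>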
Reports are used to check the different expected results and to verify the report functionality itself.
This section describes all test cases.
Each test case description covers the correct storage of all value types (values, thresholds, aggregation, …) and details how to run the test, including any set-up preconditions and the steps that need to be followed.
A Test Script is a program written to test the functionality of the application: a set of machine-readable instructions that automate the testing, with the advantage of making repeatable and regression testing easy.
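For illustration, a minimal test script sketch using Python's unittest module; the convert_value() function is a stand-in for application logic, not ULS code:

<code python>
# A minimal, repeatable test script; rerunning it unchanged makes
# regression testing easy.
import unittest

def convert_value(raw):
    # Stand-in for the application logic under test.
    return float(raw)

class ConversionTest(unittest.TestCase):
    def test_integer_string(self):
        self.assertEqual(convert_value("42"), 42.0)

    def test_negative_value(self):
        self.assertEqual(convert_value("-1"), -1.0)

if __name__ == "__main__":
    unittest.main()
</code>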
The Test Environment is the hardware and software environment in which the testing is going to be done. It also specifies whether the software under test interacts with stubs and drivers.
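A brief sketch of how a stub might stand in for a real server in the test environment (all names are illustrative assumptions):

<code python>
# Sketch of a stub: the code under test normally talks to a real server,
# but in the test environment a stub records the calls instead.
class UlsServerStub:
    """Accepts values like the real server would, but only records them."""
    def __init__(self):
        self.received = []

    def store(self, value):
        self.received.append(value)

def transmit(values, server):
    # Code under test: pushes each value to the given server object.
    for v in values:
        server.store(v)

stub = UlsServerStub()
transmit([1, 2, 3], stub)
assert stub.received == [1, 2, 3]
</code>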
Test data can be produced by creating a mock business: treat it as real and process its data.
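As an illustration, a mock business might look like this; the field names and records are invented for the example:

<code python>
# Sketch of mock business data: a fictitious company whose records are
# processed exactly like real ones. All fields are illustrative assumptions.
mock_business = {
    "name": "Example Trading Ltd.",
    "servers": ["app01", "db01"],
    "metrics": [
        {"server": "app01", "metric": "cpu_load", "value": 0.73},
        {"server": "db01",  "metric": "free_mb",  "value": 2048},
    ],
}

# Process the mock data as if it were real input.
for record in mock_business["metrics"]:
    print(f'{record["server"]}: {record["metric"]} = {record["value"]}')
</code>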
A Test Procedure is a document with detailed instructions for the step-by-step execution of one or more test cases. Test procedures are used in test scenarios and test scripts.
A Test Log contains the details of test case execution and the output information, recording which test cases were run, who ran them, in what order, and whether each test passed or failed.
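A test log could be represented as a structured record like the following sketch; the exact fields are assumptions rather than a prescribed format:

<code python>
# Sketch of a structured test log entry capturing the details named above.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TestLogEntry:
    case_id: str      # which test case was run
    runner: str       # who ran it
    order: int        # position in the execution sequence
    passed: bool      # pass/fail outcome
    output: str       # captured output information
    timestamp: str    # when the case was executed

entry = TestLogEntry(
    case_id="ULS-BASIC-001",
    runner="tester1",
    order=1,
    passed=True,
    output="all basic values stored correctly",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(entry)
</code>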
A Test Incident Report details, for any test that failed, the actual versus expected result, together with other information intended to throw light on why the test failed. This document is deliberately named an incident report, and not a fault report, because a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system: the expected results may be wrong, the test may have been run wrongly, or an inconsistency in the requirements may allow more than one interpretation. The report consists of all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. It also includes, if possible, an assessment of the impact of the incident upon testing.
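The listed contents of an incident report could be captured in a structure like this sketch (field names are illustrative assumptions):

<code python>
# Sketch of a test incident report as a data structure; it records a
# discrepancy without presuming a fault in the system.
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    case_id: str
    expected: str
    actual: str
    occurred_at: str
    evidence: list = field(default_factory=list)   # logs, screenshots, ...
    impact_on_testing: str = "unassessed"          # filled in if assessable

report = IncidentReport(
    case_id="ULS-THRESH-007",
    expected="notification fired at value > 40.0",
    actual="no notification received",
    occurred_at="2024-01-01T12:00:00Z",
    evidence=["notification log excerpt"],
)
print(report)
</code>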
The Test Summary Report is a management report providing any important information uncovered by the tests, including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from the incident reports. It also records what testing was done and how long it took, in order to improve future test planning. This final document is used to indicate whether the software system under test is fit for purpose, according to whether or not it has met the acceptance criteria defined by the project stakeholders.