Full Process Walkthrough
System Testing
A complete system testing case study for an ITSM platform upgrade (v3.2). From test planning to production sign-off, each phase shows the actual data, what was found, and why acting on each finding prevented costly post-release failures.
Test Planning
Before any testing begins, a structured test plan was prepared covering scope, objectives, test types, resource allocation, schedule, and risk mitigation. The plan was agreed and signed off by the project lead and client representative.
Test Plan Summary — ITSM Platform Upgrade v3.2
| Section | Detail |
|---|---|
| Project | ITSM Platform Upgrade v3.2 |
| Test Scope | Incident module, SLA engine, reporting dashboards, user access |
| Test Types | Functional, Integration, Regression, UAT |
| Total Test Cases | 248 |
| Test Period | 2024-02-05 to 2024-02-23 (3 weeks) |
| Environments | SIT (System Integration), UAT (User Acceptance) |
| Sign-off Required From | Project Manager, Client Lead, QA Lead |
Test Case Design
248 test cases were designed across 6 functional areas using equivalence partitioning and boundary value analysis. Each test case included preconditions, test steps, expected results, and links to requirements. Cases were reviewed and approved before execution began.
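Each test case carried the same core fields. A minimal sketch of that record structure is below; the field names and the `REQ-SLA-04` requirement ID are illustrative, not taken from the actual project artifacts:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One designed test case, with the fields described above (names illustrative)."""
    case_id: str                  # e.g. "TC-SIT-012"
    module: str                   # functional area the case belongs to
    priority: str                 # Critical / High / Medium
    preconditions: list[str]      # state required before execution
    steps: list[str]              # ordered test steps
    expected_result: str          # what a Pass looks like
    requirement_ids: list[str] = field(default_factory=list)  # traceability links (hypothetical IDs)

tc = TestCase(
    case_id="TC-SIT-012",
    module="SLA Timer Logic",
    priority="Critical",
    preconditions=["Incident exists with P1 priority"],
    steps=["Open incident", "Advance clock past the SLA window"],
    expected_result="Breach flagged exactly at the SLA threshold",
    requirement_ids=["REQ-SLA-04"],  # hypothetical requirement reference
)
```

Keeping requirement links on every case is what makes the 100% coverage claim in the final report auditable.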
Test Case Distribution by Module
| Module | Test Cases | Priority | Type |
|---|---|---|---|
| Incident Creation & Routing | 52 | High | Functional |
| SLA Timer & Breach Logic | 48 | Critical | Functional + Integration |
| Escalation Workflow | 34 | High | Functional |
| Reporting & Dashboard Output | 41 | High | Functional + UAT |
| User Access & Permissions | 28 | Medium | Security |
| System Integration (API) | 45 | Critical | Integration + Regression |
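The boundary value analysis applied to modules like the SLA timer can be sketched as follows. The 240-minute (4-hour) P1 threshold is an assumed example; the real thresholds were defined in the project's SLA matrix:

```python
def boundary_values(threshold_minutes: int, step: int = 1) -> list[int]:
    """Classic boundary value analysis: test just below, exactly at,
    and just above the SLA threshold."""
    return [
        threshold_minutes - step,  # just inside the SLA
        threshold_minutes,         # exactly at the threshold
        threshold_minutes + step,  # just breached
    ]

def is_breached(elapsed_minutes: int, threshold_minutes: int) -> bool:
    """A breach occurs once elapsed time exceeds the threshold."""
    return elapsed_minutes > threshold_minutes

# Generate boundary cases for an assumed 240-minute (4h) P1 SLA
for elapsed in boundary_values(240):
    print(elapsed, is_breached(elapsed, 240))
```

Three cases per threshold catch exactly the off-by-one class of error that DEF-007 later turned out to be.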
Test Execution — SIT
System Integration Testing ran over 10 days. All 248 test cases were executed, uncovering 19 defects. Critical defects in the SLA breach logic and API integration were immediately escalated. 16 of the 19 defects were resolved and re-tested within the SIT window.
SIT Execution Results (Representative Sample) — ITSM Platform v3.2
| Test Case ID | Module | Priority | Result | Defect Raised | Status |
|---|---|---|---|---|---|
| TC-SIT-001 | Incident Creation | High | Pass | None | Closed |
| TC-SIT-012 | SLA Timer Logic | Critical | FAIL | DEF-007 | Resolved |
| TC-SIT-013 | SLA Breach Alert | Critical | FAIL | DEF-008 | Resolved |
| TC-SIT-045 | Escalation Workflow | High | Pass | None | Closed |
| TC-SIT-089 | API Integration | Critical | FAIL | DEF-015 | Open |
| TC-SIT-112 | Dashboard Output | High | Pass | None | Closed |
| TC-SIT-198 | User Permissions | Medium | FAIL | DEF-019 | Resolved |
| TC-SIT-248 | Regression — Core | High | Pass | None | Closed |
🔍 Finding
19 defects were raised, 3 of them critical: the SLA timer miscalculated breach thresholds by 15 minutes, breach alert emails failed to trigger, and the API payload dropped custom fields on handoff. Left unfixed, these would have caused incorrect SLA reporting in production.
✅ Why the Team Must Act
Critical defects DEF-007 and DEF-008 were resolved by the dev team within 48 hours and re-verified. DEF-015 (API) remained open and was carried into UAT as a tracked known risk; the dev lead confirmed a fix in the next build cycle.
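Once DEF-007 was fixed, a regression guard kept the 15-minute shift from reappearing in later builds. A sketch of that check, with a hypothetical `breach_deadline` function standing in for the platform's actual SLA calculation:

```python
from datetime import datetime, timedelta

def breach_deadline(opened_at: datetime, sla_minutes: int) -> datetime:
    """Hypothetical corrected implementation: the deadline is the opening
    time plus the full SLA window (DEF-007 shaved 15 minutes off it)."""
    return opened_at + timedelta(minutes=sla_minutes)

# Regression guard for DEF-007: a ticket opened at 09:00 against a
# 240-minute SLA must breach at exactly 13:00, not 12:45.
opened = datetime(2024, 2, 12, 9, 0)
assert breach_deadline(opened, 240) == datetime(2024, 2, 12, 13, 0)
assert breach_deadline(opened, 240) != opened + timedelta(minutes=225)
```

Encoding the defect as an executable check is what lets "re-verified" mean something stronger than a one-off manual re-test.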
UAT Coordination
UAT was conducted with 8 end users across 3 departments over 5 days. I coordinated sessions, distributed test scripts, walked users through scenarios, documented their feedback, and tracked all raised issues in a defect log. 6 new issues were raised by users, 4 of which were resolved before sign-off.
UAT Session Log — 5 Days, 8 Users, 3 Departments
| Day | Dept | Users | Scenarios Tested | Issues Raised | Resolved | Sign-off |
|---|---|---|---|---|---|---|
| Day 1 | IT Operations | 3 | Incident creation, routing, escalation | 2 | 2 | Yes |
| Day 2 | Service Desk | 2 | SLA compliance view, breach alerts | 2 | 1 | Partial |
| Day 3 | Management | 2 | Dashboard, reporting, KPI views | 1 | 1 | Yes |
| Day 4 | IT Operations | 1 | Re-test Day 2 open issue + regression | 1 | 1 | Yes |
| Day 5 | All | 8 | Full end-to-end walkthrough | 0 | 0 | Yes — Full |
🔍 Finding
Service Desk users flagged that the SLA breach alert email did not display the ticket priority in the subject line — a usability issue not caught in SIT. This caused confusion in triaging urgent tickets from email alone.
✅ Why the Team Must Act
The email template was updated by the dev team to include priority and ticket category in the subject line before Day 4. This directly improved the team's ability to triage from the inbox without opening the portal, saving an estimated 3-4 minutes per P1 alert.
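The subject-line change amounts to a small formatting rule. A sketch of the post-fix format is below; the exact layout and the `INC-10482` ticket ID are illustrative, not the platform's actual template:

```python
def alert_subject(priority: str, category: str, ticket_id: str) -> str:
    """Breach alert subject after the fix: priority and category lead,
    so tickets can be triaged from the inbox (format illustrative)."""
    return f"[{priority}][{category}] SLA breach: {ticket_id}"

print(alert_subject("P1", "Network", "INC-10482"))
```

Putting priority first means inbox sorting and notification previews both surface the urgent tickets without a click.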
Defect Lifecycle
All defects were logged with severity, steps to reproduce, expected vs actual results, screenshots, and developer assignment. A defect board in JIRA tracked status from New → In Progress → Fixed → Re-test → Closed. The full lifecycle for all 25 raised defects was managed and closed before UAT sign-off.
Defect Summary — Full Lifecycle, SIT + UAT Combined (Representative Entries)
| Defect ID | Module | Severity | Status | Dev Fix Time | Re-verified |
|---|---|---|---|---|---|
| DEF-007 | SLA Timer Logic | Critical | Closed | 24 hrs | Yes |
| DEF-008 | Breach Email Alert | Critical | Closed | 48 hrs | Yes |
| DEF-015 | API Integration | Critical | Closed | 72 hrs | Yes |
| DEF-019 | User Permissions | High | Closed | 36 hrs | Yes |
| DEF-021 | Email Subject Line | Medium | Closed | 8 hrs | Yes |
| DEF-024 | Report Date Filter | Low | Closed | 12 hrs | Yes |
| DEF-025 | Dashboard Load Time | Low | Closed | 16 hrs | Yes |
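The New → In Progress → Fixed → Re-test → Closed board can be modelled as a small state machine. This is a sketch under one assumption mirroring common JIRA workflows (not confirmed by the source): Re-test may fall back to In Progress if verification fails:

```python
from enum import Enum

class DefectStatus(Enum):
    NEW = "New"
    IN_PROGRESS = "In Progress"
    FIXED = "Fixed"
    RETEST = "Re-test"
    CLOSED = "Closed"

# Allowed transitions along the board; RETEST -> IN_PROGRESS is an
# assumed fallback for failed verification.
TRANSITIONS = {
    DefectStatus.NEW: {DefectStatus.IN_PROGRESS},
    DefectStatus.IN_PROGRESS: {DefectStatus.FIXED},
    DefectStatus.FIXED: {DefectStatus.RETEST},
    DefectStatus.RETEST: {DefectStatus.CLOSED, DefectStatus.IN_PROGRESS},
    DefectStatus.CLOSED: set(),
}

def advance(current: DefectStatus, target: DefectStatus) -> DefectStatus:
    """Move a defect along the board, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target
```

Enforcing the transitions is what guarantees "Closed" always implies a re-test happened, which the final report relies on.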
🔍 Finding
All 25 defects were closed before final UAT sign-off. 3 critical defects were resolved within 72 hours of being raised. Zero defects were carried into production — a full clean release.
✅ Why the Team Must Act
The defect-free production release was achieved because every critical defect was escalated immediately with clear reproduction steps and business impact documented. This allowed the dev team to prioritise correctly and avoid costly post-production fixes.
Sign-off & Documentation
A full Test Summary Report was produced documenting test coverage, execution results, defect summary, open risks, and sign-off recommendation. The report was presented to the project manager and client lead. Formal sign-off was obtained, clearing the build for production deployment.
Test Summary Report — Final Metrics
| Metric | Value |
|---|---|
| Total Test Cases Designed | 248 |
| Total Test Cases Executed | 248 (100%) |
| Pass on First Run | 229 (92.3%) |
| Defects Raised (SIT + UAT) | 25 |
| Critical Defects | 3 (all closed) |
| Defects Open at Sign-off | 0 |
| UAT Sign-off | Approved — all 8 users |
| Production Release Status | Cleared for deployment |
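The headline figures above derive mechanically from the raw counts. A small sketch of that derivation, using the report's own numbers (248 cases, 19 first-run failures, 25 defects raised, 0 open):

```python
def summary_metrics(total_cases: int, first_run_failures: int,
                    defects_raised: int, defects_open: int) -> dict:
    """Derive the headline report figures from raw counts."""
    passed_first_run = total_cases - first_run_failures
    return {
        "first_run_pass": passed_first_run,
        "first_run_pass_pct": round(passed_first_run / total_cases * 100, 1),
        "defects_closed": defects_raised - defects_open,
    }

# Figures from the v3.2 Test Summary Report
print(summary_metrics(248, 19, 25, 0))
```

Recomputing the percentages from raw counts at report time avoids the classic sign-off embarrassment of a summary table that no longer matches the execution log.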
🔍 Finding
100% test execution coverage was achieved with a 92.3% first-run pass rate. All 25 defects were resolved and closed. The project delivered a defect-free production release on schedule.
✅ Why the Team Must Act
The structured approach — test plan agreed upfront, critical defects escalated immediately, UAT coordinated with real end users — is what enabled zero defects at go-live. Skipping any of these steps would have created rework and production risk.