Accessibility Testing
Verifying that a product can be used by people with disabilities. Legal requirement in many jurisdictions (EU EAA, UK Equality Act, US Section 508, ADA).
Automation Debt
The accumulated cost of shortcuts taken in test automation — and how to pay it down.
BDD and Gherkin
Behaviour-Driven Development bridges the gap between business requirements and automated tests. Requirements are written in natural language (Gherkin), then automated using step definitions.
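A minimal sketch of how Gherkin lines map to step definitions, using a hand-rolled step registry rather than a real framework such as behave or pytest-bdd (the scenario, the `Account` class, and the step functions are all illustrative):

```python
import re

STEPS = []  # (compiled pattern, function) pairs registered by the decorator

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

class Account:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

context = {}  # shared state carried between steps, as BDD frameworks do

@step(r"a bank account with balance (\d+)")
def given_account(balance):
    context["account"] = Account(int(balance))

@step(r"the user withdraws (\d+)")
def when_withdraw(amount):
    context["account"].withdraw(int(amount))

@step(r"the balance is (\d+)")
def then_balance(expected):
    assert context["account"].balance == int(expected)

def run(scenario):
    # Match each Gherkin line against a registered step and execute it
    for line in scenario.strip().splitlines():
        text = re.sub(r"^\s*(Given|When|Then|And)\s+", "", line)
        for pattern, fn in STEPS:
            m = pattern.fullmatch(text)
            if m:
                fn(*m.groups())
                break
        else:
            raise LookupError(f"no step matches: {text!r}")

run("""
    Given a bank account with balance 100
    When the user withdraws 30
    Then the balance is 70
""")
```

The point of the pattern: the Gherkin text stays readable by business stakeholders, while the step functions hold the automation.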
Benefits Realisation in QA Process Improvement
Discipline of quantifying whether a QA improvement initiative delivered its promised outcomes — covering baseline measurement, target-setting, benefits tracking, and 12-month post-implementation review.
Bug Lifecycle
The process a defect goes through from discovery to closure. A well-defined lifecycle ensures bugs don't get lost, misunderstood, or marked done before they're verified.
Change Management for QA Transformation
How to apply ADKAR and Kotter's models to drive and sustain QA transformation on a client site, covering coalition-building, resistance patterns, capability building, adoption measurement, and post-disengagement sustainability.
Compliance Testing
Verifying software meets legal, regulatory, and standards requirements. Failing compliance isn't just a bug — it's a regulatory risk.
Continuous Testing
Testing integrated throughout the entire software delivery lifecycle — not a phase at the end. Shift left to catch bugs earlier; shift right to validate in production.
Cross-Browser Testing
Verifying an application works correctly across different browsers, versions, operating systems, and screen sizes.
Defect Clustering and Hotspot Analysis
Defects are not randomly distributed — they cluster in a small number of modules. Find the hotspots and focus testing there.
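A sketch of hotspot analysis over defect records — the module names and counts are invented, and the 30% share threshold is an arbitrary illustration:

```python
from collections import Counter

# Hypothetical defect records: each defect tagged with the module it was found in
defects = [
    "billing", "billing", "auth", "billing", "search",
    "billing", "auth", "billing", "checkout", "billing",
]

counts = Counter(defects)
total = sum(counts.values())

# Hotspots: modules holding a disproportionate share of all defects
hotspots = [(mod, n) for mod, n in counts.most_common() if n / total >= 0.3]
print(hotspots)  # → [('billing', 6)] — billing holds 6 of 10 defects
```

In practice the input would come from the defect tracker, grouped by component field or file path.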
Defect Prevention
Preventing defects before they're written is cheaper than finding them after.
End-to-End Testing Strategy
How to scope, design, and maintain E2E tests that provide signal rather than noise.
Exploratory Testing
Simultaneous learning, test design, and execution. The tester uses their knowledge of the system to discover behaviours that scripted tests miss.
Internationalisation and Localisation Testing
Testing that software works correctly across languages, locales, character sets, and cultural conventions.
Mobile Testing
Testing native iOS/Android apps and mobile web. Different from web testing: platform APIs, gestures, device fragmentation, permissions, network conditions, and battery state all affect behaviour.
Negative Testing
Testing what the system does when things go wrong — invalid inputs, failed dependencies, boundary violations.
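A small sketch of the idea: the positive case confirms valid input works, the negative cases confirm invalid input is rejected loudly rather than silently accepted (`parse_age` is a hypothetical function under test):

```python
def parse_age(value):
    # Hypothetical function under test: accepts ages 0-150 as int or numeric string
    age = int(value)  # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

# Positive case: valid input behaves as expected
assert parse_age("42") == 42

# Negative cases: each invalid input must fail, not return garbage
for bad in ["abc", "", "-1", "999"]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: explicit rejection
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```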
Non-Functional Testing
Testing how the system behaves — performance, reliability, security, usability — not just whether it produces the right output.
Pair Testing
Two people testing together — more perspectives, fewer assumptions, faster knowledge transfer.
Performance Test Reporting
The artefacts, formats, and stakeholder communication a Senior Technical Consultant produces during and after a performance testing engagement — from interim run reports through to go/no-go sign-off packs.
Performance Testing (QA Perspective)
QA's role in performance: defining NFR acceptance criteria, running performance tests as quality gates, and communicating results to stakeholders. Distinct from engineering-level load testing tooling.
Process Improvement Model (PIM)
Structured methodology for assessing client test capability against industry maturity frameworks (TMMi, TPI Next), identifying gaps, and delivering a measurable benefits-driven improvements roadmap.
Production Monitoring for QA
QA's role doesn't end at release — production is where quality actually matters. Synthetic monitoring, real user monitoring, and SLO tracking extend testing into live systems.
QA Consulting Toolkit
The QA consulting toolkit is the set of standard artefacts a consultant produces and uses on an engagement — quality strategy, test policy, RACI, assessment templates, and a deliverable register.
QA in Agile
How quality assurance integrates with Scrum and agile delivery. QA is not a phase at the end of a sprint — it's a continuous activity woven through every sprint ceremony and development step.
QA in DevOps
How quality practices integrate with DevOps pipelines. DevOps QA is not a team — it's quality gates, automated checks, and feedback loops embedded into the delivery pipeline.
QA Leadership and Quality Strategy
Operating QA at team and organisation scale — strategy, metrics, team building, and stakeholder communication.
QA Metrics
Metrics make quality visible and improvement measurable. Without them, QA discussions are opinions; with them, they're data-driven decisions. Track metrics to improve processes, not to measure people.
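One widely used metric, Defect Detection Percentage (DDP), as a worked calculation — the counts below are illustrative, not from a real project:

```python
# DDP: share of total defects caught before release.
found_in_test = 45    # defects caught by QA before release
escaped_to_prod = 5   # defects reported by users after release

ddp = found_in_test / (found_in_test + escaped_to_prod) * 100
print(f"DDP: {ddp:.1f}%")  # → DDP: 90.0%
```

Tracked per release, a falling DDP is an early signal that test coverage is drifting away from where defects actually occur.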
QA Process Improvement Mandate
A QA improvement mandate is the formal organisational authorisation to change testing practices — it requires an executive sponsor, a clear problem statement, defined scope, measurable success criteria, and a realistic budget.
QA Stakeholder Reporting
QA stakeholder reporting transforms raw test data into decision-relevant information — RAG dashboards for executives, defect burn-down for test managers, quality gate status for release managers.
QA Tools
The tools QA engineers use to manage test cases, track defects, and report quality. Covers test management platforms, defect tracking, and supporting utilities.
Regression Testing
Verifying that previously working functionality hasn't been broken by new changes. The primary value of an automated test suite — it prevents known-good behaviour from silently degrading.
Release Sign-Off and Go/No-Go Governance
Release sign-off is the formal quality gate between testing and production. A senior QA consultant owns the no-go authority, designs measurable exit criteria, documents known-defect risk decisions, and communicates quality status to stakeholders who have the power — and the pressure — to override it.
Risk-Based Test Selection
Running the right tests at the right time — not the entire suite on every commit.
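A sketch of change-based selection, assuming a module-to-tests map (which in practice might be built from coverage data); the module and test names are invented:

```python
# Hypothetical mapping from source modules to the tests that cover them
tests_by_module = {
    "payments": {"test_checkout", "test_refund"},
    "auth": {"test_login", "test_checkout"},
    "search": {"test_search"},
}

def select_tests(changed_modules):
    """Run only the tests covering modules touched by this commit."""
    selected = set()
    for module in changed_modules:
        selected |= tests_by_module.get(module, set())
    # Fallback: if nothing maps, run the full suite rather than nothing
    return selected or set().union(*tests_by_module.values())

print(sorted(select_tests({"auth"})))  # → ['test_checkout', 'test_login']
```

The fallback is a deliberate safety choice: an unmapped change triggers the whole suite instead of silently skipping coverage.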
Risk-Based Testing
Prioritise testing effort toward areas of highest risk. You never have enough time to test everything — risk-based testing ensures the most critical and failure-prone areas get the most attention.
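The classic prioritisation mechanic is a risk score of likelihood × impact; the features and 1–5 scores below are illustrative:

```python
# Risk score = likelihood of failure x business impact (both on a 1-5 scale)
features = {
    "payment processing":    (4, 5),
    "password reset":        (3, 4),
    "profile avatar upload": (2, 1),
    "theme switcher":        (1, 1),
}

ranked = sorted(features, key=lambda f: features[f][0] * features[f][1], reverse=True)
print(ranked)  # highest-risk features get the most testing attention
```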
Root Cause Analysis
Finding why defects happen so they stop happening — not just fixing the symptom.
Security Testing (QA)
QA's role in application security: running automated security scans, coordinating with pen testers, and integrating security checks into the test pipeline.
Shift-Left Testing
Moving quality activities earlier in the SDLC — from deployment back to design — so defects are caught when they're cheapest to fix.
SLOs and SLAs as QA Governance
How QA teams derive, apply, and govern SLOs and error budgets as objective release gates — covering SLI/SLO/SLA vocabulary, error budget policies, go/no-go decision frameworks, and tooling with Prometheus and Datadog.
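The core arithmetic of an error budget, with an illustrative 99.9% availability SLO over a 30-day window:

```python
# Error budget from an availability SLO: the allowed unreliability per window
slo = 0.999                     # 99.9% availability target
window_minutes = 30 * 24 * 60   # 30-day window = 43,200 minutes

budget_minutes = (1 - slo) * window_minutes
print(f"error budget: {budget_minutes:.1f} min")  # → error budget: 43.2 min

# Governance rule: once the budget is spent, releases are frozen
downtime_so_far = 50.0  # minutes of SLO-violating downtime this window
release_allowed = downtime_so_far < budget_minutes
print(release_allowed)  # → False
```

The value of the mechanism is that the go/no-go decision becomes arithmetic rather than negotiation.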
Smoke and Sanity Testing
Two fast verification techniques used at different stages of delivery. Often confused; both are narrow in scope but serve different purposes.
Test Automation Strategy
The plan for where, when, and what to automate — and what not to. Automation without strategy produces a flaky, expensive test suite that nobody trusts.
Test Case Design
Systematic techniques for deriving test cases from requirements. The goal is maximum defect detection with minimum test cases.
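One of the standard techniques, boundary value analysis, as a sketch — for a field accepting 1–100, six boundary cases replace exhaustive input testing:

```python
def boundary_values(lo, hi):
    """Boundary value analysis: test just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(boundary_values(1, 100))  # → [0, 1, 2, 99, 100, 101]
```

Equivalence partitioning works the same way at a coarser grain: one representative value per class of inputs the system should treat identically.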
Test Data Management
How to get good test data reliably without coupling tests to each other or exposing production PII.
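A sketch of one common approach, a test-data factory: every call yields a unique, fully synthetic record, so tests never collide on shared rows and never touch production PII (the `make_user` fields are illustrative):

```python
import itertools
import uuid

_seq = itertools.count(1)

def make_user(**overrides):
    """Factory: a unique, self-contained user record per call."""
    n = next(_seq)
    user = {
        "id": str(uuid.uuid4()),
        "email": f"user{n}@example.test",  # synthetic, never a real address
        "name": f"Test User {n}",
        "active": True,
    }
    user.update(overrides)  # tests state only the fields they care about
    return user

a, b = make_user(), make_user(active=False)
assert a["email"] != b["email"]  # isolated: no coupling between tests
```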
Test Documentation
The minimum viable paper trail: what to write, how to write it, and what to skip.
Test Environments
The environments through which code travels from developer laptop to production. Environment gaps cause bugs that only appear in certain stages.
Test Estimation and Capacity Planning
Estimation is the skill of turning scope uncertainty into a defensible commitment. Done well it protects the team from over-promise and protects the client from surprise. Done badly it destroys trust in both directions.
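One standard way to express that uncertainty is a three-point (PERT) estimate; the figures below are illustrative:

```python
def pert_estimate(optimistic, likely, pessimistic):
    """Three-point (PERT) estimate: a weighted mean plus a standard
    deviation that expresses the uncertainty band."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Estimating a test phase in days
e, sd = pert_estimate(8, 12, 28)
print(f"{e:.1f} ± {sd:.1f} days")  # → 14.0 ± 3.3 days
```

Quoting the estimate as a range rather than a single number is what makes it defensible when scope shifts.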
Test Planning
The discipline of deciding what to test, how to test it, and what "done" means — before writing a single test.
Test Reporting
Making test results visible, actionable, and historically trackable. Raw test output in a terminal window is not a report — good reporting surfaces trends, assigns ownership, and drives decisions.
Test Strategy
A test strategy defines what to test, how much of each type, and how testing integrates into the delivery process.
Testing AI/LLM Features
QA's role when the product includes LLM-powered features: chatbots, AI recommendations, summarisation, classification.
UAT Governance
The process ownership layer above UAT execution — entry/exit criteria, stakeholder briefings, scope dispute resolution, defect triage with business owners, formal sign-off, and managing the politics that determine whether a UAT phase lands cleanly or collapses.
Usability Testing
Evaluating a product by testing it with real users to find where the interface confuses, frustrates, or fails them.
User Acceptance Testing (UAT)
The final validation before release, performed by business stakeholders or end users. UAT confirms the software meets business requirements and is fit for purpose — not just that it's bug-free.