Beginner · QA Engineer

Design boundary value test cases

For a text field accepting 1-255 characters and a number field accepting 0-999, write the complete set of boundary value analysis and equivalence class partitioning test cases. Identify which cases you would automate, which you would test manually, and which represent the highest risk.

Why this matters

Boundary value analysis and equivalence partitioning are the two techniques that give you the most defect coverage for the fewest test cases. They are the foundation of systematic test case design: not exhaustive testing, but structured coverage. Engineers who apply these techniques find bugs that exploratory testing and scripted happy-path testing both miss.

Step-by-step guide

  1. Define the equivalence classes for the text field

    A text field accepting 1-255 characters has three equivalence classes: below minimum (0 chars, invalid), within range (1-255 chars, valid), and above maximum (256+ chars, invalid). Any value within a class should behave identically; if 1 character is valid, 100 characters should also be valid.
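
    To make the classes concrete, here is a minimal Python sketch; `classify_text_length` is a hypothetical helper written for this illustration, not part of any framework:

    ```python
    # Hypothetical helper: map an input string to its equivalence class
    # for a text field accepting 1-255 characters.
    def classify_text_length(value: str) -> str:
        if len(value) < 1:
            return "invalid: below minimum"
        if len(value) <= 255:
            return "valid: within range"
        return "invalid: above maximum"

    # Any two values in the same class should behave identically.
    assert classify_text_length("") == "invalid: below minimum"
    assert classify_text_length("a") == classify_text_length("a" * 100)
    assert classify_text_length("a" * 256) == "invalid: above maximum"
    ```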

  2. Apply boundary value analysis to the text field

    For the lower boundary, test just below (0), at the boundary (1), and just above (2); for the upper boundary, test just below (254), at the boundary (255), and just above (256). That is 6 boundary test cases. Add one from the middle of the valid range (128 characters) as a representative case.
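
    A sketch of how these seven cases could be automated with pytest (step 4 discusses the automate/manual decision); `submit_text_field` is a stand-in for your real application driver, not an existing API:

    ```python
    import pytest

    class Result:
        def __init__(self, is_valid: bool):
            self.is_valid = is_valid

    def submit_text_field(value: str) -> Result:
        # Stand-in for the real application call (API client, Selenium,
        # Playwright, ...); here it simply validates per the 1-255 spec.
        return Result(1 <= len(value) <= 255)

    # (input length, whether the field should accept it)
    TEXT_BOUNDARY_CASES = [
        (0, False),    # just below the lower boundary
        (1, True),     # at the lower boundary
        (2, True),     # just above the lower boundary
        (254, True),   # just below the upper boundary
        (255, True),   # at the upper boundary
        (256, False),  # just above the upper boundary
        (128, True),   # mid-range representative
    ]

    @pytest.mark.parametrize("length,expected_valid", TEXT_BOUNDARY_CASES)
    def test_text_field_boundaries(length, expected_valid):
        assert submit_text_field("a" * length).is_valid == expected_valid
    ```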

  3. Repeat for the number field

    The number field (0-999) has the same structure. Boundaries: -1, 0, 1 at the lower end; 998, 999, 1000 at the upper end. Add a mid-range test (500). Decide: what should the field do with a decimal input (0.5)? A float string ("9.5")? A non-numeric string? These are additional equivalence classes you must define.
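
    The same parametrised pattern covers the number field; `submit_number_field` is again a hypothetical stand-in, and the undefined equivalence classes are noted rather than asserted:

    ```python
    import pytest

    class Result:
        def __init__(self, is_valid: bool):
            self.is_valid = is_valid

    def submit_number_field(raw: str) -> Result:
        # Stand-in for the real application call; accepts integers 0-999.
        return Result(raw.isdigit() and 0 <= int(raw) <= 999)

    NUMBER_BOUNDARY_CASES = [
        ("-1", False),    # just below the lower boundary
        ("0", True),      # at the lower boundary
        ("1", True),      # just above the lower boundary
        ("998", True),    # just below the upper boundary
        ("999", True),    # at the upper boundary
        ("1000", False),  # just above the upper boundary
        ("500", True),    # mid-range representative
    ]

    @pytest.mark.parametrize("raw,expected_valid", NUMBER_BOUNDARY_CASES)
    def test_number_field_boundaries(raw, expected_valid):
        assert submit_number_field(raw).is_valid == expected_valid

    # Equivalence classes still to be defined before automating:
    # decimal input ("0.5", "9.5"), non-numeric strings ("abc"), empty input.
    ```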

  4. Label each case: automate vs manual

    Mark each case with a decision. Boundary values are ideal automation candidates; they are stable, precise, and run in seconds. The non-numeric string cases are better suited to exploratory or manual testing because the expected behaviour may be ambiguous and worth observing rather than asserting.
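
    One way to make the decision visible in the test code is a custom pytest marker; `manual` here is a convention you would register in pytest.ini yourself, not a built-in marker:

    ```python
    import pytest

    @pytest.mark.parametrize("length", [0, 1, 2, 254, 255, 256, 128])
    def test_text_boundaries(length):
        ...  # automated: stable, precise, runs on every build

    @pytest.mark.manual  # custom marker; skip in CI with: pytest -m "not manual"
    def test_number_field_non_numeric_input():
        ...  # exploratory: behaviour is ambiguous, observe rather than assert
    ```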

  5. Run the cases and record actual behaviour

    Execute each case against the real application. For every case where actual behaviour differs from expected, file a brief observation: input, expected, actual. Even when a difference is not a bug (for example, the UI truncates at 255 characters rather than rejecting the input), it is worth documenting for the team.
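
    A minimal sketch of capturing those observations as a CSV with the standard library; the rows are illustrative placeholders:

    ```python
    import csv

    # One row per case where actual behaviour differed from expected.
    observations = [
        ("256-char string", "rejected with a validation error",
         "UI silently truncates to 255 characters"),
        ("0.5", "rejected as non-integer",
         "accepted; stored value unclear"),
    ]

    with open("boundary_observations.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["input", "expected", "actual"])
        writer.writerows(observations)
    ```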
