by Asim Jalis
I have recently been reading Kaner's notes (and books) on
testing. He has a list of 11 testing techniques and claims that
each technique used alone eliminates only certain types of
errors. To create highly reliable software, multiple techniques
must be used in concert.
For more details look at:
Here is a list of the 11 techniques together with my notes on
what each one is. The URL above has notes on all the techniques.
The 11 dominating techniques or paradigms of black-box testing:
- Domain testing: In domain testing you look at the
individual fields or variables, partition the possible
values into equivalence classes (classes of values for which
you expect every value in the class to yield the same result in
a test), and examine the boundary conditions.
- Risk-based testing: 1. Make a prioritized list of risks. 2.
Perform testing that explores each risk. 3. As risks evaporate
and new ones emerge, adjust your test effort to stay focused on
the current crop. Imagine a problem and then look for it. Harsh
testing for vulnerable areas of the program.
- Scenario testing: In scenario testing, you devise stories of
how a particular user might use the application and then execute
the story. Challenging cases that reflect real use. Real,
motivating, credible, complex use; results are easy to evaluate.
- Function testing: Define all the functions or features that a
program provides. Determine how you would know if a function
worked. Test each function. Assesses capability rather than
reliability. Does not cover interaction.
- Specification-based testing: Check the product's conformance
with every statement in every spec, requirements document, etc.
Could also be User documentation testing.
- Regression testing: In regression testing, you're focusing on
previously identified (and fixed) bugs to ensure that they
don't reappear.
- Stress testing: Push system's resource requirements. Increase
load. Increase rate of hitting the system.
- User testing: Determine different roles that users will play.
Use the system as these different users will use it.
- State-model based testing: Create a finite state machine that
represents the system. Create test-cases by exploring all the
paths that the state transitions can take. And then apply these
test cases to the system.
- High volume automated testing: Generate a large number of test
cases using automated techniques and either hit the system with
all of them, use some kind of random sampling, or select
intelligently chosen representative test cases from the full set.
- Exploratory testing: Exploratory testing is defined as "any
testing in which the tester dynamically changes what they're
doing for test execution, based on information they learn as
they're executing their tests." To me, exploratory testing is a
"meta-type". While it does require its own mind set (and thus
qualifies as a separate item on the list), any of the other
types can also be done in an exploratory manner.
Next, to flesh these out, we apply them to printf.
- Domain testing: What are the equivalence classes of inputs and
what are their boundaries? Numbers, strings, floating points.
Empty string, one character string, negative numbers, positive
numbers, MAXINT, -MAXINT, MAXLONG, -MAXLONG. -1, tiny floating
point numbers, large floating point numbers. Vary n, m in n.m,
0 through large values, make n < m, m < n, n = m. Zero args,
one arg, many args. Use null. Pass in incorrect arguments.
- Risk-based testing: The highest risk might be the string
printf, since that risks buffer overruns. Test that first. The
obscure classes might be lower risk. Floats and ints would be
important too. Look for places where a buffer overrun might
occur. Look for memory exhaustion. Look for really slow cases.
- Scenario testing: Imagine a physicist uses this to generate a
report or a table of results, and execute that scenario.
- Function testing: Render number, string, floats, etc.
- Specification-based testing: Write a test for each statement in
the man page.
- Regression testing: Automate the tests and run them repeatedly.
- Stress testing: Use a very large string. Use a really large
format string. Use a very large number of arguments. See when
failure occurs. Use null. Pass in incorrect arguments. Pass
arguments in the wrong order. How does it indicate failure?
Test the return value.
- User testing: Define user roles. How would people use this
function? A programmer is one user. He might use it to generate
hello world type messages. Or for debugging messages. How fast
does it run? Does it require a flush to print out the debugging
output? What if a program crashes right after it prints?
Another user could be someone developing a website.
- State-model based testing: printf is a single-state function,
so there is little here to model.
- High volume automated testing: Run it repeatedly. Run it for
all the numbers. Run it for strings of length 0 to N. Run it
for all permutations of 3, 4, 5, or 6 arguments.
- Exploratory testing: Try different things. Impact on memory.
Measure malloc calls. Measure free calls. Count these calls.
Count how much is allocated. Impact of parameter length and
argument count on performance.