A program is secure if you can depend on it to behave as you expect.
A module is not complete until you are certain that it works correctly. The easiest way to ensure this, and to protect yourself from introducing bugs in the future, is to create a test suite for your code. Perl has a fantastic testing culture and many great tools that make testing easy.
For further information on testing read the Test::Tutorial documentation.
Writing code without tests is setting yourself up for failure. Even the best programmers introduce errors into their code without realising it. You've certainly made some mistakes in code you thought should work, just in this course so far.
While it is possible to execute your code a number of times with different inputs to see whether it behaves as we expect, writing tests allows us to automate this process. This increases the number of tests we can run, and prevents us from forgetting any.
As a general rule, our test suite will only ever grow. If we write a test that exposes a bug, and then fix that bug, we keep the test, just in case a later change introduces a similar bug.
We can test anything we can run. Typically, however, a lot of your testing will be testing modules. These are easy. We load the module, and then we write tests for each and every subroutine, to make sure that they return the expected result for the inputs we specify.
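For example, a module test compares each subroutine's return value against the result we expect, using Test::More (which ships with Perl). Here is a minimal sketch; the add() subroutine is a stand-in of our own invention, defined inline rather than loaded from a real module:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 3;

# A stand-in for a subroutine from your module.
sub add {
    my ($x, $y) = @_;
    return $x + $y;
}

# Test known inputs against the results we expect.
is( add(2, 3),  5, 'simple addition'  );
is( add(-2, 2), 0, 'negatives cancel' );
is( add(0, 0),  0, 'zero plus zero'   );
```

In a real test file you would replace the inline subroutine with a `use_ok('My::Module');` line and call the module's own subroutines.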
Testing the outcome of scripts can be a little more difficult, especially if they make changes that may need to be reversed, however the actual testing principles remain the same.
It is possible to write code that is relatively easy to test, or almost impossible to test. As a consequence it can be quite difficult to add tests after the fact. We recommend writing tests at the same time as you write your code, and keeping the following rules in mind:
Keep each subroutine small: have it do one task, and do that task well.
Throw errors on failure. Use Carp to do this appropriately.
Where possible, return values rather than printing content out to STDOUT.
Write each subroutine as independently as possible.
Pass each subroutine all the arguments it needs, rather than have it rely on values from elsewhere in the program, or environment. It's okay if it, in turn, calls other subroutines so long as they too are written independently.
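Here is a sketch of a subroutine written with these rules in mind. The fahrenheit_to_celsius() name and its input check are our own invention for illustration; the point is that it is small, independent, takes all its data as arguments, returns a value rather than printing, and uses Carp to report failure:

```perl
use strict;
use warnings;
use Carp;

# Small, independent, returns a value, and croaks on bad input:
# all of which make it easy to test.
sub fahrenheit_to_celsius {
    my ($fahrenheit) = @_;

    croak "fahrenheit_to_celsius() needs a numeric argument"
        if !defined $fahrenheit
        or $fahrenheit !~ /^-?\d+(?:\.\d+)?$/;

    return ( $fahrenheit - 32 ) * 5 / 9;
}

print fahrenheit_to_celsius(212), "\n";    # prints 100
```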
Testing cannot prove the absence of bugs, although it can help identify them for elimination. It's impossible to write enough tests to prove that your program is flawless. However, a comprehensive test plan with an appropriate arsenal of tests can assist you in your goal of making your program defect-free.
When testing there are two typical paradigms:

Black box testing

You have a specification, and you test that the code meets that specification. This doesn't require any knowledge of how the code works. For example, if a date could be input, you might try each of the following:
One year from now
One year ago
One hundred years from now
One hundred, two hundred or three hundred years ago
29 February, on a leap year
29 February, not on a leap year
32nd January, any year
31st April, any year
Different date formats (20-01-2011 vs 20/01/2011).
Different date arrangements (20-01-2011 vs 01-20-2011)
The specification should state which of the above are valid and which are invalid, and how each should be handled; your documentation should do the same. For example, your program may only consider a date valid if it occurred in the last 150 years (in which case you'd test 149, 150 and 151 years ago as well).
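A few of the date checks above can be sketched as black box tests. The valid_date() and is_leap_year() subroutines below are hypothetical stand-ins, defined inline so the example runs; in practice you would test your real date-handling code:

```perl
use strict;
use warnings;
use Test::More tests => 4;

# Hypothetical validator: true if day/month/year is a real date.
sub valid_date {
    my ($day, $month, $year) = @_;
    my @days_in_month = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31);

    return 0 if $month < 1 || $month > 12 || $day < 1;

    my $max = $days_in_month[ $month - 1 ];
    $max = 29 if $month == 2 && is_leap_year($year);

    return $day <= $max;
}

sub is_leap_year {
    my ($year) = @_;
    return ( $year % 4 == 0 && $year % 100 != 0 ) || $year % 400 == 0;
}

# Tests drawn straight from the specification checklist above.
ok(  valid_date(29, 2, 2012), '29 February on a leap year'   );
ok( !valid_date(29, 2, 2011), '29 February in a common year' );
ok( !valid_date(32, 1, 2011), '32nd of January'              );
ok( !valid_date(31, 4, 2011), '31st of April'                );
```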
White box testing

You know how the code works, and you test the edge cases. For example, if you accept someone's address and then store it in a database field that accepts 120 characters, then you'd check:
A zero character address
An address containing 1 character
An address consisting of just one database meta-character, such as a single quote (')
An address of 119 characters
An address of 120 characters
An address of 121 characters
An address of exactly 120 characters which also includes several meta-characters, such as ', that will need to be escaped for the database
You'd then verify that the data you retrieve is exactly the same as the data you stored, or that an error is raised as appropriate.
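The length boundaries above can be sketched as white box tests. The address_fits() subroutine is a hypothetical stand-in for the real storage code, checking only the 120-character limit:

```perl
use strict;
use warnings;
use Test::More tests => 5;

# Hypothetical: true if the address fits the 120-character column.
sub address_fits {
    my ($address) = @_;
    return defined $address
        && length($address) > 0
        && length($address) <= 120;
}

# Edge cases either side of each boundary.
ok( !address_fits(''),        'zero-character address rejected'     );
ok(  address_fits('a'),       'one-character address accepted'      );
ok(  address_fits('a' x 119), '119 characters accepted'             );
ok(  address_fits('a' x 120), '120 characters (the limit) accepted' );
ok( !address_fits('a' x 121), '121 characters rejected'             );
```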
It should be clear that a combination of white box and black box testing strategies is required to give us a robust test suite. For example, in the white box list above, we're not testing that the address is an address, just that our code handles it correctly. If we were to add address validation, however, then we'd look carefully at what it validates and add extra tests to exercise that (including tests we expect to fail).
If we've created a module with module-starter we will have a bunch of tests already created for us. These can be found in our t/ directory (My-Module/t in this case). Let's look at what tests we have to start with:

    00-load.t  boilerplate.t  manifest.t  pod-coverage.t  pod.t
00-load.t

This tests that our module can be loaded without errors. This should be the first test run, hence its name starting with the number 00.
boilerplate.t

This test warns the author against leaving boilerplate documentation in the README, Changes and lib/module files.
Initially these tests are marked in a TODO block, to prevent their failure from slowing you down.
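For reference, a TODO block looks like the sketch below (the failing test is contrived for illustration). Tests inside the block may fail without failing the suite, which is why the boilerplate checks don't slow you down:

```perl
use strict;
use warnings;
use Test::More tests => 2;

ok( 1, 'a regular test' );

# $TODO is the package variable Test::More inspects.
our $TODO;

TODO: {
    local $TODO = 'boilerplate still needs replacing';

    # This fails, but is reported as TODO, so the suite still passes.
    ok( 0, 'README no longer contains boilerplate' );
}
```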
manifest.t

This test checks that your MANIFEST file is up to date. The manifest contains the list of files your distribution contains. This information is essential if you are going to distribute your module on CPAN.
pod.t

This tests whether the POD in your files is valid.
pod-coverage.t

This tests whether you appear to have provided sufficient documentation for your code. Every public subroutine must be documented in your POD.
Depending on your version of Module::Starter you may have additional tests as well.
We can run these tests via either of the following:

    perl Makefile.PL
    make test

    # or

    prove -l lib/ t/
If you are on a Windows machine you may need to use dmake or nmake in place of make.
In either case the output should look almost the same:
    t/00-load.t ....... 1/2 # Testing Maths 0.01, Perl 5.010001, /usr/bin/perl
    t/00-load.t ....... ok
    t/boilerplate.t ... ok
    t/manifest.t ...... skipped: Author tests not required for installation
    t/pod-coverage.t .. ok
    t/pod.t ........... ok
    All tests successful.

    Test Summary Report
    -------------------
    t/boilerplate.t (Wstat: 0 Tests: 4 Failed: 0)
      TODO passed:   3-4
    Files=5, Tests=10,  1 wallclock secs ( 0.11 usr 0.03 sys + 0.64 cusr 0.06 csys = 0.84 CPU)
    Result: PASS
When we run perl Makefile.PL it creates a Makefile for us. Amongst other things, this records the names of the tests that exist in our distribution. If you have not added or removed any test files since you last ran perl Makefile.PL, then you do not need to run that command again before make test. Conversely, whenever you do add or remove test files, you must remember to re-run perl Makefile.PL before you run make test. prove, on the other hand, always runs the files that are in t/ at the time it is invoked.
Our next tip will cover writing basic tests.
This Perl tip and associated text is copyright Perl Training Australia. You may freely distribute this text so long as it is distributed in full with this copyright notice attached.
If you have any questions please don't hesitate to contact us:
Phone: 03 9354 6001 (Australia)
International: +61 3 9354 6001
Copyright 2001-2014 Perl Training Australia. Contact us at email@example.com