Module testing basics

A program is secure if you can depend on it to behave as you expect.

A module is not complete until you are certain that it works correctly. The easiest way to ensure that this is the case, and to protect yourself from introducing bugs in the future, is to create a test suite for your code. Perl has a fantastic testing culture and many great tools to make testing easy.

For further information on testing, read the Test::Tutorial documentation.

Why write tests?

Writing code without tests is setting yourself up for failure. Even the best programmers introduce errors into their code without realising it. You've almost certainly made mistakes in code you thought should work, even in this course so far.

While it is possible to execute our code a number of times with different inputs to see whether it behaves as we expect, writing tests allows us to automate this process. This increases the number of tests we can run, and prevents us from forgetting any.

As a general rule, our test suite will only ever grow. If we write a test that exposes a bug, and then fix that bug, we keep the test, just in case a later change introduces a similar bug.

What can we test?

We can test anything we can run. Typically, however, a lot of our testing will be of modules. These are easy: we load the module, and then write tests for each and every subroutine to make sure it returns the expected result for the inputs we specify.
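
For instance, a minimal test file using Test::More might look like this (assuming a hypothetical Maths module that provides an add() subroutine):

        use Test::More tests => 2;
        use Maths;    # hypothetical module providing add()

        is( Maths::add( 2, 2 ),  4, 'add() sums two positive numbers' );
        is( Maths::add( 2, -2 ), 0, 'add() handles a negative number' );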

Testing the outcome of scripts can be a little more difficult, especially if they make changes that may need to be reversed; however, the actual testing principles remain the same.

Coding with testing in mind

It is possible to write code that is relatively easy to test, or code that is almost impossible to test; as a consequence, it can be quite difficult to add tests after the fact. We recommend writing tests at the same time as you write your code, and keeping testability in mind as you go.
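
For example, a subroutine that returns its result is far easier to test than one that prints it; the sketch below uses purely illustrative subroutine names:

        # Hard to test: calculates and prints in one step, so a test would
        # have to capture STDOUT just to check the result.
        sub print_total {
            my (@prices) = @_;
            my $total = 0;
            $total += $_ for @prices;
            print "Total: $total\n";
            return;
        }

        # Easier to test: returns the result and leaves printing to the
        # caller, so a test can simply compare the return value.
        sub total {
            my (@prices) = @_;
            my $total = 0;
            $total += $_ for @prices;
            return $total;
        }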

Testing Strategies

Testing cannot prove the absence of bugs, although it can help identify them for elimination. It's impossible to write enough tests to prove that your program is flawless. However, a comprehensive test plan with an appropriate arsenal of tests can assist you in your goal of making your program defect-free.

There are two typical testing paradigms, as follows:

Black box testing

You have a specification, and you test that the code meets that specification. This doesn't require any knowledge of how the code works. For example, if a date can be input, you might try valid dates, impossible dates (such as the 30th of February), dates in unexpected formats, dates far in the past or future, and empty or non-date input.

The specification should state which of these inputs are valid and which are invalid, and how each should be handled; so should the documentation. For example, your program may only consider a date valid if it occurred in the last 150 years (in which case you'd also test dates 149, 150 and 151 years ago).
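
A black box test file for such a date routine might look something like this (is_valid_date() and the My::Dates module are hypothetical):

        use Test::More tests => 4;
        use My::Dates qw(is_valid_date);    # hypothetical validation routine

        # Black box: we check documented inputs and outputs only, without
        # knowing anything about how the validation is implemented.
        ok(   is_valid_date('2024-02-29'), 'leap day accepted'        );
        ok( ! is_valid_date('2024-02-30'), 'impossible date rejected' );
        ok( ! is_valid_date(''),           'empty input rejected'     );
        ok( ! is_valid_date('yesterday'),  'non-date text rejected'   );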

White box testing

You know how the code works, and you need to test the edge cases. For example, if you accept someone's address and then store it in a database field that accepts 120 characters, you'd check addresses just under, exactly at, and just over that limit (say 119, 120 and 121 characters), as well as an empty address.

You'd then verify that the data you retrieve is exactly the same as the data you stored, or that an error is raised where appropriate.
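
A white box test of that boundary might look something like this (store_address() and fetch_address() are hypothetical):

        use Test::More;
        use My::AddressBook qw(store_address fetch_address);    # hypothetical

        # White box: we know the column holds 120 characters, so we test
        # either side of that boundary.
        for my $length (119, 120, 121) {
            my $address = 'x' x $length;
            my $stored  = eval { store_address($address); 1 };

            if ($length <= 120) {
                ok( $stored, "$length character address stored" );
                is( fetch_address(), $address, 'retrieved data matches' );
            }
            else {
                ok( ! $stored, "$length character address rejected" );
            }
        }

        done_testing();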

Combining these ideas

It should be clear that a combination of white box and black box testing strategies is required to give us a robust test suite. For example, in the white box case above, we're not testing that the address really is an address, just that our code handles it correctly. If we were to add address validation, however, we'd look carefully at what it validates and add extra tests to exercise it (including tests we expect to fail).

Running our test suite

If we've created a module with module-starter, we will have a number of tests already created for us. These can be found in our t/ directory (My-Module/t in this case). Let's look at what tests we have to start with:

        00-load.t  boilerplate.t  manifest.t  pod-coverage.t  pod.t

00-load.t

This tests that our module can be loaded without errors. This should be the first test run, hence starting with the number 00.
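
Depending on your version of Module::Starter, the generated test looks something like this (with My::Module replaced by the name of your module):

        #!perl -T
        use Test::More tests => 1;

        BEGIN {
            use_ok( 'My::Module' ) || print "Bail out!\n";
        }

        diag( "Testing My::Module $My::Module::VERSION, Perl $], $^X" );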

boilerplate.t

This test warns the author against leaving boilerplate documentation in the README, Changes and module files under lib/.

Initially these tests are placed in a TODO block, so that their failure does not slow you down.

manifest.t

This test checks that your MANIFEST is up to date. The MANIFEST lists every file that your distribution ships, and it is essential if you are going to distribute your module on CPAN.
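
The MANIFEST itself is just a plain text file with one path per line; for a freshly generated My-Module distribution it might look something like:

        Changes
        MANIFEST
        Makefile.PL
        README
        lib/My/Module.pm
        t/00-load.t
        t/boilerplate.t
        t/manifest.t
        t/pod-coverage.t
        t/pod.t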

pod.t

This tests whether the POD (Plain Old Documentation) in your files is valid.

pod-coverage.t

This tests whether you appear to have provided sufficient documentation for your code. Every public subroutine name must appear in a =head or =item block in your POD.
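
For example, the following (purely illustrative) add() subroutine would count as documented, because its name appears in a =head2 block directly above it:

        =head2 add

        Takes a list of numbers and returns their sum.

        =cut

        sub add {
            my (@numbers) = @_;
            my $total = 0;
            $total += $_ for @numbers;
            return $total;
        }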

Depending on your version of Module::Starter you may have additional tests as well.

We can run these tests via either of the following:

        perl Makefile.PL
        make test

        # or

        prove -l t/

If you are on a Windows machine you may need to use dmake instead of make.

In either case the output should look almost the same:

        t/00-load.t ....... 1/2 # Testing Maths 0.01, Perl 5.010001, /usr/bin/perl
        t/00-load.t ....... ok
        t/boilerplate.t ... ok
        t/manifest.t ...... skipped: Author tests not required for installation
        t/pod-coverage.t .. ok
        t/pod.t ........... ok
        All tests successful.

        Test Summary Report
        -------------------
        t/boilerplate.t (Wstat: 0 Tests: 4 Failed: 0)
          TODO passed:   3-4
        Files=5, Tests=10,  1 wallclock secs ( 0.11 usr  0.03 sys +  0.64 cusr  0.06 csys =  0.84 CPU)
        Result: PASS

When we run perl Makefile.PL it creates a Makefile for us. Amongst other things, this records the names of the tests that exist in our distribution. If you have not added or removed any test files since you last ran perl Makefile.PL then you do not need to run that command again before running make test.

Conversely, whenever you do add or remove test files, you must remember to run perl Makefile.PL before you run make test.

prove always runs the files that are currently in t/ at the time it is invoked.

Next tip

Our next tip will cover writing basic tests with Test::More.

This Perl tip and associated text is copyright Perl Training Australia. You may freely distribute this text so long as it is distributed in full with this copyright notice attached.

If you have any questions please don't hesitate to contact us:

Email: contact@perltraining.com.au
Phone: 03 9354 6001 (Australia)
International: +61 3 9354 6001
