Metadata-Version: 1.0
Name: pytest-regtest
Version: 0.2.0
Summary: py.test plugin for regression tests
Home-page: https://sissource.ethz.ch/uweschmitt/pytest-regtest/tree/master
Author: Uwe Schmitt
Author-email: uwe.schmitt@id.ethz.ch
License: http://opensource.org/licenses/GPL-3.0
Description: 
        
        pytest-regtest
        ==============
        
        This *pytest* plugin captures the output of test functions so that it can be
        compared to the output recorded in earlier runs.
        This is a common technique for starting `TDD <http://en.wikipedia.org/wiki/Test-driven_development>`_
        when you have to refactor legacy code that ships without tests.
        
        To install and activate this plugin, run::
        
            $ pip install pytest-regtest
        
        from your command line.
        
        The plugin provides a fixture named *regtest* for recording data: you write to
        this fixture, which behaves like an output stream::
        
            def test_squares_up_to_ten(regtest):
        
                result = [i*i for i in range(10)]
        
                # one way to record output:
                print >> regtest, result
        
                # alternative method to record output:
                regtest.write("done")
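
        The example above only relies on *regtest* being file-like. As a quick
        illustration of that contract (a sketch, not part of the plugin:
        ``io.StringIO`` stands in for the fixture, and the Python 3 spelling
        ``print(..., file=...)`` replaces ``print >> regtest, ...``), the same two
        recording styles look like this::

            import io

            # stand-in for the regtest fixture, which behaves like a text stream
            regtest = io.StringIO()

            result = [i * i for i in range(10)]

            # one way to record output (Python 3 spelling of ``print >> regtest``)
            print(result, file=regtest)

            # alternative method to record output
            regtest.write("done")

            # everything written to the stream would end up in the recorded file
            recorded = regtest.getvalue()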
        
        To record the *approved* output, run *py.test* with the *--reset-regtest* flag::
        
            $ py.test --reset-regtest
        
        The recorded output is written to text files in the subfolder ``_regtest_outputs`` next to your
        test scripts.
        
        You can reset the recorded output for individual files and test functions::
        
            $ py.test --reset-regtest tests/test_00.py
            $ py.test --reset-regtest tests/test_00.py::test_squares_up_to_ten
        
        
        If you want to check that a test function still produces the same output, omit the
        flag and run your tests as usual::
        
            $ py.test
        
        Tests whose current output deviates from the recorded output fail, and a diff of
        the two is shown for each such test.
        
Platform: UNKNOWN
