This repository provides a script that generates a starting point for automated tests from a YAML input file. The script emits a pytest test function for each test case specified in the YAML file; test cases can also be marked to be skipped or to be expected to fail.
## Installation

To use this script, install the required dependencies:

```
pip install -r requirements.txt
```
## Usage

```
Generate pytest test cases from pytest_input.yaml

positional arguments:
  module_name  Name of the module to generate test cases for

optional arguments:
  -h, --help   show this help message and exit
```

To use the test generator, create a YAML file with test cases and pass the name of the Python module to test as a command-line argument when running the `pytest_maker.py` script:

```
python pytest_maker.py my_module
```

This will generate a `test_my_module.py` file containing pytest test functions.
## Example

As an example, suppose we have a module called `my_module.py` with the following functions:
```python
from math import pi, sin
from random import randint

def add(x, y):
    return x + y

def subtract(x, y):
    return x - y

def multiply(x, y):
    return x * y

def divide(x, y):
    return x / y

def concat_str(a, b):
    return a + b

def concat_list(a, b):
    return a + b

def pi_multiply(a):
    return pi * a
```
## YAML Input File

The `pytest_input.yaml` file specifies the test cases to be generated by the `pytest_maker.py` script. Each test case is defined as a key-value pair in the format:

```yaml
function_name$test_name:
  args: $value1$value2...$valueN
  equals: equals_output_value
  outtype: output_type
  skip: message_to_skip
  fail: message_to_fail
```
Where:

- `function_name` is the name of the function to be tested.
- `test_name` is a unique identifier for the test case.
- `args` is a string of the arguments to pass to the function, each preceded by a dollar sign (`$`).
- `equals` asserts that the function's output `==` this value.
- `more` asserts that the function's output is `>` this value.
- `moreoe` asserts that the function's output is `>=` this value.
- `less` asserts that the function's output is `<` this value.
- `lessoe` asserts that the function's output is `<=` this value.
- `eval_equals` asserts that the function's output `==` the `eval()` of this expression.
- `eval_more` asserts that the function's output is `>` the `eval()` of this expression.
- `eval_moreoe` asserts that the function's output is `>=` the `eval()` of this expression.
- `eval_less` asserts that the function's output is `<` the `eval()` of this expression.
- `eval_lessoe` asserts that the function's output is `<=` the `eval()` of this expression.
- `outtype` (optional) is the expected output data type.
- `timeout` (optional) is a timeout for the test in seconds, emitted as `@pytest.mark.timeout`.
- `skip` (optional) is a message explaining why the test is skipped.
- `fail` (optional) is a message explaining why the test is expected to fail.

Note: only one of `skip` or `fail` can be used per test case.
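For illustration, the `$`-separated `args` string can be thought of as being split on `$`, with each piece read as a Python literal where possible. The sketch below is a hypothetical reading of that format, not the actual parser in `pytest_maker.py`:

```python
# Hypothetical sketch of parsing a '$'-separated args string;
# pytest_maker.py's real implementation may differ.
from ast import literal_eval

def parse_args(arg_string):
    """Split '$2$3' into pieces and evaluate each as a Python literal."""
    parts = arg_string.split("$")[1:]  # leading '$' leaves an empty first element
    values = []
    for part in parts:
        try:
            values.append(literal_eval(part))  # '2' -> 2, "['555']" -> ['555']
        except (ValueError, SyntaxError):
            values.append(part)  # non-literals (e.g. fixture_2) stay as strings
    return values

print(parse_args("$2$3"))      # [2, 3]
print(parse_args("$[1]$[2]"))  # [[1], [2]]
```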
A key can also take the following format:

```yaml
fixture$test_name:
  args: $value_to_return
```

to create custom fixtures:

```python
@pytest.fixture
def test_fixture_test_name():
    return value_to_return
```
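The `function_name$test_name` keys map onto generated identifiers in a predictable way: `add$simple_add` becomes `test_add_simple_add`, and `fixture$1` becomes the fixture `test_fixture_1`. A hypothetical sketch of that mapping (the real script may differ in details):

```python
# Hypothetical sketch of deriving generated names from a YAML key;
# pytest_maker.py's real implementation may differ.
def generated_name(key: str) -> str:
    name, test_id = key.split("$", 1)
    if name == "fixture":
        # 'fixture$1' -> a pytest fixture named 'test_fixture_1'
        return f"test_fixture_{test_id}"
    # 'add$simple_add' -> a test function named 'test_add_simple_add'
    return f"test_{name}_{test_id}"

print(generated_name("fixture$1"))       # test_fixture_1
print(generated_name("add$simple_add"))  # test_add_simple_add
```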
Here's an example `pytest_input.yaml` file:

```yaml
fixture$1:
  args: $['555']$['666']
fixture$2:
  args: $5
add$simple_add:
  skip: 'The function should be skipped'
  args: $2$3
  equals: 5
  outtype: int
subtract$simple_subtract:
  args: $3$3
  equals: 0
  less: 7
  outtype: int
multiply$1_to_7:
  fail: 'The value is not true'
  args: $1$7
  equals: 8
  more: 6
  outtype: int
pi_multiply$pi_times_5:
  args: $5
  eval_equals: 'pi*5'
  eval_lessoe: 'pi*10'
  outtype: float
pi_multiply$pi_times_math:
  args: $fixture_2
  eval_moreoe: 'randint(1, 4)'
  eval_lessoe: 'pi*sin(90)*20'
  outtype: float
concat_list$list:
  timeout: 3
  args: $[1]$[2]
  equals: [1, 2]
  outtype: List
concat_list$str:
  args: $fixture_1*
  equals: ['555', '666']
  outtype: List
```
Then, running `python pytest_maker.py my_module` will generate a file `test_my_module.py` with the following content. The script then prompts the user to run the generated tests; alternatively, they can be run later with `pytest`:
```python
import pytest
from math import pi
from math import sin
from random import randint
from typing import *
from my_module import *

@pytest.fixture
def test_fixture_1():
    return ['555'], ['666']

@pytest.fixture
def test_fixture_2():
    return 5

@pytest.mark.skip(
    reason="The function should be skipped")
def test_add_simple_add() -> None:
    result = add(2, 3)
    assert isinstance(result, int)
    assert result == 5

def test_subtract_simple_subtract() -> None:
    result = subtract(3, 3)
    assert isinstance(result, int)
    assert result == 0
    assert result < 7

@pytest.mark.xfail(
    reason="The value is not true")
def test_multiply_1_to_7() -> None:
    result = multiply(1, 7)
    assert isinstance(result, int)
    assert result == 8
    assert result > 6

def test_pi_multiply_pi_times_5() -> None:
    result = pi_multiply(5)
    assert isinstance(result, float)
    assert result == pi*5
    assert result <= pi*10

def test_pi_multiply_pi_times_math(test_fixture_2) -> None:
    result = pi_multiply(test_fixture_2)
    assert isinstance(result, float)
    assert result <= pi*sin(90)*20
    assert result >= randint(1, 4)

@pytest.mark.timeout(3)
def test_concat_list_list() -> None:
    result = concat_list([1], [2])
    assert isinstance(result, List)
    assert result == [1, 2]

def test_concat_list_str(test_fixture_1) -> None:
    result = concat_list(*test_fixture_1)
    assert isinstance(result, List)
    assert result == ['555', '666']
```