Planet Pytest

December 09, 2016

pytest-tricks

Load pytest plugins dynamically

There is a way to pass fixtures from Python code to tests via the plugins argument of pytest.main().

Creating a fixture

To create a fixture we can use the @pytest.fixture decorator:

@pytest.fixture
def info(request):
    return "Information!"

Passing a plugin to pytest

To pass something to pytest-executed tests we can use pytest's plugin mechanism. (For now, let's say that myplugin contains the actual plugin; I'll show later how to create it.)

pytest.main([path_to_test, '-v'], plugins=[myplugin])

Please note that plugins takes a list of plugins, not a single plugin.

Creating plugin

There are two ways to create a plugin:

  • Use a stand-alone module
  • Create a class or other dotted object (an object with members accessible by dot: plugin.something).

Module way

Use a separate file to contain the plugin code:

myplugin.py:

import pytest

@pytest.fixture
def info(request):
    return "Information"

The actual call of pytest:

import pytest
import myplugin

pytest.main(['mytest.py', '-v'], plugins=[myplugin])

Fixture usage (mytest.py):

def test_foo(info):
    assert info == "Information"

Class way

I find this way preferable, because it gives me a higher degree of freedom during class initialization: I can put all the information I want inside the class in a very natural way.

Plugin and fixture construction and the test call:

import pytest

class MyPlugin(object):
    def __init__(self, data):
        self.data = data

    @pytest.fixture
    def info(self, request):
        return self.data

myplugin = MyPlugin('Information')

pytest.main("mytest.py", plugins=[myplugin])

The test is exactly the same as before (mytest.py):

def test_foo(info):
    assert info == "Information"

More than one fixture

It is very easy to put any number of fixtures into a class:

import pytest

class MyPlugin(object):
    def __init__(self, data):
        self.data = data

    @pytest.fixture
    def info(self, request):
        return self.data

    @pytest.fixture
    def length(self, request):
        return len(self.data)

myplugin = MyPlugin('information')

pytest.main("mytest.py", plugins=[myplugin])

A slightly upgraded version of the test accepts two fixtures (mytest.py):

def test_foo(info, length):
    assert len(info) == length

by George Shuklin at December 09, 2016 12:00 AM

November 12, 2016

pytest-tricks

ids for fixtures and parametrize

parameters for tests

pytest comes with a handful of powerful tools to generate parameters for a test, so you can run various scenarios against the same test implementation.

  • params on a @pytest.fixture
  • parametrize marker
  • pytest_generate_tests hook with metafunc.parametrize

All of the above have their individual strengths and weaknesses. In this post I'd like to cover ids for tests and why I think it's a good idea to use them.

Please check out older posts on this blog to find out more about parametrization in pytest, if you haven't already.

test items

To understand how pytest generates ids for tests, we first need to know what test items are.

For pytest, a test item is a single test, defined by an underlying test function along with the setup and teardown code for the fixtures it uses. Essentially, everything we need to run a test. Suffice it to say that this is a slightly simplified explanation, but that's all we need to know for now.

Here's a code example for clarification:

@pytest.mark.parametrize(
    'number, word', [
        (1, '1'),
        (3, 'fizz'),
        (5, 'buzz'),
        (8, '8'),
        (10, 'buzz'),
        (15, 'fizzbuzz'),
    ]
)
def test_fizzbuzz(number, word):
    assert fizzbuzz(number) == word

There is a single test function in above code example. It calls a function called fizzbuzz with an integer and checks that its return value matches a certain string.

Now when we run this, pytest creates a number of test items under the hood.

  • Item with function: test_fizzbuzz, number: 1, word: '1'
  • Item with function: test_fizzbuzz, number: 3, word: 'fizz'
  • Item with function: test_fizzbuzz, number: 5, word: 'buzz'
  • Item with function: test_fizzbuzz, number: 8, word: '8'
  • Item with function: test_fizzbuzz, number: 10, word: 'buzz'
  • Item with function: test_fizzbuzz, number: 15, word: 'fizzbuzz'

This is exactly what we can see in the test result log when running pytest in normal mode (not quiet or verbose).

what are ids?

Now pytest lets us add ids to test items to make them easier to distinguish in the log, especially in verbose mode.

auto-generated ids

If you leave it up to pytest to generate ids, you will see that most of them are pretty good, but others feel somewhat random. The reason is that non-primitive types such as dict, list, tuple or instances of classes are non-trivial to translate into ids.

The following test is parametrized, and the id for a test item will be a representation of each individual parameter, joined with a -.

class CookiecutterTemplate:
    def __init__(self, name, url):
        self.name = name
        self.url = url

PYTEST_PLUGIN = CookiecutterTemplate(
    'pytest-plugin',
    'https://github.com/pytest-dev/cookiecutter-pytest-plugin',
)

@pytest.mark.parametrize(
    'a, b',
    [
        (1, {'Two Scoops of Django': '1.8'}),
        (True, 'Into the Brambles'),
        ('Jason likes cookies', [1, 2, 3]),
        (PYTEST_PLUGIN, 'plugin_template'),
    ]
)
def test_foobar(a, b):
    assert True

$ pytest -v produces the following report:

============================ test session starts =============================
collecting ... collected 4 items

test_ids.py::test_foobar[1-b0] PASSED
test_ids.py::test_foobar[True-Into the Brambles] PASSED
test_ids.py::test_foobar[Jason likes cookies-b2] PASSED
test_ids.py::test_foobar[a3-plugin_template] PASSED

========================== 4 passed in 0.03 seconds ==========================

As you can see, whenever pytest encounters one of the non-primitives it uses the parametrized argument name with a suffix instead.

For instance a tuple of str and list, such as:

('Jason likes cookies', [1, 2, 3])

is translated to

Jason likes cookies-b2

explicit ids

The good news is that you can define ids yourself rather than leaving it up to pytest to somehow figure them out.

PYTEST_PLUGIN = CookiecutterTemplate(
    'pytest-plugin',
    'https://github.com/pytest-dev/cookiecutter-pytest-plugin',
)


@pytest.mark.parametrize(
    'a, b',
    [
        (1, {'Two Scoops of Django': '1.8'}),
        (True, 'Into the Brambles'),
        ('Jason likes cookies', [1, 2, 3]),
        (PYTEST_PLUGIN, 'plugin_template'),
    ], ids=[
        'int and dict',
        'bool and str',
        'str and list',
        'CookiecutterTemplate and str',
    ]
)
def test_foobar(a, b):
    assert True

Running $ pytest -v again produces:

============================ test session starts =============================
collecting ... collected 4 items

test_ids.py::test_foobar[int and dict] PASSED
test_ids.py::test_foobar[bool and str] PASSED
test_ids.py::test_foobar[str and list] PASSED
test_ids.py::test_foobar[CookiecutterTemplate and str] PASSED

========================== 4 passed in 0.01 seconds ==========================

Note that passing a list of str values to the ids keyword sets ids per parameter combination in a marker, not for individual parameters. See how there is no - in the logged ids?

"Hey, you've talked about tests, items, ids, and now markers?!"

I know... it can be confusing. I hope to make this as clear as possible as we go on. Hopefully, by the end of this post, when reading the conclusion, you'll know how ids work and how you can use them to make your test suite more maintainable.

markers

As I've mentioned earlier, str id values are applied to a specific parameter combination of a marker rather than to test items or individual parameters. To illustrate this, let's have a look at the following code example.

PYTEST_PLUGIN = CookiecutterTemplate(
    'pytest-plugin',
    'https://github.com/pytest-dev/cookiecutter-pytest-plugin',
)


@pytest.mark.parametrize(
    'a, b',
    [
        (1, {'Two Scoops of Django': '1.8'}),
        (True, 'Into the Brambles'),
        ('Jason likes cookies', [1, 2, 3]),
        (PYTEST_PLUGIN, 'plugin_template'),
    ], ids=[
        'int and dict',
        'bool and str',
        'str and list',
        'CookiecutterTemplate and str',
    ]
)
@pytest.mark.parametrize(
    'c',
    [
        'hello world',
        123,
    ],
    ids=[
        'str',
        'int',
    ],
)
def test_foobar(a, b, c):
    assert True

Above is the same test from the previous section, but uses an additional test parameter c, which is set up with another parametrize marker.

So here's what you see when you run this in --verbose mode:

============================ test session starts =============================
collecting ... collected 8 items

test_multiple_markers.py::test_foobar[str-int and dict] PASSED
test_multiple_markers.py::test_foobar[str-bool and str] PASSED
test_multiple_markers.py::test_foobar[str-str and list] PASSED
test_multiple_markers.py::test_foobar[str-CookiecutterTemplate and str] PASSED
test_multiple_markers.py::test_foobar[int-int and dict] PASSED
test_multiple_markers.py::test_foobar[int-bool and str] PASSED
test_multiple_markers.py::test_foobar[int-str and list] PASSED
test_multiple_markers.py::test_foobar[int-CookiecutterTemplate and str] PASSED

========================== 8 passed in 0.01 seconds ==========================

As you can see from the printed ids, for instance [int-bool and str], a string value is taken from each marker and joined with -, just as for the automatically generated ids.

ids callables

Instead of providing str values for test items, you can also pass in a function or method that will be called for every single parameter and is expected to return a str id. Unlike str ids, the callable is invoked for every individual parameter!

If your callable returns None, pytest falls back to the auto-generated id for that particular parameter.

PYTEST_PLUGIN = CookiecutterTemplate(
    'pytest-plugin',
    'https://github.com/pytest-dev/cookiecutter-pytest-plugin',
)


def id_func(param):
    if isinstance(param, CookiecutterTemplate):
        return 'template {.name}'.format(param)
    return repr(param)


@pytest.mark.parametrize(
    'a, b',
    [
        (1, {'Two Scoops of Django': '1.8'}),
        (True, 'Into the Brambles'),
        ('Jason likes cookies', [1, 2, 3]),
        (PYTEST_PLUGIN, 'plugin_template'),
    ],
    ids=id_func,
)
@pytest.mark.parametrize(
    'c',
    [
        'hello world',
        123,
    ],
    ids=id_func,
)
def test_foobar(a, b, c):
    assert True

Running this with $ pytest -v produces:

============================ test session starts =============================
collecting ... collected 8 items

test_markers.py::test_foobar['hello world'-1-{'Two Scoops of Django': '1.8'}] PASSED
test_markers.py::test_foobar['hello world'-True-'Into the Brambles'] PASSED
test_markers.py::test_foobar['hello world'-'Jason likes cookies'-[1, 2, 3]] PASSED
test_markers.py::test_foobar['hello world'-template pytest-plugin-'plugin_template'] PASSED
test_markers.py::test_foobar[123-1-{'Two Scoops of Django': '1.8'}] PASSED
test_markers.py::test_foobar[123-True-'Into the Brambles'] PASSED
test_markers.py::test_foobar[123-'Jason likes cookies'-[1, 2, 3]] PASSED
test_markers.py::test_foobar[123-template pytest-plugin-'plugin_template'] PASSED

========================== 8 passed in 0.02 seconds ==========================
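
If we only cared about custom ids for CookiecutterTemplate instances, the callable could also return None for everything else and let pytest fall back to its auto-generated ids - a minimal variation of the id_func from above:

def id_func(param):
    if isinstance(param, CookiecutterTemplate):
        return 'template {.name}'.format(param)
    return None  # pytest auto-generates the id for this parameter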

id hook

A new hook called pytest_make_parametrize_id was added in pytest 3.0 that makes it easy to centralize id generation. One of the reasons why you might want to do this, is when you use instances of your custom classes for parametrized tests. I personally don't think it's a good idea to change the classes, by modifying the __repr__ or __str__ magic methods, just so that you get this extra convenience in your tests. Instead I would encourage you to try out this hook.

Let's have a look at how this hook works:

PYTEST_PLUGIN = CookiecutterTemplate(
    'pytest-plugin',
    'https://github.com/pytest-dev/cookiecutter-pytest-plugin',
)


@pytest.mark.parametrize(
    'a, b',
    [
        (1, {'Two Scoops of Django': '1.8'}),
        (True, 'Into the Brambles'),
        ('Jason likes cookies', [1, 2, 3]),
        (PYTEST_PLUGIN, 'plugin_template'),
    ],
)
@pytest.mark.parametrize(
    'c',
    [
        'hello world',
        123,
    ],
)
def test_foobar(a, b, c):
    assert True

The test and the markers are identical to the example from the previous section, except that the ids keyword argument is now gone from the markers.

Instead, we implement the hook in a conftest.py file:

# -*- coding: utf-8 -*-

from templates import CookiecutterTemplate


def pytest_make_parametrize_id(config, val):
    if isinstance(val, CookiecutterTemplate):
        return 'template {.name}'.format(val)
    return repr(val)

val is the value that will be passed into the test as a particular parameter, and config is the test run config, so you could check for command-line flags etc. if you want to.

We effectively moved the implementation of our id_func to this hook, which means we don't need to set ids in all of the @pytest.mark.parametrize markers, as long as we are happy with the way it generates ids. We can still overwrite them explicitly by setting a str id or passing in a callable.
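
Since the hook receives config, id generation can even depend on how pytest was invoked. A small sketch - here checking the built-in verbose option, though any custom flag registered via pytest_addoption would work the same way:

def pytest_make_parametrize_id(config, val):
    if isinstance(val, CookiecutterTemplate):
        return 'template {.name}'.format(val)
    if config.getoption('verbose'):
        return repr(val)
    return None  # fall back to auto-generated ids in non-verbose runs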

fixtures

Ids for fixtures are in fact easier to understand than those for parametrize markers, as they don't have any edge cases that you need to be aware of. A fixture always returns a single value, which means that regardless of whether you use str ids, an id callable or the pytest_make_parametrize_id hook, the id will always be applied to the very parameter that the fixture returns.

@pytest.fixture(params=[
    CookiecutterTemplate(
        name='cookiecutter-pytest-plugin',
        url='https://github.com/pytest-dev/cookiecutter-pytest-plugin',
    ),
    CookiecutterTemplate(
        name='cookiecutter-django',
        url='https://github.com/pydanny/cookiecutter-django',
    ),
], ids=[
    'cookiecutter-pytest-plugin',
    'cookiecutter-django'
])
def template(request):
    return request.param


@pytest.fixture(params=[
    'pydanny',
    'audreyr',
    'michaeljoseph',
])
def github_user(request):
    return request.param


def test_template(template, github_user):
    assert True

Running this with $ pytest -v produces:

============================ test session starts =============================
collecting ... collected 6 items

test_markers.py::test_template[cookiecutter-pytest-plugin-pydanny] PASSED
test_markers.py::test_template[cookiecutter-pytest-plugin-audreyr] PASSED
test_markers.py::test_template[cookiecutter-pytest-plugin-michaeljoseph] PASSED
test_markers.py::test_template[cookiecutter-django-pydanny] PASSED
test_markers.py::test_template[cookiecutter-django-audreyr] PASSED
test_markers.py::test_template[cookiecutter-django-michaeljoseph] PASSED

========================== 6 passed in 0.04 seconds ==========================

Conclusion

  1. If you don't set ids, pytest will generate them for you

  2. If you set them explicitly with str values

    a. they are set for a parameter combination in case of @pytest.mark.parametrize

    b. or a value returned by @pytest.fixture

  3. If you use a callable

    a. it will be invoked for each parameter in case of @pytest.mark.parametrize

    b. or a value returned by @pytest.fixture

  4. If you use the pytest_make_parametrize_id hook

    a. it will be invoked for each parameter in case of @pytest.mark.parametrize

    b. or a value returned by @pytest.fixture

So the only thing to keep in mind really is that str ids for pytest.mark.parametrize are applied per parameter combination, not per individual parameter.

Hope this helps!

by Raphael Pierzina at November 12, 2016 12:00 AM

July 12, 2016

qutebrowser development blog

Sending out qutebrowser and pytest stickers

Last Thursday, I sent out 68 letters to 19 countries, containing the stickers for the pytest and qutebrowser crowdfundings!

I already had the pytest stickers for a while, and recently received the qutebrowser ones as well:

qutebrowser stickers qutebrowser and pytest stickers

As I had the data in a (somewhat messy) CSV from Indiegogo with notes added by hand, I wrote a small script to get the data from the CSV and generate LaTeX via jinja2, which then gave me a nice PDF which I could use with window envelopes:

sticker letters

Turns out addressing international mail correctly gets really hard when you have the address as individual parts rather than a free text field! I special-cased the UK and US to (hopefully) match their format, and hoped everyone else used something like "1234 City"... See the script source for details if you're curious.

A bit later, everything was folded, and sorted by pytest/qutebrowser as well as Switzerland/Europe/worldwide:

sticker letters - folded

I originally was told I could get them stamped at the post office as it was more than 50 letters - however, it turned out that's only possible if it's more than 50 letters per postage value, which wasn't the case for me... so I spent around 120 EUR on stamps:

A lot of Swiss stamps!

To make things worse, on most of the letters I had to stick two stamps. At least they were self-adhesive!

Some 20 minutes of sticking on stamps later, things were ready - I also stuck another qutebrowser sticker on each envelope, as I have more than enough (and it made it easier to see what's a pytest and what's a qutebrowser mailing):

Ready to ship!

And a bit later, they made their way into the post box:

Post box

Someone from the UK already told me theirs arrived, so hopefully yours will as well! Please let me know when it does. ;)

If your pledge level also includes a t-shirt, I'll send your stickers together with the t-shirt, which unfortunately still will take a while to arrive. Stay tuned!

by Florian Bruhin at July 12, 2016 09:15 AM

Day 8: More fixing and pytest sprint/training

(This blog post is actually a day late as I was busy packing for the pytest sprint yesterday)

Not too much exciting stuff this time - I mostly continued working on getting basic stuff like scrolling and zooming to work after the refactoring, and added a temporary fix for :navigate and hinting until I get to rewriting them for QtWebEngine.

pytest sprint and training

For the rest of this month, I won't be working on qutebrowser much, as I'll be at the pytest sprint in Freiburg. However, I plan to work on various things I need in qutebrowser's testsuite too!

After the sprint, from the 27th to the 29th, I'll host a professional training for pytest, tox and devpi in Freiburg. If you'd like to join as well, there are still free spaces!

My "office"

Some people wondered if I'm working from home - to avoid distractions, I'm actually using the local hackerspace as my "office". Here's how it looks:

my workplace for qutebrowser

by Florian Bruhin at July 12, 2016 09:15 AM

Day 4: Playing whack-a-mole

Today, I felt like I was forced to play whack-a-mole:

whack-a-mole

(Image cc by-nc-sa by Mommysaurus75 on Flickr)

The good

Everything started nicely: Someone posted a pull request to handle links with (non-standard) spaces correctly with hints (<a href=" http://example.com ">), I told them how to add a test, they did, I fixed a small issue, bam, merged.

The bad

Then I fixed some tests which were failing due to changes from yesterday. The first one was a test which failed reliably on Windows since changing the test to use a real file instead of a mock.

It tests a function getting the last n bytes from a file, with 64 bytes. On Windows, it expected:

 58
new data 59
new data 60
new data 61
new data 62
new data 63

but got:

ew data 59
new data 60
new data 61
new data 62
new data 63

Unfortunately pytest failed to produce a nice readable diff like it usually does:

> assert lineparser.get_recent(size) == data
E assert ['ew data 59\...ew data 63\n'] ==
         [' 58\n', 'new...ew data 63\n']
E   At index 0 diff: 'ew data 59\n' != ' 58\n'
E   Right contains more items, first extra item: 'new data 63\n'

After printing the raw values and staring at them for some seconds, I figured the few missing bytes were exactly the count of \r (carriage returns) you'd need to insert.

While Python takes care of the newline conversion when reading and writing files, when grabbing the last 64 raw bytes of a file, you'll end up with less actual data on Windows.

I fixed the code to use os.linesep instead of \n, and it was still off by one on Windows, but not on Linux:

# data = '\n'.join(self.BASE_DATA + new_data)
data = os.linesep.join(self.BASE_DATA + new_data)
data = [e + '\n' for e in data[-(size - 1):].splitlines()]

I then figured out I actually had an off-by-one error there - the -(size - 1) should actually be just -size. What I was actually missing was a + os.linesep for the final line ending. I guess when I originally implemented it, I thought I was just off by one while slicing and naively fixed it the wrong way...
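
Based on that description, the corrected version would presumably look like this (a sketch, not the actual commit):

# append the final line ending first, then slice off the last `size` bytes
data = os.linesep.join(self.BASE_DATA + new_data) + os.linesep
data = [e + '\n' for e in data[-size:].splitlines()]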

With that out of the way, I looked at the other test which was flaky - it reads the history of a qutebrowser subprocess, waits until the subprocess has definitely written it, and still sometimes ends up with an empty file.

I let the logs sink in for a bit, but I still have no idea what'd cause it. In the end I just ended up marking the test as flaky using pytest-rerunfailures. This means it'll be run a second time if it fails, and if it passes then, it's assumed to be okay.

This is definitely less than ideal, but it's better than having a test which fails sometimes for no apparent reason, and better than not testing this functionality at all.
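
With pytest-rerunfailures installed, marking a test as flaky is a one-liner (the test name here is illustrative):

import pytest


@pytest.mark.flaky(reruns=1)  # re-run once before reporting a failure
def test_subprocess_history_is_saved():
    ...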

The ugly

After all that, I updated to Qt 5.6.1 to check if a segfault on exit was fixed like claimed in the bug report.

Turns out it wasn't, but instead there was a new bogus warning and a weird behavior change I needed to take care of in my testsuite.

Now I hoped I was finally done fixing weird bugs - turns out that was just the beginning. Someone joined the IRC channel and reported that hints often don't work at all for them since the big hint changes from Monday.

I couldn't reproduce the issue, and what we were seeing made no sense at all. See the bug report I opened to see all the gory details. If I tried to write them up here, I'd probably just hopelessly confuse myself again.

They are an experienced Python programmer as well, and after over 3.5h of debugging, we gave up.

I ended up adding a setting which allows reverting to the old Python implementation. It's less accurate but also faster than the new (JS) one, so some people might prefer it anyway.

And some other changes

Before and after that frustrating debugging session, I also managed to get some other changes in:

I improved the error message when an invalid link is clicked as I stumbled upon it and it confused me.

I also started refactoring the history implementation and adding a few tests to it, as I still need to do a small change to handle redirects nicely before releasing v0.7.0 (which is what I originally planned to do today...).

The refactoring also allowed me to split off the QtWebKit-specific part of it nicely, so that's already a little step closer to QtWebEngine as a nice side effect!

Outlook

The todo list for tomorrow roughly looks like this:

  • Handle redirects in saved history
  • Merge the trivial doc PR for the Debian packages
  • Package and release v0.7.0

I really want to release v0.7.0 tomorrow unless another serious regression is found (fingers crossed!).

And then full-speed towards QtWebEngine support next week!

by Florian Bruhin at July 12, 2016 09:15 AM

Day 2: More pull requests and nicer test output

Better BDD output

Yesterday evening, a contributor had a nice idea for better output for BDD-style tests when they fail.

qutebrowser tests a lot of functionality using end-to-end tests which are written using the Gherkin language with the pytest-bdd plugin. Those look like this:

Scenario: Going back without history
    Given I open data/backforward/1.txt
    When I run :back
    Then the error "At beginning of history." should be shown

BDD tests work by spawning a local webserver, spawning a qutebrowser process, sending commands to it, and parsing its log.

If such a scenario fails, however, you can only see what failed in the underlying Python code. I improved the output to add this:

improved BDD output

pytest-bdd made this easy by adding a scenario attribute to pytest's TestReport object:

(Pdb++) pp report.scenario
{'examples': [],
 'feature': {
     'description': '',
     'filename': '/home/me/.../tests/end2end/features/misc.feature',
     'line_number': 1,
     'name': 'Various utility commands.',
     'rel_filename': 'features/misc.feature',
     'tags': []
 },
 'line_number': 367,
 'name': 'Focusing download widget via Tab (original issue)',
 'steps': [{
               'duration': 0.05486869812011719,
               'failed': False,
               'keyword': 'When',
               'line_number': 368,
               'name': 'I open data/prompt/jsprompt.html',
               'type': 'when'
           },
           ...
 ],
 'tags': ['pyqt531_or_newer']}

Using pytest's hook system, all I needed to do was add this to my conftest.py (with the colorizing code removed to simplify things a bit, see the full code for details):

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """Add a BDD section to the test output."""
    outcome = yield
    if call.when not in ['call', 'teardown']:
        return
    report = outcome.get_result()

    if report.passed:
        return

    if (not hasattr(report.longrepr, 'addsection') or
            not hasattr(report, 'scenario')):
        return

    output = []
    output.append("Feature: {name}".format(
        name=report.scenario['feature']['name'],
    ))
    output.append(
        "  Scenario: {name} ({filename}:{line})".format(
            name=report.scenario['name'],
            filename=report.scenario['feature']['rel_filename'],
            line=report.scenario['line_number'])
    )
    for step in report.scenario['steps']:
        output.append(
            "    {keyword} {name} ({duration:.2f}s)".format(
                keyword=step['keyword'],
                name=step['name'],
                duration=step['duration'],
            )
        )

    report.longrepr.addsection("BDD scenario", '\n'.join(output))

Hinting improvements

Today I was mostly busy with merging a half-year-old pull request with various hint improvements which was missing tests; its author didn't currently have the time to add them.

To make things easier, I reviewed and cherry-picked the individual commits one by one, and then added tests for them. See the resulting merge if you're curious.

This improves a variety of things related to hinting, most of them when using number hints:

  • New hints -> auto-follow-timeout setting to ignore keypresses after following a hint when filtering in number mode.
  • Number hints are now kept filtered after following a hint in rapid mode.
  • Number hints are now renumbered after filtering.
  • Number hints can now be filtered with multiple space-separated search terms.
  • hints -> scatter is now ignored for number hints.
  • Fixed handling of backspace in number hinting mode.

Currently it looks like I have three pull requests left to merge tomorrow, one of them being a trivial doc update about Debian packages which is ready to merge - but I'll merge it shortly before the release.

by Florian Bruhin at July 12, 2016 09:15 AM

July 10, 2016

pytest-tricks

mark.parametrize with indirect

mark.parametrize

In Create Tests via Parametrization we've learned how to use @pytest.mark.parametrize to generate a number of test cases for one test implementation.

indirect

You can pass a keyword argument named indirect to parametrize to change how its parameters are passed to the underlying test function. It accepts either a boolean value or a list of strings that refer to pytest.fixture functions.

False

If you set indirect to False or omit the parameter altogether, pytest will treat the given parameters as-is, without any special handling.

True

All of the parameters are stored on the special request fixture; the fixture matching the argument name can then access the value via request.param and modify the test parameter before it reaches the test.
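
Here's a minimal sketch of that mechanism (User is a made-up class for illustration):

import pytest


class User:
    def __init__(self, name):
        self.name = name


@pytest.fixture
def user(request):
    # request.param holds the raw value from the parametrize marker
    return User(request.param)


@pytest.mark.parametrize('user', ['alice', 'bob'], indirect=True)
def test_user_has_name(user):
    assert user.name in ('alice', 'bob')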

List of fixture names

You can choose to only set indirect for a subset of arguments, by passing a list to the keyword argument: indirect=['foo', 'bar'].
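
Re-using the user fixture from the sketch above, this could look as follows - greeting is passed through as-is, while user is routed through its fixture:

@pytest.mark.parametrize(
    'user, greeting',
    [('alice', 'hello'), ('bob', 'hi')],
    indirect=['user'],
)
def test_greet_user(user, greeting):
    assert user.name and greeting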

Example code

We define two classes that have a few fields to hold information. The tests will access the attributes to check the correctness of our code. For that we create a number of instances of the Sushi class with varying parameters, before we access its is_vegetarian property in the test to see if the Fooshi Bar offers a few vegetarian dishes.

sushi.py

# -*- coding: utf-8 -*-


class Restaurant:
    def __init__(self, name, location, menu=None):
        if not menu:
            raise ValueError

        self.name = name
        self.location = location
        self.menu = menu


class Sushi:
    def __init__(self, name, ingredients=None):
        if not ingredients:
            raise ValueError

        self.name = name
        self.ingredients = ingredients

    def __contains__(self, ingredient):
        return ingredient in self.ingredients

    @property
    def is_vegetarian(self):
        for ingredient in ['Crab', 'Salmon', 'Shrimp', 'Tuna']:
            if ingredient in self:
                return False
        return True

Fixtures

A fixture named fooshi_bar creates a Restaurant with a variety of dishes on the menu.

The fixture sushi creates instances based on a name, looking up the ingredients from the session-scoped recipes fixture when the test is being run. Since it is created with params it will not only yield one but many instances; pytest will then create a number of test items for each test function that uses it.

conftest.py

# -*- coding: utf-8 -*-

import pytest

from sushi import Restaurant, Sushi


@pytest.fixture
def fooshi_bar():
    """Returns a Restaurant instance with a great menu."""
    return Restaurant(
        'Fooshi Bar',
        location='Buenos Aires',
        menu=[
            'Ebi Nigiri',
            'Edamame',
            'Inarizushi',
            'Kappa Maki',
            'Miso Soup',
            'Sake Nigiri',
            'Tamagoyaki',
        ],
    )


@pytest.fixture(scope='session')
def recipes():
    """Return a map from types of sushi to ingredients."""
    return {
        'California Roll': ['Rice', 'Cucumber', 'Avocado', 'Crab'],
        'Ebi Nigiri': ['Shrimp', 'Rice'],
        'Inarizushi': ['Fried tofu', 'Rice'],
        'Kappa Maki': ['Cucumber', 'Rice', 'Nori'],
        'Maguro Nigiri': ['Tuna', 'Rice', 'Nori'],
        'Sake Nigiri': ['Salmon', 'Rice', 'Nori'],
        'Tamagoyaki': ['Fried egg', 'Rice', 'Nori'],
        'Tsunamayo Maki': ['Tuna', 'Mayonnaise'],
    }


@pytest.fixture(params=[
    'California Roll',
    'Ebi Nigiri',
    'Inarizushi',
    'Kappa Maki',
    'Maguro Nigiri',
    'Sake Nigiri',
    'Tamagoyaki',
    'Tsunamayo Maki',
])
def sushi(recipes, request):
    """Create a Sushi instance based on recipes."""
    name = request.param
    return Sushi(name, ingredients=recipes[name])

Tests

We define two test functions that both use the sushi fixture. test_fooshi_serves_vegetarian_sushi also uses fooshi_bar as well as side_dish, which is dynamically created during the collection phase via mark.parametrize.

Note how sushi is created with indirect=True. Unlike side_dish, it will be passed on to the according fixture function, which turns the name of a type of sushi into an actual instance as explained above.

test_sushi.py

# -*- coding: utf-8 -*-

import pytest


@pytest.mark.parametrize(
    'sushi',
    ['Kappa Maki', 'Tamagoyaki', 'Inarizushi'],
    indirect=True,
)
@pytest.mark.parametrize(
    'side_dish',
    ['Edamame', 'Miso Soup'],
)
def test_fooshi_serves_vegetarian_sushi(fooshi_bar, sushi, side_dish):
    assert sushi.is_vegetarian
    assert sushi.name in fooshi_bar.menu
    assert side_dish in fooshi_bar.menu


def test_sushi(sushi):
    assert sushi.name
    assert sushi.ingredients

Test run

When running these tests we can see that test_sushi is run eight times, as expected, since its only fixture sushi is created with eight parameters.

On the other hand test_fooshi_serves_vegetarian_sushi is run six times combining one value for fooshi_bar, two values for side_dish and three values for sushi!

$ py.test test_sushi.py

test_sushi.py::test_fooshi_serves_vegetarian_sushi[Edamame-Kappa Maki] PASSED
test_sushi.py::test_fooshi_serves_vegetarian_sushi[Edamame-Tamagoyaki] PASSED
test_sushi.py::test_fooshi_serves_vegetarian_sushi[Edamame-Inarizushi] PASSED
test_sushi.py::test_fooshi_serves_vegetarian_sushi[Miso Soup-Kappa Maki] PASSED
test_sushi.py::test_fooshi_serves_vegetarian_sushi[Miso Soup-Tamagoyaki] PASSED
test_sushi.py::test_fooshi_serves_vegetarian_sushi[Miso Soup-Inarizushi] PASSED
test_sushi.py::test_sushi[California Roll] PASSED
test_sushi.py::test_sushi[Ebi Nigiri] PASSED
test_sushi.py::test_sushi[Inarizushi] PASSED
test_sushi.py::test_sushi[Kappa Maki] PASSED
test_sushi.py::test_sushi[Maguro Nigiri] PASSED
test_sushi.py::test_sushi[Sake Nigiri] PASSED
test_sushi.py::test_sushi[Tamagoyaki] PASSED
test_sushi.py::test_sushi[Tsunamayo Maki] PASSED

========================== 14 passed in 0.02 seconds ==========================

Deferred loading of resources with hooks

Since test parametrization is performed at collection time, you might want to set up expensive resources only when the tests that use them are being run.

You can achieve this by using indirect and doing the hard work in fixtures rather than helper functions directly. This is also available from the pytest_generate_tests hook:

def pytest_generate_tests(metafunc):
    if 'sushi' in metafunc.fixturenames:
        metafunc.parametrize(
            'sushi',
            ['Kappa Maki', 'Tamagoyaki', 'Inarizushi'],
            indirect=True,
        )

For more information see the parametrize pytest docs.

by Raphael Pierzina at July 10, 2016 12:00 AM

June 28, 2016

pytest – Blargon7

Sprinting with pytest in Freiburg

On my way to the venue

Last week was the pytest development sprint located in the beautiful town of Freiburg, Germany. I had been really looking forward to the sprint, and being immediately after the Mozilla all-hands in London I was still buzzing with excitement when I started my journey to Freiburg.

On the first morning I really wasn't sure about how to get to our sprint venue via public transport, but it didn't seem to be far to walk from my hotel. It was a lovely sunny morning, and I arrived just in time for the introductions. Having been a pytest user for over five years I was already familiar with Holger, Ronny, and a few others, but this was the first time meeting them in person. We then spent some time planning out our first day, and coming up with things to work on throughout the week. My first activity was pairing up to work on pytest issues.

Breakfast at Cafe Schmidt

For my first task I paired with Daniel to work on an issue he had recently encountered, which I had also needed to workaround in latest versions of pytest. It turned out to be quite a complex issue related to determination of the root directory, which is used for relative reference to test files as well as a number of other things. The fix seemed simple at first – we just needed to exclude arguments that are not paths from consideration for determining the root directory, however there were a number of edge cases that needed resolving. The patch to fix this has not yet landed, but I’m feeling confident that it will be merged soon. When it does, I think we’ll be able to close at least three related issues!

Ducks in the town centre

Next, I worked with Bruno on moving a bunch of my plugins to the pytest-dev GitHub organisation. This allows any of the pytest core team to merge fixes to my plugins and means I’m not a blocker for any important bug fixes. I’m still intending on supporting the plugins, but it feels good to have a larger team looking after them if needed. The plugins I moved are pytest-selenium, pytest-html, pytest-variables, and pytest-base-url. Later in the week we also moved pytest-repeat with the approval of the author, who is happy for someone else to maintain the project.

If you’ve never used pytest, then you might expect to be able to simply run

pytest
  on the command line to run your tests. Unfortunately, this isn’t the case, and the reason is that the tool used to be part of a collection of other tools, all with a common prefix of
py.
  so you’d run your tests using the
py.test
command. I’m pleased to say that I worked with Oliver during the sprint to introduce
pytest
  as the recommended command line entry point for pytest. Don’t worry – the old entry point will still work when 3.0 is released, but we should be able to skip a bunch of confusion for new users!

Break day hike

On Thursday we took a break, took a cable-car up a nearby peak, and hiked around for a few hours. I also finally got an opportunity to order a slice of Schwarzwälder Kirschtorte (Black Forest gateau), which is named after the area and was a favourite of mine growing up. The break was needed after my brain had been working overtime processing the various talks, demonstrations, and discussions. We still talked a lot about the project, but being out in the beautiful scenery watching para-gliders gracefully circling made for a great day.

Hefeweizen!

When we returned to our sprint venue on Friday I headed straight into a bug triage with Tom, which ended up mostly focusing on one particular issue. The issue relates to hiding what is at first glance redundant information in the case of a failure, but on closer inspection there are actually many examples where this extra line in the explanation can be very useful.

Unfortunately I had to leave on Saturday morning, which meant I missed out on the final day of the sprint. I have to say that I can’t wait to attend the next one as I had so much fun getting to know everyone, learning some handy phrases in a number of different languages, and commiserating/laughing together in the wake of Brexit! I’m already missing my fellow sprinters!

by Dave at June 28, 2016 03:15 PM

June 06, 2016

qutebrowser development blog

About and Timeline

Introduction

A bit over two months ago, I started a crowdfunding campaign for qutebrowser, with the goal of working full-time on adding QtWebEngine support to it, which will bring more stability, security and features.

I asked for 3000€ to fund a month of full-time work before starting my studies in September. The campaign took off more than I'd have ever imagined and was almost funded in the first 24h.

At the end of the campaign, I had two months of full-time work funded. I'm now close to starting those awesome two months and have set up this blog as a work log for what I'm doing, inspired by that of the git annex assistant.

I also submitted this blog to planet python, planet pytest and planet Qt - if you're reading this via one of those, fear not: I have dedicated tags for them, and will only tag posts which actually seem relevant, so you won't see daily posts there.

Timeline

My full-time work is planned to start tomorrow. I have some other obligations until September, so there will be some days in between where I won't be working on qutebrowser, but on other things related to either Python or my studies.

This is the tentative schedule:

  • June 6th - 10th: qutebrowser (days 1-5)
  • June 13th - 15th: qutebrowser (days 6-8)
  • June 16th - 29th I'll be in Freiburg for the development sprint on pytest (which qutebrowser uses too), and giving a 3-day training for it.
  • June 30th - July 1st: qutebrowser (days 9-10)
  • July 4th - 8th: qutebrowser (days 11-15)
  • July 11th - 15th: qutebrowser (days 16-20)
  • July 17th - 24th I'll be in Bilbao at EuroPython giving another training about pytest and hopefully learning a lot in all the awesome talks.
  • July 25th - 29th: qutebrowser (days 21-25)
  • August 1st - 5th: qutebrowser (days 26-30)
  • August 8th - 11th: qutebrowser (days 31-34)
  • August 12th I'll be travelling to Cologne for Evoke, a demoparty I'm visiting every year (let me point out this has nothing to do with political demos, go check the wikipedia article :P).
  • August 15th - 19th: qutebrowser (days 35-39)
  • August 22nd - September 2nd I'll be busy with a math preparation course at the university I'll be going to.
  • September 5th - 9th: qutebrowser (day 40 and some buffer)

Plans

The work required to get QtWebEngine to run can roughly be divided into four steps:

  • Preparation: Writing end-to-end tests for all important features, merging some pull requests which are still open, doing a last release without any QtWebEngine support at all, and organizing/shipping t-shirts/stickers for the crowdfunding. A lot of this already happened over the past few months, but I still expect this to take the first few days.
  • Refactoring: Since I plan to keep QtWebKit support for now, I'll refactor the current code so there's a clear abstraction layer over all backend-specific code. This will also make it easier to add support for a new backend (say, Servo) in the future. Since this will probably break a lot in the initial phase, this work will happen in a separate branch. As soon as the current QtWebKit backend works fine again, that'll be merged and QtWebEngine support will be in the main branch behind a feature switch.
  • Basic browsing: The next step is to get basic browsing with --backend webengine working. This means you'll already be able to surf, but things like adblocking, settings, automatic insert mode, downloads or hints will show an error or simply not work yet.
  • Everything else: All current features which are implementable with QtWebEngine will work, others will be clearly disabled (a few obscure settings might be missing with --backend webengine for example). See the respective issue for a breakdown of features which will probably require some extra work.

Frequently asked questions

When will I be able to use QtWebEngine?:

This depends on what features you need, and how fast I'll get them to work. Estimating how long the steps outlined above will take is quite difficult, but I hope you'll have something to try after the first week or so.

Also note you'll need to have a quite recent Qt (5.6, maybe even 5.7 which isn't released yet) at least at the beginning, because QtWebEngine is missing some important features in older versions.

Is QtWebEngine ready?:

It certainly wasn't when it was first released with Qt 5.4 in December 2014.

That's also why I spent a lot of time writing tests for existing features instead of trying to start working on QtWebEngine support.

Nowadays with Qt 5.5/5.6/5.7 things certainly look better, and I believe I'll be able to implement all important features; however, I'll need to rewrite some code in JavaScript, as there's no C++ API (and thus no Python API) for all the functionality QtWebKit had.

Long story short: It's by no means a drop-in replacement (like initially claimed by Qt) - but with a recent enough QtWebEngine, most users won't notice the bits of functionality I can't implement at all, and things are getting better and better.

How is this blog made?:

Using spacemacs, writing ReStructuredText, storing it in a git repo, processing it with Pelican, the Monospace theme and the thumbnailer plugin.

Definitely a better workflow than Wordpress ;)

by Florian Bruhin at June 06, 2016 08:10 AM

April 20, 2016

pytest-tricks

Customize How py.test Collects Tests From Classes

By default py.test collects all classes which start with Test that are found in test*.py or *test.py modules. This behavior can be configured using the python_classes and python_files configuration options.

Also, it will collect classes which subclass unittest.TestCase out of the box (regardless of their name), making it easy to run existing test suites.

But thanks to py.test's flexible plugin architecture, one can customize how tests are collected from classes to suit almost any requirement.

Collect All Subclasses of a Certain Class

Recently someone asked in the pytest-dev mailing list how to make py.test collect all classes, that subclass from some specific base class, independently of their name.

Suppose all your test classes subclass from a certain utility class:

class TestingUtils(object):

    def connect_to_database(self):
        ...

    def validate_ui(self):
        ...

And all your tests are written as methods in subclasses of TestingUtils which don't follow any particular naming convention:

class BorgTests(TestingUtils):

    def test_borg_creation(self):
        db = self.connect_to_database()
        ...

class ValidationRules(TestingUtils):

    def test_rule_1(self):
        ...

You can implement your own collection rules by implementing the pytest_pycollect_makeitem hook.

Simply add this code to a conftest.py file at the root of your tests' directory and that's it:

import inspect

def pytest_pycollect_makeitem(collector, name, obj):
    if inspect.isclass(obj) and issubclass(obj, TestingUtils):
        Class = collector._getcustomclass("Class")
        return Class(name, parent=collector)

This won't interfere with the normal test collection mechanism, only add to it, so classes prefixed with Test will also be collected as usual.

by Bruno Oliveira at April 20, 2016 12:00 AM

April 08, 2016

pytest-tricks

Show Pytest Warnings

If you are migrating from unittest to pytest, you might encounter warnings when running your tests. No failures, no errors, but pytest-warnings. This may be confusing to you, regardless of whether you are new to pytest or an experienced user.

Pytest

In the following example, pytest displays pytest-warnings at the very end of the test run in the session summary.

$ py.test
=========================== test session starts ============================
platform darwin -- Python 3.5.0, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
collected 2 items

test_foobar.py ..

=============== 2 passed, 1 pytest-warnings in 0.02 seconds ================

Unittest

Running the same tests under unittest does not show any warnings.

$ python -m unittest
..
---------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

Tests

As you can see there are two tests that are collected and both pass without any failures or errors.

Let's have a look at the code:

# -*- coding: utf-8 -*-

import unittest


class Client:
    def get(self, url, *args, **kwargs):
        # Send a real request based on the given parameters
        pass


class TestResponse:
    def __init__(self, method, url, *args, **kwargs):
        if 'foobar' in url:
            self.status = 404
            self.reason = 'foobar'
        else:
            self.status = 200
            self.reason = None


class TestClient(Client):
    def get(self, url, *args, **kwargs):
        return TestResponse('get', url)


class TestScrapingTool(unittest.TestCase):
    def setUp(self):
        self.client = TestClient()

    def test_success(self):
        response = self.client.get('https://github.com/pytest-dev')
        self.assertEqual(response.status, 200)
        self.assertEqual(response.reason, None)

    def test_failure(self):
        response = self.client.get('foobar')
        self.assertEqual(response.status, 404)
        self.assertEqual(response.reason, 'foobar')

At first glance, the implementation may look just fine (albeit admittedly not particularly meaningful). It implements a stub client that inherits from a real client and returns a TestResponse instance as opposed to sending a request over a network connection.

Then there are two unittest tests, one for a valid url and another one for an invalid url. Pytest is perfectly fine with these being unittest.TestCase methods, so that is unlikely to cause the issue.

Warnings

Pytest comes with a -rw command line flag to display internal warnings, such as the one that is reported for our test session:

$ py.test -rw
=========================== test session starts ============================
platform darwin -- Python 3.5.0, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
collected 2 items

test_foobar.py ..

========================== pytest-warning summary ==========================
WC1 /show_pytest_warnings/test_foobar.py cannot collect test class
'TestResponse' because it has a __init__ constructor
=============== 2 passed, 1 pytest-warnings in 0.02 seconds ================

Running the suite with this flag now points us to the source of the problem: pytest tries to collect TestResponse, as its name matches the naming conventions for test classes. However, it finds an __init__ method, which it cannot understand.

Solution

In this particular case, there is a simple solution to what one could consider a code smell: rename the classes which are used in your tests but are not actual test cases.

class StubResponse:
    ...


class StubClient(Client):
    ...

Reporting Options

For more information about the various reporting options, please consult the help via $ py.test --help and see the according section:

-r chars              show extra test summary info as specified by chars
                      (f)ailed, (E)error, (s)skipped, (x)failed, (X)passed
                      (w)pytest-warnings (a)all.

by Raphael Pierzina at April 08, 2016 12:00 AM

February 28, 2016

pytest-tricks

Debug Test Failures With Pdb

Tracebacks

Pytest excels at helping you with test failures. After running your tests via py.test, you will see an error report with a detailed traceback for each of the failures, if any. You can change the mode in which output is presented to you via the --tb cli option:

py.test --tb=auto    # (default) 'long' tracebacks for the first and last
                     # entry, but 'short' style for the other entries
py.test --tb=long    # exhaustive, informative traceback formatting
py.test --tb=short   # shorter traceback format
py.test --tb=line    # only one line per failure
py.test --tb=native  # Python standard library formatting
py.test --tb=no      # no traceback at all

Debugger

Have you ever heard of pdb - the Python Debugger? I strongly recommend checking it out, if you haven't used it yet. The standard library's built-in debugger will prove useful for debugging your Python code without installing any external dependencies or setting up some fancy IDE.

If invoked with the --pdb option, pytest will place a debugger breakpoint whenever an error occurs in your tests.

$ py.test --pdb

You can also set a debugger breakpoint yourself with import pdb; pdb.set_trace().
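
For example (compute is a hypothetical function under test):

def test_answer():
    result = compute()
    import pdb; pdb.set_trace()  # execution pauses here; inspect 'result' interactively
    assert result == 42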

Use commands help, list and pp to inspect the test code.

If you want to get a list of all local variables and their value, it's best to run pytest with the -l option enabled, as locals() may contain internal pytest items if you print it from the breakpoint.

Running Broken Tests Only

An underrated feature of pytest is the --lf option, which tells pytest to only run the tests that failed the last time it was executed.

Once you've encountered any errors in your tests, you want to focus on the failures and get a better understanding of what's causing the problems as opposed to spending time on running tests that are perfectly fine.

Re-order Tests. Failures First

Keep in mind though, whenever you modify your code other tests may break.

If you run $ py.test --ff all tests will be executed, but re-ordered based on whether they've failed in the previous run or not. Failures first, successful tests after.

Stop Tests After N Failures

You can end the test session early by pressing CTRL-C. Although pytest catches the KeyboardInterrupt error and runs the teardown code before closing the session gracefully, it is definitely not the best way to stop the test suite.

Instead use -x to stop after the first failure, or --maxfail=N to stop after the first N failures.
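
For example, one way to combine the options from this post:

$ py.test -x            # stop after the first failure
$ py.test --maxfail=3   # stop after the first three failures
$ py.test --lf -x --pdb # re-run last failures, stop at the first one, enter pdb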

by Raphael Pierzina at February 28, 2016 12:00 AM

February 23, 2016

pytest-tricks

Create Tests via Parametrization

Use Case

Imagine you want to write a test for a particular function, but for multiple input values. Writing a for-loop is a bad idea, as the test will fail as soon as it hits the first AssertionError: subsequent input values will not be tested and you'll have no idea which part of your code is actually broken. At the same time you want to stick to DRY and not implement the same unittest.TestCase method over and over again with slightly different input values.
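
For illustration, the for-loop anti-pattern described above would look something like this (fizzbuzz being the function under test):

def test_fizzbuzz_all():
    # Bad: the loop stops at the first AssertionError,
    # so the remaining inputs are never checked.
    for number, word in [(1, '1'), (3, 'Fizz'), (5, 'Buzz')]:
        assert fizzbuzz(number) == word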

Keep in mind why we write unit tests:

We want to know when we break stuff, but also at the same time get as many hints as possible on why the error occurs!

Pytest provides various ways of creating individual test items. There are parametrized fixtures and mark.parametrize (and hooks).

Marker

Using this built-in marker you do not need to implement any fixtures. Instead you define your scenarios in a decorator, and the only thing you really need to look out for is matching the number of positional test arguments with your iterable.

import pytest

@pytest.mark.parametrize(
    'number, word', [
        (1, '1'),
        (3, 'Fizz'),
        (5, 'Buzz'),
        (10, 'Buzz'),
        (15, 'FizzBuzz'),
        (16, '16')
    ]
)
def test_fizzbuzz(number, word):
    assert fizzbuzz(number) == word

Fixture

To parametrize a fixture you need to pass an iterable to the params keyword argument. The built-in fixture request knows about the current parameter, and if you don't want to do anything fancy, you can pass it right to the test via the return statement.

import pytest

@pytest.fixture(params=['apple', 'banana', 'plum'])
def fruit(request):
    return request.param

def test_is_healthy(fruit):
    assert is_healthy(fruit)

Example Implementation

Please note that the examples are written in Python 3.

Sometimes you may find yourself struggling to choose the best way to parametrize your tests. At the end of the day it really depends on what you want to test. But... good news! Pytest lets you combine both methods to get the most out of both worlds.

Some Classes in a Module

Imagine this Python module (foobar.py) which contains a few class definitions with a bit of logic:

# -*- coding: utf-8 -*-

FOSS_LICENSES = ['Apache 2.0', 'MIT', 'GPL', 'BSD']

PYTHON_PKGS = ['pytest', 'requests', 'django', 'cookiecutter']


class Package:
    def __init__(self, name, license):
        self.name = name
        self.license = license

    @property
    def is_open_source(self):
        return self.license in FOSS_LICENSES


class Person:
    def __init__(self, name, gender):
        self.name = name
        self.gender = gender
        self._skills = ['eating', 'sleeping']

    def learn(self, skill):
        self._skills.append(skill)

    @property
    def looks_like_a_programmer(self):
        return any(
            package in self._skills
            for package in PYTHON_PKGS
        )


class Woman(Person):
    def __init__(self, name):
        super().__init__(name, 'female')


class Man(Person):
    def __init__(self, name):
        super().__init__(name, 'male')

Tests in a Separate Module

With only a few lines of pytest code, we can create loads of different scenarios that we would like to test. By re-using parametrized fixtures and applying the aforementioned markers to your tests, you can focus on the actual test implementation, as opposed to writing the same boilerplate code for each of the methods that you would have to write with unittest.TestCase.

# -*- coding: utf-8 -*-

import operator
import pytest

from foobar import Package, Woman, Man

PACKAGES = [
    Package('requests', 'Apache 2.0'),
    Package('django', 'BSD'),
    Package('pytest', 'MIT'),
]


@pytest.fixture(params=PACKAGES, ids=operator.attrgetter('name'))
def python_package(request):
    return request.param


@pytest.mark.parametrize('person', [
    Woman('Audrey'), Woman('Brianna'),
    Man('Daniel'), Woman('Ola'), Man('Kenneth')
])
def test_become_a_programmer(person, python_package):
    person.learn(python_package.name)
    assert person.looks_like_a_programmer


def test_is_open_source(python_package):
    assert python_package.is_open_source

Test Report

Going the extra mile and setting up ids for your test scenarios greatly increases the comprehensibility of your test report. In this case we would like to display the name of each Package rather than the fixture name with a numbered suffix, such as python_package2.

If you run the tests now, you will see that pytest created 18 individual tests for us (Yes, yes indeed. 18 = 3 * 5 + 3).

$ py.test -v
================================== test session starts ==================================
platform darwin -- Python 3.5.0, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
collected 18 items

test_foobar.py::test_become_a_programmer[requests-person0] PASSED
test_foobar.py::test_become_a_programmer[requests-person1] PASSED
test_foobar.py::test_become_a_programmer[requests-person2] PASSED
test_foobar.py::test_become_a_programmer[requests-person3] PASSED
test_foobar.py::test_become_a_programmer[requests-person4] PASSED
test_foobar.py::test_become_a_programmer[django-person0] PASSED
test_foobar.py::test_become_a_programmer[django-person1] PASSED
test_foobar.py::test_become_a_programmer[django-person2] PASSED
test_foobar.py::test_become_a_programmer[django-person3] PASSED
test_foobar.py::test_become_a_programmer[django-person4] PASSED
test_foobar.py::test_become_a_programmer[pytest-person0] PASSED
test_foobar.py::test_become_a_programmer[pytest-person1] PASSED
test_foobar.py::test_become_a_programmer[pytest-person2] PASSED
test_foobar.py::test_become_a_programmer[pytest-person3] PASSED
test_foobar.py::test_become_a_programmer[pytest-person4] PASSED
test_foobar.py::test_is_open_source[requests] PASSED
test_foobar.py::test_is_open_source[django] PASSED
test_foobar.py::test_is_open_source[pytest] PASSED

=============================== 18 passed in 0.02 seconds ===============================

by Raphael Pierzina at February 23, 2016 12:00 AM

February 22, 2016

pytest-tricks

Shared Directory Between Master and Workers With pytest-xdist

pytest-xdist

pytest-xdist is a plugin for distributed testing.

A somewhat common need is sharing data between the master and worker nodes from within a plugin.

Shared directory

Here's a simple recipe that provides a shared_directory fixture which points to a temporary directory accessible by both the master and the worker nodes.

import shutil
import tempfile

import pytest


def pytest_configure(config):
    if is_master(config):
        config.shared_directory = tempfile.mkdtemp()


def pytest_unconfigure(config):
    if is_master(config):
        shutil.rmtree(config.shared_directory)


def pytest_configure_node(node):
    """xdist hook"""
    node.slaveinput['shared_dir'] = node.config.shared_directory


def is_master(config):
    """True if the code running the given pytest.config object is running in a xdist master
    node or not running xdist at all.
    """
    return not hasattr(config, 'slaveinput')


@pytest.fixture
def shared_directory(request):
    """Returns a unique and temporary directory which can be shared by
    master or worker nodes in xdist runs.
    """
    if is_master(request.config):
        return request.config.shared_directory
    else:
        return request.config.slaveinput['shared_dir']

Anything put into the node.slaveinput dictionary during the pytest_configure_node xdist hook can be accessed by the worker nodes later. You can put in any simple built-in type, like lists, strings, tuples, etc.

The shared_directory fixture can then be used by a plugin or even by tests to have a common directory where information can be exchanged. This recipe also shows how one can find out whether the code is running in the master or in a worker node.
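
For example, a test (or plugin code) could use the fixture like this - the file name here is arbitrary:

import os


def test_exchange_data(shared_directory):
    # every node resolves shared_directory to the same path
    path = os.path.join(shared_directory, 'token.txt')
    with open(path, 'w') as f:
        f.write('data visible to all nodes')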

(this recipe appeared originally in this pytest issue)

by Bruno Oliveira at February 22, 2016 12:00 AM

February 21, 2016

pytest-tricks

Run Tests Using a Certain Fixture

Hooks

The hook-based plugin architecture of pytest empowers you to heavily customize the test runner's behavior. In this particular case we'll make use of the pytest_collection_modifyitems hook. It is called after collection has been performed and hence provides access to the actual test items - objects which represent a single test.

Fixtures

Hooks have access to pytest's builtin fixtures such as config. We can use it to explicitly run a hook for deselecting test items, namely config.hook.pytest_deselected.

Items

Watch out! This hook does not return the test items. Instead you want to populate the given list as needed: items[:] = selected_items

Implicit Fixture Usage

Pytest's modular design allows you to re-use fixtures. The fixturenames attribute of a test item not only holds the fixtures which are specified in its signature, but also the ones that are used implicitly.

Example Implementation

# conftest.py

def pytest_collection_modifyitems(items, config):
    selected_items = []
    deselected_items = []

    for item in items:
        if 'new_fixture' in getattr(item, 'fixturenames', ()):
            selected_items.append(item)
        else:
            deselected_items.append(item)
    config.hook.pytest_deselected(items=deselected_items)
    items[:] = selected_items

# test_foobar.py

import pytest


@pytest.fixture
def new_fixture():
    return 'foobar'


@pytest.fixture
def another_fixture(new_fixture):
    return 'foobar'


def test_abc(new_fixture):
    assert True


def test_def(another_fixture):
    assert True


def test_that_does_not_use_fixture():
    assert False

Test Report

$ py.test -v
============================= test session starts ==============================
platform darwin -- Python 3.5.0, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
collected 3 items

test_foobar.py::test_abc PASSED
test_foobar.py::test_def PASSED

==================== 2 passed, 1 deselected in 0.01 seconds ====================

by Raphael Pierzina at February 21, 2016 12:00 AM