I have a test1() marked with @pytest.mark.skip and a test2() marked with @pytest.mark.xfail, both of which assert True, as shown below:
import pytest

@pytest.mark.skip
def test1():
    assert True

@pytest.mark.xfail
def test2():
    assert True
Then I ran pytest and got the output shown below:
$ pytest
=================== test session starts ===================
platform win32 -- Python 3.9.13, pytest-7.4.0, pluggy-1.2.0
django: settings: core.settings (from ini)
rootdir: C:\Users\kai\test-django-project2
configfile: pytest.ini
plugins: django-4.5.2
collected 2 items
tests\test_store.py sX [100%]
============== 1 skipped, 1 xpassed in 0.10s ==============
Next, I changed both test1() and test2() so that they assert False, as shown below:
import pytest

@pytest.mark.skip
def test1():
    assert False

@pytest.mark.xfail
def test2():
    assert False
Then I ran pytest again, and the output looked almost the same, as shown below:
$ pytest
=================== test session starts ===================
platform win32 -- Python 3.9.13, pytest-7.4.0, pluggy-1.2.0
django: settings: core.settings (from ini)
rootdir: C:\Users\kai\test-django-project2
configfile: pytest.ini
plugins: django-4.5.2
collected 2 items
tests\test_store.py sx [100%]
============== 1 skipped, 1 xfailed in 0.24s ==============
So, what is the difference between @pytest.mark.skip and @pytest.mark.xfail?
The marks do different things and have different purposes; the output just looks similar in your trivial case.
Tests with the xfail mark are run but expected to fail, while tests with the skip mark are not executed at all.
The purpose is different. Skipped tests are not executed because some condition is not fulfilled, either yet or at the moment. More common are tests with the skipif mark, which come with a condition and are self-explanatory, but the skip mark can for example be used to mark tests that may pass in the future. A common use case is to skip tests that fail due to a bug that cannot be fixed easily; in this case it is better to skip the test with a respective reason (which can be shown during test execution) instead of just commenting it out. Sometimes tests are written for future features that are not yet implemented, with the same reasoning: to make it visible that something is still missing, and as a kind of specification.
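As a minimal sketch of both variants (the reason strings and the Python version check are made up for the example):
import sys
import pytest

# Unconditional skip; the reason can be shown in the report, e.g. with pytest -rs.
@pytest.mark.skip(reason="fails due to a known bug that is hard to fix")
def test_known_bug():
    assert False

# Conditional skip: condition and reason make it self-explanatory.
@pytest.mark.skipif(sys.version_info < (3, 10), reason="requires Python 3.10+")
def test_needs_new_python():
    assert True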
Tests that are expected to fail are probably less frequently used. Your simple case of:
@pytest.mark.xfail
def test():
    assert False
is somewhat similar to:
def test():
    assert True
so it does not really make sense. There are cases, however, where you want to show that a specific test may or shall fail (instead of just inverting the condition to make it pass). An example is a regression test that shows that something fails if some parameter has not been set, and succeeds after it is set correctly.
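A sketch of that idea, with a hypothetical set_timeout/get_timeout pair standing in for the real code under test:
import pytest

_config = {}

def set_timeout(value):
    _config["timeout"] = value

def get_timeout():
    return _config["timeout"]  # raises KeyError if the value was never set

# Document that reading the value before setting it must fail.
@pytest.mark.xfail
def test_timeout_unset_fails():
    get_timeout()

def test_timeout_set():
    set_timeout(30)
    assert get_timeout() == 30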
Note that there are essentially two variants of the xfail marker: strict and non-strict, depending on the strict argument. By default, strict is False, meaning that the test never fails (same as with the skip marker), but the outcome is evaluated and documented as either XPASS or XFAIL in verbose mode (e.g. using -v), or just as 'X' or 'x' in non-verbose mode.
The following:
import pytest

@pytest.mark.xfail
def test_non_strict_pass():
    assert True

@pytest.mark.xfail
def test_non_strict_fail():
    assert False

@pytest.mark.xfail(strict=True)
def test_strict_pass():
    assert True

@pytest.mark.xfail(strict=True)
def test_strict_fail():
    assert False
will create the output:
$ pytest test_xfail.py
test_xfail.py XxFx
or in verbose mode:
$ pytest -v test_xfail.py
test_xfail.py::test_non_strict_pass XPASS
test_xfail.py::test_non_strict_fail XFAIL
test_xfail.py::test_strict_pass FAILED
test_xfail.py::test_strict_fail XFAIL
...
The strict mode is useful for cases like the described one, where you want to make sure that the test fails under some conditions (like for the mentioned regression tests).
The default non-strict mode serves more as documentation. You may use it to mark tests that are currently failing due to a bug but are expected to succeed later, or to mark flaky tests that should not break the test suite when they fail. You can at least see whether the test fails or succeeds, but this will not change the outcome of the test suite.
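A sketch of that documentation use; the flakiness here is simulated with random just for illustration:
import random
import pytest

# Non-strict xfail: the suite stays green whether this passes or fails,
# but the outcome is reported as XPASS or XFAIL.
@pytest.mark.xfail(reason="flaky: depends on timing")
def test_flaky():
    assert random.random() > 0.5  # stands in for a timing-dependent check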
Note that there are other arguments to the xfail marker that change the behavior, for example (both sketched below):
- run=False essentially lets it behave like a skip marker
- condition behaves like the condition in skipif
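A minimal sketch of those two arguments; the platform condition and the reason strings are examples:
import sys
import pytest

# run=False: the test is not executed at all (reported as xfailed),
# useful e.g. when running it would crash the interpreter.
@pytest.mark.xfail(run=False, reason="segfaults the interpreter")
def test_not_run():
    assert False

# A condition, as with skipif: the xfail mark only applies when it holds.
@pytest.mark.xfail(sys.platform == "win32", reason="known bug on Windows")
def test_conditional_xfail():
    assert True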
Check the documentation for a full description.
Update: After revisiting this question due to a comment, I realized that my answer was incomplete and partially wrong. I clarified it and added the description of the strict argument.