Make coverage only count successful tests and ignore xfailing tests
I have a number of projects where I use the pytest.mark.xfail marker to mark tests that fail but shouldn't, so that a failing test case can be added before the issue is fixed. I do not want to skip these tests, because if something I do causes them to start passing, I want to be informed of that so that I can remove the xfail marker and avoid regressions.
The problem is that because xfail tests actually run until they fail, any lines hit leading up to the failure are counted as "covered", even if they are not part of any passing test, which gives me misleading metrics about how much of my code is actually tested as working. A minimal example of this is:
pkg.py

def f(fail):
    if fail:
        print("This line should not be covered")
        return "wrong answer"
    return "right answer"
test_pkg.py

import pytest
from pkg import f

def test_success():
    assert f(fail=False) == "right answer"

@pytest.mark.xfail
def test_failure():
    assert f(fail=True) == "right answer"
Running python -m pytest --cov=pkg, I get:
platform linux -- Python 3.7.1, pytest-3.10.0, py-1.7.0, pluggy-0.8.0
rootdir: /tmp/cov, inifile:
plugins: cov-2.6.0
collected 2 items
tests/test_pkg.py .x [100%]
----------- coverage: platform linux, python 3.7.1-final-0 -----------
Name     Stmts   Miss  Cover
----------------------------
pkg.py       5      0   100%
As you can see, all five lines are covered, but lines 3 and 4 are only hit during the xfail test.
The way I handle this now is to set up tox to run something like pytest -m "not xfail" --cov && pytest -m xfail (a sketch of such a tox configuration is below), but in addition to being a bit cumbersome, that only filters on the xfail mark itself, which means that conditional xfails are also filtered out regardless of whether or not the condition is met.
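Roughly, that tox setup might look like the following (a minimal sketch; the environment layout and the --cov target are assumptions, not taken from my actual projects):

[testenv]
deps =
    pytest
    pytest-cov
commands =
    pytest -m "not xfail" --cov=pkg
    pytest -m xfail

tox stops at the first failing command by default, which roughly mirrors the && chaining.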
Is there any way to have coverage or pytest not count coverage from failing tests? Alternatively, I would be OK with a mechanism to ignore coverage from xfail tests that only ignores conditional xfail tests if the condition is met.
python pytest coverage.py pytest-cov
asked Nov 7 at 14:53
Paul
This is a very interesting idea! In the coverage 5.0 alpha, we can track which tests covered which lines. If we get to the point of a pytest plugin to help with that, perhaps it could disable measurement around xfail tests. – Ned Batchelder, 2 days ago
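(The coverage 5.0 feature referred to here is dynamic contexts; a minimal .coveragerc sketch, assuming coverage 5.0 or later:

[run]
dynamic_context = test_function

With this enabled, coverage records, for each measured line, which test function was running when that line executed.)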
Would you mind writing this up as an issue on the coverage.py repo? github.com/nedbat/coveragepy – Ned Batchelder, 2 days ago
@NedBatchelder Will do. – Paul, 2 days ago
1 Answer
accepted
Since you're using the pytest-cov plugin, take advantage of its no_cover marker. When a test is annotated with pytest.mark.no_cover, code coverage is turned off for that test. The only thing left to implement is applying the no_cover marker to all tests marked with pytest.mark.xfail. In your conftest.py:
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        if item.get_closest_marker('xfail'):
            item.add_marker(pytest.mark.no_cover)
Running your example will now yield:
$ pytest --cov=pkg -v
=================================== test session starts ===================================
platform darwin -- Python 3.7.1, pytest-3.9.1, py-1.7.0, pluggy-0.8.0
cachedir: .pytest_cache
rootdir: /Users/hoefling/projects/private/stackoverflow, inifile:
plugins: cov-2.6.0
collected 2 items
test_pkg.py::test_success PASSED [ 50%]
test_pkg.py::test_failure xfail [100%]
---------- coverage: platform darwin, python 3.7.1-final-0 -----------
Name     Stmts   Miss  Cover
----------------------------
pkg.py       5      2    60%
=========================== 1 passed, 1 xfailed in 0.04 seconds ===========================
Edit: dealing with a condition in the xfail marker
The marker arguments can be accessed via marker.args and marker.kwargs, so if you have, for example, a marker

@pytest.mark.xfail(sys.platform == 'win32', reason='This fails on Windows')

you can access the arguments with
marker = item.get_closest_marker('xfail')
condition = marker.args[0]
reason = marker.kwargs['reason']
To take the condition flag into account, the hook from above can be modified as follows:
def pytest_collection_modifyitems(items):
    for item in items:
        marker = item.get_closest_marker('xfail')
        if marker and (not marker.args or marker.args[0]):
            item.add_marker(pytest.mark.no_cover)
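For illustration, here is a hypothetical conditionally xfailed test that this hook would handle (the test name and condition are made-up assumptions, not taken from the question):

import sys

import pytest

from pkg import f

@pytest.mark.xfail(sys.platform == 'win32', reason='This fails on Windows')
def test_windows_failure():
    # On non-Windows platforms the condition is False, so the hook above does
    # not add no_cover and this test's coverage is still counted; on Windows
    # the condition is True and coverage measurement is disabled for it.
    assert f(fail=(sys.platform == 'win32')) == "right answer"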
edited Nov 8 at 13:44
answered Nov 7 at 21:47
hoefling
This seems like it's half the solution; conditionally xfailing tests also need to have coverage turned off only if the failure condition is met. I think this probably solves the hardest part of the problem, though. – Paul, Nov 7 at 22:35
oh, you want coverage to be turned off for tests with an xfail result (but e.g. left turned on for tests with an xpass result)? Kind of post-processing of coverage results after the test has finished? – hoefling, Nov 7 at 22:59
No, xfail takes a boolean, so you can express things like "this fails on Windows". I always run with xfail defaulting to strict, but in my experience, tests get the xfail mark regardless of whether the bool is true. – Paul, Nov 8 at 0:14
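(The "defaulting to strict" behaviour mentioned here is controlled by pytest's xfail_strict ini option; a minimal sketch, assuming a pytest.ini at the project root:

[pytest]
xfail_strict = true

With this set, an xfail-marked test that unexpectedly passes is reported as a failure instead of an xpass.)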
I see what you mean; the condition flag is the first attribute in the marker args, so it's not hard to include that into consideration. I have updated the answer with an example. – hoefling, Nov 8 at 8:33