Should TestRunner implement coverage?
As the title states, I need to know whether TestRunner should implement a coverage statistic comparing the tested lines of code to the loaded lines of code.
I was reading here: https://reqtest.com/testing-blog/test-coverage-metrics/
And while this metric makes sense, I don't see it really adding value to the project. From an implementation perspective, it would take some time to add the correct counters to the test runner so it can gather line statistics as it works through the code. Also, since TestRunner is a library and not a program, introspecting all the files in a program is somewhat outside its scope.
That being said, the best way I can think of to make this work is to pipe the requires through a loader that counts the lines of each loaded file. When the tests are then executed, the runner would add up the lines of the functions that actually ran.
My overall opinion is that a lot of work would be invested in this metric without gaining anything of value about the project's actual test coverage. Originally, I was aiming to use the total test count versus the pending and failing counts, but that doesn't seem to be what the metric measures.
I am open to comments on this.