Share reporting code between SBT, Gradle and Maven #81
I tried to implement aggregated reports, but this functionality just does not work in …
@maiflai @gslowikowski I can pick this up this weekend. Question - do we want a CLI rather than a library?
I don't. The goal is to decouple the Maven/Gradle/SBT plugins from the scalac plugin, so that one Maven/Gradle/SBT plugin version can work with different scalac plugin versions. For example, in Maven:

<plugin>
  <groupId>org.scoverage</groupId>
  <artifactId>scoverage-maven-plugin</artifactId>
  <version>1.0.2</version>
  <dependencies>
    <dependency>
      <groupId>org.scoverage</groupId>
      <artifactId>scalac-scoverage-plugin_2.10</artifactId>
      <version>1.0.3-SNAPSHOT</version>
    </dependency>
  </dependencies>
</plugin>
Additionally, I have one nonstandard requirement - the HTML report by default must be created inside …. On the other hand, the XML reports (scoverage and cobertura) should be somewhere inside …
That's not my goal - my goal is to remove the dependency on Scala, and reduce the quantity of code contained within the Gradle plugin to a minimum. Specifically, I would like to remove knowledge of the internals of the plugin: …

Note that this snippet is only appropriate for single-module projects; multi-module then needs a different approach.
This goal is OK, but I see two problems with it: …
Are you suggesting that the simple generation of a report will require explicit memory options and remote debugging? I'm asking for a trivial wrapper over a Scala API. I don't agree with your assessment of the path problem. I think there are more interesting problems relating to multi-module than the physical location of the files in an HTML report: scoverage/gradle-scoverage#33 (comment)
How would you debug report generation in a forked process?
From my experience, every trick or workaround will sooner or later lead to more serious problems. That's why I always want to fix things ASAP.
Memory problems - #89
Why would you debug report generation from inside Maven? If I really, really want to debug it, then I will launch the CLI in a debug session from my scalac-scoverage-plugin project. It will give me a more focused debug session. Re: memory issues, I think if you re-read the stacktrace you will see …

This indicates to me that he has run out of memory during the instrumentation phase, not the reporting phase.
As to debugging, you are right. I must rethink it. #89 - you are right too, ignore my last comment.
The memory issue is fixed in #89.
I think it makes most sense for the scalac plugin to have some reporting code, so you just need to go:
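(Something along these lines - a sketch only; the object and method names here are illustrative, not an existing API:)

import java.io.File

// Hypothetical shape of the proposed reporting entry point.
object ReportApi {
  def writeReports(locationOfData: File, htmlDir: File, xmlDir: File): Unit = {
    // 1. deserialize the statements recorded at compile time from locationOfData
    // 2. merge in the measurement files written by the test runs
    // 3. render the HTML report into htmlDir and the XML reports into xmlDir
  }
}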
Then you don't need to know how the scalac plugin handles the instrumentation files. Plus you can put XML/HTML reports in separate locations, wherever you want / whatever makes sense in your plugins. The "locationOfData" would just be some /target or /build folder where the scoverage temporary files go (currently in SBT it's target/scoverage-data). Is this acceptable?
Sounds reasonable for the moment, and it would support a trivial CLI over the top of it. On a related note - do you think it would be possible to configure the data directory at runtime rather than at compile time? I think that the current arrangement makes the separation (and reporting) of 'unit' coverage and 'integration' coverage difficult.
I don't think so, because when you run the tests, the instrumented code writes to the data directory that was fixed at compile time.
But then both my unit tests and my integration tests write to the same directory, unless I recompile my source code for each set of tests. If you consider a multi-module project, then each module writes to its own data directory, and I lose the ability to separate the direct project coverage from coverage generated by dependent projects. I would like to retain this separation, since I value unit test coverage more highly than integration test coverage. The …

For my use-case I would need to: …

But then the …

Of course, it might be suggested that multi-module projects are the real problem, and that you can solve this with a microservice approach, but I think that there can still be value in dividing a microservice into its components.
You would have to put the integration tests into another submodule and do …. This is why it cannot be done at runtime, and why it must be the same for …
But I'm not looking for coverage of my test code, I'm looking for coverage of my source code? And the source code should be compiled once only. Why can't the Invoker look for a system property to determine the data directory root?
I wrote scoverage, so I'm aware of this ;) If the source code is compiled …
A property might not be propagated to forked processes on all platforms? It …
:-) I think that Maven, SBT and Gradle all support setting system properties when running JVM-based unit tests.
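(For illustration, in an SBT build - a sketch assuming the scoverage.dataDir property proposed later in this thread; the tests must be forked for the options to apply, and the IntegrationTest configuration must be enabled:)

// build.sbt (sbt 0.13 syntax): route unit and integration measurements
// to separate directories via a JVM system property.
fork in Test := true
javaOptions in Test += s"-Dscoverage.dataDir=${(target.value / "scoverage-unit").getAbsolutePath}"

fork in IntegrationTest := true
javaOptions in IntegrationTest += s"-Dscoverage.dataDir=${(target.value / "scoverage-it").getAbsolutePath}"

Gradle (systemProperty on the test task) and Maven (Surefire's systemPropertyVariables) offer equivalents.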
And you've just reminded me how hard that was with SBT :-( In the end it encouraged me to use … (and pass in environment variables).
We need to ensure that the system property can be set for forked processes. Setting env vars in Java is a proper hack, and I've got unit tests that …
Sadly I agree it's a little more complicated. It's not guaranteed to fork and therefore you can't be sure that the property will be passed through.
I think a data directory override capability would be useful, but we would then need a way to locate the measured-statements files at report time, and also a way to identify the exact statement. Currently it assumes one statements file per data directory. I don't think this is compatible with running with multiple instrumented jars. Now that I think about it, I imagine that both Maven and Gradle only support one instrumented project at the moment anyway, since the test classpaths will be composed of: …

Configuring these classpaths to include the instrumented classes of other projects would require a bit of thought.
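(For reference - as far as I can tell, the current 1.x layout keeps one serialized statements file plus per-thread measurement files in each data directory, roughly:)

target/scoverage-data/
  scoverage.coverage.xml      <- the statements, written once at compile time
  scoverage.measurements.1    <- invoked statement ids, one file per test thread
  scoverage.measurements.17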
The statement file won't change between compilations anyway, so having one … What I think you want is this: compile your classes with instrumentation … That can be done with a system property, but whether it's propagated or not …
Yep - and that means that the report generation is no longer a function of the data directory and the output directory, but it now also requires the statement file (since we have separated the statement file from the measurement directory). (And assuming that we do not want to measure more than one project concurrently, which seems fair for a first step.)
The statement file can be set by a property too though - or in fact just …
Hi, I read your discussion, experimented with https://github.com/scoverage/gradle-scoverage-sample/tree/master/separate-tests (thanks for this project) ported to Maven, and have some thoughts and ideas: …
Ad. 3. Forget about the resource - that was a wrong idea. I'm in favor of a system property. It allows defining many test executions with different scoverage data locations for different reports in one module (if someone would want/need it).
re: 1 - I confirm that Gradle tests instrumented main classes for the subproject under test, and un-instrumented classes for any dependent projects (even though instrumented classes may exist). I imagine that SBT works in the same way, because of the way that dependencies are specified in these tools.

It sounds as though this is different to the Maven plugin, in that I can see https://github.com/scoverage/scoverage-maven-plugin/blob/master/src/main/java/org/scoverage/plugin/SCoveragePreCompileMojo.java#L240 and I think this would place all the instrumented classes on the classpath, regardless of which module they were in.

However, I wonder if this is actually working as we might expect; I agree that all the instrumented classes are presented, and new measurements are taken. I suspect, though, that the coverage will be crystallised at the time that the submodule is reported, which means that code covered by a downstream module will have no impact on the coverage statistics, because those measurements were taken after scoverage.xml was created.

I'll try to produce a test project tomorrow, and look forward to proving this hypothesis wrong.
I've confirmed to myself that the reports are written module-by-module; i.e. code only covered by a dependent module is not included in the report. This means that Maven behaves in the same way as Gradle, despite the inclusion of instrumented classes throughout the reactor projects. @gslowikowski - do you agree? It seems we have two separate use-cases (i.e. 'direct' coverage and 'indirect' coverage).
I've compared Maven and Gradle working with your simple multi-module project https://github.com/scoverage/gradle-scoverage-sample/tree/master/separate-tests. They work internally completely differently (to achieve the same goal). I will check SBT later. Executed commands: …
My findings are: …

Maven log is more verbose, but the tasks executed for every module are: …

Differences: …

Of course, in Maven non-instrumented classes can be available (in …). I learned the Cobertura Maven plugin implementation before writing the Scoverage plugin, but Cobertura is different. It does not tweak compilation. After compilation, non-instrumented compiled classes are copied from …. I don't like this difference between Gradle and Maven, but I don't know what to do about it yet. As I wrote, I will also test SBT. All I know now is that it does not compile instrumented classes to a different location, but to …
re: Gradle compiling the main sources twice - I think this supported excluding files from coverage before the feature was available in the scalac plugin. It can probably be removed now. I think Gradle classes and testClasses are broadly the equivalent of the Maven 'compile' and 'testCompile' lifecycle phases. Gradle does not differentiate between a phase and an execution; they are all tasks.
"I think this supported excluding files from coverage before the feature was available in the scalac plugin." - I don't understand, can you clarify? If you remove regular compilation, non-instrumented classes will not be available as dependencies in other modules (like in Maven now). What about |
Excluding source files from instrumented compilation allows the class loader to fall back to the plain uninstrumented class (both output directories are included on the test class path). This requires that you compile everything at least once, likely twice. A better solution to the problem 'I do not want to instrument everything' is now available as a property that is passed to the Scoverage compiler plugin on the command line.

There is no 'instrumented' jar file produced at the moment, but it is a one-liner to add to your Gradle build file if you want it. The existing jar task includes plain class files, and the other modules use those too.
Hi, I tried the solution with a system property for the data directory and it works! I don't have enough time to publish everything now. Required changes: in the Invoker, change

val writer = files.getOrElseUpdate(dataDir, new FileWriter(measurementFile(dataDir), true))
writer.append(id.toString + '\n').flush()
ids.put((dataDir, id), ())

to:

val resolvedDataDir = Option(System.getProperty("scoverage.dataDir")).getOrElse(dataDir)
new File(resolvedDataDir).mkdirs() // TODO - check result
val writer = files.getOrElseUpdate(resolvedDataDir, new FileWriter(measurementFile(resolvedDataDir), true))
writer.append(id.toString + '\n').flush()
ids.put((resolvedDataDir, id), ())

WDYT? If you agree, it would be good to apply this change, then build and deploy a 1.0.5-SNAPSHOT version of the scalac plugin to the Sonatype snapshot repository. This would allow implementing the necessary changes in the SBT, Gradle and Maven plugins and testing them before the next release.
I'm not sure that is sufficient. I think that the current relationship is that a single compilation run produces a single measured-statements file. The compiled classes then write to a single measurements directory. Statements are identified by an integer, generated from a sequence that is reset at compile time.

You cannot mix classes from separate compilations and have them write to a single directory; you will get identity clashes. This must not be allowed.

If you allow a user to specify a single location where measurements should be written, then you have to have a unique identifier for each statement. If you have more than one compilation step (i.e. a multi-module project) then you need something more unique than an integer sequence.

I think that with the current system it will be difficult to explicitly state the location where each compilation run should write its measurements at invocation time.
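(To make the clash concrete - a hypothetical illustration, not real scoverage code:)

// Two modules compiled separately both number their statements from 1.
object IdClash extends App {
  val moduleAStatements = Map(1 -> "Foo.scala:10", 2 -> "Foo.scala:12")
  val moduleBStatements = Map(1 -> "Bar.scala:3", 2 -> "Bar.scala:7")
  // A measurement of "1" read back from a shared measurements directory
  // matches a statement in each module, so coverage could be attributed
  // to the wrong code:
  println(moduleAStatements(1)) // Foo.scala:10
  println(moduleBStatements(1)) // Bar.scala:3
}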
@maiflai is right in his analysis. The compilation run uses an integer sequence for uniquely identifying the statements. It is reset each time, as the code has probably changed anyway. What is the issue that separate compilations would solve?
I think that in a multi-module project each module is compiled separately. It might be a Gradle peculiarity that a new process is forked for each compilation? The original problem was 'how do I see what code was covered in my integration tests' (which are testing code from multiple instrumented modules).
(I'll double check the forking behaviour with the Ant compiler, but I think the Zinc compiler is initialised separately for each project.)
But isn't the behaviour of the CoverageAggregator designed to work around this?
The scalac compiler plugin needs an additional, string-type, optional parameter uniquely identifying every module (in Maven, for example, "groupId:artifactId"). At runtime - an additional version of …. Generally speaking, if we want to get the coverage of integration tests of a multi-module project, these tests must operate on scoverage-instrumented classes from all modules. Nobody writes integration tests for a single module.
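(A sketch of that idea with hypothetical names - the real Invoker caches its writers and does more, but the shape would be roughly:)

import java.io.{File, FileWriter}

// Namespace measurements by module, so integer ids only need to be
// unique within one module's compilation run.
object ModuleAwareInvoker {
  def invoked(id: Int, dataDir: String, moduleId: String): Unit = {
    // e.g. moduleId = "com.acme:module-a", sanitised for the filesystem
    val dir = new File(dataDir, moduleId.replace(':', '_'))
    dir.mkdirs()
    val writer = new FileWriter(new File(dir, "scoverage.measurements"), true)
    try writer.append(id.toString + '\n') finally writer.close()
  }
}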
I think that doing so exposes the internal workings of the plugin downstream; I prefer that the plugin should generate a globally unique identifier. re: integration tests - some might consider that tests accessing external systems are 'integration' tests. I thought this was normally done with the Failsafe plugin in Maven, as part of a single module's lifecycle.
You are right, the scalac plugin will expose its internal workings to the Gradle/SBT/Maven plugins, but not to the user's project. That's not bad, but of course it would be better not to expose it at all. This was just the first idea. Integration tests can also test integration between modules in a multi-module system/application. From the coverage point of view it's not important which plugin runs them. For example, I have a multi-module Play! Framework web application with Selenium integration tests. I would be happy to see their code coverage for the whole application, not only one module per report.
Hi,

I think a CLI for generating reports would help me to avoid duplicating scoverage.ScoverageSbtPlugin#loadCoverage and scoverage.ScoverageSbtPlugin#writeReports in the Gradle plugin.

The Gradle plugin copies this functionality at the moment, but does not support multi-module aggregation, and it would be nice if I could re-use the existing code by moving it from the SBT plugin into the parent module.

@gslowikowski - does the Maven plugin have its own duplicate of these functions, and if so, would a CLI help?

Thanks,
Stu.