10

I have read several posts on Stack Overflow stating that using Sonar as a pre-commit analysis tool is inefficient, because it has to compile the whole project, run its full analysis, and so on.

However, the SonarQube manual states that there is a sonar.inclusions property for restricting the analysis to a given list of files. So I was thinking of running the analysis only on the files that have been changed or added, as a pre-commit hook, and failing the commit if too many issues were introduced.

As I understand it, the list of modified and added files can be fetched through svnlook, and the Sonar analyzer can be pointed at a specific .properties file (say, one describing a configuration with only coding rules, cyclomatic complexity, and LCOM4 metrics).
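
Roughly, what I have in mind is a hook along the lines of the sketch below. The server-side checkout location and the precommit-sonar.properties file are placeholders, it assumes sonar-runner is on the hook's PATH and accepts a project.settings property pointing at an alternative configuration file, and I realize the analysis would run against the server-side checkout rather than the incoming transaction content:

    #!/bin/sh
    # pre-commit hook sketch: analyze only the files touched by this transaction
    REPOS="$1"
    TXN="$2"
    CHECKOUT=/path/to/server-side/checkout    # placeholder: an up-to-date working copy

    # Collect changed/added .java files from the transaction (skip deletions)
    FILES=$(/usr/bin/svnlook changed -t "$TXN" "$REPOS" \
            | awk '$1 != "D" { print $2 }' | grep '\.java$' | paste -s -d, -)

    [ -z "$FILES" ] && exit 0                 # nothing relevant to analyze

    cd "$CHECKOUT" || exit 1
    # Restrict the analysis to the changed files and use a stripped-down
    # properties file (coding rules, complexity, LCOM4 only)
    sonar-runner -Dproject.settings=/path/to/precommit-sonar.properties \
                 -Dsonar.inclusions="$FILES" > /dev/null 2>&1 || {
        echo "Sonar analysis reported problems with this commit." 1>&2
        exit 1
    }
    exit 0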

However, I fail to understand how to obtain the result of the Sonar analysis within the pre-commit hook script and provide, say, a link to the analysis results. Is this possible at all? Are there any real-world, or at least remotely relevant, examples of such a practice?

Thanks in advance.

jiallombardo

2 Answers

5

However, I fail to understand how to obtain the result of the Sonar analysis within the pre-commit hook script and provide, say, a link to the analysis results. Is this possible at all?

From pre-commit, no. At least not if you want your commit to complete: the only way a pre-commit hook can send anything back to the client is to exit with a non-zero status, which rejects the commit.

pre-commit should only be used for checking that a commit meets basic requirements: verify that a commit message has been supplied, make sure a valid bug ID has been entered if you integrate with a bug tracker, or perform access checks that the built-in path-based authorization can't handle.
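
For illustration, a minimal pre-commit check (in the spirit of the sample hook template that ships with Subversion) looks something like this; note that the message on stderr only reaches the client because the hook exits non-zero and the commit is rejected:

    #!/bin/sh
    # Minimal pre-commit hook: require a non-empty log message
    REPOS="$1"
    TXN="$2"

    /usr/bin/svnlook log -t "$TXN" "$REPOS" | grep '[a-zA-Z0-9]' > /dev/null || {
        echo "Commit rejected: please supply a log message." 1>&2
        exit 1    # non-zero exit rejects the commit; stderr is relayed to the client
    }
    exit 0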

All hook scripts should be as short and efficient as possible. A long-running pre-commit in particular will hold up both the committer and anyone else trying to commit behind them.

For your usage, a post-commit hook may work (except that it can't send feedback to the client either, so you won't be able to provide a URL), but a better solution is a continuous integration server. A CI server monitors the repository for changes and performs the actions you configure each time a qualifying commit happens. Use that system to perform your checks and send an email with the results.
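
As a sketch of that route, the post-commit hook can simply hand the revision off to the CI server and return; the Jenkins URL, job name and token below are placeholders, and it assumes the job's "Trigger builds remotely" option is enabled:

    #!/bin/sh
    # Minimal post-commit hook: hand the new revision off to the CI server and return
    REPOS="$1"
    REV="$2"

    # The CI job does the checkout, the Sonar analysis and the e-mail notification,
    # so the hook itself stays short and never blocks the committer
    curl -s -o /dev/null \
      "http://ci.example.com/job/sonar-commit-check/buildWithParameters?token=SECRET&REVISION=$REV" &
    exit 0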

alroc
    Indeed, integrating into the CI process is a good idea; however, what I wanted was a system that would fail commits based on coding-rule compliance. Basically, if the analysis stays below a certain threshold, I don't need to return anything; but if, say, there are 3 critical issues in the committed code, I'd like to fail the commit and link the committer to a page with the results. I know these scripts have to be lightweight; if this approach fails that criterion, fine, but I want to know whether such behaviour can be implemented at all. – jiallombardo Sep 29 '13 at 20:05
0

I don't know if this thread is still alive, but I am working on a similar situation and hence responding.

The Issues Report plugin enables generation of an HTML report, which can then be parsed (as a separate step; I am using a shell script and a regex as part of a Jenkins job) to identify whether new issues have been reported and, if so, return a failure.
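
Roughly, the shell step of that Jenkins job looks like the sketch below. The analysis mode, the sonar.issuesReport.html.enable property, the report location and especially the regex over the summary line may vary between SonarQube and plugin versions, so treat those parts as placeholders to adapt to your own report:

    #!/bin/sh
    # Run the analysis with the HTML issues report enabled
    sonar-runner -Dsonar.analysis.mode=preview \
                 -Dsonar.issuesReport.html.enable=true || exit 1

    REPORT=.sonar/issues-report/issues-report.html   # default output location (adjust if needed)

    # Extract the "new issues" count from the report; the regex depends on the report markup
    NEW_ISSUES=$(grep -oE 'New issues[^0-9]*[0-9]+' "$REPORT" | grep -oE '[0-9]+' | head -1)

    if [ "${NEW_ISSUES:-0}" -gt 0 ]; then
        echo "Failing the build: $NEW_ISSUES new issue(s) reported by Sonar."
        exit 1
    fi
    exit 0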

MadDevel