Welcome to the second part of the code coverage ecosystem review. In the first part, I introduced the concept, in case the term was new for you or you needed a bit of a refresher, and then worked through a series of code coverage libraries for PHP, Python, Java, Ruby, and Go.
During that process, we saw some of the functionality on offer, examples of the reporting that the tools provide, and how to install them. In this, the second part, we’re switching gears and looking at four online services.
Unlike the code libraries, which we saw in the first part of this series, these services bundle up code analysis into a set of professionally developed user interfaces which let you get started straightaway.
You don’t need to couple anything together, though you may need to do a bit of light-touch configuration after your account has been set up.
That said, you will quickly learn about the quality of your code and where it can be improved, both through individual metrics and through the concept of a code GPA (Grade Point Average). If you’re not familiar with the term, a code GPA, like a school GPA, represents the average of the collected code analysis metrics.
By reviewing these tools, you should be able to find the service that best fits your needs. To keep the analysis and comparison as fair as possible, I’m going to use the same code repository with each service.
It’s an old Go learning experiment repository that I created some time ago, called “Learning Beego.” That way, you can see how each service reports metrics, what they find, what they miss, and how each tool assesses the code.
One last thing before we dive right in: I’m not receiving any remuneration for covering any of the listed services. All of the services are included based on my personal experience with them and feedback from the community. However, for complete transparency, I do write for Codacy.
With that said, let’s get started.
First up is Code Climate. Over the next few months, they are also launching support for Go, Swift, Scala, Objective-C, and TypeScript. Given that, they support the majority of languages that developers are likely to be using today.
As an added plus for them, when I contacted them during the research for this article, the team was kind enough to give me access to their Go beta program, so that I could use the same code repository.
Code Climate provides both a hosted option, which costs $16.67 per seat, per month, and an on-premise plan. Even better, they also offer a free, hosted plan for open-source projects.
Regardless of the option chosen, you’ll get:
- Support for quality, correctness, security, and style issues
- Quality, test coverage, and pull request evaluation integrated with GitHub
- Integration with a range of services, including GitHub, Bitbucket, Asana, Beanstalk, Campfire, HipChat, and Pivotal Tracker
- Integration with Atom and VIM
- A command-line interface, API, and browser extension
- Extensive documentation
- Repository badges
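To make the command-line interface mentioned above more concrete, here is a rough sketch of what a Code Climate configuration for a Go project might look like. Code Climate uses an engine-based `.codeclimate.yml` file; the specific engine names below are illustrative assumptions, so check their documentation for the engines available for your language.

```yaml
# .codeclimate.yml - hypothetical configuration for a Go project
# (engine names are illustrative; consult Code Climate's docs)
engines:
  gofmt:
    enabled: true
  govet:
    enabled: true
  golint:
    enabled: true
exclude_paths:
  - "vendor/**"
```

With the CLI installed, running `codeclimate analyze` in the repository root then executes the configured engines locally and prints any issues found, before you ever push a commit.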
There are several extra features in the enterprise (on-premise) edition, such as SLAs, priority support, and training. I’m not focusing on them in this review though.
The Code Climate UI offers a simple yet effective experience, where at first glance, it’s easy to know the quality of each repository. From any repository, you can then drill down and gain a deeper understanding of how each repository is assessed.
You can see repository statistics, such as code smells, code duplication, code maintainability, and test coverage. This drill-down process can in turn be applied to each and every file in the repository, where you can see trends on technical debt, lines of code, and churn versus maintainability.
You can also drill down to individual code issues and from there create tickets to fix them. This aspect of the UI does a good job of documenting the issues that have been identified. I found this especially handy when I was first learning about code coverage and static analysis. It’s one thing to have an issue reported, but if you don’t know why it matters, the report is of little value.
Overall, the Code Climate UI is well thought out and well designed, easily one of the most professional that I’ve yet seen.
Now for Code Beat. Created as an internal tool by the developers at codequest, a software development firm based in Warsaw, Poland, with a quirky sense of humor, Code Beat is now a fully fledged standalone service.
Why Code Beat? Well, in their own words:
Our approach is unique in a way that deemphasizes scores and graphs and focuses on an ongoing quest for excellence. As we keep detecting more and more code smells, graphs and GPAs will invariably change. However, we’re perfectly fine with that. Codebeat is written by developers for developers, so we won’t let graphs and scores get in the way of our learning and improvement.
What’s more, unlike other tools that are (at least in part) wrappers around existing tools, such as Golint and Govet, the Code Beat team “created our own algorithms and an extensible software analysis framework from scratch.”
The primary difference between the free and private plans is that the private plan supports private repositories in addition to public ones and offers dedicated support; the free plan also lacks GitHub, Bitbucket, GitLab, and Slack integration. Regardless of the option chosen, you’ll get:
- Team management functionality, supporting access levels such as owner, admin, and regular user
- An API for team management
- Code quality badges
- Function-level and namespace-level metrics such as assignment branch condition, cyclomatic complexity, lines of code, and code duplication
- Breakdown of code by class, namespace, file, and package
- Extensive documentation
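Metrics like cyclomatic complexity can feel abstract at first, so here is a small illustration, written in Go to match the repository used in this review, of how it is counted: each branching decision adds one to a base score of one. The function and its numbers are my own example, not Code Beat’s actual algorithm.

```go
package main

import "fmt"

// shippingCost has three decision points (the two weight checks and
// the express check), giving it a cyclomatic complexity of 3 + 1 = 4.
// Every extra branch adds another path a test suite must cover.
func shippingCost(weightKg float64, express bool) float64 {
	cost := 5.0
	if weightKg > 10 { // decision point 1
		cost += 10
	} else if weightKg > 5 { // decision point 2
		cost += 5
	}
	if express { // decision point 3
		cost *= 2
	}
	return cost
}

func main() {
	fmt.Println(shippingCost(12, true)) // prints 30
}
```

Analyzers flag functions whose score crosses a threshold, nudging you to split them into smaller, more testable pieces.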
If you read through the documentation, you’ll get the feeling that the team behind the project genuinely cares about code and that the service is, as they say, designed by developers for developers.
That said, one thing I found odd is that the service lets you import projects written in languages it doesn’t support, without showing any message in the UI to say so. Strange. Perhaps it’s just my newness to the service, and I misunderstood something.
A real plus for the service though, is that its project analysis is quite quick.
Next is Codacy. Like Code Beat and Code Climate, they offer a free plan for open-source projects, a pro plan for $15 per user, per month, and an enterprise (self-hosted/on-premise) option. The distinction between their plans is, at first glance, much easier to discern.
All plans offer:
- Unlimited open-source repositories
- Integration with GitHub, Bitbucket, and GitLab
- Comments on Pull Requests
- Code quality badges
- Support for configuration files
- Analysis through a range of open-source validation engines, including Brakeman, Checkstyle, CoffeeLint, CSSLint, ESLint, Golint, Govet, PHPCPD, and Pylint
- Standard checks, including code duplication, churn and complexity, and project statistics
- Notifications about new issues
- Performance analysis
- A history dashboard, allowing analysis of code quality over time
- Multi-branch analysis
- Refactoring suggestions
The pro plan builds on this base, by including:
- Private repositories
- Support and faster analysis
- Code coverage tracking
- Integration with Slack, HipChat, Jenkins, JIRA, and YouTrack
- Linter configuration files
- Organization and team management
- Commit and pull request review
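Since both plans mention support for configuration files, here is a hypothetical sketch of what a repository-level Codacy configuration could look like. The file name and keys below are assumptions on my part based on common conventions for these services; Codacy’s documentation is the authority on the exact format.

```yaml
# .codacy.yml - hypothetical example; verify keys against Codacy's docs
engines:
  pylint:
    enabled: true
  golint:
    enabled: true
exclude_paths:
  - "vendor/**"
  - "docs/**"
```

Keeping a file like this in the repository means the analysis settings are versioned alongside the code, rather than living only in the dashboard.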
What I like about Codacy is that the UI, while quite feature rich, is very well laid out. It provides everything that you need in a dashboard, yet without overwhelming you in the process. The navigation, which you can see on the left-hand side in the screenshot below, is both clear and intuitive.
Moreover, in the dashboard, there are multiple places where you can click to drill down for a more detailed analysis of a given issue. However, the feature that most stood out for me is the ability to create goals.
While it is one thing to know how your code stacks up, Codacy takes you further by encouraging you to set goals around improving the quality of your code. These are based on improving the quality of either a file or category.
A file improvement is self-explanatory. Category goals help you improve areas such as security, performance, and coding standards, which is a nice touch.
CodeFactor offers more detailed (or nuanced) pricing plans than the other three. They have a free plan, which supports unlimited public repositories and one private repository.
There’s a starter plan for $19 per user, per month, which supports five private repositories, unlimited public repositories, and offers premium support.
Then there are the team and agency plans. These are $39 per user, per month and $79 per user, per month and support 10 and 20 private repositories respectively.
There doesn’t appear to be any feature limitation between the plans, except that premium support is only available on paid plans. Like the others, they offer a sophisticated UI that provides:
- A Grade Score (GPA)
- Analysis of issues, problematic files, pull requests, branches, new and fixed issues
- Technical debt analysis (such as code complexity, code duplication, code style violations, churn, and lines of code) with inline documentation and supporting examples on why the issue was reported, along with links to further information
- Automatic commenting on commits and pull requests
- A knowledge base of common problems, boot camp, and common tasks that you may need to know more about
- Ability to create manual comments on GitHub and Bitbucket
- Issue tracking
- The configuration of what is analyzed
- Code quality badges
On the whole, while the dashboard seems more straightforward or less feature-rich than some of the others, it is well laid out and provides an excellent user experience.
A nice touch is that the top-level of the dashboard provides:
- The ability to rapidly filter by company, team, repository, and author over predefined time periods
- A contribution guide, similar to GitHub’s
- A code quality summary showing current changes and changes over time, which can be adjusted by clicking on points in the contribution graph
- A leaderboard, which shows the developers who have both improved and declined the most. If you’re competitive or enjoy friendly rivalry in your team, this is a handy feature to have.
I feel that the documentation is a bit light, at least compared to the depth of documentation provided by other services. Also, the dashboard doesn’t appear as full-featured as the others. However, it does an excellent job of organizing the information so that it’s readily accessible and does an excellent job of presenting the information at hand.
It’s hard to provide a verdict on which service is the best. Each of them offers robust static code analysis in well-thought-out dashboards that are feature-rich and provide a professional user experience.
CodeFactor appears to be the simplest of the four, yet it avoids overloading you with additional functionality.
Code Beat has a unique point of difference in that they don’t use or wrap a range of open-source tools as some of the others do; they created their own analysis engine. Doing so gives Code Beat the advantage of offering insight into code quality that the others cannot provide. Yet at the same time, it provides less transparency: if you already know the underlying open-source tools well, how they work and what they report, you can’t map that knowledge onto Code Beat’s results.
Codacy provides the dashboard that I like the most. Plus, I like the goals functionality. As someone who needs goals to stay focused and motivated, this is a win for me.
That said, all of them are compelling services, with free and quite competitively priced plans.
This has been a rapid introduction to four of the biggest and most notable code coverage services available today. Each of them is professionally designed and priced, and each clearly makes it easy to stay informed of the changing quality of your code base over time.
Which one is for you? Well, only you can determine that. I strongly encourage you to review each one and decide for yourself which, if any, best suits the needs of yourself, your team, and your projects.