When you practice Test-Driven Development, most of the time you only need to run a small number of tests to validate your most recent code changes. Unfortunately, things change once you start refactoring: refactoring your models affects your entire application. So in order to keep things from going down the drain, you’ll need to run all (or most of) your tests constantly. And that’s where things get tedious.
At Codeship we develop our Ruby on Rails web app test-driven with RSpec. Most of our specs are high-level request specs (integration tests, in a way), which are slow compared to controller or model specs. One reason for this slowness is that request specs exercise the whole application stack. Another is that we changed the default Capybara driver from Rack::Test to Poltergeist.
“Poltergeist? But that’s so much slower!”
Right! But we chose Poltergeist because it tests a web application more like a user experiences it. While Rack::Test only inspects the raw response from your application, Poltergeist launches a headless WebKit browser, provided by PhantomJS.
Because of this additional safety we decided to make Poltergeist the default Capybara driver.
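For reference, wiring Poltergeist up as the default driver happens in the spec helper. This is a sketch, not our exact configuration; it assumes the `capybara` and `poltergeist` gems are in the Gemfile and PhantomJS is installed:

```ruby
# spec/spec_helper.rb -- sketch of a Poltergeist setup
require 'capybara/rspec'
require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  # js_errors: true makes specs fail on JavaScript errors in the page
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end

# Use the headless WebKit browser for every spec by default
Capybara.default_driver    = :poltergeist
Capybara.javascript_driver = :poltergeist
```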
Development vs. Codeship CI
Using test-driven development will eventually take you to a point of misery where running all your specs slows down your productivity unbearably. Poltergeist let us reach this point at record speed. So what to do? On the one hand we wanted to retain the quality of our web app; on the other hand we wanted to continue developing without feeling the need to poke our eyes out while waiting for the specs to succeed.
Running our spec suite on my computer with this setup took 8:45 minutes.
The solution was Codeship itself: we would speed up the specs in development as much as possible and let the continuous integration server do all the cumbersome work.
Step one: Use Rack::Test in development
We switched the default Capybara driver in development back to Rack::Test, keeping Poltergeist only for specs that really need a browser. This little change reduced the execution time to 5:00 minutes.
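One way to get that split — fast Rack::Test by default, Poltergeist only for specs that need a real browser (the Rack::Test/Poltergeist combination in the table below) — is to set the two driver defaults separately. A sketch; our exact setup may have differed:

```ruby
# spec/spec_helper.rb -- sketch: fast default driver, headless browser
# only for specs tagged js: true (Capybara's standard convention)
Capybara.default_driver    = :rack_test
Capybara.javascript_driver = :poltergeist
```

With this configuration, only example groups tagged `js: true` pay the PhantomJS startup cost; everything else runs against Rack::Test.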
Step two: Skip slowest specs
Some specs took up to 40 seconds to run. They checked quite complicated procedures that rarely changed, so we decided to skip, in development, all specs that took longer than 10 seconds.
Running our specs with
rspec --tag ~speed:slow
cut the execution time roughly in half, to 2:29 minutes. Yay!
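A tag like `speed:slow` is just RSpec metadata on an example group. A hypothetical spec file might look like this (the group name and example are made up for illustration):

```ruby
# A hypothetical slow spec, tagged so that `rspec --tag ~speed:slow`
# excludes it from a development run
RSpec.describe 'Invoice recalculation', speed: :slow do
  it 'rebuilds the billing history for every account' do
    # ... an expensive, rarely-changing procedure ...
  end
end
```

On CI the suite runs without the exclusion flag, so tagged groups are still exercised there.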
Step three: Skip remote specs
At Codeship we integrate a couple of external services like GitHub. Of course we also needed to verify that the communication with these services works. But a developer’s life is hard: sometimes you are on a train, on a plane, or there’s simply no network reception — and all of a sudden many of your specs fail.
Therefore we tried to remove the dependencies on external services for as many specs as possible. We tagged all specs that still required internet access with the `remote` tag and skipped them in development. Executing
rspec --tag ~@remote --tag ~speed:slow
now finished in 1:28 minutes.
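Rather than passing `--tag ~@remote` by hand on every run, the exclusion can also be baked into the RSpec configuration. A sketch, assuming the CI server sets the `CI` environment variable (a common but not universal convention):

```ruby
# spec/spec_helper.rb -- sketch: skip specs tagged remote: true by
# default, but run everything when ENV['CI'] is present
RSpec.configure do |config|
  config.filter_run_excluding remote: true unless ENV['CI']
end
```

This keeps the local default fast and offline-friendly while the full suite still runs on the continuous integration server.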
By changing our test setup and skipping time-consuming tests in development, we were able to cut test execution time by more than 83%. This way we could stay productive during development and still perform extensive checks on our web application using the Codeship continuous integration server.
Here’s a final overview of the results after each optimization step:
| | number of specs | execution time (minutes) |
|---|---|---|
| All tests with Poltergeist | 128 | 8:45 |
| All tests with Rack::Test/Poltergeist | 128 | 5:00 |
| Without slow specs (> 10s) | 107 | 2:29 |
| Without slow and remote specs | 99 | 1:27 |
Here are some more blog posts that explain how to speed up your test suite: