7 Tips for Continuously Deploying Single Page Apps


Single page apps deliver fantastically rich user experiences, and they open up an entirely different avenue for continuous deployment. Separating out a front-end application from the server is a sound strategy for breaking up the responsibilities of the team. Maintaining a separate front-end code base allows teams to iterate on features quickly and interact through formalized contracts in the form of an API.

Not everything about delivering static assets is so rosy though. There are hosting and delivery pitfalls that your team should be aware of before embarking on continuously deploying static assets. Here are some tips for effectively deploying statically hosted applications iteratively, safely, and most importantly, efficiently.

1. Package and Deploy Using State of the Art Tools

If your team has decided to deploy client and server code independently, there’s a good chance that the server isn’t written in Node. That doesn’t stop you from using Node and NPM to build and package your application! You’re free to use state of the art tools for packaging and development, regardless of your server-side framework.

Once your build and testing process is independent of a server framework, it frees up the delivery process as well. After the front-end application passes integration testing, the CI server can build a production release (see tip number two) and deliver it directly for distribution (see tip number five).
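As a concrete sketch, the front-end repository can expose its whole build through npm scripts, so the CI server only needs Node installed to test and package the app, no matter what language the API is written in. The test runner and bundler below are illustrative stand-ins for whatever your team already uses:

```json
{
  "scripts": {
    "test": "karma start --single-run",
    "build": "webpack --config webpack.config.js"
  }
}
```

The CI pipeline then boils down to `npm install`, `npm test`, and `npm run build`, with the resulting `dist/` directory handed off for distribution.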

2. Minification, Compression, and Source Maps Are Not Optional

Deploying a single page app means more than uploading concatenated code to a server. It deserves all the byte-saving care and attention you would give to the assets served up by a production-grade web framework. That means it should be minified, compressed, and accompanied by source maps.

Any of the popular JavaScript build tools along with a tiny bit of scripting will let you deliver perfectly optimized packages.
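For example, assuming a reasonably recent webpack and the compression-webpack-plugin package, a minimal production configuration along these lines covers all three concerns. Treat it as a sketch rather than a drop-in config:

```javascript
// webpack.config.js -- a minimal production build sketch
const path = require('path');
const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  mode: 'production',        // minifies the bundle by default
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].js'
  },
  devtool: 'source-map',     // external .map files, only fetched when dev tools are open
  plugins: [
    new CompressionPlugin()  // writes a pre-gzipped copy of each asset for static hosts
  ]
};
```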

3. Optimize Code and Style Delivery

This may be slightly controversial given the recent trend toward declaring styles alongside view components, but there’s a trade-off to bundling styles together with code.

Typically, a browser can download the CSS and JS files in parallel, lowering the time to first paint after a page load. That performance boost isn’t possible when all of the assets are bundled together. Instead, all of the styles and code are smashed into a single large file, and clients end up staring at a blank screen while they wait for it to download.

It complicates the delivery process slightly to have multiple files, but the size and performance benefits are worth the trouble.
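With webpack, for instance, the mini-css-extract-plugin package keeps styles out of the JavaScript bundle so the browser can fetch the `<link>` and `<script>` tags in parallel. A rough excerpt, with illustrative filenames:

```javascript
// webpack.config.js (excerpt) -- emit CSS as its own asset
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  module: {
    rules: [
      // extract imported styles instead of injecting them from the JS bundle
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }
    ]
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: '[name].css' })
  ]
};
```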


4. Deliver Separate Bundles

Unless you are an ultra purist, every packaged application is composed of both library modules and application code. Chances are that your application code changes much more frequently than the library modules do. When you serve up one giant concatenated bundle, the client is forced to download everything fresh with every change, no matter how small. Application bundles routinely push a 3MB payload, which is a lot of code to download again just because a few lines of application code changed.

To avoid this issue, separate your application into at least two bundles: one for concatenated library code and another for application code. In the bright future of HTTP/2 connection parallelism, individual files may be served up in parallel and this sort of planning won’t be necessary. For now, a bit of asset bundle splitting will speed up the experience for your users on every release.
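With webpack, for example, a vendor split can be configured roughly like this; the splitChunks API shown here belongs to newer webpack versions (older setups used CommonsChunkPlugin for the same effect):

```javascript
// webpack.config.js (excerpt) -- split rarely-changing libraries from app code
module.exports = {
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/, // anything pulled in from node_modules
          name: 'vendor',                 // emitted as vendor.js
          chunks: 'all'
        }
      }
    }
  }
};
```

Because the vendor bundle stays byte-identical across most releases, only the application bundle’s paths need to be refreshed at the CDN when you ship (see tip number five).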

5. Get Friendly With a Content Distribution Network

Serve static applications from a content distribution network. This allows clients to keep pointing at the same URL while maintaining caching semantics. It also allows you to perform invalidations when you release code, despite the lack of asset fingerprinting. An invalidation updates the cached version of the application that’s held at each edge server, the servers that actually serve the application to clients.

Be warned: invalidations can be slow, taking 10 minutes or more on Amazon CloudFront. This unpredictable, asynchronous behavior is part of why extra care has to be taken around versioning and releases.
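As an illustration of wiring the invalidation into the release, here is a sketch using the AWS SDK for JavaScript; the distribution ID comes from the CI environment, and invalidating `/*` is the bluntest possible approach:

```javascript
// invalidate.js -- trigger a CloudFront invalidation after uploading a release
// (assumes the aws-sdk package and AWS credentials available to the CI server)
const AWS = require('aws-sdk');

const cloudfront = new AWS.CloudFront();

cloudfront.createInvalidation({
  DistributionId: process.env.DISTRIBUTION_ID,
  InvalidationBatch: {
    CallerReference: `deploy-${Date.now()}`, // must be unique per request
    Paths: { Quantity: 1, Items: ['/*'] }    // bust every cached path for the new release
  }
}, (err, data) => {
  if (err) throw err;
  console.log('Invalidation created:', data.Invalidation.Id);
});
```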

6. Continuity Knows No Version

Don’t rely on users reloading their browser. Assume that some users will be running older versions of the app, and be prepared to handle requests from clients still using deprecated features. Treat releases as a continuum of changes and decide how long your release cycle is.

At a certain point it isn’t practical to support every old release and the bugs they may have contained. Unless you are deploying to a kiosk with an especially infrequent update cycle, you can safely assume users will reload once a week.
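One hedge against stale clients is a lightweight version check baked into the app itself. This sketch assumes the server exposes a version endpoint and that the current version is injected at build time; both are hypothetical names:

```javascript
// version-check.js -- nudge long-lived clients onto the current release
// (the /api/version endpoint and APP_VERSION value are illustrative)
const APP_VERSION = '1.4.2'; // injected by the build in a real setup

setInterval(() => {
  fetch('/api/version')
    .then(response => response.json())
    .then(({ version }) => {
      if (version !== APP_VERSION) {
        // the client is running a deprecated release; prompt or force a refresh
        window.location.reload();
      }
    })
    .catch(() => { /* network hiccup; try again on the next interval */ });
}, 15 * 60 * 1000); // check every 15 minutes
```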

7. Roll Features Out Gradually

Use feature flags to roll features out gradually. Ember is a stellar example: it ships code with new features included but disabled by default. The code is live and in production, but most people aren’t using it. Once it has been vetted in the wild, with staff or with a fraction of your users, you can release a new version with the feature enabled.

The same approach is often used when releasing server side code, but the stakes are higher with statically hosted single page apps. A gradual approach is crucial because rolling code back can only be as fast as your CDN’s invalidation period. That means you could have a botched release in production for 10 minutes or more without being able to revoke it.
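A minimal sketch of the client-side flag check, with a hypothetical flag name and endpoint, might look like this; failing closed keeps the dormant feature off whenever the flag service can’t be reached:

```javascript
// flags.js -- gate a shipped-but-dormant feature behind a flag
// (the newCheckout flag and /api/flags endpoint are illustrative)
const defaultFlags = { newCheckout: false }; // disabled by default, exactly as shipped

function loadFlags() {
  return fetch('/api/flags')
    .then(response => response.json())
    .then(remote => Object.assign({}, defaultFlags, remote))
    .catch(() => defaultFlags); // fail closed if the flag service is unreachable
}

loadFlags().then(flags => {
  if (flags.newCheckout) {
    console.log('rendering the new checkout flow');    // swap in the vetted code path here
  } else {
    console.log('rendering the legacy checkout flow');
  }
});
```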

Deploying single page applications can be as simple and robust as deploying application assets bundled with server code. What’s more, you gain the power of native JavaScript tooling, regardless of your server-side framework. At its core, a server/browser relationship is a simple distributed system. By deploying single page apps separately from the server, your team gains all of the flexibility, focus, and prioritization of a micro-system architecture.


Join the Discussion



  • Robert Bak

    Point (2) seems weird. Why would you want to include source maps for anything on production? Also zlib compression is done by the server hosting the files, so no build tools will help with that.

    • The primary purpose of a tool like uglify is to compress the code, not to mangle it. You’ll really appreciate being able to debug something in production when it isn’t feasible to pull the entire database into development, or if you are unable to connect to a production database from dev.

      When it comes to compression it will really depend on how the assets are being served. In the SPA situation that this post is discussing the files are being served by a static system like S3, which won’t perform any compression for you. The only way to serve up gzipped content is to do it before the file gets uploaded.

      • The compression aspect probably depends on the provider you’re using, but for the vast majority it’s a server config issue. I think even Amazon CloudFront now allows you to just turn gzip compression on and be done with it.

        The source maps issue doesn’t matter all that much: you can upload them, but it doesn’t feel like a “must do,” especially for an SPA, since the data source is most likely accessible from any browser (unless it’s some intranet thing). Also worth noting that you’d want to upload just the source maps and not the source files (although uploading both might be more convenient in some cases). Uglifying has the extra value of removing comments, which then don’t need to be safe for the general public.

  • Cory Maloy

    No one should ever deploy sourcemaps to production. Use those on your QA server for debugging and that is it. It can more than double the size of your file. This also applies to css. Always minify your css files and do NOT use sourcemaps unless it is the QA server. This is the standard best practice in enterprise applications…

    • Brad

      Source maps can be output as separate files which, though referenced in the minified CSS/JS file, won’t be downloaded by the browser unless the developer tools/inspector are open.

  • santacruz

    Hey @parkerselbert, do you have any suggestions on how to sync the deployment of separated backend and frontend code? If a feature change involves changes in both the backend and frontend code in separate repositories, how do you make sure that both have finished their build processes/deployments at the same time? Would love your ideas on that. Cheers!

    • @disqus_GfoDoGkfZE If it is critical to deploy backend and frontend changes together, then they should be released together, from the same server. You can still front the frontend with a CDN and get all the benefits of edge caching.

      If packaging everything together isn’t an option then you’ll need to release the backend changes first in a backward compatible way. Once that is verified you can release the frontend. It is just like maintaining a public API with a short versioning cycle.

  • Robert Fletcher

    Point 6 sounds like a recipe for a code maintenance headache. I’ve been thinking about this and I don’t really want to have to maintain support for arbitrary legacy paths. Instead I’m thinking of adding a client side `version` tracker that checks in with the server to see if it needs to be updated. Probably a good use-case for websockets, actually.