Reading Time: 4 minutes
Thankfully, HTTPS usually isn’t needed for local development; the complexity and overhead it adds rarely pay off during that phase of a project.
Of course it’s an absolute must in production, but generally if I’m only accessing my app via localhost while developing it, I don’t need or want to fuss with certificates and configuration and all that. In production, my apps run behind load balancers that handle SSL termination so even then I don’t usually have to configure much SSL myself; I simply set up the listeners properly on the load balancers and let them do the heavy lifting.
However, recently we were implementing 2-Step Verification on our corporate single-sign-on, and we wanted to support Yubikeys using the FIDO U2F specification. This standard mandates HTTPS and there is no “dev mode” that allows use of HTTP instead. As a quick and dirty solution, we just used ngrok for a while, but that understandably upset our network security folks and was a bit overkill, seeing as we didn’t need public access to our local dev environment. So we needed a better solution.
By now hopefully everyone is aware of Let’s Encrypt, the free and automated certificate authority for publicly trusted SSL certificates. Let’s Encrypt works by using automated techniques for verifying that you own the domain you’re requesting a certificate for.
A Let’s Encrypt client calls their API to request a cert for one or more domains, and the API replies with a unique challenge. That challenge must then either be served from a location Let’s Encrypt can reach over HTTP, or published in a DNS record that Let’s Encrypt can resolve for verification. The HTTP challenge method does not work for local development but is convenient for publicly accessible hosted environments.
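To make the DNS method concrete: the dns-01 challenge works by publishing a TXT record at a well-known name under the domain being verified. A sketch of what that lookup looks like, using the example domain from later in this article (the token value shown is purely illustrative):

```shell
# Let's Encrypt resolves a TXT record at _acme-challenge.<domain> to verify
# control of the domain. With Traefik + Cloudflare, the record is created
# and cleaned up for you automatically.
dig +short TXT _acme-challenge.local.fillup.io

# Example of the kind of record a client publishes (token is illustrative):
# _acme-challenge.local.fillup.io.  120  IN  TXT  "gfj9Xq...Rg85nM"
```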
That leaves us with the DNS method for verification. If you use a cloud DNS provider like Amazon, Cloudflare, or dozens of others, the process of creating the DNS record can be automated as well to really simplify everything.
And if you don’t use a cloud DNS provider, it’d be worth the time and money to just register a domain for dev work and set it up with Cloudflare for free to take advantage of local HTTPS if you need it.
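One detail worth making explicit: with the DNS challenge, the hostname on the certificate never has to resolve publicly to your machine, but your browser does still need to reach your local proxy at that name. You can either create a real A record pointing at 127.0.0.1 or add a hosts-file entry; a sketch using the example domain from this article:

```shell
# Point the dev hostname at localhost so the browser reaches the local proxy.
# local.fillup.io is the example domain used in this article; substitute yours.
echo "127.0.0.1 local.fillup.io" | sudo tee -a /etc/hosts
```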
We use Docker to run our apps in containers during development as well as production. And as I mentioned earlier, we run our production containers behind load balancers that handle the HTTPS, so we needed a solution that would mimic that arrangement locally.
Fortunately, there is a really great modern web proxy called Traefik. Traefik has built-in support for Let’s Encrypt as well as integrations with major cloud DNS providers like Cloudflare. With a fairly simple configuration, Traefik can handle all the interactions with Let’s Encrypt and Cloudflare for you, so you only need to provide a couple of values like your email address and the domains you want certs for.
Here is a minimal (fewer than 40 lines) traefik.toml configuration file that will request a SAN certificate from Let’s Encrypt for local.fillup.io and use Cloudflare for verification. It will proxy requests sent to local.fillup.io to the host http://app:80, which in my use case is another Docker container.
Missing from the configuration are the Cloudflare API credentials, which are provided via environment variables. You can consult the Traefik documentation for what environment variables are needed for your DNS provider.
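For Cloudflare, the required variables are the ones shown in the local.env example later in this article; set them in your shell or an env file before starting Traefik (the values below are placeholders):

```shell
# Credentials Traefik uses to call the Cloudflare API for the dns-01 challenge.
# Variable names differ per DNS provider; these are the Cloudflare ones.
export CLOUDFLARE_EMAIL=you@example.com
export CLOUDFLARE_API_KEY=your-cloudflare-api-key
```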
```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
email = "email@example.com"
storage = "/cert/acme.json"
entryPoint = "https"
  [acme.dnsChallenge]
  provider = "cloudflare"
  delayBeforeCheck = 60

[[acme.domains]]
  main = "fillup.io"
  sans = ["local.fillup.io"]

[file]

[backends]
  [backends.backend1]
    [backends.backend1.servers]
      [backends.backend1.servers.server0]
      url = "http://app:80"
      weight = 1

[frontends]
  [frontends.frontend1]
  entryPoints = ["http", "https"]
  backend = "backend1"
  passHostHeader = true
    [frontends.frontend1.routes.default]
    rule = "Host: local.fillup.io"
```
Pretty simple, right? In order to make this a drop-in solution for us, we created a Docker image that uses an entrypoint script to replace placeholders in the config file with real values. This way, as we work on projects, we can just add a simple chunk of code into our
docker-compose.yml file and, presto, have local HTTPS support proxying traffic to our application.
Here’s an example of how to use it in a docker-compose.yml file:

```yaml
proxy:
  image: silintl/traefik-https-proxy
  ports:
    - "443:443"
  volumes:
    - ./cert/:/cert/
  env_file:
    - ./local.env
```
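That snippet defines only the proxy service. For context, a fuller docker-compose.yml sketch might pair it with an application container named app, matching the BACKEND1_URL=http://app:80 value in local.env (the nginx image here is just a stand-in for your actual application):

```yaml
version: '3'
services:
  proxy:
    image: silintl/traefik-https-proxy
    ports:
      - "443:443"
    volumes:
      - ./cert/:/cert/     # persists acme.json so certs are reused across restarts
    env_file:
      - ./local.env
  # "app" is a placeholder for your application container; its service name
  # must match the hostname in BACKEND1_URL.
  app:
    image: nginx:alpine
```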
And in the local.env file include (change values as appropriate of course):
```shell
DNS_PROVIDER=cloudflare
CLOUDFLARE_EMAIL=
CLOUDFLARE_API_KEY=
LETS_ENCRYPT_EMAIL=
LETS_ENCRYPT_CA=staging
TLD=domain.com
SANS=local.domain.com
BACKEND1_URL=http://app:80
FRONTEND1_DOMAIN=local.domain.com
BACKEND2_URL=
FRONTEND2_DOMAIN=
```
BACKEND2_URL and FRONTEND2_DOMAIN are only needed if you want to support HTTPS locally on two domains routing to two different backend containers.
You do not need to be using Docker to use Traefik to proxy traffic to your application on your localhost, but it does make it easier. You can also download and run Traefik locally on your computer, as it’s just a single Go binary file.
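If you’d rather skip Docker entirely, a rough sketch of running the binary directly against the traefik.toml shown earlier (the release version and URL pattern below are illustrative; check the Traefik releases page for the current ones):

```shell
# Download the single Traefik v1.x binary and make it executable.
curl -L -o traefik \
  https://github.com/traefik/traefik/releases/download/v1.7.34/traefik_linux-amd64
chmod +x traefik

# Run it with the Cloudflare credentials in the environment and the
# config file from this article.
CLOUDFLARE_EMAIL=you@example.com \
CLOUDFLARE_API_KEY=your-cloudflare-api-key \
./traefik --configFile=traefik.toml
```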
Another valuable use for this container is with Facebook login. Facebook recently changed its policy for apps and sites that use Facebook login, requiring an HTTPS URL for the OAuth redirect.
I have a side project that uses Facebook login and when I originally wrote it, HTTPS was not required, and it was no problem to use HTTP during local development. But when I went to work on it a couple weeks ago, I found I could no longer log in locally to work on it. Thankfully I already had this
traefik-https-proxy Docker image so I was able to drop it into my docker-compose file, update URL settings in Facebook, and get back to development using HTTPS locally in no time.
I hope this helps some of you and saves you a bit of hassle by not having to manually configure or manage HTTPS locally.