In my last article, we went through the reasons why changing from macro to microservices might be a good idea. In this one, we get to the real stuff: How a facade proxy should function to start replacing your old services and/or introducing new ones the easy way, and then how to deploy it to AWS inside a Docker container.
Taking a jump into new technologies can seem overwhelming at first; it certainly was for me when I started examining the different solutions, libraries, and projects that have sprung up around Docker, as well as container management solutions such as Kubernetes and Amazon EC2 Container Service (AWS ECS). Trying to make an informed choice on any of these pulled me deeper and deeper into research, overwhelmed by options and new things to learn.
So I will walk you through what I saw as the correct choices, along with my reasoning behind them. We will also move at a slow pace through the basics to avoid getting lost in technobabble.
Using Go for a Facade Proxy
Initially, I had chosen to build my facade proxy with Node.js, having some experience making simple backend APIs with it. After a friend repeatedly recommended that I check out Go, I compared the two and found that Go was several times faster, consumed fewer resources, and that even the original creator of Node.js, Ryan Dahl, had jumped aboard the Go train.
The syntax looked a bit mysterious at first, but still recognizable as a C-like language with some extras. After taking the dive, I started getting the hang of it.
Go is a free open-source language developed by Google and released in 2009, reaching version 1.0 in 2012. So it’s a fairly fresh but still mature language. The latest release (at the time of writing this article) is 1.8, and there are plenty of resources available, from videos to books to repositories, to make learning it relatively easy and (at least for me personally) fun.
The language of your choice might not be Go, so I won’t dwell too much on the specifics. Using another language for your facade proxy, such as C#, Java, or PHP, won’t make that much of a difference. What matters is keeping the overhead in response times low, and that is where languages like Go and frameworks like Node.js excel.
My proxy application adds a fraction of a millisecond to requests when performing full logging, so I reasoned it’s fast enough for its intended purpose.
Our facade proxy’s design
Initially it’s a good idea to just build a proxy that does nothing else but relay information. This means taking in the request, sending it to the actual endpoint, reading the response and then relaying it to the sender of the original request. For now, it doesn’t need additional logic or to parse the data it transmits; that won’t come until later on.
I made my sample application redirect every request except “GET /healthcheck”, which simply returns a string. I did this because I wanted to be able to verify that the proxy endpoint is alive even when the actual requests don’t function as expected. It would be easy to add some real functionality here, such as returning the last N lines of logs or other stats like load, disk space, and so on. It also gives you a reason to learn more about the language you’ve chosen (if you’re not yet that familiar with it).
I highly recommend that you also add different levels of logging to your proxy (such as info, debug, and error, as in Java) so that debugging your future services is easier. Go’s built-in logging is simple, and I found it inadequate. After going through several possibilities, I ended up happy with Simon Eskildsen’s Logrus. It provides different logging levels, and there are plenty of additional hooks, making it possible for your application to log to Slack, for example.
The other library my application uses is Patrick Crosby’s Jconfig. My example comes with a configuration file so you don’t have to touch the code in order to change the API endpoint or local port.
Also, when making a production build, your deployment should include a phase that replaces the development configuration file with the production one. It isn’t safe to keep your production passwords and other secrets in the same repository as your code.
Running the proxy
My facade proxy code can be found at https://github.com/CSTeea/facade-proxy. Getting it to your system is easy with Git; as for installing Go, there are guides available for different operating systems at https://golang.org/doc/install.
Once you have Go installed and the files downloaded, you can edit the config.json file to point to your own API. My code example uses the two aforementioned libraries, which need to be installed by running go get github.com/Sirupsen/logrus and go get stathat.com/c/jconfig. After this, the actual proxy can be launched with go run facade.go. It will direct your port 8080 to the endpoint you defined.
Note that I did not include any code to handle cross-origin XMLHttpRequest (XHR) calls in my example. Depending on your client and server applications, you might get errors notifying you that the origin and the final destination of the request are not the same.
Isolate Your Application into a Container
There are many alternatives for application isolation. Some are very resource-intensive, such as VMware or other virtual machines, which can make creating additional instances a slow process. If we want our microservices to be light and fast, that’s hardly the route to follow.
Docker, on the other hand, doesn’t virtualize its host environment but simply adds a layer between your application and the underlying operating system. This is much faster in terms of deployment, and it also enables different containers (of the same image) to share the same libraries and resources. Your system becomes flexible and fast when deploying more endpoints to respond to increased demand, and containers can be suspended when not needed. All this helps make your system more cost-efficient. After weighing the different options, I saw Docker as the way to go.
Since maintaining our running environment with Docker is a key part of creating microservices that will function well in the future, my application project includes a Dockerfile that configures the container to install the required parts. It uses a default Debian-based image with Go installed, installs the required configuration and logging libraries, defines the port we expose outside the container, and starts the proxy.
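As a rough illustration, such a Dockerfile can look like the following; the actual file in my repository may differ in its details:

```dockerfile
# Sketch of a Dockerfile for the facade proxy; details are illustrative.
# Debian-based image with Go preinstalled.
FROM golang:1.8

WORKDIR /go/src/facade-proxy
COPY . .

# Install the configuration and logging libraries the proxy uses.
RUN go get github.com/Sirupsen/logrus \
 && go get stathat.com/c/jconfig

# Expose the proxy port outside the container and start the proxy.
EXPOSE 8080
CMD ["go", "run", "facade.go"]
```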
To start our own proxy inside a Docker container, we first need to create an image of it. This is done by running docker build -t facade-proxy ., which creates a new image based on the Dockerfile in the current directory and tags it as ‘facade-proxy’. After the image is built (you can list your images by running docker images), we can start it in the background by running docker run -d -p 8080:8080 facade-proxy. This starts a container from the image as a background job, redirecting local port 8080 to the container’s port 8080. Note that if you pointed your API endpoint at localhost in the configuration file, localhost now resolves inside the container, not to your actual host machine.
Deploy to Amazon EC2
Hosting and cloud providers offer a variety of services. My choice was Amazon because that’s where our API is located. So while I used AWS in this part, I used it only as a plain virtual server. Since we aren’t using any kind of container manager yet, you should be able to replicate this with a provider of your choice.
Since my facade proxy was a proof of concept, I didn’t want to spend any company money on it, so I deployed it to a free-tier Amazon EC2 instance. For a simple proxy service, this provides more than enough resources. Deploying is as easy as starting a new EC2 instance, accessing it, and installing Docker. You don’t have to install Go or anything else; all of that lives within the container.
After installing Docker (and making sure the daemon is running), you can simply copy the facade-proxy files there (or install Git, pull them, and edit your configuration), then once again create the image with docker build ... and start the proxy with docker run ....
If your previous steps worked correctly, so should this one. You now have a proxy endpoint from which to expand and start building new features to your application as microservices (in their own containers) as well as separating them from your macroservice when needed.
The next article in my Macro to Microservices series will deal with adding some features to our proxy as external services, as well as deploying them to a container management service, where you can start new instances of your services when needed.