Continuous integration (CI) can seem like a waste of time or extra work that does not push features forward. But when you think about your project’s “health” and how other developers will work with it in the future, an underlying issue emerges: processes need to be automated to save us time! In the last few years, continuous integration and deployment have developed into industry best practices for teams of engineers, reducing bad production pushes.
What is out there?
Picking which service to use depends on your requirements, your tech stack, and how much of the workflow you want to manage yourself.
Here are some of the providers and open-source projects that handle CI and offer continuous deployment/delivery options.
Concepts to memorize!
An environment is defined by the consumers it serves, the processes it runs, and how its builds are configured. The most common environments are set up as Production, Staging, Local, Develop, Demo, etc.
Normally, a trigger is set up so that when your team pushes a commit or creates a pull request, a process starts that compiles the source code and builds the project in a virtual environment.
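As a rough sketch of that idea (the event names and function are illustrative, not any particular CI provider's API), the trigger decision boils down to:

```shell
# Hypothetical sketch: how a CI server might decide to start a build
# from the event it receives; the event names are assumptions.
should_build() {
  case "$1" in
    push|pull_request) echo "trigger: build" ;;
    *)                 echo "ignored" ;;
  esac
}

should_build push           # prints "trigger: build"
should_build tag_created    # prints "ignored"
```

Real CI services express this same decision declaratively in a config file rather than a script, but the logic is the same: match the incoming event, then kick off a build.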
Each continuous integration pipeline has phases that a build goes through. One of the simplest sequences is: Git Clone, Compile/Build, Unit Test, Package and then Deploy.
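Those phases can be sketched as a simple shell script; the phase names mirror the list above, while the real commands each phase would run are left as comments (assumptions, not any provider's syntax):

```shell
#!/bin/sh
set -e  # stop the pipeline on the first failing phase

run_phase() {
  echo "phase: $1"
  # a real pipeline would run the phase's commands here, e.g.
  # git clone, make, make test, make package, make deploy
}

for phase in clone build unit-test package deploy; do
  run_phase "$phase"
done
```

The `set -e` line captures the key property of a CI pipeline: if any phase fails, everything after it is skipped and the build is marked red.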
When we set up our environment, there will be jobs that configure the container and the project correctly, emulating how the build will be deployed. A job might install certain dependencies, add environment variables, run migrations for the database, run unit tests, etc.
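A rough sketch of such a setup job (the commands and the `APP_ENV` variable are assumptions, not tied to any provider):

```shell
# Hypothetical setup job: install dependencies, set environment
# variables, run database migrations, then run the unit tests.
setup_job() {
  export APP_ENV="${1:-staging}"   # add environment variables
  echo "installing dependencies"   # e.g. npm ci / pip install -r requirements.txt
  echo "running migrations for $APP_ENV"
  echo "running unit tests"
}

setup_job staging
```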
I think it’s safe to say that containers can be a confusing topic to discuss at first, but in reality they make it easier for engineers to know that their software will run wherever it is deployed. Containers can run all kinds of applications and can be managed on Amazon AWS, Heroku, Docker, Kubernetes, etc.
There are a few types of testing out there, but the two I feel are most important are behavioral-driven development (BDD) and test-driven development (TDD), which lay out best practices for writing great tests.
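As a toy illustration of the test-first idea, here is a tiny shell function written against an assertion; `slugify` is a hypothetical helper invented for this example:

```shell
# Hypothetical helper under test: lowercase a title and replace
# spaces with hyphens.
slugify() { echo "$1" | tr 'A-Z ' 'a-z-'; }

# In TDD this assertion is written first and drives the
# implementation above.
test_slugify() {
  [ "$(slugify 'Hello World')" = "hello-world" ] && echo "ok" || echo "fail"
}

test_slugify   # prints "ok"
```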
Once you have your testing framework set up, you can begin to separate your tests into groups and call a given group in a certain pipeline.
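For example, a sketch of calling one test group per stage; the group names and the directory-based grouping scheme are assumptions:

```shell
# Run a named group of tests; in a real setup this might be
# pytest "tests/$1" or go test "./tests/$1/..."
run_group() {
  echo "running test group: $1"
}

# fast unit tests on every push; slower integration tests only
# before a build is promoted
run_group unit
run_group integration
```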
Pipelines. A pipeline is a group of apps that share the same codebase. Each app represents a stage (development, staging, production) in the continuous delivery workflow, so that a build can easily be promoted from one environment to the next.
vCPU. This is what we call a virtual central processing unit. Normally a few vCPUs are assigned to each virtual machine in a cloud environment, and each one represents a portion of a physical CPU.
Continuous delivery + deployment
Once you have finished setting up continuous integration, you can set up a trigger to promote new builds to your remote environments.
A common rule is to deploy to the staging environment before promoting to production. This allows extra time for QA and for review by the product owner.
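A minimal sketch of that gate, where the hypothetical `smoke_check` stands in for your real QA and review steps:

```shell
# Stand-in for real QA/health checks against an environment.
smoke_check() { echo "checks passed for $1"; }

promote() {
  build="$1"
  echo "deploying $build to staging"
  if smoke_check staging >/dev/null; then
    echo "promoting $build to production"
  else
    echo "halting: staging checks failed"
  fi
}

promote build-42
```

The important design choice is that the *same* build artifact moves from staging to production; nothing is rebuilt between stages, so what you tested is exactly what ships.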
It’s evident that continuous deployment and delivery have developed into industry best practices, and understanding these concepts is key to successful projects. There are other ways to implement CI/CD, but I wanted to give you a baseline of simple methods to get started!