A great infrastructure revolution at Ackee
This year is a year of quality. At the cost of a considerable investment of time, we set out to move our technologies forward by giant leaps, which in the case of our infrastructure looked more like a bloody revolution than a quiet evolution.
Continuous integration and delivery have been part of Ackee for some time, but our technology stack has gone through several iterations. We started with PHP, deployed over SSH by handwritten Jenkins jobs. PHP was gradually replaced by Node.js, which fit more naturally with the new micro-service architecture. Before the #yearofquality (#rokkvality), all back ends were deployed to one test and one production virtual server.
At the end of 2015, instead of a monolithic server, we began using the IBM Bluemix platform. It offered a relatively comfortable workflow thanks to the Cloud Foundry environment, where programmers could set the parameters and performance requirements of individual applications using manifest files and buildpacks. However, it also meant some fragmentation between keeping the source code in our git (e.g. for testing) and deploying with the IBM Bluemix cf tool. Deeper integration with Jenkins never happened, because we soon reached the limits of this solution.
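For illustration, a Cloud Foundry manifest of that era could look roughly like this; the application name, memory limit, instance count and buildpack name are only examples, not our actual configuration:

```yaml
# manifest.yml -- a minimal sketch of a Cloud Foundry manifest
# (all names and values here are illustrative)
applications:
  - name: example-api          # hypothetical application name
    memory: 512M               # memory limit per instance
    instances: 2               # number of application instances
    buildpack: nodejs_buildpack  # buildpack name varies per installation
    command: node server.js    # start command for the Node.js app
    env:
      NODE_ENV: production
```

Deployment was then essentially a matter of running `cf push` against the Bluemix endpoint.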
The main limit was that Cloud Foundry restricts you to a purely application-level view. Logs are available, but the running application still feels like a black box to the programmer (and even more so to whoever manages it). When a problem appeared, especially in production, troubleshooting was often reduced to "it works or it doesn't", with no way to find out why.
We decided to move from Cloud Foundry to container-based virtualization, specifically Docker. Docker offers a much more universal approach: the environment of the virtualized OS in which the application runs is defined in a Dockerfile, and environments created by the community can be reused and shared. Because the solution is built on the Linux OS, there is a wealth of official Docker images (e.g. for databases). You can work with these containers much as with any other OS: connect to them interactively, install additional packages and debugging tools, and so on.
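A minimal Dockerfile for a Node.js micro-service could look roughly like this; the base image tag, port and file names are illustrative, not our exact production setup:

```dockerfile
# Dockerfile -- an illustrative sketch, not our exact production setup
# Official Node.js base image from Docker Hub (tag chosen only as an example)
FROM node:6

WORKDIR /usr/src/app

# Install dependencies first so this layer can be cached between builds
COPY package.json .
RUN npm install --production

# Copy the application source code
COPY . .

# Port and start command are hypothetical
EXPOSE 3000
CMD ["node", "server.js"]
```

Debugging then works like on any other Linux box: `docker exec -it <container> bash` opens a shell inside the running container, where you can install whatever tools you need.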
It was here that we hit the limits of IBM Bluemix: the platform lacked any reasonable tool for orchestrating containers. Among the orchestration tools we chose Kubernetes, which is open source (so it can also be deployed on our own iron if necessary) and is very well integrated with the Google Compute Engine, so we went with the combination of Kubernetes + GCE. Another significant advantage is pricing: on IBM Bluemix you pay for each container, which strongly discourages the strict separation of applications into separate micro-services. On GCE, by contrast, you pay for the cluster's iron, and how many containers you run on it is up to you, as long as the system resources suffice.
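The cost model follows directly from how a cluster is created: you specify machines, not containers. A hedged sketch (cluster name, zone, node count and machine type are made up):

```bash
# Create a Kubernetes cluster on Google Cloud; you pay for the three
# n1-standard-2 nodes (the "iron"), regardless of how many containers
# you later schedule on them. All names and sizes are illustrative.
gcloud container clusters create example-cluster \
    --zone europe-west1-b \
    --num-nodes 3 \
    --machine-type n1-standard-2

# Fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials example-cluster --zone europe-west1-b
```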
We managed to connect Kubernetes with Jenkins, so application deployment, as before, runs uniformly through our git. We moved from classic deploy jobs to the new Jenkinsfiles, which can be maintained and versioned in git. Git thus once again became the main instrument controlling the whole process: where the application is deployed (we have several clusters) and what resources it gets. Programmers describe the entire configuration with one Jenkinsfile, one Dockerfile, and one Kubernetes deployment file.
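The Kubernetes deployment file describes how the application should run on the cluster. A minimal sketch, with hypothetical names, image and resource values:

```yaml
# deployment.yaml -- illustrative sketch of a Kubernetes deployment file
apiVersion: extensions/v1beta1   # Deployment API version used at the time
kind: Deployment
metadata:
  name: example-api              # hypothetical service name
spec:
  replicas: 2                    # number of running containers
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: eu.gcr.io/example-project/example-api:1.0.0  # hypothetical image
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m          # resource requests let the scheduler pack
              memory: 128Mi      # many micro-services onto shared iron
```

The Jenkinsfile kept next to it drives the build and deploy. The stages and commands below are a hedged sketch of such a pipeline (project, registry and cluster names are hypothetical), not our exact jobs:

```groovy
// Jenkinsfile -- illustrative sketch of a pipeline kept in git
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build image') {
        // build and push the Docker image tagged with the Jenkins build number
        sh 'docker build -t eu.gcr.io/example-project/example-api:${BUILD_NUMBER} .'
        sh 'docker push eu.gcr.io/example-project/example-api:${BUILD_NUMBER}'
    }
    stage('Deploy') {
        // roll out the new image to the target cluster
        sh 'kubectl set image deployment/example-api example-api=eu.gcr.io/example-project/example-api:${BUILD_NUMBER}'
    }
}
```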
Today, our infrastructure provides both a cheap development environment for the era of micro-services and a robust tool for production deployment and management of even the most demanding applications.