To make more advanced use of ArvanCloud Cloud Containers, you need to be familiar with the basic concepts of Kubernetes. This section covers the main concepts in general terms; after reading it, you can refer to the more specialized articles for detailed information on each of the basic cloud container concepts.
Introduction
If you have ever been involved in deploying software, you are well aware of the steps, complexities, and difficulties involved. For example, to deploy a backend application written with Laravel (PHP), you must first buy a server. After installing the operating system, you install your software's requirements, such as php-fpm, then install nginx or Apache to route traffic to your application and configure fpm and nginx to serve user requests. Finally, you buy a domain and point it to the server's IP so that user traffic reaches the server.
Simplifying this software deployment process is the core purpose of ArvanCloud Cloud Container, and it can significantly speed up your work.
Because ArvanCloud Cloud Container is built on Kubernetes (or, more precisely, OpenShift), this article explores Kubernetes' general concepts and how they relate to the traditional software deployment process.
What Is Kubernetes and How to Use It?
For a better understanding of Kubernetes, we will look at the traditional software deployment process.
1- Software Deployment
In this step, you need to run your software on the server. Generally, this is done by uploading the software files to it: if you have a Laravel PHP project, for example, you transfer the project files, and if your project is written in Go, you upload the compiled binary.
The process in Kubernetes is essentially the same, except that your software has to be packaged in a container. A container image is, in effect, your software's files bundled into an archive (a tar file) that is transferred to the server and then executed there.
There is another difference between containers and running software directly on the server: a container (run with Docker, for example) gives your software much more limited access to the host, and it even lets you cap the software's share of total system resources. Out of eight processor cores, for example, you can allocate just one core to it, or even only half a core. Containers are so isolated that your software does not even have access to the host operating system's libraries, which is why you must include every library it needs in the container image when you build it.
Ephemeral storage is another difference between running software inside a container and the traditional method: whatever your software writes to disk while the container is running is lost as soon as the container restarts or stops for any reason. Because of this, you have to attach a persistent disk to your container if you want to run software such as a database, as shown in the sketch below.
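As a minimal sketch (the names, image, and disk size here are illustrative, not from this guide), a persistent disk is requested with a PersistentVolumeClaim and then mounted into the container:

```yaml
# Request a 5 GiB persistent disk (illustrative size and names)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Mount the disk into the container so database files survive restarts
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```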
Remember that in Kubernetes your software runs in a pod rather than directly in a container (a pod wraps one or more containers), though from a user's perspective you can treat the two as the same in most cases.
2- Software Deployment Management
If you have ever deployed software on a server, chances are you have run into the need for a service manager that restarts your software when it crashes or the server reboots. You may have used systemd or upstart for this, for example. (When a service like php-fpm runs your software, systemd manages the execution of php-fpm, and php-fpm in turn executes your software.)
The service you have defined for systemd, in this case, will guarantee that your software is always running on the server.
As an example, sshd, the software in charge of accepting SSH connections to a Linux server, has a unit such as the following in /etc/systemd/system/sshd.service:
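(The unit below is a simplified sketch of a typical sshd.service; the exact contents vary by distribution.)

```ini
[Unit]
Description=OpenSSH server daemon
After=network.target

[Service]
ExecStart=/usr/sbin/sshd -D
# Restart the daemon automatically if it crashes
Restart=always

[Install]
WantedBy=multi-user.target
```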
With the file above in place, starting the service with the systemctl start sshd.service command guarantees that sshd is always available and is started again should it crash and stop.
Kubernetes has a similar concept, Deployment, with one very important difference. systemd and similar service managers guarantee that your software is always running on a single server, whereas a Deployment guarantees that your software is running on one of the servers in the Kubernetes cluster; you typically have no control over which server that is, and in fact you do not even need to know.
This is in fact a crucial feature of Kubernetes: without having to track the status of any particular server, you can be sure that if the server running your software stops working for any reason, your software will be started on another server within the cluster.
Building on this feature, you can also simply specify how many copies of your software should run in the cluster (by setting the number of replicas), and Kubernetes will spread those containers across the cluster's servers without any further involvement on your part.
In general, YAML files are used to define every kind of entity in Kubernetes, including Deployments. These files, which you create yourself, determine where your Deployment pulls the software's container image from, how many copies of your software it runs, and how much processing and memory it allocates to your software. As an example, below is a YAML file for deploying nginx:
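A minimal manifest might look like the following (the image tag and resource figures are illustrative; note that cpu: 500m means half a CPU core, the 0.5-core limit mentioned earlier):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3                # run three copies of the software
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25  # where the container image is pulled from
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 500m      # half a CPU core
              memory: 128Mi
```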
Since workloads like this come in several different types, ArvanCloud's panel shows them all as applications on its first page, with each one's type listed as a subtitle under the software's name. If you want to learn more about Deployment, please check its dedicated article.
3- Network and Load Balancing
Usually, when we install software on a server, we use services such as nginx or HAProxy to route incoming traffic to the software. With this method, we can distribute incoming traffic across multiple servers, or, by enabling health checks in HAProxy, make sure users' traffic is redirected to a healthy server when one server goes down.
Given the nature of Deployments, as explained above, our software may be running on a different server at any given time, and depending on the Deployment's settings, several copies of it may run across the cluster's servers. Inside a Kubernetes cluster, this kind of load balancing is done with the Service concept: you define a Service for each Deployment, whether it is your backend software itself or supporting software such as caches and databases. For Redis, for example, after defining the Deployment we specify the Service as follows:
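A sketch of such a Service manifest, assuming the Redis Deployment labels its pods with app: redis:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    app: redis          # matches the pods created by the Redis deployment
  ports:
    - protocol: TCP
      port: 6379        # port the service exposes inside the cluster
      targetPort: 6379  # port Redis listens on in the container
```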
Services have another very important advantage: they eliminate the need to use IP addresses for communication between your applications, since you can reach any other software in the cluster simply by using its Service name. For instance, your PHP software can connect to the Redis instance defined above at the address redis-service:6379.
For more information about services and how to use them in ArvanCloud’s cloud container, you can refer to the respective service article.
As we explained before, services are used only for communication inside the cluster, but what if we need to transfer the user traffic from outside the data center to our software?
In the traditional method, you define a domain and then forward its HTTP and HTTPS traffic using software such as Apache or nginx. In Kubernetes, this is handled by an Ingress, and in OpenShift by a Route. A Route simply specifies which Service the traffic arriving at the cluster for a given domain should be forwarded to, so once you have defined a Service for your software's Deployment, you can easily route traffic from outside the cluster to it. Below is an example of how to define a Route:
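A minimal sketch (the domain and service name here, www.example.com and my-app-service, are hypothetical):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app-route
spec:
  host: www.example.com     # domain whose traffic enters the cluster
  to:
    kind: Service
    name: my-app-service    # service the traffic is forwarded to
  port:
    targetPort: 8080
  tls:
    termination: edge       # terminate HTTPS at the cluster's router
```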
Please refer to the related Routes article for more information about routes and how to configure them to accept traffic from outside the cluster.
4- Software Configuration
Almost any software you build will need one or more configuration files from which it reads information such as the address and password for accessing its database. Some of this information is highly confidential, such as database credentials, while some is not, such as the domain on which the software is deployed.
In the traditional deployment model, configuration is usually kept in one or more files on the server, and the sysadmin or server manager modifies the software's settings by editing those files.
In Kubernetes, a configuration file could simply be baked into the container, but this is not recommended: it is insecure and makes changing the configuration cumbersome. Instead, Kubernetes provides the Secret and ConfigMap concepts. Both can be attached to your software's container and read as ordinary files, while also providing a higher level of security for sensitive settings. (There is not a big difference between a Secret and a ConfigMap in practice; the recommendation is simply to keep confidential information in Secrets.)
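As a minimal sketch (all names and values here are hypothetical), a Secret can be defined and then mounted into the container as a file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  db-password: "change-me"   # hypothetical value; stored base64-encoded
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest           # hypothetical image
      volumeMounts:
        - name: credentials
          mountPath: /etc/app-secrets  # each key appears as a file here
          readOnly: true
  volumes:
    - name: credentials
      secret:
        secretName: db-credentials
```

The software can then read the password from /etc/app-secrets/db-password like any ordinary configuration file.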
You can find more detailed information about these concepts in the related article about Secret.
Conclusion
In general, Kubernetes can be thought of as an agent that continuously monitors the cluster's current state (Current State) and always tries to bring it to the desired state (Desired State) you have defined. Whenever the two states differ, for example because a new Deployment has been defined, or a server has gone out of service and the number of running copies of your software has dropped, the agent takes action to move the Current State toward the Desired State.
Cloud Container and Kubernetes are infrastructures aimed at simplifying the software deployment process and the management of hardware resources. Deploying on these platforms lets you take your software to production in the shortest possible time, with minimal concerns, and manage it without worrying about the underlying infrastructure. Refer to the Cloud Container Guide section for tutorials on putting the concepts above into practice.