![](https://static.wixstatic.com/media/9a3369_cd71dd5a57ed4d6aaf78286ae0c6f034~mv2.jpeg/v1/fill/w_980,h_464,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/9a3369_cd71dd5a57ed4d6aaf78286ae0c6f034~mv2.jpeg)
Node.js applications are easy to develop, debug, and run on an individual's PC. Problems start to pop up as soon as we integrate and deploy our applications on servers. The old saying "it works on my PC" is still very common. Things get intricate in large projects involving multiple teams working on different microservices, and automating DevOps tasks becomes more difficult as far as continuous integration is concerned.
Non-Functional Requirements (NFRs) are gathered differently today. Almost a decade back, NFRs were very specific; today they are more generic. During the development phase, it is often not even known what the final environment for an application will be. Developing code is easy; packaging and deployment are harder, as developers in different physical locations use different operating systems for development, and there are so many OSes and cloud providers to choose from.
In this blog, I will first try to explain what it takes to deploy, monitor, and scale our application on the cloud using the conventional deployment approach, and then we will try to do the same using Docker and Kubernetes.
Historically, we have three types of deployments:
![](https://static.wixstatic.com/media/9a3369_5c9237ca51864f1ab6d7d3db246a4451~mv2.jpeg/v1/fill/w_932,h_332,al_c,q_80,enc_avif,quality_auto/9a3369_5c9237ca51864f1ab6d7d3db246a4451~mv2.jpeg)
Physical Deployments, where physical servers are used to deploy our applications. We all know the issues and challenges associated with this, so I am not going to explain this further.
Virtual Deployment, where we deploy our applications on virtual servers and scale up or down on demand, which is also known as cloud-based deployment.
Container Deployments, where containers provide application isolation with a much lighter footprint than VMs. Each container has its own file system, CPU share, memory, process space, and more.
Typically, if we are trying to deploy our Node.js application on a cloud platform (Virtualized Deployment) from scratch, here are the steps we should follow:
Considering deployment on AWS, we start off by spinning up a new EC2 instance
Either use a prebuilt image or create a new one and install the required software on your server; you will have to look for the deployment guidelines provided by the development team
Copy your .js source files onto the server and run npm install to install all the dependencies of your project listed in the package.json file
Start the application using the node index.js command. If everything goes well, and assuming the developers used the same software versions, your application will start without any issues. Now think about porting this to another OS version or to a different cloud provider altogether
As you know, Node.js runs your code on a single thread, using multiple background threads to perform asynchronous operations. An unhandled error will eventually bring down your entire Node.js process. You can use the forever command-line utility to monitor your application; it will automatically restart it and make sure it stays up and running. Again, you need to install forever on your EC2 server separately, as it is not part of your package.json dependencies
Some of the issues we face with the above approach are
Packaging & Deploying
Application monitoring
Application Restarts
Efficiently utilizing all cores of your VM
Collecting application performance statistics & logs
Load balancing & Scaling
Release Rollbacks to previous versions
Auto-scaling up and down based on user load
What is Docker?
Docker has been around for quite some time now, and most of us are already aware of what it is and the benefits of containerizing an application with it. I will try to explain this quickly.
Docker provides a set of PaaS products that use OS-level virtualization to deliver software in packages called containers. Containers are independent units that package up your application code and all its dependencies so that the application runs seamlessly across different computing environments. To run a Docker container, all you need is the Docker engine running on your target environment.
Deploying Application on Docker
Deploying your application on Docker is pretty simple; you just need to create a simple Dockerfile with contents like the below:
```dockerfile
# 1. Use node as the base image for this container; this variant is based
#    on Alpine, a super-efficient, lightweight Linux distribution.
FROM node:13-alpine

# 2. Change the working directory to the root directory of your application
WORKDIR /myapp

# 3. Copy your package.json file, which lists your application dependencies
COPY ./src/package.json /myapp/package.json

# 4. Install application dependencies using npm
RUN npm install

# 5. Copy the application source code into the image
COPY ./src/ /myapp/

# 6. Finally, start your node application
CMD ["node", "index.js"]
```
That's it, your Dockerfile is ready; it contains the commands to set up your container image. Once ready, it can be used by other developers or administrators to spin up a new container instance in no time.
Before we run our application inside a container, we need to build the image using the following build command (the trailing dot is the build context):

```shell
docker build -t myapp:v1 .
```
![](https://static.wixstatic.com/media/9a3369_d5c8a5ad6c314e679391bf238132c633~mv2.jpeg/v1/fill/w_271,h_326,al_c,q_80,enc_avif,quality_auto/9a3369_d5c8a5ad6c314e679391bf238132c633~mv2.jpeg)
Running the docker build command produces an image if there are no build errors. While building an image, Docker uses a layered approach: each statement is executed and cached as a separate layer. Later, if you change your application source files, Docker rebuilds only the relevant layer and reuses the cached contents for the others.
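Related to both build speed and caching: everything in the build context is sent to the Docker daemon, so it is common to add a .dockerignore file next to the Dockerfile. A hypothetical minimal one for this project might be:

```
node_modules
npm-debug.log
.git
```

This keeps the locally installed node_modules out of the context, since the image installs its own dependencies via npm install anyway.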
Once the build is complete, we are ready to launch our container with the following command:

```shell
# -d: run detached (in the background); --rm: remove the container on exit;
# -p: map host port 3500 to container port 3500
docker run -d --rm -p 3500:3500 myapp:v1
```
This runs the image in the background while mapping port 3500 on the host to port 3500 inside the container, which means the application is now accessible via port 3500 from the developer's machine.
That's it, you have successfully deployed your Node.js application on Docker.
You can now share your image via a Docker registry, public or private. The following commands tag the local image with the registry name and push it, so that administrators can pull and run it directly:

```shell
docker tag myapp:v1 myreg/myapp:v1
docker push myreg/myapp:v1
```
Some of the advantages of using docker containers are
Abstraction: Total abstraction from the underlying OS, technology versions, and patches. No need to worry about installation and version conflicts; administrators can just look at the Dockerfile to see the dependencies if required
Efficiency: Efficient usage of system memory and resources
Portability: Once created and tested on a local machine, the container runs the same way in any environment with a Docker engine
Agility: Suits the Agile development model very well and supports the microservice-based architecture pattern
Isolation: Multiple applications can run in isolation on the same VM/server using different containers
Scalability: Docker containers can be created quickly and scaled easily
Using Kubernetes with Docker
So now you have your application up and running inside a Docker container. How do you monitor it? How do you check whether server resources are exhausted, or whether the application has stopped responding? There are a lot of questions that need an elegant solution with minimal human intervention.
What is Kubernetes?
As per the definition provided on kubernetes.io, "Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available".
In a nutshell, you can think of Kubernetes as a container orchestration layer which is designed to automate the deployment, scaling, and management of containerized applications.
Using Docker and Kubernetes, a typical load-balancing environment will look like this:
![](https://static.wixstatic.com/media/9a3369_fe66d2f7a2fe4aeab9feeb78fb3e6cb3~mv2.jpeg/v1/fill/w_433,h_622,al_c,q_80,enc_avif,quality_auto/9a3369_fe66d2f7a2fe4aeab9feeb78fb3e6cb3~mv2.jpeg)
At the core of Kubernetes is the cluster. A Kubernetes cluster is a set of nodes that run containerized applications. In a typical load-balancing environment, you will have more than one node in a cluster. Each node runs an agent process called the kubelet, which is responsible for managing the state of the node.
On top of these kubelets sits the Kubernetes master, which is responsible for managing the individual nodes. The master is the access point for administrators to configure, manage, and deploy containers as per the desired policy.
Ideally, the master runs on a separate VM and communicates with the cluster nodes via the API server.
The basic scheduling unit is a Pod, which contains your container instance. On a multicore server, you can choose to run multiple replicas of a Pod, effectively creating a virtual load balancer with multiple instances of the container running in parallel.
A Kubernetes deployment is based upon a YAML file that defines the desired state of your workload; below is a snapshot of such a file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myreg/myapp:v1
        ports:
        - containerPort: 3500
        livenessProbe:
          httpGet:
            path: /healthcheck
            port: 3500
          initialDelaySeconds: 3
          periodSeconds: 5
      restartPolicy: Always
```
This file tells the master to pull the docker image myreg/myapp:v1 from the registry. replicas: 2 creates two replicas of your application, providing virtual load balancing: two instances of your Docker container run, each with an instance of your Node application inside.
livenessProbe is used to check the health of your running application. periodSeconds: 5 tells Kubernetes to probe the health endpoint every 5 seconds. You will need to implement a /healthcheck API in your application that returns status 200; Kubernetes calls it automatically on that schedule and restarts your container if the desired response is not received.
Kubernetes comes with a web user interface called the Dashboard. You can use the Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized applications, and manage cluster resources. It also provides an overview of your running applications along with health information. More details can be found at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
In my next blog, I will try to provide details on how to autoscale your application using Kubernetes. Till then,
Happy Reading...