During heavy-traffic periods, slow server responses are a very common problem when browsing popular websites. To overcome this problem, the server load balancer comes into the picture. A Server Load Balancer (SLB) is a hardware or virtual software appliance that distributes the application workload across an array of servers, ensuring application availability, enabling elastic scale-out of server resources, and supporting health management of backend servers and application systems. This blog explains how, and with which algorithms, we can solve these server reliability issues during peak hours.
Consider a popular company like Amazon, which likely deploys lots of servers in the backend for its UI path. Different servers in the Amazon infrastructure run the same code behind the Amazon UI. Now, how do requests get to each of these servers? For example, when a user visits the Amazon website, how does Amazon know which instance to redirect that user to? And how do new requests get routed to a new instance?
That is where SLBs are useful. An SLB spreads the traffic across the servers and makes sure every single user's request reaches a destination server. The SLB does the routing; for example, the user's request 1 goes to server 1, request 2 goes to a different server, and so on.
Fig: Flow of Requests to different servers
All these requests can be from different users or the same user. So, this is server load balancing.
SLB consists of the following components:
An SLB instance is a running load balancing service that distributes incoming traffic to backend servers. To use the SLB service, one has to create an SLB instance and then configure it with at least one listener and two backend servers.
An SLB listener checks client requests and forwards them to the backend servers according to the configured rules. SLB listeners also perform health checks on the backend servers (checking whether the backend servers are working properly or not).
Backend servers are the ECS instances added to the SLB instance to process the distributed requests. One can add ECS instances to the default server group, a VServer group, or an active/standby server group for better management.
Fig: Overview of SLB Components
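To make the relationship between these components concrete, here is a minimal Python sketch (the class and field names are illustrative placeholders, not an actual cloud SDK) of an SLB instance configured with one listener and two backend servers:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BackendServer:
    """An ECS-style backend server registered with the SLB instance."""
    server_id: str
    address: str
    healthy: bool = True

@dataclass
class Listener:
    """Checks incoming requests on a port and forwards them to backends."""
    protocol: str          # e.g. "HTTP" or "TCP"
    port: int
    health_check_path: str = "/health"

@dataclass
class SLBInstance:
    """A running load balancing service: the instance, its listeners and backends."""
    name: str
    listeners: List[Listener] = field(default_factory=list)
    backends: List[BackendServer] = field(default_factory=list)

# Minimum viable setup: one listener and two backend servers.
slb = SLBInstance(
    name="demo-slb",
    listeners=[Listener(protocol="HTTP", port=80)],
    backends=[
        BackendServer(server_id="ecs-1", address="10.0.0.11"),
        BackendServer(server_id="ecs-2", address="10.0.0.12"),
    ],
)
```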
Below are three commonly used algorithms that companies prefer when setting up an SLB.
1. Round Robin: Requests are sent to the servers in a sequential, rotating order.
Every client request is redirected to a different server in Round Robin fashion. Say there are 4 servers in the network, as shown in the diagram below. The first request goes to the first server, the second goes to the second, and so forth. The same pattern is followed in every future cycle, so each server gets a new request in turn.
Fig: Flow of Round Robin
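A minimal Python sketch of the Round Robin idea (the server names and pool size are just placeholders) could look like this:

```python
from itertools import cycle

# Hypothetical pool of four backend servers.
servers = ["server-1", "server-2", "server-3", "server-4"]
rotation = cycle(servers)

def route_round_robin(request_id: int) -> str:
    """Return the next server in the rotation, regardless of who sent the request."""
    return next(rotation)

for request_id in range(6):
    print(f"request {request_id} -> {route_round_robin(request_id)}")
# request 0 -> server-1, request 1 -> server-2, ..., request 4 -> server-1 again
```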
2. Least Connections: Requests are sent to the least-used server in the network.
To do this, the SLB needs to know which server currently has the fewest connections and the most free resources. It may take a while for the SLB to identify which servers have the least connections, because it has to compute this or collect metadata from all the servers. So this approach can be a little costlier than the Round Robin fashion.
Fig: Flow of Least Connection
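A rough sketch of Least Connections selection, assuming the SLB already has per-server connection counts gathered from the servers' metadata (the numbers below are made up):

```python
# Hypothetical live connection counts reported by each server.
active_connections = {"server-1": 12, "server-2": 3, "server-3": 7, "server-4": 9}

def route_least_connections() -> str:
    """Pick the server with the fewest active connections right now."""
    return min(active_connections, key=active_connections.get)

chosen = route_least_connections()
active_connections[chosen] += 1   # the new request now counts against that server
print(chosen)                     # server-2 on the first call
```

Keeping these counts up to date is the extra bookkeeping that makes Least Connections more expensive than plain Round Robin.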
3. IP Hash: Requests are sent to a server based on the client's IP address.
IP hashing is useful when a client's requests need to be dedicated to a specific set of servers. Redirection is done based on the client's IP address, so requests from a given client IP are always sent to the servers mapped to that IP. This can be another strategy when some specific servers need to be given preference over others.
A typical example: requests to the IRCTC website from the general public are redirected to one bunch of server farms, whereas requests coming from IRCTC's internal offices go to a different server, so that they get better performance.
Fig: Flow of IP Hashing
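A simple illustration of IP hashing in Python (the hashing scheme and server list here are assumptions; real load balancers may hash differently):

```python
import hashlib

servers = ["server-1", "server-2", "server-3", "server-4"]

def route_ip_hash(client_ip: str) -> str:
    """Hash the client IP so the same client always lands on the same server."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(route_ip_hash("203.0.113.42"))   # same server every time for this IP
print(route_ip_hash("203.0.113.42"))
print(route_ip_hash("198.51.100.7"))   # a different client may map elsewhere
```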
To understand sticky sessions in SLB, Amazon.com is one of the best examples: consider a user using the Amazon cart.
The user adds lots of items to the cart, and this cart information is stored in the session, which is kept in the client-side browser. The browser usually caches all the session information, and every time a new request is made this session information is shared.
The browser keeps the session ID that the server generated the last time the user logged in, and the same session ID is sent with every subsequent request.
If the requests are redirected to different servers, the session ID tends to get lost, because the session generated by server 1 might not be visible to server 2. So the session information is lost, and the user's cart disappears.
This is the problem with sessions that are persisted in the browser, and it is something that generally has to be handled at the application end.
Fig: Load Balancer with and without Sticky Sessions
To overcome the session persistence problem, one can use a common cache, for example Redis, to store the session information, or configure the SLB to send a given user's requests to the same instance every time.
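As a rough sketch of the shared-cache approach, the cart can be keyed by session ID in Redis so that whichever backend server receives the next request can read the same cart. This assumes the redis-py client and a Redis server reachable at localhost:6379; all names are illustrative:

```python
import json
import redis  # pip install redis

# Shared cache that every backend server can reach.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_cart(session_id: str, cart: dict) -> None:
    """Store the cart under the session ID so any server can serve the next request."""
    cache.set(f"cart:{session_id}", json.dumps(cart), ex=3600)  # expire after 1 hour

def load_cart(session_id: str) -> dict:
    data = cache.get(f"cart:{session_id}")
    return json.loads(data) if data else {}

# Server 1 handles the first request, server 2 the next; both see the same cart.
save_cart("sess-42", {"book": 1, "mouse": 2})
print(load_cart("sess-42"))
```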
If any servers are unavailable or offline, they are removed from the server load balancing setup and requests are not redirected to them.
Internally, the SLB maintains health checks on all the servers, and requests are redirected only to the servers that are available.
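A simplified sketch of such a health check, assuming each backend exposes a hypothetical /health endpoint (the addresses are placeholders):

```python
import urllib.request

servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def is_healthy(address: str) -> bool:
    """A server counts as healthy if its /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"http://{address}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

# Only healthy servers stay in the rotation; offline ones receive no traffic.
available = [s for s in servers if is_healthy(s)]
print("routing only to:", available)
```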
For example, let's say Amazon wants to scale up for the Great Indian Sale or another universal sale happening for some special occasion. They know the number of requests coming to the UI is going to be higher, so they can scale up and scale down based on demand. An SLB makes this easy to configure: more servers can be added with just a simple configuration change.
When millions of users access the same website during specific time intervals, they face slow server responses, which can repeatedly lead to failed requests and lost data. In those tricky situations, an SLB addresses the issue by distributing all the incoming traffic wisely across replicated servers. Features such as reliability, time management, data security, smooth data flow, and cost-effectiveness make the SLB a popular way to solve server reliability problems.