In our previous post, we learned the difference between scaling our application vertically and horizontally. This article examines what it means to decouple our applications and how to design a decoupled architecture on AWS. We'll begin by looking at a tightly coupled architecture and the issues and bottlenecks it can create. Then we'll analyze how loose coupling solves some of those problems, and wrap it all up with some exam tips.
So, what's wrong with the diagram below?
At first glance, it looks fine, right? Users place orders and generate traffic through the web server, which forwards that traffic to the backend, and everything works until it doesn't. In this case, what happens if the web server fails?
No user will be able to send orders. This is a diagram of a tightly coupled application: the user depends directly on the EC2 instance operating as the frontend, and that instance in turn depends directly on the single EC2 instance working as the backend server. In short, tight coupling means one EC2 instance talking directly to another.
What is Decoupled Architecture?
So, we want to ensure that we never tightly couple our applications. While tight coupling is easier to build, it leads to a lot of issues. How do we fix this problem? We decouple our architecture and application.
How to Decouple an Architecture
In the diagram above, the end user gets the same result. Their request is handled by an Application Load Balancer, which spreads it across a group of EC2 instances serving the frontend; those instances then pass their HTTP requests to a second load balancer in front of the backend, which spreads them across the backend instances. With this decoupled architecture, if one or more instances fail the load balancer's health check, whether in the frontend, the backend, or both, the user experience is not affected.
Why Use a Decoupled Architecture?
Because the load balancer sends traffic only to healthy EC2 instances, we can run more than one instance at the same time. With this architecture, the frontend doesn't need to know anything about the backend servers; it only sends requests to the load balancer, which then guarantees that user traffic reaches healthy instances. As long as we keep at least one frontend instance and one backend instance up and running, we're fine.
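To make the routing behavior concrete, here is a minimal in-memory sketch of what a load balancer does for us: route each request only to instances that passed their health check. The class, instance names, and methods are hypothetical illustrations of the pattern, not AWS APIs.

```python
import itertools

class TinyLoadBalancer:
    """Toy model of an ALB: round-robin over healthy targets only."""

    def __init__(self, instances):
        self.instances = instances
        self.health = {i: True for i in instances}  # latest health-check result
        self._rr = itertools.cycle(instances)       # round-robin order

    def mark_unhealthy(self, instance):
        # In AWS, the load balancer learns this from failed health checks.
        self.health[instance] = False

    def route(self):
        # Skip any instance that failed its health check.
        for _ in range(len(self.instances)):
            candidate = next(self._rr)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy instances left")

lb = TinyLoadBalancer(["i-frontend-a", "i-frontend-b"])
lb.mark_unhealthy("i-frontend-a")
# Every request now lands on the remaining healthy instance.
print(lb.route())  # i-frontend-b
```

The key point: the caller never names a specific instance, so losing one instance changes nothing from the user's point of view.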
Decoupling Within the Application
It is always important to analyze the whole architecture and identify where we need to decouple at each application level. For example, our last diagram above can be improved even further. Imagine that it represents our online store, where the user must log in to create orders. In the current scenario, if one EC2 instance goes offline for some reason, the load balancer automatically redirects the user to the next healthy instance. That's fine, but the user's session was created and stored on the failed instance, which means the user will need to log in again. How do we solve that issue? The user session should live outside the instance. Nowadays, AWS provides a collection of products for quickly building a centralized cache.
Let’s see the diagram below:
We have one Redis server that holds all user sessions. With this architecture, regardless of what happens to the EC2 instances (whether they come back up or are replaced by new instances), the user can continue to navigate the application, unaware that anything went wrong behind the scenes.
In this architecture, you can use either Redis or Memcached; see the difference in our article.
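The idea is easiest to see in code. Below is a minimal sketch of an external session store, with a plain dictionary standing in for Redis/ElastiCache (with redis-py you would use calls like `setex` and `get` against the Redis server instead). Because every instance reads and writes the same shared store, a replacement instance still finds the session.

```python
import time

class SessionStore:
    """Toy external session store: dict + expiry, standing in for Redis."""

    def __init__(self):
        self._data = {}  # session_id -> (expires_at, payload)

    def set(self, session_id, payload, ttl_seconds=3600):
        self._data[session_id] = (time.time() + ttl_seconds, payload)

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        expires_at, payload = entry
        if time.time() > expires_at:   # session expired, drop it
            del self._data[session_id]
            return None
        return payload

store = SessionStore()                      # shared by every web instance
store.set("sess-42", {"user": "alice"})

# Instance A crashes; instance B handles the next request and still
# finds the session, so the user is not forced to log in again.
print(store.get("sess-42"))  # {'user': 'alice'}
```

The session's lifetime is now tied to the cache's TTL, not to any single EC2 instance.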
AWS Decoupling Services
This is an important topic you need to understand: loose coupling is better than tight coupling in just about every architecture. We always need to avoid one EC2 instance talking directly to another EC2 instance, and instead design an architecture that is scalable and highly available, with managed services sitting between those resources.
Another vital piece of information: load balancers aren't always the solution. In specific scenarios, we don't want to keep a direct line of communication from the web server to the backend through a load balancer. Instead of keeping the backend server up and running 24/7 to process requests, we might want something that holds each message until the backend server is ready to retrieve it.
Let's analyze three services that help us decouple our applications in those scenarios.
Simple Queue Service (SQS)
The first is SQS, a fully managed, highly available AWS messaging service that we can use to decouple applications. It can sit between the web server and the backend server, replacing the load balancer. The web server leaves messages in the queue, and the backend server polls the queue for messages whenever its instances are ready to process them. The applications still never communicate directly with each other, but unlike a load balancer, SQS doesn't require keeping a live connection between them.
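Here is a minimal sketch of this queue-based decoupling pattern, with Python's standard-library queue standing in for SQS (with boto3, the equivalent calls would be `send_message`, `receive_message`, and `delete_message`). The function names are illustrative; the point is the shape of the pattern: the producer never talks to the consumer directly.

```python
import queue

orders = queue.Queue()  # stands in for an SQS queue

def web_server_submit(order):
    # The web server only knows about the queue, not the backend.
    orders.put(order)

def backend_poll():
    # The backend retrieves work whenever it is ready; if it was down,
    # the messages simply wait in the queue until it comes back.
    processed = []
    while not orders.empty():
        processed.append(orders.get())
    return processed

web_server_submit({"order_id": 1})
web_server_submit({"order_id": 2})
# The backend could have been offline while these were submitted.
print(backend_poll())  # [{'order_id': 1}, {'order_id': 2}]
```

Notice that the backend sets its own pace: it pulls messages when it has capacity, rather than the web server pushing requests at it in real time.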
Simple Notification Service (SNS)
SNS is another AWS service, one that enables us to push out notifications. If we want to take a message and proactively deliver it to an endpoint, instead of leaving it in a message queue, SNS is the right product.
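The contrast with SQS is the push model: subscribers don't poll, the topic delivers to them. Below is a minimal sketch of that fan-out pattern using plain callbacks (with boto3 you would use `subscribe` and `publish` on a real topic; the `Topic` class here is a hypothetical stand-in).

```python
class Topic:
    """Toy model of an SNS topic: publish fans out to all subscribers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        # Push: delivery happens now, to every subscriber at once.
        for deliver in self._subscribers:
            deliver(message)

received_email, received_sms = [], []
topic = Topic()
topic.subscribe(received_email.append)  # e.g. an email endpoint
topic.subscribe(received_sms.append)    # e.g. an SMS endpoint

topic.publish("order 42 shipped")
print(received_email, received_sms)  # ['order 42 shipped'] ['order 42 shipped']
```

One publish reaches every subscriber immediately, which is exactly the behavior you want for notifications as opposed to queued work.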
API Gateway
API Gateway lets us put a secure, scalable, highly available front door on our applications, so we can control which users talk to our resources in AWS.
The first exam tip: never tightly couple your applications. On the exam, rule out any answer containing tightly coupled resources, and concentrate on loose coupling. For example, make sure you never have an EC2 instance talking directly to another EC2 instance; always put a load balancer or a message queue in between.
Every level of our application needs to be loosely coupled: from the users arriving through Route 53, via the load balancers, to the inner parts of the application, whether that's another load balancer or SQS. Just because we have loosely coupled the frontend of our application doesn't mean we have automatically loosely coupled the whole architecture; make sure no EC2 instance contacts another EC2 instance directly. Also, there's no single solution: sometimes load balancers are the proper choice, and other times SQS might be a better one.