

Three-tier network architecture featuring high-availability

What's a three-tier network architecture?

Learn more about the specifics of this design by reading Simple Three-Tier Network Architecture.

A high-availability (HA) architecture aims to maximize the uptime of systems, keeping resources available and services operational over long periods despite errors and/or high loads. This type of design emphasizes redundancy, ensuring that resources can fail over during downtime and/or be served through load balancing.

This page discusses one such way of setting up Martini on top of a high-availability architecture featuring failover and load-balancing. Paired with the right services, this can provide a very stable infrastructure.

Implementation

There are plenty of ways to achieve high availability; the example on this page is just one of many. This particular set-up runs on top of a three-tier network architecture – a design that works well with redundancy.

High availability and TORO Cloud

The example on this page is the same set-up employed by Martini instances running on TORO Cloud.

Under the three-tier network architecture, the following tiers will be present in the set-up (whose scopes are defined by closed dashed orange lines in the diagram below):

  • Tier 1, the Presentation tier

    This tier is where web servers are located; they are in charge of distributing traffic. In this implementation, we'll be using NGINX instances to act as load balancers and reverse proxies. These NGINX instances will also serve as the network's first line of defense, sitting in a demilitarized zone (DMZ).

  • Tier 2, the Application tier

    This is the layer where applications are deployed. In this set-up, the applications required by Martini, such as ZooKeeper, Solr, and ActiveMQ, as well as Martini itself, will be deployed to multiple application servers. This redundancy ensures availability.

  • Tier 3, the Data tier

    This tier contains the database servers, which will be configured in a master-slave set-up.

To provide a more concrete example, this page discusses this set-up in terms of Amazon Web Services (AWS)1.

AWS terms ahead!

As you read on, you'll encounter plenty of AWS concepts. To learn more about AWS, please refer to their documentation.

A diagram illustrating Martini on top of a highly-available, three-tier network architecture

Different resources will require different AWS services to host them2; even so, all resources in this set-up will be anchored to the same AWS region3.

AWS regions and availability zones

In AWS, you may host your resources in multiple locations world-wide4. These locations are composed of regions and availability zones (AZ). Each region is a separate geographic area, completely independent from other regions. The closer the region (where your resources reside) is to the client, the faster data can be served. Each region has multiple, isolated locations known as availability zones connected through low-latency links.
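As a small illustration of these terms, the sketch below uses boto3 to list the availability zones of a single region. It assumes boto3 is installed and AWS credentials are configured; the region name is a placeholder, not one prescribed by this set-up.

```python
# Minimal sketch: list the availability zones of one AWS region with boto3.
# The region name is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

response = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)
for zone in response["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```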

For tiers 1 and 2, this set-up has two availability zones provisioned (B and C), each replicating the services of the other5. The same goes for the data tier, whose RDS instances use Multi-AZ deployments. This page assumes the RDS instances are likewise spread across two AZs – one zone containing the master RDS and the other containing the slave RDS. Having redundant deployments of the same service or resource paves the way for load balancing and failover. Moreover, having multiple, duplicate AZs ensures that not all copies are lost when a specific availability zone goes down.

To manage all resources under a given AWS region, this set-up requires an Amazon VPC6 to be provisioned. The VPC puts all services (including those belonging to different availability zones) under a unified scope, which makes it easier to configure the system's load-balancing and failover capabilities. Work under the VPC includes configuring which EC2 instances will share loads (and how), handling the sharing of data across servers for consistency, implementing security, and defining how servers behave in the event of a fatal error.
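To make this concrete, here is a minimal sketch of provisioning such a VPC with one subnet per availability zone using boto3. The CIDR blocks and zone names are placeholder values, not ones prescribed by this set-up.

```python
# Minimal sketch: provision a VPC with one subnet in each of two availability
# zones using boto3. CIDR blocks and zone names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# One subnet per availability zone so tiers 1 and 2 can be replicated across zones.
for cidr, zone in [("10.0.1.0/24", "ap-southeast-1b"),
                   ("10.0.2.0/24", "ap-southeast-1c")]:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=zone)
```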

In addition to the Amazon services mentioned above, this set-up will also make use of the following services:

Automatically configure EC2 resources

You can use AWS OpsWorks to automate the configuration of your Amazon EC2 resources, like those described in this set-up.

To summarize, this model set-up will use the following AWS resources:

Tier   | Amazon Service and Instance Type | Number of Resources
Tier 1 | EC2, t2.micro                    | 2
Tier 2 | EC2, t2.micro                    | 10
Tier 3 | RDS, db.t2.micro                 | 2

Of course, you may have to scale up depending on your business needs. For shifting requirements, you may want to look at Amazon's scale-on-demand offerings.


Now that we've described how all the components of the architecture work together, let's go through each tier in more detail:

Tier 1

In tier 1, this set-up requires two NGINX instances to be provisioned, one running in each of Availability Zones B and C. Only one of these NGINX instances serves traffic at a time; the other is intended to take over if the primary NGINX server goes down.

As stated earlier, NGINX will act as a reverse proxy and a load balancer, distributing traffic among the application servers in tier 2 across all availability zones. This tier is also the face of your network and acts as an extra layer of security. You can learn how to configure Martini on top of NGINX in this guide.
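As a rough illustration of that role, a reverse proxy/load balancer configuration for NGINX could look like the sketch below. The upstream addresses and port are placeholders standing in for the tier-2 application servers, not values taken from this set-up.

```nginx
# Minimal sketch of NGINX as a reverse proxy and load balancer for tier 2.
# Upstream addresses and the listening port are placeholders.
upstream application_servers {
    server 10.0.1.10:8080;   # application server in availability zone B
    server 10.0.2.10:8080;   # application server in availability zone C
}

server {
    listen 80;

    location / {
        # Forward client requests to the tier-2 servers, preserving the
        # original host and client address.
        proxy_pass http://application_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```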

Tier 2

Tier 2 comprises this architecture's application servers. In particular, they host the following applications:

  • Martini
  • ZooKeeper
  • Solr
  • ActiveMQ

Each application will run on multiple servers; although costly, this ensures availability. These applications also require file storage, which is provided by mounting Amazon EFS on each server. This gives applications with multiple instances a single, common file system.
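As one hedged sketch of how that shared storage could be provisioned with boto3, the snippet below creates an EFS file system and one mount target per availability zone. The subnet and security group IDs are hypothetical placeholders.

```python
# Minimal sketch: create an EFS file system with one mount target per
# availability zone so every application server can mount the same storage.
# Subnet and security group IDs are hypothetical placeholders.
import boto3

efs = boto3.client("efs", region_name="ap-southeast-1")

fs = efs.create_file_system(CreationToken="shared-application-storage")
fs_id = fs["FileSystemId"]

for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ccc3333"],
    )
```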

Setting up ActiveMQ, SolrCloud, and ZooKeeper

The following pages can help you learn more about how to configure ActiveMQ, SolrCloud, and ZooKeeper:

Tier 3

Tier 3 comprises the databases. This set-up has two Amazon RDS instances deployed in multiple availability zones, configured in a master-slave set-up. In this configuration, only one instance (the master) accepts writes, while the other (the slave) keeps a synchronized copy of the data. Two instances cannot write at the same time because of the possibility of conflicts; the roles need to be segregated to keep the data consistent. In the event that the master RDS instance crashes, the slave RDS instance is promoted to master.
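A minimal sketch of provisioning such a database with boto3, using a Multi-AZ deployment so a standby in another availability zone can take over if the primary fails, might look like the snippet below. The engine choice, identifiers, and credentials are placeholders, not values prescribed by this set-up.

```python
# Minimal sketch: provision a Multi-AZ RDS instance with boto3 so a standby
# replica in another availability zone can take over if the primary fails.
# Engine choice, identifiers, and credentials are placeholders.
import boto3

rds = boto3.client("rds", region_name="ap-southeast-1")

rds.create_db_instance(
    DBInstanceIdentifier="application-db",
    DBInstanceClass="db.t2.micro",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder; store real credentials securely
    AllocatedStorage=20,
    MultiAZ=True,                    # provisions a standby in another availability zone
)
```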


  1. Cloud service providers other than AWS may be used. 

  2. Amazon RDS for relational databases; Amazon EC2 for the NGINX servers, ZooKeeper, Solr, ActiveMQ, and Martini; EFS for file storage. 

  3. Depending on your business needs, one or multiple AWS regions may be required. This specific set-up assumes regions are independent and do not require interaction with each other. Inter-regional data or service sharing is possible although this will require a slightly modified architecture. 

  4. Not all services are tied to a specific region or AZ; such services are said to be global. 

  5. Although not identical in number. 

  6. Its scope (at least, according to the intended architecture) is limited to the resources of the region it is configured to manage. 

  7. This page is an introduction to SolrCloud and an overview of the requirements you would have to fulfill if you want to use Martini with SolrCloud (instead of embedded Solr). The following pages will discuss how to configure SolrCloud and ZooKeeper. 