
Toro Cloud Dev Center


Three tier network architecture

A three tier network architecture is a client-server architecture in which user interfaces, data access, data storage, and application logic are developed and administered as independent modules on separate platforms.

Systems running on a three tier architecture consist of the following layers:

  1. Presentation tier (client), which receives input and returns (and potentially displays) output
  2. Application tier (server), which is in charge of processing logic and making necessary calculations
  3. Data tier (server), which stores and manages data

By employing a three tier network architecture, your system gains the advantages of:

  • Logical segregation of data
  • Server availability

    Redundancy of services in the application layer results in higher server availability. Multiple application servers in the second tier decrease the likelihood of downtime because the hosting of services can easily be interchanged depending on how you configure your devices.

  • Improved scalability and flexibility in terms of:

    • Easier migrations of specific instances or servers
    • Easier additions or upgrades
    • Easier removals or replacements
  • Enhanced security

    Access to the application and data layers can be restricted to only the internal network resources that require it.

However, this type of architecture also has some disadvantages, such as:

  • More complex architecture
  • Requires technical expertise to set up and manage
  • Takes more time to set up and configure

High availability?

You can use the three-tier network architecture to implement a highly available system for Martini.

Implementation

Martini and its family of related services can be configured to work on top of a three tier network architecture, and this page discusses one way of doing this using Amazon Web Services (AWS)1.

Sample simple three tier network architecture applied on Martini

In this setup, the infrastructure and its tiers are contained in a VPC2 belonging to a chosen region3, and configured in an availability zone4. The servers in this diagram take the form of Amazon EC2 instances5, which are configurable virtual servers you can use to host your applications and services. The instances in tier 1 and tier 2 are configured with Amazon Elastic File System (EFS)6. Our setup also assumes that the deployment will occur on Linux-based servers.

Multiple implementations

There are many ways your organization can implement a three tier architecture for Martini. You can opt to use a different cloud service or operating system, or a slightly modified architecture.

The steps for configuring this setup are divided by tier and discussed in this order: Application tier, Presentation tier, then Data tier. The Application tier must be configured before the Presentation tier so that its endpoints are available for configuration and testing while you set up the Presentation tier.

Configuring the Application tier

The Application tier is the logical tier from which the Presentation tier pulls data. It controls application functionality by performing detailed processing. This is where your Martini instances will be deployed.

To support scalability and redundancy, you will need a minimum of two tier 2 servers. On these servers, you will need to install your Martini instances.

Multiple ways to install

There are multiple ways to install Martini, but before installation, make sure your machine fulfills all hardware and software requirements.

You can also configure separate servers to run ActiveMQ7 and Solr8. For maximum scalability, these applications should be installed on separate servers and be configured either with a hot spare (ActiveMQ) or in a cluster (SolrCloud).

Configuring the Presentation tier

The Presentation tier occupies the top-most level and displays information related to services available on a website. This is the tier that communicates with the Application tier and presents the data to the users. TORO recommends using NGINX as a proxy server for the Presentation tier.

A proxy server is an intermediary server that forwards requests for content from clients to servers across the internet. It allows a client to make an indirect network connection to a network service.

A client connects to the proxy server, then requests a connection, file, or other available resources from a server. The proxy provides the resource either by connecting to the specified server or by serving it from a cache.

There are many types of proxy servers (open, anonymous, transparent, distorting, reverse, and others), but this page focuses on reverse proxies.

Reverse proxies serve the server rather than the client. They provide an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

Reverse proxies also have the ability to mask the existence or characteristics of an origin server or servers, which can protect the identity of these servers and act as an additional line of defense against security attacks. They can also distribute the load of incoming requests across multiple servers, compress inbound and outbound data, and cache commonly requested content.

NGINX is an open source reverse proxy server, and can be used to enhance performance, scalability, and availability. It can also be used for URL rewriting and SSL termination.
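For instance, load distribution across two application servers can be sketched in a minimal NGINX configuration like the one below. The upstream name martini_backend and the addresses are hypothetical; substitute your own tier 2 servers' private IPs and ports:

```nginx
# Hypothetical upstream group; replace the addresses with your
# tier 2 application servers' private IPs and ports
upstream martini_backend {
    server 10.0.2.11:8080 fail_timeout=0;
    server 10.0.2.12:8080 fail_timeout=0;
}

server {
    listen 80;

    location / {
        # Requests are distributed round-robin across the upstream servers
        proxy_pass http://martini_backend;
    }
}
```

By default, NGINX balances requests round-robin across the servers in an upstream block; other strategies (such as least_conn) can be enabled with a single directive.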

NGINX in front of Martini instances

In this three tier architecture, NGINX will be the agent (or mediator) that accepts, manages, and delivers data between the user and the server, whilst protecting and masking the application servers.

Installation

It is fairly easy to install NGINX on Linux systems. This example assumes that you are using a machine with CentOS 7 installed. To install NGINX on CentOS 7, follow the steps below:

  1. Add the EPEL repository.

    You will need to add the CentOS 7 EPEL repository. Open your terminal and enter the following command:

    sudo yum install epel-release
    
  2. Install NGINX using the following command:

    sudo yum install -y nginx
    
  3. Enable and start NGINX using the command:

    sudo systemctl enable nginx && sudo systemctl start nginx
    
  4. Allow HTTP/S requests.

    If your firewall is running, you must execute these commands to allow HTTP and HTTPS requests:

    sudo firewall-cmd --permanent --zone=public --add-service=http
    sudo firewall-cmd --permanent --zone=public --add-service=https
    sudo firewall-cmd --reload
    
  5. Verify.

    Verify that you can access your web server by opening your web browser and entering the IP address of the NGINX machine in the address bar.
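You can also check from the command line of another machine on the network. The address below is a placeholder for your NGINX machine's IP:

```shell
# <nginx-server-ip> is a placeholder; expect response headers such as
# "HTTP/1.1 200 OK" from the default NGINX welcome page
curl -I http://<nginx-server-ip>/
```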

Having issues?

If you encounter difficulties accessing your NGINX server, check the SELinux10 configuration of your machine. You can check this by using the command:

getenforce

You can also temporarily (until the next reboot) set SELinux to permissive mode with the following command:

sudo setenforce 0

To disable SELinux permanently, set SELINUX=disabled in the /etc/sysconfig/selinux file, and reboot.
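If you prefer to make the permanent change from the command line, a sed one-liner along these lines should work (the path is the CentOS 7 default; back up the file first):

```shell
# Back up the file, then switch the SELINUX setting to disabled
sudo cp /etc/sysconfig/selinux /etc/sysconfig/selinux.bak
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
```

Remember that the change only takes effect after a reboot.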

For installing NGINX on other Linux distributions or operating systems, or for other installation methods, you can refer to their installation page.

Configuration

The next step is to create the configuration files for your application servers in the Application tier.

  1. Create NGINX directories sites-available and sites-enabled.

    Go to the base NGINX directory:

    cd ${nginxPath}
    

    Then create the directories:

    mkdir sites-available && mkdir sites-enabled
    
  2. Create the configuration files using the text editor of your choice.

    vi ${nginxPath}/sites-available/${WebServiceName}.conf
    
  3. Edit the configuration files to specify the settings you want to use for your server.

    Basic configuration

    Below is a sample basic NGINX configuration you may want to experiment with and apply when you're not looking for anything too advanced:

    upstream ${WebServiceName} {
        server 192.168.21.56:8983 fail_timeout=0;
    }

    server {
        listen      80;
        server_name ${WebServiceName}.com;

        access_log /var/log/nginx/${WebServiceName}_access.log;
        error_log  /var/log/nginx/${WebServiceName}_error.log;

        location / {
            proxy_pass http://${WebServiceName};
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 240;
            proxy_send_timeout 240;
            proxy_read_timeout 240;
        }
    }
    

    Below are parameters you may use in your configuration:

    upstream: Defines a group of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives. Servers in a group can listen on different ports, and servers listening on TCP and UNIX-domain sockets can be mixed. The benefit of the upstream block is that you can configure more than one server/port/service as upstream and distribute traffic across them.
    server: Defines the address or identifier of a server. The address may be a domain name or IP address, optionally with a port number. If no port is specified, port 80 is used.
    fail_timeout: Sets the time during which the specified number of unsuccessful attempts to communicate with the server should happen for the server to be considered unavailable.
    listen: Sets the address and port for IP, or the path for a UNIX-domain socket, on which the server will accept requests.
    access_log: Sets the path, format, and configuration for a buffered log write.
    error_log: Sets the path where error logs are written.
    location: Sets configuration depending on the request URI. A location can be defined either by a prefix string or by a regular expression; it is used to match expressions and create rules for them. The matching is performed against a normalized URI, after decoding text encoded in the “%XX” form, resolving references to the relative path components “.” and “..”, and possibly compressing two or more adjacent slashes into a single slash.
    return: Stops processing and returns the specified code to the client.
    proxy_pass: Sets the protocol and address of the proxied server, and an optional URI to which a location should be mapped. Either “http” or “https” can be specified as the protocol.
    proxy_set_header: Allows redefining or appending fields to the request header passed to the proxied server. The value can contain text, variables, and combinations of both.
    proxy_redirect: Sets the text that should be changed in the Location and Refresh header fields of a proxied server response.
    proxy_connect_timeout: Defines a timeout for establishing a connection to the proxied server. Note that this timeout cannot usually exceed 75 seconds.
    proxy_send_timeout: Sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. If the proxied server does not receive anything within this time, the connection is closed.
    proxy_read_timeout: Defines a timeout for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the proxied server does not transmit anything within this time, the connection is closed.
    ssl: Enables the HTTPS protocol for the given virtual server.
    ssl_certificate: Specifies a file containing the certificate in PEM format for the given virtual server.
    ssl_certificate_key: Specifies a file containing the secret key in PEM format for the given virtual server.
  4. Create symbolic links.

    After drafting your configuration files for the tier 2 services, create symbolic links for them in the sites-enabled directory.

    ln -s ${nginxPath}/sites-available/${WebServiceName}.conf ${nginxPath}/sites-enabled/${WebServiceName}.conf
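Once the symbolic links are in place, a quick syntax check and reload (sketched below for a systemd-based setup) makes the changes take effect:

```shell
# Check the configuration for syntax errors, then reload without downtime
sudo nginx -t && sudo systemctl reload nginx
```

Note that NGINX only loads files referenced from its main nginx.conf; if your distribution's default configuration does not already include the sites-enabled directory, you may need to add a line such as include ${nginxPath}/sites-enabled/*.conf inside the http block.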
    

Considerations

If you would like to test this within your network, it is suggested that you first modify your hosts file to check whether you can access your web services locally.
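For example, on the testing machine you could map the web service's domain name to your NGINX server by appending a line to /etc/hosts. The address below is a placeholder:

```shell
# <nginx-server-ip> is a placeholder for your NGINX machine's address
echo "<nginx-server-ip> ${WebServiceName}.com" | sudo tee -a /etc/hosts
```

Remove the line again once you are done testing.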

You need to register your domain if you’d like your web service to be available publicly.

More NGINX?

You can also run Martini with NGINX and SSL.

Configuring the Data tier

The Data tier houses the database servers where information is stored and retrieved. This tier can be accessed through the application layer and consists of data access components to aid in resource sharing.

If deploying on AWS, you may deploy an RDS instance11 or an EC2 instance with MySQL12 or any other database permitted by your license. For now, this page assumes you are using a machine with a CentOS 7 operating system.

Different OS? No problem!

If you would like to install MySQL on a machine with a different operating system, please refer to MySQL’s installation page.

Database support

The databases you can use with your instance depend on the type of license you have.

Installation

Using Wget, you can execute the following steps to install MySQL on your machine.

  1. Retrieve the MySQL repository via Wget and install it, then update yum. Doing this will enable you to install MySQL via yum.

    wget https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm
    

    Then proceed to type:

    sudo rpm -ivh mysql57-community-release-el7-11.noarch.rpm
    

    Followed by this yum command:

    sudo yum update
    

    Latest repository

    You can check for the latest MySQL yum repository here.

  2. Install MySQL via yum:

    sudo yum install mysql-server
    
  3. Enable and start MySQL.

    The next step is to enable MySQL to run on startup. You can swap enable with disable if you decide to revert this change in the future. The start command, on the other hand, simply starts the service.

    sudo systemctl enable mysqld && sudo systemctl start mysqld
    
  4. Secure your MySQL server.

    The following command should be executed to address security concerns by hardening your MySQL server. You will be asked to change your MySQL root password, remove anonymous user accounts, disable remote logins for the root user, and remove test databases. Answer these according to your preference, but you may want to check MySQL's reference manual for more information.

    sudo mysql_secure_installation
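Once the server is secured, you will typically want a database and a dedicated account for your application to use. The names below (martini_db, martini) are examples only, and <strong-password> is a placeholder:

```shell
# Example names only; choose your own database, user, and a strong password
mysql -u root -p <<'SQL'
CREATE DATABASE martini_db;
CREATE USER 'martini'@'%' IDENTIFIED BY '<strong-password>';
GRANT ALL PRIVILEGES ON martini_db.* TO 'martini'@'%';
FLUSH PRIVILEGES;
SQL
```

If your application servers connect from known addresses, consider restricting the user's host ('martini'@'%') to those addresses instead of the wildcard.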
    

AWS RDS

You can also use the AWS Relational Database Service (RDS) to deploy a relational database which you can administer and scale using AWS.

Configuring a network file share

Shared resources such as configuration files and Martini packages can be written to a network file share. The instructions below explain how to configure an NFS server and NFS client on your network. Alternatively, if you are configuring your three tier architecture on AWS, you may prefer to use AWS' Elastic File System (EFS) instead.

NFS, or Network File System, is a protocol that involves a server and a client. NFS enables you to mount shared directories and directly access files within a network.

In the steps below, we'll be configuring an NFS server and client setup for CentOS 7.

Configuration for the NFS server

  1. Determine the NFS server. You should determine which instance will act as the NFS server for all the files you'd like to be universally available on your network.

  2. Install NFS packages using yum:

    yum install nfs-utils
    

    nfs-utils package

    The nfs-utils package contains the tools for both the server and the client, which are necessary for the kernel's NFS capabilities.

  3. Create or determine the shared directory for your NFS setup.

    For this setup, it's recommended to create or target the /datastore directory, but you may always choose a different directory according to your preference.

    mkdir /datastore
    

    If this directory already exists, you may skip this step.

  4. Set the permissions of the NFS directory.

    Once you have created or determined the NFS directory on the NFS server, you can set the permissions of the folder using the chmod and chown commands.

    chmod -R 755 /datastore
    

    After modifying the directory permissions, specify the owner of the directory using:

    chown toro-admin:toro-admin /datastore
    
  5. Start and enable the services.

    After the packages have been installed and the directories have been determined, created, and configured, you will be able to enable the services to set up the NFS server.

    systemctl enable rpcbind
    systemctl enable nfs-server
    systemctl enable nfs-lock
    systemctl enable nfs-idmap
    systemctl start rpcbind
    systemctl start nfs-server
    systemctl start nfs-lock
    systemctl start nfs-idmap
    
  6. Share the NFS directory over the network.

    To make the NFS directory accessible to clients, first open the /etc/exports file and edit it with your preferred text editor.

    vi /etc/exports
    

    Next, input the following:

    /datastore  <nfs-client-ip>(rw,sync,no_root_squash,no_all_squash)
    

    The server and the client must also be able to ping each other.

  7. Restart the NFS service.

    systemctl restart nfs-server
    
  8. Configure the firewall for CentOS 7.

    firewall-cmd --permanent --zone=public --add-service=nfs
    firewall-cmd --permanent --zone=public --add-service=mountd
    firewall-cmd --permanent --zone=public --add-service=rpc-bind
    firewall-cmd --reload
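With the services running and the firewall configured, you can confirm what the server is exporting before moving on to the clients:

```shell
# Run on the NFS server: list the current export table;
# /datastore and the client IP should appear
exportfs -v

# Query the export list the same way a client would
showmount -e
```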
    

Setting up the NFS client

After you have set up the NFS server, you can set up the client side of NFS.

  1. Install NFS packages.

    Similar to the second step when configuring the NFS server, install the NFS packages with yum.

    yum install nfs-utils
    
  2. Create the NFS directory mount point(s).

    mkdir /datastore
    
  3. Mount the directory.

    mount -t nfs <nfs-server-ip>:/datastore /datastore
    
  4. Verify the NFS mount.

    df -kh
    

    You should see a list of file systems, and among them should be the NFS mount you configured earlier.

    After verifying your mounts exist, you may now populate your NFS directory.

Setting up a permanent NFS mount

By default, you will have to remount all of your NFS directories after every reboot. To make sure they are available after a reboot, follow the steps below:

  1. Open the /etc/fstab file with the text editor of your choice.

    vi /etc/fstab
    
  2. Add the following line to the file so the mount point is automatically mounted after a reboot.

    <nfs-server-ip>:/datastore /datastore  nfs   defaults 0 0
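You can confirm the entry is valid without rebooting by unmounting the share and asking mount to process /etc/fstab:

```shell
umount /datastore   # unmount first if the share is currently mounted
mount -a            # mount everything listed in /etc/fstab
df -kh              # the NFS mount should reappear in the output
```

If mount -a reports an error, fix the /etc/fstab entry before rebooting; a broken entry can delay or block the boot process.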
    

  1. Amazon Web Services, often abbreviated as AWS, is a cloud service provider that offers services including compute power, database storage, and more. 

  2. Amazon Virtual Private Cloud (Amazon VPC) is a service which enables you to provision an isolated section in Amazon, which is where you can create and assign AWS resources, as well as manage and modify your network configurations and more. 

  3. AWS regions are separate geographic areas which contain availability zones. 

  4. Availability zones are isolated locations within data center regions from which public cloud services operate and originate. 

  5. Amazon EC2 instances are virtual servers in Amazon's Elastic Compute Cloud where you can host your applications. 

  6. Amazon Elastic File System is a service that provides scalable elastic storage capacity that can be used or mounted on EC2 instances to accommodate processes. 

  7. ActiveMQ is a message oriented middleware that lets two applications or components communicate with each other. It also processes all the messages in queues to ensure that the interactions between the two applications are reliably executed. 

  8. SolrCloud is a running mode in Solr, an open source search platform. It is capable of index replication, fail-over, load balancing, and distributed queries with the help of ZooKeeper. 

  9. ZooKeeper is an open source server application that can reliably coordinate distributed processes, maintain and manage configuration information, naming conventions, and synchronization for distributed cluster environments. 

  10. SELinux stands for Security-Enhanced Linux. It is a module of the Linux kernel that you can configure to manage and enforce security and access control. 

  11. Amazon RDS instances are relational databases hosted, operated, and set up in the cloud. 

  12. MySQL is an open source database management system.