Deploying your app within a cluster using Docker swarm mode

Docker swarm is a clustering tool for Docker containers. Deploying Martini in a Docker swarm cluster reduces the effort needed to manage multiple Docker instances: ease of scaling, automatic recovery of failed instances, and network load balancing are just some of the benefits of running Docker in swarm mode. Since swarm mode is built into the Docker Engine, setup is fast and easy; you no longer need to set up a third-party application to manage your cluster.

This guide assumes you already know how to set up a Docker swarm cluster. Also, since the cluster will be configured with a shared file system, knowledge of setting up an NFS server is required. In addition, this guide implements only a simple topology, which is not recommended for production environments; the goal is solely to demonstrate deployment within a Docker cluster.
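
As a quick refresher, a two-node swarm like the one in the topology below can be formed with the following commands; the manager IP and join token are placeholders that Docker itself prints during initialization:

# On the manager node: initialize the swarm.
docker swarm init --advertise-addr <MANAGER-IP>

# On each worker node: join the swarm using the token printed by `docker swarm init`.
docker swarm join --token <TOKEN> <MANAGER-IP>:2377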

Read the simple deployment guide first

If you haven't read the standard Docker deployment guide, you should refer to it before deploying Martini using this guide.

Topology

The diagram below illustrates a simple infrastructure example, used on this page:

Simple docker swarm topology

In this guide, a directory on the NFS server called /datastore is shared with the Docker servers (Server 1 and Server 2) and mounted as /datastore on both Docker nodes.
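
A minimal sketch of that shared mount, assuming the nodes sit on a 192.168.1.0/24 subnet (the subnet and export options are illustrative):

# On the NFS server: export /datastore to the Docker nodes.
echo "/datastore 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# On each Docker node: mount the shared directory at the same path.
mkdir -p /datastore
mount -t nfs <NFS-SERVER-IP>:/datastore /datastore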

Behaviors

Before you continue, there are some behaviors you should be aware of when using Docker swarm:

  • Docker swarm doesn't guarantee that your requests will always be served by the same node. This means endpoints relying on sessions stored in the application itself might not work as expected.
  • Martini requires a license for each virtual machine used in this setup. As a result, the license setup page may pop up in your browser when Martini starts for the first time.
  • Schedulers will execute once per replica; that is, as many times as there are Martini package replicas distributed across all instances. It's therefore advised to configure scheduled endpoints in a separate package and deploy it to a dedicated Docker container (with its own datastore), as sketched after this list.
  • This setup will not work if you're using the embedded Hypersonic database, as it only allows one machine to use the database at a time.
  • In production, this setup must be configured with external applications: an external Solr server, external ActiveMQ, and external database sources.
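
As a rough, hypothetical sketch of the scheduler recommendation above (the directory names are illustrative, and the service pattern mirrors the one used in step 2 below):

# Hypothetical: a single-replica service dedicated to scheduler packages,
# backed by its own (non-shared) data directories so schedulers run only once.
docker service create \
--replicas 1 \
-p :8080 \
--mount type=bind,source=/opt/martini-scheduler/data,destination=/data/data \
--mount type=bind,source=/opt/martini-scheduler/packages,destination=/data/packages \
--name martini-scheduler \
toroio/martini-runtime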

Procedure

In this setup, you will need to create the required data directories in the NFS server's /datastore directory so that all data files and directories are available across all servers. By doing so, you no longer need to worry about which Docker node Martini's container is deployed to, as the data will be available on every server.

  1. Create the data directories. Ensure you have created them on the shared file system.

    mkdir -p /datastore/apps/martini/{data,packages,logs,.java}
    
  2. Once the directories have been created, provision Martini as a Docker service by executing the command below to start a new service. Don't forget to change the values of the environment variables.

    docker service create \
    -p :8080 \
    -p :8443 \
    --restart-condition on-failure --restart-max-attempts 5 \
    --env JAVA_XMS=${JVM_XMS_MEMORY}m --env JAVA_XMX=${JVM_XMX_MEMORY}m \
    --mount type=bind,source=/etc/localtime,destination=/etc/localtime,readonly \
    --mount type=bind,source=/datastore/apps/martini/data,destination=/data/data \
    --mount type=bind,source=/datastore/apps/martini/packages,destination=/data/packages \
    --mount type=bind,source=/datastore/apps/martini/logs,destination=/data/logs \
    --mount type=bind,source=${HOME}/.java,destination=/root/.java \
    --name martini \
    toroio/martini-runtime
    

    Best practice

    It's considered good practice to mount the directory /etc/localtime as read-only in the container to ensure that the server and container time are synced.

  3. Verify that the service has been created and replicated. Run the command below to show all the Docker services.

    # Listed docker services via `docker service ls`.
    ID              NAME              MODE          REPLICAS    IMAGE                     PORTS
    sogsv6m99rw8    martini           replicated    1/1         toroio/martini-runtime    *:30004->8080/tcp,*:30005->8443/tcp
    

    In the REPLICAS column, the value should be 1/1, which means one task has been created for this service. To get more details about your service, you can run the command below:

    docker service ps martini
    
    ID             NAME               IMAGE                    NODE      DESIRED STATE   CURRENT STATE           ERROR   PORTS
    sadi53w0h73z   martini.1          toroio/martini-runtime   docker2   Running         Running 3 minutes ago
    

    As seen in the NODE column, the task has been created on the docker2 server.

  4. Scale the service to two tasks. By executing the command below, Docker will create an additional instance of Martini and automatically load balance requests directed to the application.

    docker service scale martini=2
    
    martini scaled to 2
    overall progress: 2 out of 2 tasks
    1/2: running   [==================================================>]
    2/2: running   [==================================================>]
    verify: Service converged
    

    This setup is useful if you have stateless APIs that should always be ready for a high traffic load. Check the service again to see where the instances are deployed:

    docker service ps martini
    
    ID             NAME               IMAGE                    NODE      DESIRED STATE   CURRENT STATE            ERROR               PORTS
    sadi53w0h73z   martini.1          toroio/martini-runtime   docker2   Running         Running 15 minutes ago
    j29lbr3dumul   martini.2          toroio/martini-runtime   docker1   Running         Running 4 minutes ago
    

    In this example, the second instance has been deployed on the docker1 server. Docker automatically chooses the best node on which to deploy the second instance of Martini.
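
    If you want to check which nodes are available and their current status, you can list them from a manager node:

    docker node ls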

To access Martini, you can use any of the Docker servers' IP addresses with the mapped port. Docker should automatically route you to a server running Martini.
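
For example, assuming the published HTTP port from the `docker service ls` output above (30004) and a hypothetical node IP of 192.168.1.11:

# The routing mesh forwards the request to a running Martini task,
# regardless of which node actually hosts the container.
curl http://192.168.1.11:30004/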

Specify cluster.* properties when deploying Martini behind another application

When another application is configured to run in front of your Martini instance, such as NGINX or Docker's routing mesh, it is highly recommended to set the cluster.instance-ip-address, cluster.instance-port, and cluster.instance-ssl-port application properties. These properties define Martini's reachable IP address, HTTP port, and SSL port respectively.

If cluster.instance-ip-address is not set, the current machine's IP address is used instead, which may not be publicly exposed and can therefore cause problems when executing service registry-related operations. The same is true for cluster.instance-port and cluster.instance-ssl-port, which default to the values of the server's HTTP and SSL ports. If you do not wish to use SSL in your cluster, set cluster.instance-ssl-port to -1 or leave it unset.
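
For illustration, the three properties might look like the following (the address and ports are placeholders; refer to the Martini documentation for how application properties are set in your installation):

cluster.instance-ip-address=203.0.113.10
cluster.instance-port=30004
cluster.instance-ssl-port=-1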

Upgrading to the latest version

With a simple Docker setup, you would usually pull the new image from the Docker registry and then re-create the Docker instance. In a swarm setup, however, upgrading is even easier: you only need to tell your Docker service to force-update all of its tasks.

Execute the command below to tell Docker to update all service tasks:

docker service update --force martini

Upon execution, all nodes should automatically download the newest release from Docker Hub and recreate all of their tasks.

What else should you know?

  • As always, it's best to read the upgrade notes of each release before upgrading your instances.
  • Docker swarm supports different deployment strategies, such as rolling updates, to lessen service interruption; see the Docker documentation on rolling updates for details, and the example below for a sketch.
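
For example, the following command (a sketch using standard Docker flags) would roll the update out gradually:

# Update one task at a time, waiting 30 seconds between tasks.
docker service update --update-parallelism 1 --update-delay 30s --force martini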

That's it! We hope this will help you get started on deploying Martini as a Docker service.