Measuring Martini's web service performance

This page explains the impact of Tracker and the invoke monitor on performance through a benchmark test. Given a simple RESTful web service, we will measure throughput as the number of processed transactions per second when (a) both the invoke monitor and Tracker are on, (b) the invoke monitor is on but Tracker is off, (c) the invoke monitor is off but Tracker is on, and (d) both the invoke monitor and Tracker are off.

This test was conducted on a Martini v1.0 instance

Previous and future releases may produce different results.

Test environment

A controlled and isolated environment is vital to prevent unrelated external factors from affecting the results. For this test, Martini and its family of related services are configured as follows:

JVM

Three virtual machines were provisioned on AWS for this test: one each for Martini, ActiveMQ, and Solr. These VMs were deployed through Amazon's EC2 service, and all of them use c3.xlarge instances.

Specification       Value
CPU                 4 cores
Storage             40 GB SSD
RAM                 7.5 GB
Operating System    Amazon Linux

Martini's JVM will use the default configuration provided out of the box by the Martini start-up script, except that we will modify the Java heap size; both the -Xmx and -Xms parameters will be set to 2G.
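
For reference, here is a minimal sketch of that heap change, assuming the start-up script honors a standard JAVA_OPTS-style environment variable (the exact variable name consumed by your Martini installation may differ):

# Illustrative only; the variable name is an assumption, so adjust it to match your start-up script
export JAVA_OPTS="-Xms2g -Xmx2g"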

Tomcat

We will be using the default Tomcat configuration; there will be no parameter modifications for the Tomcat container.

Core applications

As mentioned earlier, ActiveMQ and Solr will be configured as stand-alone instances on independent virtual machines. Both instances will be configured with a 3 GB JVM heap size, leaving the rest of the configuration options at their default values.
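
As a rough sketch, assuming the stock start-up files of each application are used (file locations and variable names may vary between versions), the 3 GB heaps could be set along these lines:

# ActiveMQ: e.g. in bin/env (assumed location; check your distribution)
ACTIVEMQ_OPTS_MEMORY="-Xms3g -Xmx3g"

# Solr: e.g. in bin/solr.in.sh (assumed location; check your distribution)
SOLR_JAVA_MEM="-Xms3g -Xmx3g"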

Martini will likewise sit on its own VM. It will contain only one Martini package, which holds the script exposing the RESTful web service that our performance measurement tool will consume later on.

We will not be using an external database for this test; therefore, all core databases will reside in an embedded HSQLDB database.

Procedure

In our performance test, we will send requests to a RESTful web service exposed by Martini using a widely-used benchmarking tool, Apache Bench. This web service is exposed via Groovy code; it simply accepts GET requests with no required parameters and returns a "Hello, world!" JSON response. We will hit the aforementioned endpoint as much as possible while:

  • Both Tracker and invoke monitor are on
  • Invoke monitor is on while Tracker is off
  • Invoke monitor is off while Tracker is on
  • Both Tracker and invoke monitor are off

Turning off the invoke monitor...

Unlike Tracker, which can be turned off through application properties, the invoke monitor is only turned off internally under certain conditions, such as the absence of monitor rules.

Below is the RESTful web service-exposing script:

import io.toro.martini.api.APIResponse
import org.springframework.web.bind.annotation.*

// Exposes endpoints under /api/test, producing JSON or XML responses
@RestController
@RequestMapping(value = 'test', produces = ['application/json', 'application/xml'])
class Test {

    // GET /api/test/sayHello returns a static "Hello, world!" API response
    @RequestMapping(value = 'sayHello', method = [RequestMethod.GET])
    APIResponse sayHello() {
        new APIResponse('Hello, world!')
    }
}

Meanwhile, the expected response of this web service is a 52-byte JSON payload with the following content:

{
    "result": "OK",
    "message": "Hello, world!"
}
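
Before running the benchmark, it helps to confirm that the endpoint responds as expected. A quick manual check, assuming Martini serves HTTP on the same host and port used by the ab command below:

curl http://localhost/api/test/sayHello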

The load-generating client, Apache Bench, will reside on the same machine as Martini. The four test cases mentioned earlier will each execute the steps laid out below.

  • Request URL: GET /api/test/sayHello
  • Concurrent users: 150
  • Number of requests: 20,000
  • Benchmark invocation iteration: 5
  • Apache Bench command:

    ab -k -n 20000 -c 150 http://localhost/api/test/sayHello
    

The steps of this test procedure are:

  1. Ensure the Martini instance is freshly started without error.
  2. Run a single invocation of the aforementioned Apache Bench command.
  3. Ensure that no errors occur in Martini and that the invocation reports 0 errors and no non-2xx responses. If there are errors, halt the benchmarking test and fix the issue.
  4. Wait for Martini to return to its idle state after each test case; idle meaning all Tracker and invoke monitor messages in ActiveMQ have been de-queued by Martini between invocations. If Tracker and/or the invoke monitor is turned off, then wait a few seconds before running the next test case.
  5. Record the requests-per-second result, also known as throughput.
  6. Repeat from step #2 until the throughput result stabilizes; that is, until the throughput stops changing significantly. The goal is to let Tomcat reach a state where everything is fully initialized, which may take a few ab invocations. It is only at this optimal runtime state that we officially start benchmarking.
  7. Once the throughput has stabilized, repeat steps #2 to #5 until you have recorded five consecutive and stable ab invocation throughput results (see the sketch after this list).
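
As a rough sketch of the recording step, assuming ab and grep are available on the Martini host, the five stabilized invocations can be captured like this; the throughput figures themselves come straight from ab's output:

# Run ab five times and print only the throughput line reported by each run
for i in 1 2 3 4 5; do
    ab -k -n 20000 -c 150 http://localhost/api/test/sayHello | grep 'Requests per second'
done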

It is highly important to ensure Martini is in a state where all components are already initialized, as would be the case in a production environment. All ab invocation results from non-optimal states should be discarded.

Results

As discussed earlier, throughput is measured to determine Martini's performance in each test case. The five consecutive throughput results are averaged to ensure fairness. From running this test, our team obtained the following:

Results in table

Results in graph

The table and chart above show the minimum, maximum, and average throughput per test case across five consecutive, stabilized iterations.

Factors

There are several other factors that directly affect the overall performance of Martini aside from Tracker and invoke monitor. Tuning the JVM and Tomcat server also plays a major role in increasing the overall throughput of your web services.

Conclusion

Although very useful, Tracker's and the invoke monitor's indexing of data introduces a significant performance overhead; our findings show that turning these features off results in higher web service throughput.

Oftentimes, turning the invoke monitor off is not an option; hence, it may be more useful to compare the throughput results when (a) both the invoke monitor and Tracker are on against (b) the invoke monitor is on while Tracker is disabled.

From our test above, (b) turning off Tracker while keeping the invoke monitor enabled produced 91.89% more throughput than (a) leaving both Tracker and the invoke monitor enabled.
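
For clarity, that figure follows the usual relative-difference calculation, where T_a and T_b are the average throughput of cases (a) and (b) respectively:

(T_b - T_a) / T_a × 100% = 91.89%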