During one of my recent projects, I had a chance to work with two components of Mule ESB that were new to me. One is the concept of shared VM endpoints using domain resources. The other is the Cache Scope, which wraps a block of message processors. Both have been in the Mule runtime engine for a couple of minor versions. Hopefully, the information below will allow you to be more effective when designing your integration applications.

Inter-Application Messaging with Shared VM Endpoints

One of the most frustrating obstacles I have experienced while designing integration solutions with Mule has been inter-application messaging. When a message needed to be transmitted from one application to another asynchronously, there were only a few options. The most popular implementation uses a queue to hand off the message. The problem with this solution was that a JMS server needed to be installed, because VM endpoints could not be used between applications. In some cases, an existing JMS server could be used; however, when a company does not have a JMS server installed, it is often difficult to introduce one into the environment.

In Walks A Mule ESB Domain

Utilizing shared VM endpoints via a Mule Domain component solves this problem. Domains were introduced in Mule 3.5 (if I remember correctly) and allow Mule applications, running in the same JVM, to share resources. This includes the VM connector and endpoints. For companies that are hesitant or cannot install a JMS server, or for certain environments like local development environments, using shared VM queues works as a great replacement for JMS queues.
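As a sketch of what this looks like in practice (the connector name, queue path, and domain name here are my own placeholders, not values from the project): a domain project carries a mule-domain-config.xml that declares the shared VM connector, and each application deployed into that domain references it.

        <!-- mule-domain-config.xml: declares the VM connector shared by all
             applications deployed into this domain -->
        <domain:mule-domain
                xmlns:domain="http://www.mulesoft.org/schema/mule/ee/domain"
                xmlns:vm="http://www.mulesoft.org/schema/mule/vm"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="
                    http://www.mulesoft.org/schema/mule/ee/domain http://www.mulesoft.org/schema/mule/ee/domain/current/mule-domain-ee.xsd
                    http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd">
            <vm:connector name="sharedVmConnector"/>
        </domain:mule-domain>

Each application then points its deployment at the domain (for example, `domain=shared-vm-domain` in mule-deploy.properties) and hands messages off through the shared connector:

        <!-- Producer application: asynchronous hand-off to another
             application running in the same domain (and therefore JVM) -->
        <vm:outbound-endpoint path="orders.inbound"
            connector-ref="sharedVmConnector" exchange-pattern="one-way"/>

The consuming application declares a matching `vm:inbound-endpoint` on the same path with the same `connector-ref`, and the message crosses the application boundary without any external broker.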

Decreasing Response Time with Cached Scoping

Caching technology has been around for quite some time. I was surprised to learn that Cache Scope has been in Mule for several minor versions. I was also surprised to learn how easy it was to implement a cache strategy in Mule. The problem, in our case, was that one of our flows, which was required to operate under a very heavy load, calls a database resource to look up some values. The values do not change often, so this was a perfect scenario for implementing a caching solution. With our implementation, the database is called at most once every five minutes, which makes our process much more efficient. Below is the caching strategy configuration:

    <ee:object-store-caching-strategy name="PolicyCachingStrategy"
            doc:name="Policy Caching Strategy"
            keyGenerationExpression="#[payload['client'] + payload['method']]">
        <!-- entryTTL and expirationInterval are in milliseconds: entries
             live for five minutes and expired ones are swept every ten seconds -->
        <managed-store storeName="SimpleInMemoryCache"
            maxEntries="200" entryTTL="300000" expirationInterval="10000"/>
    </ee:object-store-caching-strategy>

Within the flow, a REST-based web service is called that performs the database lookup.

    <ee:cache doc:name="Policy Cache" cachingStrategy-ref="PolicyCachingStrategy">
        ... a block of message processors, including the HTTP call to the REST-based web service
    </ee:cache>
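For context, a trimmed-down version of such a flow might look like the following; the flow name, VM path, and service address are placeholders rather than the actual project values:

        <flow name="policyLookupFlow">
            <vm:inbound-endpoint path="policy.lookup"
                exchange-pattern="request-response"/>
            <ee:cache doc:name="Policy Cache"
                    cachingStrategy-ref="PolicyCachingStrategy">
                <!-- The expensive call: only executed on a cache miss
                     for a given client/method key -->
                <http:outbound-endpoint method="GET"
                    address="http://policy-service.example.com/api/policies"/>
            </ee:cache>
        </flow>

One thing worth noting: since the cache key is built from `payload['client']` and `payload['method']`, any two requests carrying the same client and method will share a cached response until that entry's TTL expires.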