torsdag 9 december 2010

Eclipse XML autocompletion patch for Mule users

Earlier I wrote about my problems finding a good XML editor which works with Mule configuration files, as the one included in Eclipse WTP does not support XSD schema substitution groups.

That time I simply gave up and registered for a trial of the Oxygen XML eclipse plugin editor since that was the only one that I found that could give me autocompletion while modeling flows.
Well, it seems that the MuleSoft guys have figured out a way to patch this issue in Eclipse, which is very good news for the myriad of Mule users out there. The patch may be worth downloading even if you don't intend to use Mule but regularly deal with schema substitutions in Eclipse.

I have not yet tested the patch since I've been quite busy in my free time lately, but I'm sure it will be a welcome fix for many integrators.

Bye bye Oxygen, been nice to know you!

onsdag 1 december 2010

Soap through JMS support for WCF

One of the classic problems you face when dealing with webservices is how to assure guaranteed delivery of your messages. Since HTTP by definition cannot guarantee that your messages are never lost, a number of initiatives have tried to solve this problem.

Currently this is typically handled in one of the following ways:

WS-Transaction
Though not primarily used as a means of guaranteed delivery, WS-Transaction in some sense tries to solve the same problem from a consistency angle. I won't dwell on the architectural reasons for WS-Transaction, but simply note that it introduces a tight coupling between the service consumer and provider and is not preferable in e.g. a B2B solution. It is my firm belief that HTTP/XML should not be used as a transaction medium, since it was simply not designed with that in mind. It also has performance implications and introduces complex call chains and possible deadlocks.

WS-Reliable Messaging
Typically your provider and consumer would store outbound messages in some kind of message store and delete them only once they've been acknowledged by the other party through a conversation mechanism. Messaging platforms are often used as the persistent store, but regular databases could also be used, either as part of the messaging provider or by the WS stack.

Retry mechanisms and idempotent provider services
Clients simply retry the calls until a successful invocation is achieved. This puts a requirement on the provider to be idempotent, either through the nature of the service or by utilizing some framework for blocking resent requests based on some unique property of the message.
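The retry-plus-idempotency option can be sketched in plain Java. This is a framework-free illustration, not any particular WS stack's API; the class and method names are made up, and a real implementation would persist the seen IDs in a database rather than keep them in memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an idempotent receiver: the provider remembers
// message IDs it has already processed and silently ignores retried copies.
public class IdempotentReceiver {

    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();
    private int invocations = 0;

    // Returns true if the message was processed, false if it was a duplicate.
    public boolean receive(String messageId, String payload) {
        // add() is atomic: only the first call presenting this ID wins.
        if (!processedIds.add(messageId)) {
            return false; // duplicate caused by a client retry - skip it
        }
        invocations++; // stand-in for the real business logic
        return true;
    }

    public int invocations() {
        return invocations;
    }

    public static void main(String[] args) {
        IdempotentReceiver receiver = new IdempotentReceiver();
        // The client retries message "po-42" three times until it sees an ack.
        receiver.receive("po-42", "<purchaseOrder/>");
        receiver.receive("po-42", "<purchaseOrder/>");
        receiver.receive("po-42", "<purchaseOrder/>");
        System.out.println(receiver.invocations()); // business logic ran once
    }
}
```

Pair this with a client that simply retries until it gets an acknowledgement, and duplicates become harmless.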

Soap through JMS
Most, if not all, Java WS runtimes now support sending SOAP messages through a JMS provider. SOAP/JMS is not yet a standard, but several providers have in fact agreed on a set of JMS properties that map to the SOAP world, so it is actually quite interoperable.

This works fine for most Java implementations, but the .NET world has (to my knowledge) been left out of this option for a while.
Well, today I noticed an interesting thing in the Websphere MQ 7 documentation: it apparently supports hooking WCF services/consumers into the SOAP/JMS space. IBM has implemented transport interceptors that can be configured on WCF components. These are bundled with the MQ 7 client and, as far as I can see, change neither the MQ setup nor your .NET code.

To me this is very good news, as it makes SOAP through JMS a valid alternative to the quite complicated stacks above, even when you're looking for Java-WCF interoperability and guaranteed delivery, given that IBM WMQ is already part of your infrastructure.

Once again messaging comes to the rescue! Either as the actual transport layer or as a building block for enabling WS-Reliable Messaging.

Now I just have to bribe some .NET programmer to team up with for a lab, since my .NET skills are, let's say... nonexistent at the moment.

fredag 19 november 2010

Comparing OSS offerings, like comparing apples to oranges

Rounding up the OSS integration frameworks described earlier is not easy, and it's by no means a task of selecting an overall "winner". I've only scratched the surface of them, and most of my experience comes from testing simple samples, reading forums and blogs and, in some cases, implementing small solutions. There are also a bunch of frameworks that I haven't had time to test yet, so I'll focus on Mule, WSO2 and Apache Camel. Note that these are my personal reflections and experiences, so if you disagree completely, feel free to post and call me a moron (maybe you're right :-)

Mule ESB
Marketed as the "most popular" open source ESB this was a good alternative to start with.
Mule is quoted as a stand-alone ESB and offers a wealth of connectivity options. The Mulesoft company behind it is very committed to the product and posts regular updates to the documentation as well as blogs.
However, configuration is all done by editing quite cumbersome XML files, which might put off users wishing for less "bracket coding". Editing these files is further complicated by the number of schemas that need to be included; each transport, for instance, has its own schema and restrictions.
It supports a stand-alone runtime, hot deployments and extra functions such as registries and management consoles if you opt for the Enterprise license.
Mule was not created with EIP in mind from the start, but updates in the most recent version have made it easier to design flows through the "flow" construct, even though I've already found some bugs while using it.

WSO2 ESB
Out of the three, WSO2 is the only one that requires a sort of application server to run, in the form of the Carbon server.
However, the management console is outstanding with its included support for statistics and registry. You also have the option to graphically design your integrations which is always good for beginners.
There are three things that bother me with WSO2 at the moment though:

- Big focus on the WS stack, which is not surprising given the WSO2 team's background. If you're after a webservice framework, this product is perhaps the most promising of the three, but as far as I can see it might not be perfectly suited for other integration scenarios.

- Synapse. WSO2 is built upon Apache Synapse, a project that doesn't have as much community activity as the other two alternatives. Only the future will tell if it will grow at the same pace as the others, but I'm at least a bit concerned about its future.

- The need for an application server
This might be a political issue, but I think it will be more difficult to sell a new application server to your management than a simple API runtime. Others might see this as a strength, but when you're already sitting on a stack of servers you'll undoubtedly get the question of why you need another one.

Apache Camel
The most lightweight alternative of the three I've tested and probably the most flexible as well.
Since Camel is built as an implementation of the EIP patterns it fits right in to creating integration flows, which is the logical way of implementing integration.
With Camel you get alternatives wherever you go. Do you wish to develop your flows through XML? Or maybe Java? Or perhaps Scala or Groovy? Would you like to execute it in a Spring environment? Or as a routing component inside the ServiceMix OSGi runtime? Or inside a web application? Options everywhere... which is actually a good thing! Hopefully your team and organisation can settle on one and develop best practices using the one you prefer. Otherwise you could end up with a mess of implementations, but this holds true for all types of development environments, so it's not a problem per se.

The only downside I can think of with Camel is that, since it's so easy to use, you might start putting integration logic all over your projects instead of with a central ICC team of integrators and designers. Integration is a very complex area and should be handled by dedicated staff with an end-to-end focus in mind. Camel, with its flexibility and pragmatic approach, might lead you to solve integration tasks as part of application projects, but as you can see I'm only nitpicking about a possible future problem here.

I especially like the fact that they've started to experiment with Scala as a first-class citizen, as I personally see a great future in Scala. Having this support from the start gives me hope that this component will have a long future even after Java has been declared "dead" or legacy.

As noted, these are just my first impressions of the frameworks, and I might be proven wrong or have misunderstood some parts of them, so take these words with a grain of salt.
There are also a lot of alternatives that I haven't been able to try out extensively yet, for example JBoss ESB, ServiceMix and Spring Integration 2.0, which all look promising. So many toys, so little time...

See you soon!

onsdag 17 november 2010

Climbing the Camel

I remember my first ride on a live camel during one of our family vacations half a lifetime ago. It was quite a challenging task just to get up on the beast, and even hairier when this huge animal started to move. Rocking back and forth, it eventually managed to instill some sense of trust as it swayed along with me sitting high atop its back. After the first terror waned, the experience was actually simply a blast!

Well, recently I had the chance to climb onto the camel's back again, but now in another context. This camel was maybe even scarier, as I had never seen anything quite like it and was at first startled by its complexity. But again, once it started moving I understood the pure beauty of the beast and felt safe and secure again.

I'm of course talking about Apache Camel, the lightweight OSS framework for designing and executing integration flows. It takes a pragmatic approach to integration and has a very clean and efficient way of defining your integration logic between separate endpoints.
Camel is all about lightweight integration, and even though it does not qualify as a complete ESB, since it misses some important parts such as manageability and hot deployment of your solutions, it certainly has features that put it right into the enterprise integration space.

First of all, one has to understand that Camel is not an integration product per se, but rather an API for defining integration patterns. It does not execute by itself but should preferably be bundled with some existing hosting environment, e.g. JBI, Spring applications, your messaging engine, an application server or what not.

Setting up a Camel project is easy, since you can start out from the Maven archetypes found on the Camel website. These will download all dependencies and create your Spring context for you. Camel does not force you to use either Maven or Spring, but many of the samples and tutorials on the net (and the documentation) assume that you're already an experienced Maven/Spring user. After passing these first hurdles (which mostly had to do with setting up Maven correctly) I was able to try out my first flows.

You have several options for defining your integration logic. The two major ones are either Spring XML files, similar to e.g. Mule ESB, or a fluent Java API. There are also options for Scala and Groovy if that's your fashion. No matter which you choose, the flow executes the same way, as the design languages are only a means to model your integrations.

This separation is quite brilliant, as it allows you to select the technology you are most proficient with, and the runtime will adapt. It also means that cross-cutting functions such as automatic documentation are supported no matter which language you choose.

Our first flow

So, enough babbling. Lets have a look at an example:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class ExampleRouteBuilder extends RouteBuilder {

    // A main() so we can easily run these routing rules in our IDE
    public static void main(String... args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new ExampleRouteBuilder());
        main.run();
    }

    // Lets configure the Camel routing rules using Java code...
    public void configure() {
        // competing consumers, wiretap, splitter and content-based router
        // (the routing condition and target queues are made-up examples)
        from("jms:queue:purchaseOrder?concurrentConsumers=5")
            .wireTap("log:purchaseOrders")
            .split(xpath("/purchaseOrder/item"))
            .choice()
                .when(xpath("/item/@express = 'true'")).to("jms:queue:expressOrders")
                .otherwise().to("jms:queue:normalOrders");
    }
}

So, what is going on here?
Well, the first thing you'll notice is that we're extending the RouteBuilder abstract class and overriding its configure() method. RouteBuilders are classes that configure your integration flows, or "routes" in Camel parlance. The route itself starts with the from() method call and is described by chaining methods together.

Regular code-completion assist then shows you which types of "processors" you can apply to the route. A processor is basically a component that does something with a message exchange in a pipe-and-filter architecture. Camel comes bundled with processors for most of the EIP patterns, such as the split used in the example above, but you can of course add extra processors by implementing the org.apache.camel.Processor interface or by sending the message to a Spring bean method.
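The pipe-and-filter idea behind processors can be illustrated without Camel at all. The sketch below is a toy, framework-free version: the Processor interface only mirrors the spirit of org.apache.camel.Processor, and all other names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy pipe-and-filter: each processor receives the exchange, may change
// its body, and passes it on to the next step in the route.
public class PipeAndFilterDemo {

    // Minimal stand-in for a message exchange carrying a mutable body.
    static class Exchange {
        String body;
        Exchange(String body) { this.body = body; }
    }

    interface Processor {
        void process(Exchange exchange);
    }

    // A "route" is simply an ordered list of processors applied in turn.
    static class Route {
        private final List<Processor> steps = new ArrayList<>();
        Route pipe(Processor p) { steps.add(p); return this; }
        Exchange run(Exchange e) {
            for (Processor p : steps) p.process(e);
            return e;
        }
    }

    public static void main(String[] args) {
        Route route = new Route()
            .pipe(e -> e.body = e.body.trim())          // cleanup filter
            .pipe(e -> e.body = e.body.toUpperCase());  // transform filter
        System.out.println(route.run(new Exchange("  hello ")).body); // HELLO
    }
}
```

Camel's real processors work on a much richer exchange (headers, properties, in/out messages), but the chaining principle is the same.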

As you can see, we've managed to include at least four EIP patterns in just a couple of lines (technically just one Java statement, but you get the idea...) of code, namely competing consumers, wiretap (the log endpoint), splitter and content-based router. Now that's impressive!

The same flow could, as stated above, be modeled in XML in a Spring-like fashion by just including the Camel namespace and using content assist in your favourite XML editor, with the exact same outcome, so it's up to you to decide which method suits you best.


Camel has the concept of addressing endpoints as URIs, with the endpoint type first, followed by the address and optional parameters in a query format. In our example we're instructing the route to use 5 concurrent consumers to get messages from the purchaseOrder JMS queue.

The API relies on third-party providers to implement the endpoint factories, called components, so for our JMS provider we could bind the jms transport to a component realised by e.g. Websphere MQ. This can be done either by declaring the component in code or, as below, as a Spring bean in your context configuration.

<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
  <property name="connectionFactory">
    <bean class="com.ibm.mq.jms.MQXAQueueConnectionFactory">
      <property name="hostName" value="localhost"/>
      <property name="queueManager" value="QM_ESB"/>
    </bean>
  </property>
</bean>

This snippet declares that the MQXAQueueConnectionFactory should be the queue connection factory for our JMS endpoints. Just make sure that the application user has access rights to the queue manager, e.g. by including the user in the mqm group or some other group with connect/read access.

You can also define your endpoints as beans in the Spring XML file and just reference them by name, making the actual endpoints interchangeable depending on e.g. your different test environment setups.

So you could define your endpoints in a Camel context xml as:

<camel:endpoint id="PurchaseOrders.IN" uri="jms:queue:purchaseOrder?concurrentConsumers=5"/>

.. and then just reference the endpoint in your route as:

from("ref:PurchaseOrders.IN")
This gives you great flexibility in choosing the actual endpoint location, and even the transport, outside the actual flow.

Other cool features that Camel brings include:

- Automatic type conversion:
Note how we could just use an XPath expression even though we did not know the exact inbound object format? Camel automatically converts between a bunch of standard formats, so you don't need to know whether the data is delivered as e.g. a byte array, string, JAXB objects, XML stream etc.
If you don't find support for your particular format, you can always write your own converters and easily plug them into the runtime.

- A bunch of transport components out of the box:
We only saw two very common components in the example, but have a look at the components list and you'll find that even the most obscure transport options are available.

- Bean support, e.g. for defining a Spring bean as a service.
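The automatic type conversion mentioned above boils down to a registry of converters keyed by (from, to) class pairs. Here's a framework-free sketch of that idea; the class and method names are hypothetical and much simpler than Camel's real converter mechanism:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy converter registry: converters are registered per (from, to) class
// pair and looked up at runtime, so calling code never has to care about
// the inbound payload's concrete type.
public class ConverterRegistry {

    private final Map<String, Function<Object, Object>> converters = new HashMap<>();

    public <F, T> void register(Class<F> from, Class<T> to, Function<F, T> fn) {
        converters.put(from.getName() + "->" + to.getName(),
                       o -> fn.apply(from.cast(o)));
    }

    @SuppressWarnings("unchecked")
    public <T> T convert(Object value, Class<T> to) {
        if (to.isInstance(value)) return to.cast(value); // nothing to do
        Function<Object, Object> fn =
                converters.get(value.getClass().getName() + "->" + to.getName());
        if (fn == null) throw new IllegalArgumentException("no converter");
        return (T) fn.apply(value);
    }

    public static void main(String[] args) {
        ConverterRegistry registry = new ConverterRegistry();
        // Plug in a converter, just as you would add a custom one to the runtime.
        registry.register(byte[].class, String.class, String::new);
        byte[] payload = "<order/>".getBytes();
        System.out.println(registry.convert(payload, String.class)); // <order/>
    }
}
```

Camel additionally discovers converters on the classpath and chains them, but the lookup-by-type-pair principle is the same.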

So, this was just a quick intro to the capabilities of Camel. I advise you to take a look at the many articles written about it and get a feeling for this awesome framework on the Apache Camel website.

'til next time,

Over and out!

tisdag 2 november 2010

First look at WSO2 ESB

Earlier this year I promised myself I would brush up on the open source offerings for ESBs and BPM products out in the field.

As you may have noticed, I took a first step into Mule ESB recently, a product that has clearly surprised me, even though it is quite cumbersome to work with, since it requires some fiddling around with XML files and a quite comprehensive stack of XML schemas. I guess it all becomes second nature once you get the hang of it.

Well, today I downloaded WSO2 ESB for the first time and my initial impression was just "WOW!". I knew that the guys at WSO2 are experts in their field, but for being an open source offering this product just instantly blew me away.

Right out of the box you have a solution with features such as Web management, tracing and statistics, pattern catalogs, registry, EIP support for advanced features such as splitting and aggregation, transformation options (XSLT, XQuery, Smooks, Custom java/ruby just to name a few) as well as the option to add in extra transport adapters as you see fit. And so the list goes on...

I've not yet scoured the full transport offerings, but it seems to me that Mule certainly has the upper hand when it comes to out-of-the-box transports. But WSO2 wins my heart any day when it comes to manageability and ease of use.

I also noticed that WSO2 has its own IDE plugins for Eclipse, which is always handy. I'm currently downloading them and hope to see whether they ease the integration configuration even more.

I'll also take a quick look into the BPEL editor and their runtime to see what's in store in that area as well.

So, have you used WSO2 ESB or any other Carbon products? If so, how do you compare it to other OSS offerings such as Mule, JBoss, ServiceMix, Camel etc.? I'd be interested to hear your thoughts on the stack, since it doesn't matter how many bells and whistles it brings if it hasn't been proven in production.

söndag 24 oktober 2010

First bump in the SCA investigation

During evaluations and investigations you sometimes come to a point where you hit the wall and find an issue that leaves a big question mark in your head. Suffice to say, I just hit one of those moments...

If you've followed my previous posts you know that I've recently started to look into how the SCA Tuscany runtime and composition model could be utilized for building transport/implementation neutral services at my client.

The existing strategy is based on building webservices with JAX-WS, Java's standard webservice framework. One great feature of any WS framework is the ability to define message interceptors/handlers that are invoked before the message reaches the actual service. These handlers have access to the entire SOAP message, including the SOAP headers. So you could e.g. define a company-wide custom SOAP header with metadata which should be acted upon by each and every service before (or even after) invocation.

This takes care of non-functional requirements such as traceability, security, end-to-end header propagation etc, since those aspects of the call can be separated from the actual business service.

Looking at the SCA spec and what the tooling provides, I have yet to find a similar function in the SCA world. Even if you e.g. expose a component with a webservice binding, I can't find any way to specify a chain of handlers that should be invoked for incoming requests.

Looking into the SCA 1.1 specification, it states that JAX-WS handlers are not included in the standard but should/could be provided by vendor implementations. From what I can see, IBM has not chosen to do so in the current version of the SCA feature pack runtime or tooling.
This seems a bit strange to me, since I know that the BPM (0.9 SCA) products have had support for adding handlers to a webservice export for a long time.

I'm definitely going to dig into the subject more, since it could prove to be a big no-go for us, as we would otherwise have to revise our entire call-chain strategy for e.g. composite services.

To you readers out there: have you had experience with interceptors/handlers in an SCA context? I might be missing some obvious feature that you know about. If so, I'd be very grateful for any advice you can provide.

Over and out...

onsdag 20 oktober 2010

Combining OpenSCA (WAS FP) and Classic SCA (WESB), continued

In my previous post I mentioned that I'm currently assigned to test the interoperability between an SCA component developed in RSA and the IBM BPM products' version of SCA.

I should perhaps start with a little background on the subject of SCA in the IBM products to give some insight into the need for the test.

The history of SCA (from an IBM perspective)

IBM was a major influencer during the standardization of SCA and started to incorporate the framework as an early adopter before the standard was finalized. This version was called 0.9 and is the framework that the Process Server and ESB is built upon.

Unfortunately (for IBM at least) the standardization body did not agree on some of the naming conventions and functionalities that IBM had specified in their draft version. Therefore there are some subtle differences between the 1.0 version and IBM's 0.9 version. The architecture as a whole is fairly similar, but some implementation specifics differ.

IBM has since released a feature pack for its application server (which both WPS and WESB are deployed on) that implements the SCA 1.0 standard.
It is (at least to my knowledge) based on the Apache Tuscany SCA runtime and has nice tooling support in Rational Application Developer.

The idea behind SCA is to compose your applications from separate building blocks, each with a defined interface. This makes it possible to separate the bindings and the underlying implementation as well as the data representation for each component. As long as you adhere to the interface you can connect to the component through its exposed physical bindings.
This makes it possible to e.g. expose a Java service over JMS, WS, local Java calls etc. by configuration rather than implementation. It also separates non-functional requirements, such as quality of service and security to name a few, from the actual implementation. Think of SCA as webservices on steroids and you get the picture..

SCA is also a binding mechanism, which allows intra-JVM calls to a service through pass-by-reference rather than the serialization/deserialization normally required for e.g. a WS call.
This has the benefit that if you'd like to call an SCA-composed service in the same JVM, you can call it directly without passing over a physical transport. At the same time, if your service should be reachable from the outside, all you have to do is define a transport binding as well, e.g. SOAP/HTTP or SOAP/JMS.
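The difference between an intra-JVM pass-by-reference call and a transport-level call can be shown in a few lines of plain Java. The sketch below simulates the "remote" hop with Java serialization; the class names are illustrative only, not SCA API:

```java
import java.io.*;

// An in-JVM call can hand over the very same object (pass-by-reference),
// while a transport-level call serializes the message so the receiver
// works on a copy.
public class PassByReferenceDemo {

    static class Message implements Serializable {
        String body;
        Message(String body) { this.body = body; }
    }

    // Simulates a remote hop by pushing the object through serialization.
    static Message overTransport(Message m) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(m);
        out.flush();
        ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
        return (Message) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Message original = new Message("hello");

        Message local = original;                  // intra-JVM "SCA binding" call
        Message remote = overTransport(original);  // e.g. a SOAP/HTTP style call

        System.out.println(local == original);   // true  - same instance
        System.out.println(remote == original);  // false - a copy was made
        System.out.println(remote.body);         // hello - same content
    }
}
```

This is also why pass-by-reference calls tend to be faster: no bytes are copied, marshalled or parsed along the way.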

So, back to the task...

I'm currently investigating whether it is possible (and beneficial) to call a 1.0 SCA component from the 0.9 version provided in the WPS stack of products through an internal SCA binding. This could lead my client to bundle their services into SCA components to ease future integration with these types of products when they are required by business projects.

Building the service component
I started by defining a simple HelloWorld echo service in the SCA tooling in RAD. The service is the simplest possible: it accepts a string, e.g. a name, and responds with "Hello " plus the input string.

Working top to bottom, I set up an SCA project, a composite and a component implementing the service. The service specifies the WSDL as its interface.
After implementing the service I chose to add two bindings: one webservice binding for calls simulating an external client, and one SCA binding for intra-JVM calls. Note the names of the bindings in the picture below.

After deploying this contribution to the Process Server, the service is reachable as a webservice on the endpoint http://localhost:9080/hellocomponent/WS, where "hellocomponent" in the URI is the component and "WS" is the binding name.

Building the client in WID
I then created a simple mediation in WID with an SCA import pointing to the SCA binding. The interface for the import is the original WSDL, and the binding names are shown below. Note that "Module name" specifies which component to call, while "Export name" is the name of the binding.
I also created a proxy interface for this service and a mediation to simulate a typical transformation scenario in WID.

The result is that the hellocomponent can be reached both through an SCA binding from Websphere ESB/Process Server and through a webservice call.

I'll continue to dig deeper into the interoperability and will try to see e.g. if the internal binding is indeed faster than having the Process Server call the component as a webservice (which is of course also doable). I'll also try to fiddle around with specifying qualifiers (security, transaction scopes etc.) and set up a reverse scenario where OSCA calls CSCA.

'til next time.

Over and out...

måndag 18 oktober 2010

Combining OpenSCA with IBM Classic SCA (WPS/WESB)

My current client needs to perform a proof of concept on the interoperability between the old style of SCA used by the Websphere BPM products (Process Server and Websphere ESB) and the standard SCA 1.0 that the WAS feature pack for SCA implements.

So far my task is to package one of their existing services as part of an SCA contribution and verify that it is accessible from a long-running process inside the process server.

The first thing that hit me is that WID doesn't seem to include the SCA tooling plugin that RAD offers, so for the moment it seems I'll have to develop the integration with two IDEs: RAD for the Java/SCA part and WID for the WPS parts. Hmm, I'm just wondering how my relatively weak workstation will cope with that.. :-D

Anyway, for guidance I'll be looking at the excellent series of articles on IBM developerWorks.

I'll keep you updated on my findings as I get along, but it is indeed an interesting subject.

Over and out...

lördag 16 oktober 2010

Hunting for the best XML editor for Mule ESB

Recently I've been hunting for a good XML editor to use for authoring Mule ESB configuration files.
Mule is an open source lightweight ESB that is quite simple to work with, but since configuring the flows is all XML-based, it can be challenging to learn the ropes in the beginning.

After starting out with the default eclipse xml editor I noticed that it doesn't cope very well with the complex namespace structures that the configuration files require.

The solution?

After trying out a couple of editors I finally found the perfect match in the oXygen XML editor, whose autocompletion handles Mule ESB's namespaces perfectly. It also exists as an Eclipse plugin, making it a good fit together with the Mule IDE (which is Eclipse-based).
The downside? It's not free... So I'll be trying it out a little longer to see whether the product warrants a purchase.

So, do you have an alternative XML editor you can recommend, preferably a free/open source one? Give me your thoughts, especially if you've been using it in conjunction with Mule.

Over and out...

fredag 15 oktober 2010

First meeting with my mentor

Today I had my first meeting with my appointed mentor, a senior and well-known technical consultant in the Microsoft world. We had some really interesting discussions regarding career goals, future directions and differences between Microsoft and IBM, just to name a few topics.

One piece of advice he gave me was to start blogging more, in order to keep an online notebook for myself as well as to get in touch with other technology geeks like me.

So here it is! My first blog post of what I hope will be a continuous series of posts on technical stuff that I come across at work.

I'll be writing about integration and SOA as well as pure development, mostly Java-based but hopefully I'll manage to fit some Scala in as well since that is a newfound hobby for me.

'til next time.. Over and out!