Rule and event-driven business process M&A

On the heels of IBM’s acquisition of Lombardi comes Progress Software’s acquisition of Savvion.  The salient similarities are that IBM is adding BPM applications to its middleware stack as is Progress, at least with regard to its enterprise service bus offerings.  More interesting is the relationship between Progress’ complex event processing software and Savvion’s BPM.  Also of note is the vendor-provided integration of JBOSS Rules within Savvion versus the unrealized potential of IBM’s Ilog with respect to Lombardi.

We’ve written several times about the artificial distinction between CEP and BPM, their inevitable convergence, and the immature integration of business rules with business process management and event processing that inhibits knowledge-driven governance and decisioning.

We predict that IBM will soon abandon its hair-splitting between business event processing and complex event processing and make a deeper move in the CEP space than its partnerships with Coral8/Aleri and Streambase (we don’t see IBM’s acquisition of AptSoft as addressing the need).  It will also be interesting to see how closely IBM integrates Ilog with its BPM middleware and the Lombardi applications.  Unfortunately, Progress is unlikely to pressure IBM or advance the knowledge-driven enterprise to the extent that a move by Oracle, SAP, or TIBCO could.  And although each has some position with regard to rules, conversations with Oracle, SAP, and TIBCO continue to indicate incremental approaches without any bold vision for knowledge-driven enterprises; IBM owns the vision and thought leadership there.

To date, IBM and SAP have similar messaging for their primary rule-based tools with respect to their middleware platforms.  That is, Ilog is positioned with respect to WebSphere as YASU is positioned with respect to NetWeaver.  Of course, Ilog is the stronger product.  IBM also has better marketing, both for WebSphere versus NetWeaver and for Ilog versus SAP’s now nondescript rules engine.  To put it another way, SAP almost ignores policy and decision management.

It is worth noting that SAP uses a second-generation decision-oriented scripting language called “Business Rules Framework” within many of its applications.  SAP is working on relating this pervasively applied, internally developed, procedural tool to its acquired rules engine.  I don’t see much to indicate that SAP will take an effective position versus IBM or Oracle in middleware decisioning, however.  Still, they seem aware of the need to formulate a strategy.  Perhaps we’ll see something in 2010.

Oracle clearly lags IBM in middleware decisioning and suffers from a convergence challenge among its Haley, Ruleburst, and JESS rules engines that are used in its CRM, public sector, and middleware offerings.  However, Oracle is well ahead of SAP in its decisioning capabilities, both within its applications and in its next generation of Fusion and Fusion-based applications.

But IBM, Oracle and SAP seem out of the CEP picture.  So now we have Progress promising the most robust platform for events, processes and rules.  Unfortunately, Progress/Savvion will not be as accessible as IBM or Oracle offerings.  That is, IBM and Oracle, by way of their acquisitions of Ilog and Haley, are much better suited for policy and decision management.

We have written previously that robust business process management must address event processing and that event-driven business processes tend to involve many fairly simple processes triggered and orchestrated by events and rules.  Tools that force users to map policies and rules into flow charts (including almost all CEP and BPM platforms) fail to raise business management from the procedural to the knowledge-driven enterprise.

The CEP vendors, including Progress, TIBCO, Coral8/Aleri, and Streambase, need to consider the analyst-accessible, linguistically oriented approaches of IBM and Oracle in order to cross the same chasm that business rules vendors crossed last decade.  The same remains true for BPM vendors, although it is nice to see that Progress has selected one that takes rule integration seriously.  IBM is clearly on the move and Oracle is best positioned to respond, but they lack event capabilities, both in their platform/stack and in their knowledge management.

Of the CEP vendors, TIBCO is best positioned because it has rules and BPM capabilities now.  It will be interesting to see if Coral8/Aleri or Streambase make any moves towards accessibility and business process management in 2010.  IBM is clearly dictating that this is the game this decade.

14 Replies to “Rule and event-driven business process M&A”

  1. I agree, JBOSS is doing a great job in this space. They won’t “make it happen”, though. Still, the fact that a user-driven community is headed in this direction may be taken as confirmation of things to come elsewhere. I have tended to view JBOSS as a fast follower, but it follows on an ever-broadening front that is taking it towards technical leadership, even if others may have superior commercial offerings along a subset of its capabilities. To put it another way, it’s the jack of all trades. IBM and Oracle look to be the kings of their markets, which comprise more valuable customers, on average. Between them, IBM demonstrates more strategic vision. I should mention that Microsoft has made some moves that indicate longer-term promise there, too, but for now they are out of the picture.

  2. The idea of integrating everything into one engine is foolish. Instead, there should be specialized engines for each type of processing, with a well-defined integration API between the engines. From an architecture and design perspective, it’s not desirable to mix purely procedural rules into an inference engine. It can be done and in some cases may be preferred, but as a general development practice that’s bad in my opinion. Mixing high-volume stream processing into an inference engine is also a bad idea and not technically sound.

    Finding the right approach for integrating BPM with stream processing and inference rules is challenging. It’s going to take several years of dedicated research to figure out a good general-purpose approach, and then a few more years to productize it.
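The “specialized engines behind one well-defined integration API” idea in the comment above can be sketched roughly as follows. This is a toy illustration under my own assumptions, not any vendor’s actual API: all class, enum, and method names here are invented.

```java
import java.util.EnumMap;
import java.util.Map;

// Toy sketch: one facade routes each definition to a dedicated engine,
// rather than forcing inference, process, and stream execution into a
// single engine. All names are illustrative, not a real product's API.
public class EngineRouter {
    enum Kind { INFERENCE_RULE, PROCESS, EVENT_STREAM }

    // The integration API each specialized engine implements.
    interface Engine { String execute(String definition); }

    private final Map<Kind, Engine> engines = new EnumMap<>(Kind.class);

    public void register(Kind kind, Engine engine) {
        engines.put(kind, engine);
    }

    // Callers submit a definition plus its kind; the router delegates
    // to the specialized engine registered for that kind.
    public String submit(Kind kind, String definition) {
        Engine e = engines.get(kind);
        if (e == null) throw new IllegalArgumentException("no engine for " + kind);
        return e.execute(definition);
    }
}
```

The point of the sketch is that each engine keeps its own execution semantics; only the thin routing contract is shared.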

  3. Interesting take. I assume procedural rules are “nothing more” than process…

    Here I am concerned with knowledge acquisition and management more than with how many engines it takes to automate such knowledge. Furthermore, events and processes are not separable – semantically or pragmatically – despite current market fragmentation of event processing and process management. As for high-volume, low-level event-stream processing that produces higher-level business events, I am inclined to agree with you, but the line is murky even there. You are arguing with TIBCO and JBOSS more than with me on this point. The issue I am concerned with can be viewed as automatic programming of underlying technology driven by knowledge managed at the human level (albeit – hopefully? – not necessarily by programmers). Today, the streaming engines are where rules were over a decade ago in terms of accessibility (worse in terms of agility). The challenges have more to do with the semantics of events, processes, and time than with research issues in information-processing algorithms. Once those semantics are clarified, the line between event streams and business events will effectively vanish.

  4. Hi Paul,

    By engine, I was thinking at the lowest levels. At the highest level I agree with you. I’ve been trying to get a clear definition of what differentiates event streams from business events. From the variety of definitions I see out there, there appear to be conflicting ideas in the CEP/ESP space. Even if we clarify the semantics of what those terms mean and the line between them vanishes, execution may need to be in separate engines for some use cases. If a business process involves 500,000 facts a second, the engine still needs to manage that effectively and avoid an OutOfMemoryException.

    I’ve been working on a stream-matching algorithm modeled after short-term memory. It’s a derivative of RETE and focuses on managing the working memory effectively. The basic idea is that the engine tries to forget data as soon as possible based on rule metadata and ruleset analysis. Right now I’m using a temporal distance calculation to automate working-memory garbage collection.

    I agree events and process aren’t really separable. From what I can tell, none of the engines today handle the entire spectrum of business requirements as a holistic problem, which leads to the fragmentation you describe. I like the promise of knowledge management. I’m sure you’ve seen this too, but the times I’ve tried the knowledge approach, the developers and business people got confused. Working with Said Tabet over the years, I saw that a few times. I’m enjoying the discussion.
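The temporal working-memory idea the commenter describes can be sketched as a toy. This is my own minimal illustration of the general concept (not the commenter’s actual RETE derivative): facts carry timestamps, and anything older than the widest temporal window the ruleset could still match is forgotten on insert.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sliding-window working memory: facts older than the largest
// temporal window any rule declares are dropped automatically, so
// memory stays bounded under a high-volume stream.
public class TemporalMemory {
    static class Fact {
        final long timestamp;
        final String data;
        Fact(long timestamp, String data) {
            this.timestamp = timestamp;
            this.data = data;
        }
    }

    private final Deque<Fact> memory = new ArrayDeque<>();
    // In a real engine this bound would come from ruleset analysis.
    private final long maxWindowMillis;

    public TemporalMemory(long maxWindowMillis) {
        this.maxWindowMillis = maxWindowMillis;
    }

    // Insert a fact, then "forget" every fact no rule could still match.
    public void insert(Fact f) {
        memory.addLast(f);
        long horizon = f.timestamp - maxWindowMillis;
        while (!memory.isEmpty() && memory.peekFirst().timestamp < horizon) {
            memory.removeFirst();
        }
    }

    public int size() { return memory.size(); }
}
```

With a 1-second window, inserting facts at t=0, t=500, and t=2000 leaves only the last one resident, since the first two fall outside any window a rule could still join against.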

  5. I’ve been thinking about this for a while. Assuming the knowledge base has sufficient details about the type of process, the software should be able to choose the most appropriate runtime execution. From what I understand of TIBCO’s approach, it “appears” they have several different execution engines but try to unify them through the authoring tools. Plugtree’s comment about jboss making it all one engine sounds foolish to me. The execution semantics of each type of engine will vary drastically, so it doesn’t make sense to me to make one gigantic API.

    Clarifying the semantics of events, processes and inference rules is a noble challenge. I hope the goal is reached one day.

  6. If we damn JBOSS we do the same to TIBCO. I’m OK with them for lower bandwidth CEP, where the events are close to if not at the business level, rather than high-volume, low-latency algorithmic trading, for example.

    With regard to the semantics of time, events, state, and process: it will be this decade, probably within just a few years. You can quote me!

  7. “Plugtree’s comment about jboss making it all one engine sounds foolish to me. ”

    It is not all one engine, but a facade to hide away the execution details.

    The KnowledgeBase consists of 0..n definitions, be they rules or processes. The appropriate definitions are built and executed on the appropriate engine. We do not execute processes on top of a rule engine; there is a separate, more efficient engine for that. What we do is make the different engines very aware of each other and able to interact with each other.

    There is, however, a unified and generic API. A mixin approach with marker interfaces is used to demarcate the role of each interface to the user. So while the user interacts with the KnowledgeRuntime interface, that interface actually extends both the ProcessRuntime and the WorkingMemory interfaces. This makes API exploration and learning easier.
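The mixin approach the commenter describes can be sketched in Java. The interface names below follow the comment; all method signatures and the implementation are my own illustrative assumptions, not the real JBOSS API.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Rule-engine role of the facade (methods are illustrative).
interface WorkingMemory {
    void insertFact(Object fact);
    int factCount();
}

// Process-engine role of the facade (methods are illustrative).
interface ProcessRuntime {
    void startProcess(String processId);
    int activeProcesses();
}

// The unified runtime is a mixin of the two roles: users code against
// one interface while execution can be delegated to separate engines.
interface KnowledgeRuntime extends WorkingMemory, ProcessRuntime {}

class SimpleKnowledgeRuntime implements KnowledgeRuntime {
    private final List<Object> facts = new ArrayList<>();
    private final Set<String> processes = new HashSet<>();

    @Override public void insertFact(Object fact) { facts.add(fact); }
    @Override public int factCount() { return facts.size(); }
    @Override public void startProcess(String id) { processes.add(id); }
    @Override public int activeProcesses() { return processes.size(); }
}
```

The design point is that each role interface stays small and engine-specific, while the extending interface gives users one entry point to explore.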


  8. Thanks for clarifying Plugtree’s misunderstanding. I assume Plugtree is misrepresenting jboss, since his statements weren’t detailed or accurate. It makes sense to have a unified set of tools that make the transition “appear” seamless, without making one gigantic API that becomes ugly and unmanageable.

    To Paul,
    I hope you’re right. From what I see of the CEP engines, I have doubts this will happen within the next 5 years; 10 might be possible under the right conditions. From a BRMS perspective, I haven’t seen much progress from the vendors. I’ve mainly focused on the execution side and haven’t tackled the authoring side, since I lack the breadth and depth to really know how to unify events + production rules + process rules + BI rules + ECA + knowledge-base and language semantics. Aside from you, the RuleML founders, Dr. Forgy, and a few of the old timers with 20 years of experience, no one else has the experience to solve the problem.

    To me, the biggest issue with unifying the semantics is that no single person has the experience required, and working on a unified approach takes 10x longer with dozens of people. Even then, most of the people are so busy with other stuff that progress is slow. I’ve been working with Said Tabet for several years now on RuleML-related stuff, and the progress is far slower than I ever expected.

  9. Hi Paul, when you say “semantics” are you thinking of a unified rule language or something different? I realized this morning I stupidly and probably mistakenly assumed you are referring to a unified language.

    It occurred to me that you could be referring to a knowledge-base-centric approach like Haley Authority. Which were you thinking of?



  10. Peter, when I use the word “semantics” I mean “meaning” as in “knowledge” and “epistemology”. I did not mean anything at the technical or implementation level.

  11. Thanks for clarifying. The times I’ve tried to build a knowledge base for business apps, I found it quite challenging. The first obstacle was explaining to others what it is and how we should go about building one. Oftentimes, even after the knowledge base is built, people scratch their heads and ask, “What do we do with it now?”

    What I’ve done in the past with Said was to model the concepts in the object model. We then modeled some of the other concepts in a business rule language. Even then, we weren’t able to capture everything in the knowledge base. The cost of modeling everything in the knowledge base versus the long-term benefit just wasn’t there. One fear many people have with a knowledge base is “who is going to maintain it?” after the rule consultants leave. Building this stuff is hard, and finding “affordable” staff to maintain and use it is quite tough.

  12. Eventually we will get computers to do what we want by educating them rather than programming them. For now, there are few pieces of software that approach this vision, even within the limited domain of static decision making, as in the current business rules / decision management market. So, in practice, I am agreeing with you. However, Authority, now owned by Oracle, is a quantum leap above other systems in managing knowledge independently of the implementing technology. Another system that is getting there for a similar domain is Attempto Controlled English (ACE), although it is fairly limited in its understanding of words and in its avoidance of, or ignorance concerning, ambiguity. But neither of these gets beyond the province of static, tense-less, modal-free business logic for point decision making.

    There has been a great deal of progress in natural language since we built Authority. The progress on semantics has also been significant, at least technically. Getting usable ontologies that cover enough knowledge in a way that is useful across enterprises will probably require market leadership (e.g., a visionary) or a breakthrough in collaboration rather than a standards effort.

  13. I think you’re right. A “breakthrough in collaboration” is more likely to produce a huge improvement than a standardization effort. Having followed the W3C’s rule efforts over the years, they are quite frustrating and unproductive. I looked at Attempto Controlled English a few years back and found it quite limiting.

    I got to play with Haley Authority a bit a few years back when I evaluated it for Aetna. Haley Authority’s ontology approach is far beyond any of the current ontology tools on the market. Compared to Protege and TopQuadrant, my opinion is that Authority is still far more advanced and better designed.

    A few years back, Said and I looked into using various NLP tools for business applications and found they weren’t cost effective. To really make them work, one would need a large corpus of training data to produce accurate results. In many cases that just wasn’t practical. I’ve tried to stay up to date on current NLP research, but practical NLP for business rules still feels like a long-term potential. I hope one day it becomes common practice.
