I can hardly believe posts like this one in Charlotte requiring six years or more of IT experience. Two weeks ago I talked with a recruiter looking for consult-to-hire candidates with significant ILOG experience at under $100/hr in the DC area. Is this what happens when you cross the chasm? I guess it never hurts to ask!
Dave Mark’s post on Why Not More Simulation in Game AI? and the comments it elicited are right on the money about the correlation between lifespan and intelligence of supposedly intelligent adversaries in first person shooter (FPS) games. It is extremely refreshing to hear advanced gamers agreeing that more intelligent, longer-lived characters would keep a game more interesting and engaging than current FPS titles do. This is exactly consistent with my experience with one of my employers, which delivers intelligent agents for the military. The military calls them “computer generated forces” (CGFs). The idea is that these things need to be smart and human enough to constitute a meaningful adversary for training purposes (i.e., “serious games”). Our agents fly fixed wing and rotary wing aircraft or animate special operations forces (SOFs) on the ground. (They even talk – with humans – over the radio. I love that part. It makes them seem so human.) Continue reading “Real AI for Games”
Externalizing enterprise decision management using a service-oriented architecture orchestrated by business process management increases agility and allows continuous performance improvement, but…
How do you implement the rules of EDM in an SOA decision service? Continue reading “Agile decision services without XML details”
I hope those interested in artificial intelligence enjoy the following paper. I wrote it while Chief Scientist of Inference Corporation. It was published in the proceedings of the International Joint Conference on Artificial Intelligence over twenty years ago.
The bottom line remains:
- intelligence requires logical inference and, more specifically, deduction
- deduction is not practical without a means of subgoaling and backward chaining
- subgoaling using additional rules to assert goals or other explicit approaches is impractical
- backward chaining using a data-driven rules engine requires automatic generation of declarative goals
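To make the third and fourth bullets concrete, here is a minimal sketch (in Python, purely illustrative, and not how ART works) of the explicit-subgoaling approach: extra machinery in a forward-chaining loop that pursues a rule only when a goal fact for its conclusion exists, and asserts goal facts for unmet premises. Doing this by hand, rule by rule, is exactly what the paper argues is impractical; the point of ART’s approach was to generate such goals automatically.

```python
# Illustrative sketch only: emulating backward chaining in a
# forward-chaining loop via explicit ("goal", ...) facts.
# Each rule is (premises, conclusion); a rule is pursued only when
# its conclusion is an active goal, and it subgoals unmet premises.

def run(facts, rules):
    """Fire rules to a fixed point, returning the final fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Pursue a rule only if its conclusion is sought and unproven.
            if ("goal", conclusion) not in facts or conclusion in facts:
                continue
            missing = [p for p in premises if p not in facts]
            if missing:
                # Subgoal each unmet premise by asserting a goal fact.
                for p in missing:
                    if ("goal", p) not in facts:
                        facts.add(("goal", p))
                        changed = True
            else:
                facts.add(conclusion)  # all premises hold: deduce
                changed = True
    return facts

rules = [
    (("b",), "a"),       # a :- b
    (("c", "d"), "b"),   # b :- c, d
]
facts = run({"c", "d", ("goal", "a")}, rules)
assert "a" in facts and "b" in facts
```

Note how seeking “a” cascades into seeking “b”; with many rules and shared premises, maintaining these goal-assertion patterns by hand quickly becomes unmanageable, which is why automatic goal generation matters.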
We implemented this in Inference Corporation’s Automated Reasoning Tool (ART) in 1984. And we implemented it again at Haley a long time ago, years before Java, in a rules language we called “Eclipse”.
Regrettably, to the best of my knowledge, ART is no longer available from Inference spin-off Brightware or its further spin-off, Mindbox. To the best of my knowledge, no other business rules engine or Rete Algorithm implementation automatically subgoals, including CLIPS, JESS, TIBCO BusinessEvents (see above), Fair Isaac’s Blaze Advisor, and ILOG Rules/JRules. After reading the paper, you may understand that the resulting lack of robust logical reasoning capabilities is one of the reasons that business rules technology has not matured into a robust knowledge management capability, as discussed elsewhere in this blog. Continue reading “Goals and backward chaining using the Rete Algorithm”
The ART syntax lives on in yet another product!
JBoss Rules (formerly Drools) just described its imminent support for rules expressed in the CLIPS syntax here.
NASA derived CLIPS from the syntax of Inference Corporation’s Automated Reasoning Tool (ART) in the mid-80s. I designed and implemented the ART syntax with Chuck Williams on a team with Brad Allen and Mark Wright.
CLIPS didn’t have many of the features of ART (including an ATMS or backward chaining, for example), but it Continue reading “Haley / ART syntax lives on in open-source Java rules”
I am working on some tutorial material for business analysts tasked with eliciting and harvesting rules using some commercial business rules management systems (BRMS). The knowledgeable consumers of this material intuitively agree that capturing business rules should be performed by business analysts who also capture requirements. They understand that the clarity of rules is just as critical to successful application of BRMS as the clarity of requirements is to “whirlpool” development. But they are frustrated by the distinct training for requirements versus rules. They believe, and I agree, that unification of requirements and rules management is needed.
Consider these words from Forrester:
One might argue that Word documents, email, phone calls, and stakeholder meetings alone are adequate for managing rules. In fact, that is the methodology currently used for most projects in a large number of IT shops. However, this informal, ad hoc approach doesn’t ensure rigorous rules definition that is communicated and understood by all parties. More importantly, it doesn’t lend itself to managing the inevitable rules changes that will occur throughout the life of the project. The goal must be to embrace and manage change, not to prevent it. 
But note that Forrester used the word “requirements” everywhere I used “rules” above!
Both of the following statements are true, but the first is more informative:
- Business Rules Management Systems (BRMS) typically produce forward chaining production rules that are interpreted by a business rules engine (BRE) based on the Rete Algorithm.
- BRMS typically generate rules that are interpreted by a BRE.
First, dropping the word “production” before “rules” loses information. BRMS do not typically generate rules that are not production rules. Consider, for example, that the BRMS vendors involved in the OMG effort produced the Production Rule Representation (PRR) standard. The obvious question is:
- What is different about production rules?
Second, dropping the words “based on the Rete Algorithm” loses information. The dominant rules vendors and open-source engines are all based on the Rete Algorithm.
- Why does the Rete Algorithm matter?
Third, dropping the word “chaining” before “rules” loses information. Chaining refers to the sequential application of rules, as in a chain where each link is the application of one rule and links are tied together by their interaction. But:
- Why does chaining matter?
Fourth, dropping the word “forward” before “chaining” loses information. Forward chaining reacts to information without requiring goals. This raises the question:
- Don’t goals matter?
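To ground the terms above, here is a hypothetical sketch (in Python, not based on any vendor’s BRE) of the match-resolve-act cycle that defines a forward chaining production system. A real Rete-based engine gets the same behavior without re-matching every rule against working memory on each cycle, by caching partial matches in a network; that caching is why the Rete Algorithm matters for performance.

```python
# Naive match-resolve-act cycle of a production system (illustrative only).
# A Rete-based engine computes the same "eligible" set incrementally
# instead of re-evaluating every condition each cycle.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    name: str
    condition: Callable[[frozenset], bool]    # LHS: test over working memory
    action: Callable[[frozenset], frozenset]  # RHS: facts to assert

def cycle(memory: frozenset, rules) -> frozenset:
    fired = set()
    while True:
        # Match: rules whose conditions hold and that have not yet fired.
        eligible = [r for r in rules
                    if r.name not in fired and r.condition(memory)]
        if not eligible:
            return memory                 # quiescence: no rule can fire
        rule = eligible[0]                # Resolve: trivial conflict resolution
        memory = memory | rule.action(memory)  # Act: assert new facts
        fired.add(rule.name)

rules = [
    Rule("discount",
         lambda m: ("customer", "gold") in m,
         lambda m: frozenset({("discount", 10)})),
    Rule("free-shipping",
         lambda m: ("discount", 10) in m,  # chains off the first rule's output
         lambda m: frozenset({("shipping", "free")})),
]
memory = cycle(frozenset({("customer", "gold")}), rules)
assert ("shipping", "free") in memory
```

The second rule firing off the first rule’s conclusion is forward chaining in miniature: each link in the chain is one rule application, driven by data rather than by goals.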
A client recently asked me for guidance in establishing a center of excellence concerning business rules within their organization. Their objectives included:
- Accumulate requisite skills for productive success.
- Establish methodologies for productive, reliable and repeatable success.
- Accumulate and reuse content (e.g., definitions, requirements, regulations, and policies) across implementations, departments or divisions.
- Establish multiple tutorial and reusable reference implementations, including application development, tooling, and integration aspects.
- Establish centralized or transferable infrastructure, including architectural aspects, tools and repositories that reflect and support established methodologies, reusable content, and reference implementations.
- Establish criteria, best practices and rationale for various administrative matters, especially change management concerning the life cycles of content (e.g., regulations or policies) and applications (e.g., releases and patches).
I was quickly surprised to find myself struggling to write down recommendations for the skill set required to seed the core staff. My recommendations were less technical than the client may have expected. After further consideration, it became clear that any discrepancy in expectations arose from differences in our unvoiced strategic assumptions. Objectives, such as those listed above, are no substitute for a clearly articulated mission and strategy.