Elicitation and Management of Rules, Requirements and Decisions

A manager of an enterprise architecture group recently asked me how to train business analysts to elicit or harvest rules effectively. We talked for a bit about the similarities in skills between rules and requirements and agreed that analysts fail to understand rules in much the same ways that they fail to understand requirements.

For example, just substitute rules in the historical distribution of requirements failures:[1]

  • 34% Incorrect requirements
  • 24% Inadequate requirements
  • 22% Ambiguous requirements
  • 9% Inconsistent requirements
  • 4% Poor scoping of requirements
  • 4% Transcription errors in requirements
  • 3% New or changing requirements

So the problem is not training analysts to capture rules but helping them understand that rules and requirements are two sides of the same coin, especially when focusing on the functional needs of clients[2]. Of course, when we focus on functional requirements and rules in the modern age, we expect new or changing rules and requirements to lead to more failure.

Note that these percentages hold from the era of waterfall development. There have been improvements from iterative, spiral, and other rapid application development methodologies. RAD helps correct, expose and disambiguate requirements by exposing development obstacles or functional inadequacies earlier, but RAD does not improve the process of elicitation directly.

Today, especially concerning functional requirements and business rules, change is both pervasive and chronic while agility is critical. That is, maintaining correct, adequate, unambiguous, and consistent requirements has become a real-time challenge. Change occurs asynchronously and more rapidly than even the shortest plausible RAD iteration.

Fortunately, rule-based technology can address change directly – if the rules are accurate, adequate, and unambiguous[3]. Regrettably, the same cannot yet be said for functional requirements, as discussed here.

Still, key questions and points are apparent, particularly considering agility:

  • How do incorrect rules or requirements persist?
  • How can inadequacies be exposed in real-time?
  • Can ambiguity be avoided in real-time?
  • Can inconsistency be exposed in real-time?

In each case, if the answer is “not completely” or, worse, “not at all”, is there a coping strategy?

In the following sections, we examine correctness, adequacy, ambiguity, and consistency. As will become apparent, with some understanding and technique, in addition to use cases, eliciting rules and requirements can be accomplished much more effectively.

Correctness

How can an analyst know if a rule or requirement is correct? In some cases, it may be evident. And in some such evident cases, the analyst will nonetheless be mistaken. Still, the “self-evident” criterion is useful and generally reliable. But when an analyst is less than certain, how is correctness established?

The fact that incorrect rules or requirements persist into development or production in so many cases is evidence that correctness is typically established poorly. How does this happen?

The answer must be simple. It must be the case that too few knowledgeable people considered the requirement before it passed into development. Alternatively, perhaps some such requirements are too complex for human contemplation? Or perhaps the requirement was ambiguous, which we discuss separately below.

If an unambiguous requirement is incorrect, it must be either too complex or it must not have been considered enough by enough knowledgeable people. If too complex and considered by enough knowledgeable people, however, it would naturally be clarified and broken down. No matter how complex, if enough people understood it, failure would not be at the requirements stage but in implementation.

An unambiguous and yet incorrect requirement is almost certainly a result of inadequate collaboration. There are organizational and technical reasons that typically result in such poor collaboration.

Organizationally, the requirements process may flow from stakeholders and subject matter experts to application developers in a waterfall process. Literally, the organization expects the analyst to understand their needs and communicate them to others, not clarify them, seemingly ad infinitum, back to the stakeholders and SMEs. The organization resists investing its attention in iterative dialogue and review. The organization, perhaps unwittingly, seeks to minimize its up-front investment and ongoing commitment.

Technically, the tools or techniques used by analysts to capture and – more importantly – present rules and requirements back to stakeholders and SMEs alienate them. Ideally, the results of elicitation would be so clearly documented and presentable that reviewers could comfortably read and return marked up copy between sequential meetings or drafts. In reality, analysts tend to produce complex spreadsheets or documents with four or more levels and many cross references.

Such spreadsheets and documents are typically structured in a manner that is not intuitive to the reviewers. And the prose within such documents is typically composed with technical jargon or stilted prose rather than almost axiomatic statements expressed unambiguously in grammar familiar to authoritative and knowledgeable reviewers.

In practice, these organizational and technical realities result in documents that cannot be more than cursorily examined and excessive reliance on conversation and perception.

The result is the persistence of incorrect rules and requirements.

The cure is to ensure that the presentable form of captured requirements is easy to understand and can be reviewed without explanation.

This has many corollaries, of course! I will offer a few here:

  1. Minimize the need to present or teach how to interpret what is to be reviewed.
  2. Minimize cross-references between sentences, paragraphs, sections, etc.
  3. Seek sign-off at the level of sentences rather than of sections or chapters.

In addition to making elicited knowledge more perspicuous and subject to more collaborative review, every analyst knows that use cases facilitate elicitation. Use cases not only assist with correcting rules or requirements, but also with identifying missing or inconsistent rules and requirements. So we cover the challenges of adequacy and consistency before discussing how use cases can be used more effectively. If you want to focus on this immediately, see my prior comments about requirements in this post.

Adequacy

A collection of rules and requirements can be inadequate only if they are incomplete. That is, if all relevant rules and requirements are correctly understood, they cannot be collectively inadequate. So avoiding inadequacy is achieving coverage or completeness. Put another way, you will achieve adequacy when you are certain you know everything there is to know (at least “within scope”)!

Perhaps it is intuitively obvious that adequacy is something to seek but that we must cope with inadequacy. We cope with it by trying to identify it and minimize it, but we must also, at times, realize that we cannot resolve a situation at hand, and cope with that fact.

To reiterate:

  • Try to identify incompleteness before moving to development or production
  • Cope with incompleteness when it is identified, especially during production
  • Once identified, seek the rule(s) or requirement(s) to eliminate incompleteness

But how can we “identify” incompleteness?

Two approaches come to mind:

  1. Use an exhaustive, technical approach that covers EVERY possibility.
  2. Determine when the rules and requirements produce no outcome for a use case.

In the first case, a technical approach is to use nested tables to map every possible combination of input variables to an outcome value. This is equivalent to a decision tree on a vector of singly-valued inputs.[4] Alternatively, a spreadsheet metaphor does not feel as constraining but requires programmatic coverage analysis.[5] In each case, expressiveness has been sacrificed in order to analytically ensure completeness.

If the stakeholders and SMEs agree that all their rules and functional requirements (e.g., business policies and regulations) are based on a finite collection of singly-valued variables, such authoring and analytic techniques may be effective. Unfortunately, doing so violates the previous guideline about presentation being effective without training. Nonetheless, for fairly simple, closed problems, such approaches may pan out.
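
The coverage analysis such tools perform can be sketched in a few lines; the variables, domains, and rule table below are invented for illustration, not taken from any product:

```python
from itertools import product

# Hypothetical decision table over singly-valued inputs. Each rule maps a
# combination of input values to an outcome; the names are assumptions.
domains = {
    "risk": ["low", "medium", "high"],
    "collateral": ["secured", "unsecured"],
}

rules = {
    ("low", "secured"): "approve",
    ("low", "unsecured"): "approve",
    ("medium", "secured"): "approve",
    ("high", "secured"): "refer",
    ("high", "unsecured"): "decline",
}

def uncovered_cases(domains, rules):
    """Return every input combination for which no outcome is mapped."""
    every_case = product(*domains.values())
    return [case for case in every_case if case not in rules]

print(uncovered_cases(domains, rules))
# The analysis exposes the gap: ("medium", "unsecured") has no outcome.
```

Enumerating the cross product is exactly why such approaches only suit closed problems over finite, singly-valued variables: the table grows multiplicatively with each variable added.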

The problem with starting out with a limiting metaphor is that it may become an obstacle during elicitation. Consider whether it would make sense to tell stakeholders that you were going to reduce everything that they told you to flowcharts for their review. Why would you be more comfortable telling them that you are going to reduce everything they tell you to a forest of decision trees or a catalog of decision tables?

I recommend that analysts keep using the tools they understand.[6]

For the most part, this means Microsoft Word or, perhaps, a requirements management system, such as IBM’s Rational ReqPro, many of which use Word as the front-end to a collaborative, repository-based capability.

Unfortunately, Word does not support analysts in their work, and RMS have not advanced to the point of business rules management systems.[7] Specifically, RMS do not address the problems of ambiguity or transcription errors, such as by enforcing grammaticality using defined vocabulary as in a BRMS. BRMS arguably support analyst tasks more effectively than Word or RMS, but they handle only rules, not requirements![8]

The second alternative mentioned above for identifying inadequacy is to identify cases for which the rules and requirements gathered produce no result. As mentioned above, use cases effectively expose incorrect or inconsistent rules or requirements in addition to missing (i.e., inadequate) requirements.

The key in each case is to determine the outcome or outcomes, if any, indicated by the rules and requirements elicited so far. Ideally, this will be done well in advance of development or deployment into production.

Unfortunately, determining the outcome(s) for use cases is typically a mental exercise rather than automated. These mental exercises require comprehension and consideration of hundreds or thousands of rules and functional requirements at once. They are clearly error prone. In many situations, their use is limited to the development phase, during which discrepancies between outcomes facilitate additional requirements or corrections.

In order for use cases to be more effective at eliciting more adequate and correct requirements, simulation needs to be automated rather than mental, and it needs to occur before development or deployment more often than is typical. Simulation before programming is a capability of some BRMS[9] but is not facilitated by Word or RMS.

BRMS can simulate use cases without requiring code to the extent that use cases and rules are “understood”. If the form they understand is presentable without explanation to stakeholders and SMEs, a BRMS may be an adequate RMS for use by analysts.[10]

I don’t think the market is there yet, however. The market needs more natural language analysis applied to requirements, in particular. Specifically, the vocabulary and phraseology used in rules and requirements needs to be managed.[11] Without understanding the meaning of the words used in rules or requirements it is clearly not possible to understand the sentences that use them. Beyond the meaning of words or phrases, more natural language analysis of clauses and sentences is required to discern the plausible logical interpretations of each captured rule or requirement.[12]

Ambiguity

Most analysts do not apply natural language processing technology as discussed above to ensure that the rules or requirements they have captured are grammatical and use only defined vocabulary. Until they do, ambiguity will be extremely difficult to eliminate.

Microsoft Word may appear almost good enough here. It does a good job of flagging grammatical errors based on a grammar of English and knowledge of the parts of speech of a large vocabulary. Unfortunately, Word’s grammar check cannot be limited to the domain of discourse relevant to a particular problem. You can’t stop someone from introducing requirements involving gorillas using Word. And Word won’t complain about grammatical nonsense, such as “He at piece of Pennsylvania”.

Analysts need (B)RMS tools that identify rules and requirements that are not grammatical or that use undefined vocabulary (including misspellings) or that use vocabulary or phrases within sentences that are not understood. Such sentences may or may not make sense. If they do make sense, the tool needs to acquire the meaning of that construct before it will be able to deduce the plausible meanings of sentences that use it.
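
A first cut at the vocabulary check is simple to sketch; the controlled vocabulary below is a made-up example, and a real (B)RMS would also check grammar and phrasing, not just individual words:

```python
import re

# Hypothetical controlled vocabulary for a lending domain; in a real
# (B)RMS this would come from a managed glossary or ontology.
vocabulary = {"a", "the", "customer", "loan", "qualifies", "for", "if",
              "is", "approved", "credit", "score", "exceeds"}

def undefined_terms(sentence, vocabulary):
    """Return the words in a candidate rule that are not defined terms."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return [w for w in words if w not in vocabulary]

print(undefined_terms(
    "A customer qualifies for a loan if the gorilla is approved",
    vocabulary))
# Flags "gorilla" -- the rule strays outside the domain of discourse.
```

Unlike a general-purpose grammar check, a sentence flagged here either contains an error or a term whose meaning the tool must acquire before it can interpret sentences that use it.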

The plausible meanings or logical interpretations of a sentence are what it might mean in a formal, rigorous, “interpretable” sense. Ambiguity arises when the sentence(s) that express a rule or requirement have more than one plausible interpretation. If an analyst has taken the first step of applying natural language analysis, this second step will flag requirements that have zero or multiple plausible interpretations.

If a rule or requirement has zero plausible interpretations it cannot be simulated automatically. In practice, this may result in a delay in verifying its correctness, as discussed above, or in detecting inconsistencies, as will be discussed below.

If a rule or requirement has multiple plausible interpretations, any or all of them can be simulated automatically using a logic or rules engine. Typically, this execution is limited to rules using a rules engine. Authority, for example, understands rules expressed in English, and will complain if they are grammatical and use only defined vocabulary and phrasings but are ambiguous nonetheless.

The technology to extend Word-like grammar checking to Authority-like understanding may not be obvious but is nonetheless straightforward. This step will substantially improve requirements and modeling processes.

In the event that a rule is expressed in grammatical sentences that are unambiguously understood, Authority can generate and interpret rules using Haley’s Eclipse rule syntax and engine. By also capturing use cases in Authority, the correctness, adequacy and consistency of rules can be determined based on outcomes that are simulated using automation rather than mentally.

The technology to extend Authority-like simulation of rules to include requirements involves either more production rule generation or the use of logical theorem provers. These steps will substantially improve the requirements process by identifying incorrect, inadequate, and inconsistent requirements before implementation.

RMS will eventually incorporate disambiguation, but it will be longer before they incorporate automatic simulation. BRMS are better positioned to move in this direction but are not adequate for direct use by analysts in capturing requirements at this time. In addition, there is too much coupling between the RMS capabilities of rules vendors and their proprietary business rules engines (BRE). Until BRMS vendors actually support (rather than simply endorse) emerging standards, my recommendation to separate RMS and BRE decisions will remain in force.

I truly hope that Ruleburst continues in the direction I set out in Authority and extends its natural language understanding to requirements in addition to rules. Regrettably, I did not have the opportunity to demonstrate the generation of logic or the use of theorem proving for requirements before they bought the assets of “my” former company. Therefore, I suspect that they will remain just a BRMS and leave this opportunity unrealized.

Wouldn’t it be great if IBM did this for Rational ReqPro?

Consistency

In this article we are addressing the failures in eliciting requirements first cited above. Incorrect, inadequate, and ambiguous requirements were cited as 80% of the problem and we have discussed that the impact of changing functional requirements and rules was underestimated. According to the Pareto Principle, therefore, we might ignore inconsistency.[13] However, the framework we have discussed above addresses inconsistency so directly that it can be tackled with little additional effort.

The challenge for an analyst during elicitation shifts from an initial emphasis on simply acquiring correct rules and requirements, where almost anything new is welcome, to focusing on inadequacies, where there is enough content that fleshing it out rather than building it up takes priority. As the content becomes substantially correct and increasingly adequate, use cases and simulation become increasingly important for identifying residual errors and inadequacies, as well as inconsistencies.

As discussed above, relying exclusively on mental simulation results in more incorrect requirements and less adequate requirements entering development or production. Iterative development methodologies reduce, but cannot eliminate, the resulting increases in cost and time to market that put projects at risk and lower ROI.

Simulation without Implementation

Using automatic simulation of use cases, as first discussed above, the outcomes for each case can be determined by interpreting rules and functional requirements prior to development or deployment into production. For each simulated use case, the outcomes fall into one of the following:

  1. No outcome is determined.
  2. A single outcome is determined.
  3. Multiple outcomes are determined.
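
The classification above can be sketched as a small harness over simulated outcomes; the production-rule representation, rules, and use cases below are invented for illustration:

```python
def simulate(case, rules):
    """Apply every rule to a use case and classify the indicated outcomes."""
    outcomes = [outcome for condition, outcome in rules if condition(case)]
    if not outcomes:
        return "no outcome"        # likely a missing rule or requirement
    if len(set(outcomes)) == 1:
        return "single outcome"    # review whether it is the expected one
    return "multiple outcomes"     # possible inconsistency, or grounds for deliberation

# Illustrative rules for a loan decision (names and thresholds are assumptions):
rules = [
    (lambda c: c["score"] >= 700, "approve"),
    (lambda c: c["score"] < 600, "decline"),
    (lambda c: c["bankrupt"], "decline"),
]

print(simulate({"score": 720, "bankrupt": False}, rules))  # single outcome
print(simulate({"score": 650, "bankrupt": False}, rules))  # no outcome
print(simulate({"score": 710, "bankrupt": True}, rules))   # multiple outcomes
```

Each classification points the analyst somewhere different, as the following paragraphs discuss.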

For the purpose of discussion, assume that the business process at hand involves a number of decisions and that the expected or correct decisions for each use case are known by the business analyst or can be determined given the results of simulation by the stakeholders or SMEs.

In the first case, the absence of a decision, inconsistency is not the problem. An incorrect rule or functional requirement is possible but it is more likely that one is missing. In this case, simulation will facilitate elicitation.

In the second case, the outcome is either expected or unexpected. In either case, it may be correct or incorrect. If authorities determine, upon reflection given a use case, that an expected outcome is incorrect, one or more rules or requirements are usually incorrect. Otherwise, there is an inadequacy (i.e., an omission). Quite frequently, authorities determine that an unexpected outcome is in fact incorrect, typically arising from an incorrect or missing rule or requirement.

In either case, the ability to review the chain of reasoning that produced an errant decision for a use case with authorities is a very effective and focusing elicitation tool.

Regrettably, most analysts have little experience with such effective elicitation until the later iterations of traditional software development processes. This is one of the early improvements that can be realized from augmenting analysts with tools to capture and simulate functional requirements and rules.

In the third case, where simulation produces multiple outcomes for a single decision given a use case, the outcomes are typically in conflict. That is, they are inconsistent and, by implication, rules or requirements must be inconsistent.

Theory versus Practice

Before diving into the use of simulation and elicitation to avoid inconsistency, we should discuss the notion of resolving inconsistencies. Unlike avoidance, resolution implies tolerance for inconsistency. Resolving inconsistency is practical. Avoiding inconsistency, however desirable, is not – in general – practical.

Most commercial business rule engines (BRE) are based on production rules. Conflict resolution has been addressed since the early beginnings of this technology.[14] Conflict resolution has since matured to a notion of deliberation in which decisions are considered before action is taken.[15][16]

Avoiding inconsistent rules and requirements requires extreme and onerous precision in the specification of conditions and exceptions. Resolving inconsistencies typically involves some type of scoring function, which may be as simple as a priority, or preferences, which may be levels of stratification, pair-wise, or even higher level rules and requirements (sometimes called “meta” knowledge).

For example, if a customer qualifies for multiple loans from various lenders, clarifying conditions and exceptions such that only one loan from a single lender is selected would be impractical. Instead, meta-knowledge about the respective benefits of alternative loans might order specific recommendations or result in a single, high scoring recommendation.
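
As a sketch of the loan example, with an invented scoring function standing in for the meta-knowledge about the respective benefits of alternative loans:

```python
# Multiple loans qualify; rather than tightening conditions until only one
# can fire, score the alternatives. The loans and weights are made up.
candidate_loans = [
    {"lender": "A", "rate": 6.5, "term_years": 30},
    {"lender": "B", "rate": 6.1, "term_years": 30},
    {"lender": "C", "rate": 5.9, "term_years": 15},
]

def benefit(loan):
    """Meta-knowledge: prefer lower rates, with a mild preference for longer terms."""
    return -loan["rate"] + 0.01 * loan["term_years"]

# Deliberation reduces many indicated outcomes to one recommendation.
recommendation = max(candidate_loans, key=benefit)
print(recommendation["lender"])
```

The same structure accommodates richer preferences: the scoring function could itself be stratified, pair-wise, or expressed as higher-level rules.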

Technical analysts tend to place far too much emphasis on reducing decisions to a single outcome without deliberation. There are requirements and then there are requirements. Or, some rules are made to be broken. OMG goes as far as to anticipate this with the notion of an enforcement level on requirements captured using its SBVR standard.

Inconsistency in Practice

Let us now return to the analyst given a use case for which simulation produces multiple outcomes for a single decision. Each such outcome can be reviewed as if it were the only one. That is, each outcome is either correct or incorrect. Any outcome that is incorrect would indicate incorrectness or inadequacy as previously discussed.

Perhaps, after eliminating all errant outcomes through further elicitation, only a single correct outcome will be reached by subsequent simulation. If subsequent simulation results in no outcome, we are back to the first case discussed above, which may involve further incorrectness or inadequacy. Alternatively, if subsequent simulation still results in multiple outcomes, none of which is judged as objectively wrong, there may be heuristics or preferences rather than strict rules or requirements to be elicited.

Organization and Architecture for Agility

I have discussed previously that practical applications cannot avoid incomplete or inconsistent knowledge. This is further discussed and reflected in the references on conflict resolution, deliberation, and stratification given in the footnotes cited above. It also needs to be reflected in an agile, iterative organizational process and in the architecture of information systems.

Take it as a given that a new, modified, or missing rule or functional requirement will arise after a system is in production. Take it as a given that a case will arise during production use for which no outcome, an errant outcome, or multiple, possibly conflicting outcomes result given correct automation of rules and requirements reasonably deployed. What are the risks and the processes of improvement?

Rules or requirements mapped into code typically result in action without deliberation. That is, if multiple outcomes might be indicated, only the first will result. In other words, the implementation of rules and requirements is typically incapable of detecting inconsistent requirements.

By using an engine to interpret requirements deliberatively in an architecture that logs outcomes, including multiple potential outcomes, elicitation of the knowledge that will avoid such inconsistencies in the future becomes possible. In cases that produce no outcome, elicitation of the additional knowledge necessary (including clarifications of incorrect knowledge) can also be facilitated. And by auditing cases for which single outcomes are reached, discovery of additional or errant knowledge can be accomplished at a controlled level of effort.
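
As a sketch, assuming a simple production-rule representation, a deliberative decision function that logs every indicated outcome (the rule conditions and log schema below are invented) might look like:

```python
import datetime

def decide_and_log(case, rules, log):
    """Deliberate: collect every indicated outcome, log them all, and act
    only when a single unambiguous outcome results."""
    outcomes = [outcome for condition, outcome in rules if condition(case)]
    log.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case": case,
        "outcomes": outcomes,  # zero, one, or many -- all are worth auditing
    })
    # Defer cases with no outcome or conflicting outcomes to human review.
    return outcomes[0] if len(set(outcomes)) == 1 else None

# Illustrative rules; the conditions and outcomes are assumptions.
rules = [
    (lambda c: c["amount"] <= 1000, "auto-approve"),
    (lambda c: c["amount"] > 5000, "manual review"),
]

log = []
decide_and_log({"amount": 500}, rules, log)   # acts: "auto-approve"
decide_and_log({"amount": 2000}, rules, log)  # no outcome; logged for elicitation
```

The log, not the individual decision, is the point: cases with empty or conflicting outcome lists are precisely the use cases that drive further elicitation.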

The ability to facilitate elicitation has development and organizational implications.

To the extent that newly elicited rules and requirements can be automated without programming, as is commonly the case for rules using BRMS, elicitation post production becomes more viable economically.

In addition, the auditable use of rules and requirements can be combined with performance metrics to analyze and tune or learn rules, requirements, heuristics and preferences, some of which may produce enough insight to lead to new business models.

Such an architecture effectively becomes a source of use cases, but the organization has to remain engaged in the analytic and elicitation process after the rules and requirements have gone into production in order to realize business benefits beyond the development impact.

[1] “Getting the Requirements Right.” in EDP ANALYZER (Vol. 15, No. 7) as quoted here

[2] rather than non-functional requirements, as discussed in this post

[3] see this regarding consistency

[4] Ilog shows a nice version of the decision tree/table approach here.

[5] e.g., as shown in this image of Corticon or this video from Haley.

[6] as discussed concerning Haley and Ruleburst here in response to comments on this

[7] for more on RMS versus BRMS, specifically concerning rules and requirements, see this

[8] ibid

[9] one of the advances we made circa 2000 at Haley and since by Ilog, among others

[10] Unfortunately, as discussed previously here, BRMS do not handle requirements (yet).

[11] OMG’s Semantics of Business Vocabulary and Rules (SBVR) recognizes this but, excluding Haley, I am not aware of any (B)RMS that manages the vocabulary and phraseology used in rules or requirements and their formal, ontological meaning.

[12] Ruleburst still presents a white paper on how we did this at Haley for rules here.

[13] i.e., the 80/20 rule

[14] Production system conflict resolution strategies

[15] Soar: A Comparison with Rule-based Systems or Cognitive Theory, SOAR

[16] there have been similar pragmatic developments in the logic community, as in stratification and courteous logic, albeit without tolerance of inconsistent logic