Verification of Multi-Agent Systems using COVERAGE

Neil Lamb and Alun Preece

University of Aberdeen, Computing Science Department
Aberdeen AB9 2UE, Scotland
Phone: +44 1224 272296; FAX: +44 1224 273422
Email: apreece at csd dot abdn dot ac dot uk


Anomaly detection, as performed by the COVER tool, has proven to be a useful method for verification of knowledge-based systems. The increasing development of distributed knowledge-based systems based upon the multi-agent architecture demands techniques for the verification of these systems. This paper describes the COVERAGE tool - an extension of COVER designed to perform anomaly detection on multi-agent systems.


1 KBS and MAS

Knowledge-based systems (KBS) are a relatively mature aspect of artificial intelligence technology. These systems solve problems in complex application domains by using a large body of explicitly-represented domain knowledge to search for solutions. This approach enables them to solve problems in ill-structured domains which defeat more conventional algorithmic programming. Unfortunately, this approach makes KBS harder to validate and verify because there is not always a definite "correct" solution. However, in recent years considerable progress has been made in developing effective validation and verification techniques for such systems (Ayel:91, Gupta:90). One verification tool in particular, COVER, has been demonstrated as an effective means of revealing flaws in complex KBS (Preece:92). COVER operates by checking the knowledge base of a KBS for logical anomalies which are symptoms of probable faults in the system.

Recently it has been realised that in order to solve certain kinds of complex problem it is necessary to create a system in which a number of KBS cooperate and combine their problem-solving capabilities. Sometimes this occurs because the problem-solving activity covers a large geographic region (such as in telecommunications networks or military applications), where different KBS have responsibility for different geographical areas; sometimes it occurs because different KBS have different "specialities" to bring to the problem-solving process, similar to the co-operation among human team members. The Multi-Agent System (MAS) architecture has proven a popular and effective method for building a co-operating team of KBS: each KBS in the team is constructed as a software agent, conferring abilities of autonomy, self-knowledge, and acquaintance knowledge on the KBS - abilities useful for team-forming and co-operative problem-solving. However, the problem of assessing the reliability of MAS through verification and validation has received scant attention, which is a matter of some concern because these systems are considerably more complex than individual KBS (Fisher:93, Grossner:95).

The objective of the work described in this paper is to build upon successful techniques in verification of KBS, to develop methods for verifying multi-agent KBS. Specifically, we extend the COVER tool to create COVERAGE: COVER for AGents. Section 2 defines the architecture of MAS that we assume, Section 3 describes the properties which COVERAGE is designed to detect, Section 4 describes the COVERAGE tool, and Section 5 concludes.

2 MAS Architecture

Our intention is to provide a general approach to verifying MAS. The architecture of conventional KBS is reasonably well-understood, and the approach taken by the COVER tool is to check KBS represented in a generic modelling language, into which many representations can be translated without too much difficulty (Preece:92). Using a modelling language rather than the implementation language has three important benefits: the verification approach is not tied to a specific target representation; by abstracting away detail not required for verification, it is computationally easier to work with the modelling language; and not least, verification can be performed prior to implementation (for example, on a conceptual model of the KBS (Wielinga:92)). These advantages far outweigh the disadvantage, which is that inaccuracies may arise because the modelling language may not exactly match the semantics of the target implementation language - this is fully discussed in (Preece:92), where it is shown that such inaccuracies can be dealt with in the verification process.

We would like to take a similar approach to MAS, but their architecture is not as well understood at this time (Wooldridge:95). After careful consideration, the ARCHON (ARchitecture for Co-operating Heterogeneous ON-line systems) architecture (Cockburn:95) was adopted as a starting point for developing a generic representation for COVERAGE, for two main reasons:

The ARCHON approach helps designers correctly decompose and package components of a MAS. The individual components of an ARCHON system are agents, which are autonomous and have their own problem solving abilities. Each agent is divided into two separate layers: the ARCHON Layer (AL) supports interaction and communication, while the Intelligent System layer (IS) provides domain level problem solving - the IS can be a KBS or some other type of information system. This is illustrated in Figure 1.

Figure 1: Structure of an ARCHON Agent.

The agent initiates problem-solving using control rules in the PCM (ARCHON's Planning and Co-ordination Module, part of the AL). These rules typically either send co-operation messages to other team members (whose capabilities are defined in the acquaintance models) or pass a goal to the agent's own IS to solve. The monitoring layer between the AL and IS acts as a translator between IS and ARCHON terms, and allows the AL to control the IS by invoking goals. For the purposes of this paper we will refer to the knowledge base of the IS layer as domain knowledge (DK) and the knowledge base of the AL as co-operation knowledge (CK). Links between CK and DK structures are defined by monitoring units (MUs).

2.1 Modelling Language for DK

We are interested in MAS where each agent's DK is a KBS. Therefore, we adopt a slight variant of the COVER KBS modelling language for capturing the DK of an agent. A fuller description of this language appears in (Preece:92), so we will merely summarise the main structures here: rules, goals, constraints and data items.


Rules make up most of the DK, defining the flow of problem-solving in either a backward or forward-chaining manner. The following is an example COVER rule:

rule 42 :: 
    goOut hasValue yes if 
    weather hasValue raining and 
    haveUmbrella or 
    not weather hasValue raining.

Rule consequents conclude a value for some data item, called the rule subject (goOut in the example). Data items may be boolean facts (haveUmbrella in the example) or single or multi-valued parameters (goOut and weather in the example).


Goals may be intermediate states in the DK or final conclusions represented within an MU. There will be at least one goal specified for each task required by the ARCHON layer, and each MU represents one goal. An example goal declaration is goal goOut, where goOut is a data item.


Constraint declarations permit us to verify the rules. An example declaration:

rule pregnantMan :: 
    impermissible if 
    pregnant(X) and 
    sex(X) hasValue male.

Each declaration states that a combination of data items and their values is semantically invalid: that is, it is not permissible in the real world.
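To make this concrete, the following Python sketch tests a set of derived facts against an impermissible combination in the style of the pregnantMan constraint. The dictionary-based representation is our own illustration, not COVER's internal one.

```python
def violates(constraint, facts):
    """Return True if every condition of the constraint holds in facts.

    A constraint is a list of (item, value) conditions; facts maps
    data items to the values currently established for them.
    """
    return all(facts.get(item) == value for item, value in constraint)

# The pregnantMan constraint: pregnant and sex hasValue male.
# (pregnant is modelled as a boolean fact, sex as a category item.)
pregnant_man = [("pregnant", True), ("sex", "male")]

print(violates(pregnant_man, {"pregnant": True, "sex": "male"}))    # True
print(violates(pregnant_man, {"pregnant": True, "sex": "female"}))  # False
```

A verifier would apply such a check to every combination of facts derivable from valid inputs, flagging any rule chain whose conclusions violate a constraint.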

Data Declarations

Data declarations distinguish the facts and object/attribute values that can be derived from those that must be supplied. In a standard KBS this is simple: the system will derive as much information as it can from the input given by the user. For an agent the idea is the same, except that the "user" is now the ARCHON layer, and the data declarations define which data the ARCHON layer must give the IS in order to achieve a task. An example declaration (in COVER, input data are declared askable; here we use the more general keyword data):

data sex / category / [male, female].

The data declarations are used in verification to establish the types of data items, to determine possible legal values that data items can take, and to check that certain values for data items can be established.
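As an illustration of these uses, the following Python sketch checks a single "item hasValue value" condition against a declaration table. The table contents and message strings are our own hypothetical examples, not COVERAGE's.

```python
# Hypothetical declaration table mirroring, e.g.,
#   data sex / category / [male, female].
DECLARATIONS = {
    "sex": ("category", ["male", "female"]),
    "weather": ("category", ["raining", "sunny"]),
}

def check_condition(item, value):
    """Report an anomaly for an 'item hasValue value' condition, or None."""
    if item not in DECLARATIONS:
        return f"undeclared data item: {item}"
    _type, legal = DECLARATIONS[item]
    if value not in legal:
        return f"illegal value {value} for {item}"
    return None

print(check_condition("sex", "male"))     # None: the condition is legal
print(check_condition("sex", "unknown"))  # illegal value
print(check_condition("mood", "happy"))   # undeclared item
```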

2.2 Modelling Language for CK

The following language is derived from the description of the components of the ARCHON layer (Cockburn:95). It consists of PCM rules that govern the behaviour of the agent, together with a set of declarations that define the agent's internal data (beliefs, goals, domain facts) and the capabilities available internally (MUs, self model) and externally (acquaintance models).

PCM Rules

The PCM rule declarations define the behaviour of the agent. For the immediate purposes of this language, PCM rules represent only the communication of information and the completion of requested tasks. For example, the following rule obtains a weather forecast by issuing a request that the acquaintance WeatherAgent perform its own task forecast, using data items location and date, to determine a value for data item weather.

pcm_rule guessWeather ::
    request(WeatherAgent, forecast, 
        [location, date]) if 
    weather hasValue unknown and
    agentModel(WeatherAgent, forecast, 
        [location, date], [weather]) and
    not location hasValue unknown and
    not date hasValue unknown.

This rule will fire only if the agent that holds it cannot determine weather itself. The example uses the request operator to obtain information from an acquaintance; there is also an activate operator to invoke a task within the agent's own IS.
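The firing conditions of the guessWeather rule can be sketched in Python as follows. The function and data structures are our own simplified illustration of the rule's logic, not ARCHON's execution machinery.

```python
def guess_weather_fires(beliefs, acquaintance_models):
    """Return the request the guessWeather rule would issue, or None.

    The rule fires when weather is unknown, some acquaintance advertises
    a forecast task producing weather, and the task's inputs are known.
    """
    if beliefs.get("weather") != "unknown":
        return None
    for name, task, inputs, outputs in acquaintance_models:
        if task == "forecast" and "weather" in outputs and \
           all(beliefs.get(i) not in (None, "unknown") for i in inputs):
            return ("request", name, task, inputs)
    return None

models = [("WeatherAgent", "forecast", ["location", "date"], ["weather"])]
beliefs = {"weather": "unknown", "location": "Aberdeen", "date": "today"}
print(guess_weather_fires(beliefs, models))
# ('request', 'WeatherAgent', 'forecast', ['location', 'date'])
```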

Agent and Self Declarations

Each acquaintance known to an agent has one or more entries in the agent's acquaintance model table of the form:

agentModel(Name, Task, Inputs, Outputs).

In the previous example PCM rule, the agent refers to an entry in its acquaintance model to find the identity of another agent (WeatherAgent, a variable) which can perform the task forecast, using data location and date, to produce a value for weather.

Self models are very similar: they are a self-reference model for the agent's own capabilities, and refer to MU items.

Monitoring Units

The MUs provide a bridge between AL task terms and IS goal terms, allowing for different naming in each (this is necessary to allow existing KBS to be "wrapped" as agents). An example MU for the forecast task:

mu(forecast,
   [mapping(location, home), 
    mapping(date, today)], 
   [mapping(weather, weather)]).

Here, the AL operates in terms of input data location and date, while the equivalent terms at the IS layer are home and today. The output term weather is the same in each layer.
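The translation performed by an MU can be sketched in Python as a pair of dictionary lookups, one per direction. The mapping tables reproduce the forecast example above; the function names are our own.

```python
# MU mappings for the forecast task: AL term -> IS term.
IN_MAPPINGS = {"location": "home", "date": "today"}
OUT_MAPPINGS = {"weather": "weather"}

def al_to_is(al_inputs):
    """Translate AL input bindings into IS terms before invoking a goal."""
    return {IN_MAPPINGS[item]: value for item, value in al_inputs.items()}

def is_to_al(is_outputs):
    """Translate IS results back into AL terms for the co-operation layer."""
    inverse = {v: k for k, v in OUT_MAPPINGS.items()}
    return {inverse[item]: value for item, value in is_outputs.items()}

print(al_to_is({"location": "Aberdeen", "date": "1995-06-01"}))
# {'home': 'Aberdeen', 'today': '1995-06-01'}
print(is_to_al({"weather": "raining"}))
# {'weather': 'raining'}
```

This is what allows an existing KBS to be "wrapped" unchanged: only the MU tables need to know both vocabularies.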

3 MAS Anomalies

In defining the anomalies that can exist in a MAS, modelled using the DK and CK languages described above, we identify four distinct types:

DK anomalies
Anomalies confined to an agent's DK.
CK anomalies
Anomalies confined to an agent's CK.
CK-DK anomalies
Anomalies in the interaction between an agent's CK and DK (that is, between the modelled AL and IS).
CK-CK anomalies
Anomalies in the interaction between acquaintance agents (this is, between different agents' ALs).

The first three types are also known as intra-agent anomalies; CK-CK anomalies are also known as inter-agent anomalies.

3.1 DK Anomalies

Essentially, the DK is a KBS; therefore, the COVER anomalies are applicable to the DK. COVER defines four types of anomaly (Preece:92); these, and their effects in MAS are as follows:

Redundancy occurs when a KBS contains components which can be removed without affecting any of the behaviour of the system. This includes logically subsumed rules (if p and q then r, if p then r), rules which cannot be fired in any real situation, and rules which do not infer any usable conclusion. In MAS, simply redundant DK components will not affect an agent's ability to achieve goals, but they can lead to inefficient problem-solving and to maintenance difficulties (where one rule is updated and its duplicate is left untouched). However, redundancy is usually a symptom of a genuine fault, such as where an unfirable rule or an unusable conclusion indicates that some task is not fully implemented.
Conflict occurs when it is possible to derive incompatible information from valid input. Conflicting rules (if p then q, if p then not q) are the most typical case of conflict. Conflicting DK is as harmful for an agent as conflicting knowledge in a standalone KBS: the agent can derive incompatible goals from a valid set of input to a task. In human terms, it is untrustworthy.
Circularity occurs when a chain of inference in a KB forms a cycle (if p then q, if q then p). It may then not be possible for the KBS or agent's DK to solve some goals.
Deficiency occurs when there are valid inputs to the KB for which no rules apply (p is a valid input but there is no rule with p in the antecedent). If an agent's DK is deficient, then it may be promising a capability that it cannot supply: if some agentModel in an agent's AL indicates that the agent can solve some goal using some input, and the DK is deficient for some combination of that input, it is effectively in "breach of contract". We expand on this theme below.
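Two of these checks, subsumption (a redundancy case) and circularity, can be sketched in Python over a simplified rule form: an antecedent list of condition items and a single consequent item. The representation is our own; COVER's is considerably richer.

```python
def subsumes(rule_a, rule_b):
    """rule_a subsumes rule_b if it has the same consequent and a
    subset of rule_b's conditions, making rule_b redundant."""
    ants_a, con_a = rule_a
    ants_b, con_b = rule_b
    return con_a == con_b and set(ants_a) <= set(ants_b)

def circular(rules):
    """Detect a cycle in the item dependency graph built from the rules."""
    graph = {}
    for ants, con in rules:
        graph.setdefault(con, set()).update(ants)

    def reachable(start, target, seen=()):
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen and
                                 reachable(nxt, target, seen + (start,))):
                return True
        return False

    return any(reachable(item, item) for item in graph)

r1 = (["p"], "r")        # if p then r
r2 = (["p", "q"], "r")   # if p and q then r  (subsumed by r1)
print(subsumes(r1, r2))                        # True: r2 is redundant
print(circular([(["p"], "q"), (["q"], "p")]))  # True: a cycle exists
```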

3.2 CK Anomalies

Anomalies confined to an agent's CK fall into two types:

3.3 CK-DK Anomalies

Anomalies indicating faulty integrity between an agent's CK and DK will typically result in a "breach of contract": the agent's AL is effectively promising that the agent has certain capabilities, and the agent's IS is meant to provide those capabilities. If there is a mis-match between CK and DK, then a promise made by the AL will not be honoured by the IS. If an agent A has a CK-DK anomaly, this may cause a problem for A itself, if it relies on being able to fulfill a task in its self-model; or it may cause a problem for an acquaintance of A, say B, even where B's agentModel of A is consistent with A's self-model. (If B's agentModel of A is not consistent with A's self-model, then this is a CK-CK anomaly, discussed later.)

CK-DK anomalies may be simple cases of malformed MUs, where the MU does not properly map to DK terms, or they may be much more subtle cases of unachievable tasks, where the information defined in the MU does not permit the required goal(s) to be achieved by the IS under any possible chain of reasoning.
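The simpler, malformed-MU case can be sketched in Python as a table lookup: every IS-side term in an MU mapping must name a declared DK data item, and each output must be a declared DK goal. The tables and message strings below are our own illustration, not COVERAGE's internal format.

```python
# Hypothetical DK declarations for the wrapped IS.
DK_ITEMS = {"home", "today", "weather"}
DK_GOALS = {"weather"}

def mu_anomalies(in_mappings, out_mappings):
    """Report malformed-MU anomalies for (AL term, IS term) mapping pairs."""
    anomalies = []
    for al_term, is_term in in_mappings + out_mappings:
        if is_term not in DK_ITEMS:
            anomalies.append(f"MU maps {al_term} to unknown DK item {is_term}")
    for al_term, is_term in out_mappings:
        if is_term in DK_ITEMS and is_term not in DK_GOALS:
            anomalies.append(f"MU output {is_term} is not a declared DK goal")
    return anomalies

good = mu_anomalies([("location", "home"), ("date", "today")],
                    [("weather", "weather")])
bad = mu_anomalies([("location", "house")], [("weather", "weather")])
print(good)  # no anomalies
print(bad)   # the misspelled IS term is reported
```

The subtler unachievable-task case requires the exhaustive inference-chain analysis discussed in Section 4, not just a lookup.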

3.4 CK-CK Anomalies

Again, these anomalies result in a "breach of contract" between agents: typically, the agentModel held by agent A of agent B is in conflict with B's self-model. Special cases include:
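Whatever the special case, the basic check compares what A believes about B with what B promises about itself. A minimal Python sketch, with table formats of our own devising:

```python
def ck_ck_anomalies(a_models_of_b, b_self_model):
    """Return A's beliefs about B that B's self-model does not support.

    Each entry is a (task, inputs, outputs) capability tuple.
    """
    return [entry for entry in a_models_of_b if entry not in b_self_model]

b_self = [("forecast", ["location", "date"], ["weather"])]
a_view = [("forecast", ["location", "date"], ["weather"]),
          ("forecast", ["location"], ["weather"])]  # B never promised this

print(ck_ck_anomalies(a_view, b_self))
# [('forecast', ['location'], ['weather'])]
```

A real check would also have to consider compatible but non-identical entries (for example, B promising a superset of the advertised outputs); exact matching is the simplest useful approximation.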

4 COVERAGE

The COVERAGE tool has been implemented to detect the anomalies described above in MAS modelled by our DK and CK languages. COVERAGE includes and extends COVER. Each agent in the MAS is presented to COVERAGE in two files, one containing its DK and one its CK. The procedure followed by COVERAGE is as follows:

COVERAGE is implemented mostly in Prolog (except for part of COVER which was implemented in C). Indexing of the reference tables makes the above checking operations reasonably efficient for a moderately-sized MAS (complexity of the CK-CK checks is obviously proportional to the sizes of the agentModel tables held by the acquaintances). The most expensive check is the CK-DK check that ensures that tasks defined in MUs can be carried out by the IS: this involves checking the environment table created by COVER, containing in effect every inference chain in the IS (Preece:92). Arguments in support of COVER's tractability apply here: IS in practice tend not to have very deep reasoning, leading to short inference chains which are exhaustively computable. (Where this is not the case, heuristics as in SACCO can be used to constrain the search for anomalies (Ayel:93).)
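The idea behind the environment table can be sketched as exhaustive backward chaining: enumerate every set of input items from which a goal is derivable. The rule form and function below are our own simplification (and assume an acyclic rule set, circularity having been checked separately).

```python
def environments(goal, rules, inputs):
    """All sets of input items from which goal is derivable.

    rules is a list of (antecedent_items, consequent_item) pairs;
    inputs is the set of askable items. Assumes acyclic rules, and
    treats an input item as its own one-element environment.
    """
    if goal in inputs:
        return [frozenset([goal])]
    envs = []
    for ants, con in rules:
        if con != goal:
            continue
        combos = [frozenset()]
        for ant in ants:
            combos = [c | e for c in combos
                            for e in environments(ant, rules, inputs)]
        envs.extend(combos)
    return envs

rules = [(["p", "q"], "r"), (["s"], "r")]
print(sorted(sorted(e) for e in environments("r", rules, {"p", "q", "s"})))
# [['p', 'q'], ['s']]
```

With the environments in hand, a CK-DK check reduces to testing whether an MU's input set covers at least one environment of each goal the task promises.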

5 Conclusion

COVERAGE is merely a first step - albeit a necessary one - towards the verification of MAS. One difficulty at the outset was defining a stable architecture for MAS, given the current immaturity of the technology. We elected to base our CK and DK languages on the ARCHON framework since ARCHON has been proven to be "industrial-strength" (Jennings:95), and the notion of agents-as-KBS is well-supported by ARCHON. Moreover, the ARCHON framework introduced the important issue of separating the agent's components into distinct layers, allowing COVER to be used without modification to verify the KBS at the heart of each agent. It can only be hoped that our approach will be applicable to a wider class of MAS architectures.

The following points summarise the current status of our work:

Future work in the COVERAGE project includes the following areas.

Implementation of further CK-DK anomalies

The anomalies defined for CK-DK "breaches of contract" are largely sub-anomalies of a broader "unachievable task" anomaly. There are other broader anomalies that we have identified but not implemented in COVERAGE, including:

Dynamic anomalies

In addition to the verification of statically-detectable anomalies, it is necessary to verify dynamic MAS behaviour. For example, an agent's mechanism for planning and re-planning task achievement, coupled with communication and interaction, is extremely complex to test. It is difficult to envision a dynamic MAS verification system that is omniscient, because the space of all possible interactions will usually be vast or infinite. A possible method of circumventing the problem would be to make the verification system an agent in itself, and to let it verify the process as it is happening.

Real world systems

The work carried out so far has been based on reasonable predictions of the nature and contents of agents built on the ARCHON architecture. Although this is adequate for showing that the anomalies discussed are both relevant and important, it does not remove the need for testing on real-world systems. Future work is planned which will evaluate the COVERAGE approach in practice.


Marc Ayel and Jean-Pierre Laurent, editors. Verification, Validation and Test of Knowledge-based Systems. John Wiley and Sons, 1991.

Marc Ayel and Laurence Vignollet. SYCOJET and SACCO, two tools for verifying expert systems. International Journal of Expert Systems: Research and Applications, 6(2):357-382, 1993.

D. Cockburn and N.R. Jennings. ARCHON: A distributed artificial intelligence system for industrial applications. In Foundations of Distributed Artificial Intelligence, New York, 1995. John Wiley and Sons.

M. Fisher and M. Wooldridge. Specifying and verifying distributed intelligent systems. In Progress in Artificial Intelligence - Sixth Portuguese Conference on Artificial Intelligence (LNAI Volume 727), pages 13-28, Heidelberg, Germany, 1993. Springer-Verlag.

C. Grossner. Models and Tools for Co-Operating Rule-Based Systems. PhD thesis, Concordia University, 1995.

Uma G. Gupta. Validating and Verifying Knowledge-based Systems. IEEE Press, Los Alamitos, CA, 1990.

N.R. Jennings and J.M. Corera. Using ARCHON to develop real-world DAI applications for electricity transportation management and particle accelerator control. IEEE Expert, 1(3):199-215, 1995.

Alun D. Preece, Rajjan Shinghal, and Aida Batarekh. Principles and practice in verifying rule-based systems. Knowledge Engineering Review, 7(2):115-141, 1992.

B. J. Wielinga, A. Th. Schreiber, and J. A. Breuker. KADS: a modelling approach to knowledge engineering. Knowledge Acquisition, 4(1):5-54, 1992.

Michael Wooldridge and Nicholas R. Jennings. Intelligent agents: Theory and practice. Knowledge Engineering Review, 1995. Forthcoming.