Jörg Denzinger's Research

Malicious Argumentation

Argumentation is a technique for decision making in multi-agent systems. At its core is the (regulated) exchange of so-called utterances, including logical statements (the arguments), that either establish knowledge or attack utterances of other agents, thereby negating previously established knowledge. The knowledge that "survived" the argumentation, i.e. knowledge for which all attacking utterances were themselves successfully attacked, is then used to make the decision.
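
As a rough illustration of this "surviving knowledge" idea (our own sketch, not a fragment of an implemented system), the following Python function works on an abstract, Dung-style argumentation framework and computes the grounded extension, which is one standard way to formalize the notion that an argument survives once every attack on it has itself been defeated.

    # Minimal sketch: abstract (Dung-style) argumentation framework in which
    # an argument survives once every attack on it has itself been defeated.
    # The grounded extension computed below is one standard formalization of
    # the "surviving knowledge" described above.

    def grounded_extension(arguments, attacks):
        """arguments: set of argument ids; attacks: set of (attacker, target) pairs."""
        accepted, defeated = set(), set()
        while True:
            # an argument becomes acceptable once all of its attackers are defeated
            newly_accepted = {a for a in set(arguments) - accepted - defeated
                              if all(att in defeated
                                     for (att, tgt) in attacks if tgt == a)}
            if not newly_accepted:
                return accepted
            accepted |= newly_accepted
            # everything attacked by an accepted argument counts as defeated
            defeated |= {tgt for (att, tgt) in attacks if att in accepted}

    # Example: c attacks b and b attacks a; b is defeated, so a and c survive.
    print(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")}))
    # -> {'a', 'c'}  (set order may vary)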

Practically, argumentation has to use logical formulae (over a given logic) to represent knowledge and utterances, and an attack on such a formula is realized by providing a set of formulae that together contradict the attacked formula. Several conditions need to be fulfilled, for example the attacking set of formulae must not itself be contradictory, and an agent needs to check each utterance of another agent both for these conditions and for whether it indeed constitutes an attack. Unfortunately, for practically relevant logics, like first-order logic, all of these checks are undecidable problems, so they have to be performed in a resource-limited fashion, with a default answer used whenever the agent runs out of resources before the check produces a definite result.
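
Concretely, such a resource-limited check might look like the following sketch. The three-valued prover interface refutes(formulae, step_limit), returning True, False, or None when the step limit is reached, is an assumption made purely for illustration, not an existing prover API; the two conditions tested are the ones mentioned above.

    # Sketch of a resource-limited attack check.  refutes(formulae, step_limit)
    # is a hypothetical stand-in for a bounded theorem prover: True means a
    # contradiction was derived from the formulae, False means none exists,
    # None means the step limit was reached before a definite answer was found.

    from enum import Enum

    class CheckResult(Enum):
        ACCEPT = "accept"
        REJECT = "reject"

    def check_attack(attacking_set, attacked_formula, refutes, step_limit,
                     default=CheckResult.REJECT):
        """Accept the attack only if both conditions can be verified within the limit."""
        # Condition 1: the attacking set must not already be contradictory.
        set_contradictory = refutes(attacking_set, step_limit)
        if set_contradictory is None:
            return default                      # out of resources -> fall back to default
        if set_contradictory:
            return CheckResult.REJECT           # "attack" built on inconsistent formulae
        # Condition 2: the attacking set together with the attacked formula must be
        # contradictory, i.e. the set really does contradict the attacked formula.
        attack_holds = refutes(attacking_set | {attacked_formula}, step_limit)
        if attack_holds is None:
            return default
        return CheckResult.ACCEPT if attack_holds else CheckResult.REJECT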

This resource limitation (or, more precisely, the undecidability of most relevant logics) opens the door for what we call malicious argumentation. By making arguments complicated enough that a check cannot be completed within the resource limit, all kinds of phony arguments, and therefore utterances, can be constructed. These utterances can then be used either to support otherwise unsupportable arguments of the agent using them or to attack otherwise unattackable arguments of other agents. Naturally, the default when a check runs out of resources could be to reject the utterance being checked, but in many cases this is too much of a limitation, since perfectly valid utterances will then also be rejected and the agent will become more and more isolated. Malicious argumentation is therefore a serious threat to systems that use argumentation between agents.
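
To see how this can be exploited, here is a toy continuation of the sketch above, reusing check_attack and CheckResult and again purely illustrative rather than taken from the papers below: slow_refutes stands in for a prover whose effort grows with the size of the formulae, so a sufficiently padded phony attack is accepted under an "accept on timeout" default, while a perfectly valid attack is thrown away under a "reject on timeout" default.

    # slow_refutes is a toy stand-in for a bounded prover: effort is modeled
    # crudely by total formula length, and the only contradiction it knows
    # is the pair "P" / "not P".
    def slow_refutes(formulae, step_limit):
        if sum(len(f) for f in formulae) > step_limit:
            return None                                  # resources exhausted
        return "P" in formulae and "not P" in formulae   # toy contradiction test

    honest_attack = {"not P"}                            # genuinely contradicts "P"
    phony_attack = {"Q -> " + "R and " * 50 + "Q"}       # padded to blow the step budget

    # With an "accept on timeout" default, the unverifiable phony attack slips through:
    print(check_attack(phony_attack, "P", slow_refutes, 40,
                       default=CheckResult.ACCEPT))      # CheckResult.ACCEPT
    # With a "reject on timeout" default, even the honest attack is discarded,
    # slowly isolating the agent from legitimate argumentation partners:
    print(check_attack(honest_attack, "P", slow_refutes, 2,
                       default=CheckResult.REJECT))      # CheckResult.REJECT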

We have looked at how malicious argumentation can be performed in:

  • Kuipers, A. ; Denzinger, J.:
    Pitfalls in Practical Open Multi Agent Argumentation Systems: Malicious Argumentation,
    Proc. COMMA 2010, Desenzano del Garda, IOS, 2010, pp. 323-334.
  • Kuipers, A. ; Denzinger, J.:
    A Challenge for Multi-Party Decision Making: Malicious Argumentation Strategies,
    Proc. FLAIRS 2017, Marco Island, 2017, pp. 574-579.



Last Change: 15/8/2017