Evidence-Based Trust in Distributed Agent Systems
Trust is a crucial basis for interactions among parties in large, open distributed systems. Yet, the scale and dynamism of such systems make
it infeasible for each party to have a direct basis for trusting another party. For this reason, the participants in an open system
must share information about trust. Traditional models of trust employ simple heuristics and ad hoc formulas, without adequate mathematical
justification. These models fail to properly address the challenges of combining trust from conflicting sources, dealing with malicious agents, and updating trust.
This dissertation understands an agent Alice's trust in an agent Bob in terms of Alice's certainty in her belief that Bob is trustworthy. Unlike previous approaches, it formulates certainty as a statistical measure defined over the probability distribution of the probability of a positive outcome.
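The abstract does not spell out the measure itself. As an illustrative sketch only (an assumption for exposition, not necessarily the thesis's exact definition), one common evidence-based instantiation models the outcome probability with a Beta distribution over r positive and s negative observations, and quantifies certainty as the deviation of that distribution from the uniform density:

```python
from math import exp, lgamma, log

def beta_pdf(x, r, s):
    """Density of Beta(r + 1, s + 1) at x, computed via log-gamma
    for numerical stability; r, s are positive/negative evidence counts."""
    a, b = r + 1.0, s + 1.0
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return exp(log_norm + (a - 1.0) * log(x) + (b - 1.0) * log(1.0 - x))

def certainty(r, s, n=100000):
    """Certainty as 0.5 * integral over [0, 1] of |f(x) - 1| dx,
    approximated with the midpoint rule. It is 0 with no evidence
    (the density is uniform) and grows toward 1 as evidence accumulates."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        total += abs(beta_pdf(x, r, s) - 1.0)
    return 0.5 * total / n

print(certainty(0, 0))    # no evidence: certainty 0
print(certainty(1, 0))    # one positive observation: exactly 0.25 analytically
print(certainty(50, 50))  # conflicting but plentiful evidence: high certainty
```

Under this sketch, certainty increases with the amount of evidence even when the evidence is evenly split, which separates "no information" from "well-attested 50/50 behavior".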
Specifically, this dissertation makes the following contributions. It
1. Develops a mathematically well-formulated approach for an evidence-based account of trust; proves desirable properties of certainty; and establishes a bijection between evidence and trust.
2. Defines a concatenation, an aggregation, and a selection operator to propagate trust, and proves desirable properties of these operators.
3. Develops trust update mechanisms and formally analyzes their properties.
4. Extends the definition of certainty from binary events to multivalued events; establishes a bijection between the Dempster-Shafer belief space and the evidence space; and defines a novel combination operator that is commutative and associative. In contrast with traditional combination operators, which ignore conflict and can yield counterintuitive results, the proposed operator treats conflict naturally.
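For context on item 4, the counterintuitive behavior of the classic Dempster rule, which discards conflicting mass and renormalizes the rest, can be seen in Zadeh's well-known example. The sketch below implements the standard rule (not the thesis's proposed operator), with mass functions represented as dicts from frozensets of outcomes to masses:

```python
def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions via the classic
    Dempster rule: mass on non-intersecting focal elements (conflict)
    is discarded and the remaining mass is renormalized."""
    combined = {}
    conflict = 0.0
    for b, p in m1.items():
        for c, q in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("Sources in total conflict; rule undefined.")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Zadeh's example: two sources that almost entirely disagree end up
# assigning all mass to the outcome both considered nearly impossible.
m1 = {frozenset({"a"}): 0.99, frozenset({"b"}): 0.01}
m2 = {frozenset({"c"}): 0.99, frozenset({"b"}): 0.01}
print(dempster_combine(m1, m2))  # all mass lands on {'b'}
```

Because 99.99% of the joint mass is conflicting and gets normalized away, the tiny residual agreement on "b" receives full belief, which is exactly the kind of outcome the proposed conflict-aware operator is designed to avoid.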
Advisor: Greg Byrd; Ting Yu; Dennis Bahler; Munindar Singh
School: North Carolina State University
School Location: USA - North Carolina
Source Type: Master's Thesis
Date of Publication: 01/21/2009