Article ID | Journal | Published Year | Pages
---|---|---|---
422861 | Electronic Notes in Theoretical Computer Science | 2009 | 13
Reputation systems help users distinguish between trustworthy and malicious or unreliable services. They collect and evaluate available user opinions about services and about other users in order to estimate the trustworthiness of a given service. The usefulness of a reputation system depends heavily on its underlying trust model, i.e., the representation of trust values and the methods for calculating with these values. Several proposed trust models that represent degrees of trust, ignorance, and distrust exhibit undesired properties when conflicting opinions are combined: the proposed consensus operators usually eliminate the incurred degree of conflict and re-normalize the remaining values. We argue that this elimination causes counterintuitive effects and should therefore be avoided. We propose a new representation of trust values that also reflects the degree of conflict, and we develop a calculus and operators for computing reputation values. Our approach requires no re-normalization and thus avoids the undesired effects it causes.
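To make the idea concrete, the following is a minimal Python sketch of a conflict-preserving trust representation. The four-component `TrustValue`, the `combine` operator, and the specific mass assignments are illustrative assumptions modeled on Dempster-Shafer-style fusion; they are not the calculus defined in the paper, only an example of keeping conflict explicit instead of re-normalizing it away.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustValue:
    # Hypothetical representation: the four components sum to 1,
    # and `conflict` is kept explicit rather than normalized away.
    trust: float
    distrust: float
    ignorance: float
    conflict: float = 0.0

def combine(a: TrustValue, b: TrustValue) -> TrustValue:
    """Conjunctive fusion that records disagreement as conflict mass.

    Dempster's rule would divide the agreeing masses by (1 - k),
    re-normalizing the conflict away; here k stays in the result.
    """
    # Mass on contradictory pairs: one source asserts trust,
    # the other asserts distrust.
    k = a.trust * b.distrust + a.distrust * b.trust
    return TrustValue(
        trust=(a.trust * b.trust + a.trust * b.ignorance
               + a.ignorance * b.trust),
        distrust=(a.distrust * b.distrust + a.distrust * b.ignorance
                  + a.ignorance * b.distrust),
        ignorance=a.ignorance * b.ignorance,
        # Any prior conflict of either source remains conflicting.
        conflict=k + a.conflict + b.conflict * (1.0 - a.conflict),
    )

# Two sharply conflicting opinions: re-normalization would hide
# the disagreement; here it surfaces as a large conflict mass.
strong_yes = TrustValue(trust=0.9, distrust=0.0, ignorance=0.1)
strong_no = TrustValue(trust=0.0, distrust=0.9, ignorance=0.1)
print(combine(strong_yes, strong_no))
# -> trust ~ 0.09, distrust ~ 0.09, ignorance ~ 0.01, conflict ~ 0.81
```

Under this sketch the combined value still sums to 1, so no re-normalization step is needed, and a downstream consumer can see directly how much of the evidence was contradictory.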