
A vector model of trust to reason about trustworthiness of entities for developing secure systems

Abstract

Security services rely to a great extent on some notion of trust. In all security mechanisms there is an implicit notion of the trustworthiness of the involved entities. Security technologies such as cryptographic algorithms, digital signatures, and access control mechanisms provide confidentiality, integrity, authentication, and authorization, and thereby allow some level of 'trust' in other entities. However, these techniques provide only a restrictive (binary) notion of trust and do not suffice to express the more general concept of 'trustworthiness'. For example, a digitally signed certificate does not tell whether there is any collusion between the issuer and the bearer. In fact, without a proper model and mechanism to evaluate and manage trust, it is hard to enforce trust-based security decisions. There is therefore a need for a more general model of trust. Yet, even today, there is no accepted formalism for specifying and reasoning about trust. Secure systems are built under the premise that concepts like "trustworthiness" or "trusted" are well understood, without agreement on what "trust" means, what constitutes trust, how to measure it, how to compare or compose two trust values, and how a computed trust value can help to make a security decision.
To help answer such questions, this dissertation proposes a new vector model of trust. The model has several powerful features, such as the ability to numerically evaluate the different parameters influencing trust, to express different degrees of trust quantitatively, to model the dependence of trust on time and on trust itself, and to formalize trust comparison and trust composition operations. This work also formally defines trust contexts and the relationships between different contexts, and shows their importance in trust evaluation.
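The abstract describes the model only at a high level. As a rough illustration of what a vector-style trust value with comparison and composition operations might look like, here is a minimal Python sketch. The parameter names (experience, knowledge, recommendation), the [-1, 1] value range, the weights, and the comparison and composition rules are all illustrative assumptions, not the dissertation's actual definitions.

```python
# Minimal sketch of a vector-style trust value; NOT the dissertation's
# exact formulation. Parameters, ranges, weights, and the comparison and
# composition rules below are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustVector:
    experience: float      # assumed parameter, value in [-1, 1]
    knowledge: float       # assumed parameter, value in [-1, 1]
    recommendation: float  # assumed parameter, value in [-1, 1]

    def degree(self, weights=(0.4, 0.3, 0.3)) -> float:
        """Collapse the vector to a scalar trust degree (assumed weighting)."""
        w_e, w_k, w_r = weights
        return w_e * self.experience + w_k * self.knowledge + w_r * self.recommendation

    def more_trusted_than(self, other: "TrustVector") -> bool:
        """Trust comparison: compare scalar degrees (one possible rule)."""
        return self.degree() > other.degree()

    def compose(self, other: "TrustVector") -> "TrustVector":
        """Trust composition: here, a simple component-wise average."""
        return TrustVector(
            (self.experience + other.experience) / 2,
            (self.knowledge + other.knowledge) / 2,
            (self.recommendation + other.recommendation) / 2,
        )

# Example: compare two entities' trustworthiness in a fixed context.
a = TrustVector(experience=0.8, knowledge=0.5, recommendation=0.6)
b = TrustVector(experience=0.2, knowledge=0.9, recommendation=0.4)
print(a.more_trusted_than(b))  # True under the assumed weights
print(a.compose(b))
```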
The primary contributions of the dissertation are: (1) a flexible quantitative model of trust that is based on different parameters and provides multiple levels of trust. The model is extensible because the parameters are independent of each other; adding a new parameter does not affect the other features of the model. The model can evaluate trust even when not all of the relevant information is available. (2) A formalization of trust context and of the relationships between different contexts. This formalization helps make reasoned decisions about trust in a context for which no direct information is available. Together, these contributions show that the model is useful for making fine-grained, security-related decisions in different security contexts where other mechanisms or other trust models are not sufficient.
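To make the second contribution concrete, the following toy sketch shows one way trust in an unfamiliar context could be inferred from a related context via a similarity factor. The context names, similarity values, and scaling rule are hypothetical and are not taken from the dissertation's formalism.

```python
# Illustrative only: infer trust in a context with no direct information
# from a related context, scaled by an assumed similarity factor in [0, 1].
context_similarity = {
    ("lend_money", "lend_car"): 0.7,   # hypothetical related contexts
    ("lend_money", "babysit"): 0.1,
}

direct_trust = {"lend_money": 0.8}     # trust is known only for this context

def inferred_trust(target_context: str) -> float:
    """Return direct trust if known; otherwise scale trust from a related context."""
    if target_context in direct_trust:
        return direct_trust[target_context]
    best = 0.0
    for (known, related), sim in context_similarity.items():
        if related == target_context and known in direct_trust:
            best = max(best, sim * direct_trust[known])
    return best

print(inferred_trust("lend_car"))  # 0.56 under the assumed similarity
print(inferred_trust("babysit"))   # 0.08
```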
The effectiveness of the model is validated by estimating the relative trustworthiness of two security solutions to denial-of-service attacks in an e-commerce platform (namely, a cookie solution and a filtering mechanism) and comparing the outcome with the result known from practice. Trust-based decision making in several security scenarios is also discussed to show potential applications of the model.

Subject

secure systems
security
trust
trustworthiness
computer science
