Trusted system

In the security engineering subspecialty of computer science, a trusted system is a system that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy.

The meaning of the word "trust" is critical here, as it does not carry the meaning that might be expected in everyday usage. A system trusted by a user is one that the user feels safe to use and trusts to perform tasks without secretly executing harmful or unauthorised programs; trusted computing, by contrast, refers to whether programs can trust the platform to be unmodified from what is expected, regardless of whether those programs are innocent, malicious, or performing tasks that are undesired by the user.

A trusted system can also be seen as a level-based security system, in which protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as Unclassified (U), Confidential (C), Secret (S), Top Secret (TS), and beyond. Such systems also enforce the policies of no read-up and no write-down, as sketched below.
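As a minimal sketch (the level ordering follows the military example above; the class and function names are illustrative, not taken from any particular system), the two rules might be expressed as:

```python
from enum import IntEnum

class Level(IntEnum):
    """Hierarchical classification levels; a higher value means more sensitive."""
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def can_read(subject_level: Level, object_level: Level) -> bool:
    """No read-up: a subject may only read objects at or below its own level."""
    return subject_level >= object_level

def can_write(subject_level: Level, object_level: Level) -> bool:
    """No write-down: a subject may only write to objects at or above its own level."""
    return subject_level <= object_level

# A SECRET-cleared subject may read CONFIDENTIAL data,
# but may not write into a CONFIDENTIAL object.
assert can_read(Level.SECRET, Level.CONFIDENTIAL)
assert not can_write(Level.SECRET, Level.CONFIDENTIAL)
```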

                                     

1. Trusted systems in classified information

A subset of trusted systems ("Division B" and "Division A") implement mandatory access control (MAC) labels; as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel mode, compartmented mode, dedicated mode, and system-high mode. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.

Central to the concept of U.S. Department of Defense-style "trusted systems" is the notion of a "reference monitor", an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is (a) tamper-proof, (b) always invoked, and (c) small enough to be subject to independent testing, the completeness of which can be assured. Per the U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", a set of "evaluation classes" were defined that described the features and assurances that the user could expect from a trusted system.
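One way to picture these three requirements is as a single mediation point that every access request must pass through. The following is only a structural sketch: the class names and the toy policy are invented for illustration, and a real reference monitor would live in the kernel or hardware TCB rather than in application code.

```python
class ReferenceMonitor:
    """Illustrative choke point: every access decision passes through authorize().
    In a real trusted system this logic is tamper-proof and cannot be bypassed;
    here it only sketches the structure."""

    def __init__(self, policy):
        # policy is a callable: (subject, obj, operation) -> bool
        self._policy = policy

    def authorize(self, subject: str, obj: str, operation: str) -> bool:
        return self._policy(subject, obj, operation)

class GuardedStore:
    """Objects are reachable only through the monitor, so it is 'always invoked'."""

    def __init__(self, monitor: ReferenceMonitor):
        self._monitor = monitor
        self._data = {}

    def read(self, subject: str, name: str):
        if not self._monitor.authorize(subject, name, "read"):
            raise PermissionError(f"{subject} may not read {name}")
        return self._data.get(name)

    def write(self, subject: str, name: str, value) -> None:
        if not self._monitor.authorize(subject, name, "write"):
            raise PermissionError(f"{subject} may not write {name}")
        self._data[name] = value

# Toy policy for illustration: only "auditor" may read "log"; nothing else is allowed.
def policy(subject, obj, operation):
    return (subject, obj, operation) == ("auditor", "log", "read")

store = GuardedStore(ReferenceMonitor(policy))
print(store.read("auditor", "log"))   # permitted (returns None: the store is empty)
# store.read("guest", "log") would raise PermissionError
```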

Key to the provision of the highest levels of assurance (B3 and A1) is the dedication of significant system engineering toward minimization of the complexity (not size, as often cited) of the trusted computing base (TCB), defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy.

An inherent engineering conflict would appear to arise in higher-assurance systems in that the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".

In contrast to the TCSEC's precisely defined hierarchy of six evaluation classes - the highest of which, A1, is featurally identical to B3, differing only in documentation standards - the more recently introduced Common Criteria (CC) - which derive from a blend of more or less technically mature standards from various NATO countries - provide a more tenuous spectrum of seven "evaluation classes" that intermix features and assurances in an arguably non-hierarchical manner and lack the philosophic precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support - even encourage - an inter-mixture of security requirements culled from a variety of predefined "protection profiles". While a strong case can be made that even the more seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.

The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, working under the close technical guidance and economic sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Army Electronic Systems Command (Fort Hanscom, MA), devised what is known as the Bell-LaPadula model, in which a more or less trustworthy computer system is modeled in terms of objects and subjects (active entities - perhaps users, or system processes or threads operating on behalf of those users - that cause information to flow among objects). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows.

At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. She defined a generalized notion of "labels" - corresponding more or less to the full security markings one encounters on classified military documents, e.g., TOP SECRET WNINTEL TK DUMBO - that are attached to entities. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report - entitled Secure Computer System: Unified Exposition and Multics Interpretation - whereby labels attached to objects represent the sensitivity of the data contained within the object, while labels attached to subjects represent the trustworthiness of the user executing the subject. The concepts are unified by two properties: the "simple security property" (a subject can only read from an object that it dominates, i.e., no read-up) and the "*-property", or confinement property (a subject can only write to an object that dominates it, i.e., no write-down), each of which engenders one or more model operations.
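Because a full label pairs a hierarchical level with a set of compartments, "dominates" is a partial order (the lattice Denning described) rather than a simple numeric comparison. The following is a hedged sketch, with illustrative names and a toy level table, of how the two properties might be expressed over such labels:

```python
from dataclasses import dataclass
from typing import FrozenSet

# Toy level ordering for the sketch; real markings are richer than this.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    """A security label: a hierarchical level plus a set of compartments."""
    level: str
    compartments: FrozenSet[str] = frozenset()

def dominates(a: Label, b: Label) -> bool:
    """a dominates b iff a's level is at least b's level and a's compartments
    are a superset of b's: the partial order of the label lattice."""
    return LEVELS[a.level] >= LEVELS[b.level] and a.compartments >= b.compartments

def may_read(subject: Label, obj: Label) -> bool:
    """Simple security property: the subject must dominate the object (no read-up)."""
    return dominates(subject, obj)

def may_write(subject: Label, obj: Label) -> bool:
    """*-property: the object must dominate the subject (no write-down)."""
    return dominates(obj, subject)

analyst = Label("TOP SECRET", frozenset({"TK"}))
report = Label("SECRET", frozenset({"TK", "WNINTEL"}))

# The analyst lacks the WNINTEL compartment, so neither label dominates the
# other and reading is denied even though the hierarchical level would allow it.
assert not may_read(analyst, report)
```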

                                     

2. Trusted systems in trusted computing

The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.

                                     

3. Trusted systems in policy analysis

Trusted systems in the context of national or homeland security, law enforcement, or social control policy are systems in which some conditional prediction about the behavior of people or objects within the system has been determined prior to authorizing access to system resources.

For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and the use of credit or identity scoring systems in financial and anti-fraud applications; in general, they include any system (i) in which probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or allocating resources against likely threats (including their use in the design of system constraints to control behavior within the system), or (ii) in which deviation analysis or systems surveillance is used to ensure that behavior within systems complies with expected or authorized parameters.
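As a purely illustrative sketch of case (i) - the threshold, signal names, and scoring rule are invented placeholders, not any real threat model - a default-deny authorization decision of this kind might look like:

```python
def authorize(risk_signals: dict[str, float], threshold: float = 0.3) -> bool:
    """DEFAULT=DENY: access is granted only if an explicit risk assessment passes;
    any missing or failed evaluation leaves the default denial in place.
    The aggregation rule below is a toy placeholder."""
    try:
        # Naive aggregate: the highest individual risk signal dominates.
        risk = max(risk_signals.values())
    except (ValueError, TypeError):
        return False  # no usable data, so stay with the default denial
    return risk < threshold

# A subject whose signals all score low is granted access;
# one high-risk signal keeps the default denial in place.
print(authorize({"watchlist": 0.0, "document_mismatch": 0.1}))  # True
print(authorize({"watchlist": 0.9, "document_mismatch": 0.1}))  # False
```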

The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur (see Cesare Beccaria, On Crimes and Punishments, 1764) to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints (see Michel Foucault, Discipline and Punish).

In this emergent model, "security" is geared not towards policing but towards risk management through surveillance, exchange of information, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberties, and to a broader philosophical debate about appropriate forms of social governance.



                                     

4. Trusted systems in information theory

Trusted systems in the context of information theory are based on Ed Gerck's definition of trust: "Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel."

In Information Theory, information has nothing to do with knowledge or meaning. In the context of Information Theory, information is simply that which is transferred from a source to a destination, using a communication channel. If, before transmission, the information is available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect - as measured by the uncertainty of the party as to what the message will be.
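In Shannon's terms, the information carried by a received message is its surprisal (the negative logarithm of its probability), so a message the destination already expected with certainty transfers zero information. A small illustrative sketch:

```python
import math

def surprisal(p: float) -> float:
    """Information content, in bits, of receiving a message of probability p."""
    return -math.log2(p)

def entropy(dist: list[float]) -> float:
    """Expected surprisal: the receiver's average uncertainty about the message."""
    return sum(p * surprisal(p) for p in dist if p > 0)

print(surprisal(1.0))        # -0.0: a message certain in advance carries zero bits
print(surprisal(1 / 1024))   # 10.0: an unlikely message carries many bits
print(entropy([1 / 8] * 8))  # 3.0: a uniform 8-way choice averages 3 bits
```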

Likewise, trust as defined by Gerck has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal, and other overly variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological - trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), where all the necessarily different subjective and intersubjective realizations of trust in each subsystem (man and machines) may coexist.

Taken together in the model of Information Theory, information is what you do not expect and trust is what you know. Linking both concepts, trust is seen as qualified reliance on received information. In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. The deepening of these questions leads to complex conceptions of trust which have been thoroughly studied in the context of business relationships. It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information systems in which it is conceived: higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.

An introduction to the calculus of trust (for example: if I connect two trusted systems, are they more or less trusted when taken together?) is given in.

The IBM Federal Software Group has suggested that this definition provides the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered requisite for achieving the desired collaborative, service-oriented architecture vision.

                                     