Trust Frameworks as Analog to Digital Converters

Issue/Topic: Trust Frameworks as Analog to Digital Converters

Session: Tuesday 1B

Conference: IIW-11 November 2-4, Mountain View

Convener: Scott David

Notes-taker(s): Jamie Clark


Tags:

trust_framework, taxonomy, contracts, risk_allocation, UI


Discussion notes:

Attached file: Nov 2 Rethinking Personal Data Workshop.pdf

"Facilitating Personal Data Transactions in a Secured Manner on a Global Scale": part of a presentation for the WEF (Davos) prep session on the "Rethinking Personal Data" workshop, New York, September 2010; should be posted shortly to the OIX website.

What's the international law of identity?

There isn't any.

Can we do things with law and/or rules and/or tech to weave together the disparate systems that interact?


What should identity systems do? Meet "system participant" (user) needs. Such as:

  • data subjects need identity integrity
  • relying parties need assurance
  • identity providers need risk reduction

These high-level 'needs' share some basic lower-level functional requirements, like security, reliability, UI, etc.

What can tech and law do about this?

  • technology tools guide data movement & protect data at rest
  • legal rules create duties to incent behavior

-- The vast majority of data breaches I've seen (S. David) were human error, not tech failure. So the human rules and incentives matter.

A "Trust Framework" is a possible documentation style ("term sheet"?) for the agreed risk and reliance arrangements between system participants.
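
As a rough illustration of the "term sheet" idea, the sketch below shows how agreed duties and risk allocations between system participants might be captured as structured data rather than prose. This is a minimal sketch in Python; the field names, participants, and dollar figures are invented for the example and do not come from any actual trust framework.

    # Illustrative only: a hypothetical trust framework "term sheet" as
    # structured data. Field names and values are assumptions for this sketch.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Duty:
        owed_by: str                  # e.g. "identity_provider"
        owed_to: str                  # e.g. "relying_party"
        obligation: str               # plain-language statement of the duty
        liability_cap_usd: Optional[float] = None   # agreed risk allocation, if any

    @dataclass
    class TrustFrameworkTermSheet:
        name: str
        participants: list = field(default_factory=list)
        duties: list = field(default_factory=list)

    sheet = TrustFrameworkTermSheet(
        name="Example Federation",
        participants=["data_subject", "identity_provider", "relying_party"],
        duties=[
            Duty("identity_provider", "relying_party",
                 "Assert identities at the agreed Level of Assurance",
                 liability_cap_usd=100_000),
            Duty("relying_party", "data_subject",
                 "Use attributes only for the stated purpose"),
        ],
    )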

There is some "low hanging fruit" of law and practice guiding these duties:

  • In the US: NSTIC, Levels of Assurance. In some states, data breach laws.
  • Privacy laws like HIPAA, Gramm-Leach-Bliley, FICA, etc.
  • Fair Info Practice Principles (originally US DHEW 1973) - levels of control

The ABA is drafting a report on Federated Identity that addresses a taxonomy of issues and actors; OIX is building a "risks wiki"; some of this work is out for public review now, with posted work product expected early 2011(?).

One difficulty is operationalizing assurance, which end users mostly process as emotional states like "trust" and "reliability." Quantification is needed to clear the semantic fog here.
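
One hedged illustration of what such quantification could look like: the Python sketch below maps a few verifiable facts about an identity process onto a 1-4 scale, in the spirit of the Levels of Assurance mentioned above. The factors and thresholds are invented for the example, not taken from any actual assurance standard.

    # Illustrative only: one way to turn fuzzy "trust" into a number.
    # The factors and cutoffs are assumptions made for this sketch.
    def assurance_level(identity_proofed: bool,
                        multi_factor_auth: bool,
                        hardware_credential: bool) -> int:
        """Map a few verifiable facts about an identity process to a 1-4 level."""
        level = 1                        # self-asserted identity only
        if identity_proofed:
            level = 2                    # some proofing of the claimed identity
        if identity_proofed and multi_factor_auth:
            level = 3                    # proofing plus multi-factor authentication
        if identity_proofed and multi_factor_auth and hardware_credential:
            level = 4                    # adds a hardware-protected credential
        return level

    print(assurance_level(True, True, False))   # -> 3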

The idea here is to address some recurring liability issues, but not all: an 80/20 approach, not boiling the ocean. It may be industry groups and self-regulatory efforts that give rise to the best evolving solutions.

The first step is a candidate common analytical framework, to get to "apples-to-apples" comparisons on some of the risks, practices, and concepts.

Inspirational vision: UI simplification - risks and control issues displayed simply, like red/yellow/green traffic-light displays.
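
A minimal sketch of that traffic-light idea, collapsing a risk score into a simple signal for end users; the score range and the cutoffs (0.33, 0.66) are arbitrary placeholders chosen only for illustration.

    # Illustrative only: mapping a risk score to a red/yellow/green display.
    # The cutoffs are placeholders, not values from any trust framework.
    def traffic_light(risk_score: float) -> str:
        """risk_score is assumed normalized to 0.0 (low risk) .. 1.0 (high risk)."""
        if risk_score < 0.33:
            return "green"    # proceed
        if risk_score < 0.66:
            return "yellow"   # proceed with caution / review controls
        return "red"          # stop: risk exceeds agreed tolerance

    print(traffic_light(0.2))   # -> green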

Audience: Frameworks generally get developed in the context of silos - non-interoperable, specialized cases. Is there a "metalanguage" for crosswalks among the privacy practices of those siloed players? Or 15% of them, anyway, for scalability's sake.

There is a PPT deck associated with this session: "nov 2 Rethinking Personal Data Workshop.ppt"