Legal Issues Underpinning of UMA

From IIW

Issue/Topic: UMA and the law

Session: Tuesday – Session 2 - B

Conference: IIW 10, May 17-19, 2010 – Complete Set of Notes

Convener: Jeff Stollman

Notes-taker(s): Eve Maler


#UMA #authorization #contract #agreement #clickwrap #terms-of-service

Discussion Notes:

UMA main site:

UMA unfinished Legal Considerations document:

Attending: Jeff Stollman (leader), Heather West, Iain Henderson, Eve Maler, Mason Lee, Judith Bush, Alex Smolen, Stacy Pitsillides, Brian ...worth (? - didn't catch), Mark Lizar

Some participants in the UMA Work Group at the Kantara Initiative have been focusing specifically on legal considerations. One concern is scalability: even if we constrain UMA initially to something that works simply, we want it eventually to scale much larger.

UMA holds out the prospect of an individually negotiated contract between an authorizing user and a requesting party, which starts out with the user's wishes being the initial terms. The user's wishes can be carried out by an "authorization manager" that decides whether a requester application deserves to get an access token (a la OAuth) for accessing some host.
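The token-granting flow described above can be sketched in code. This is a minimal illustration, not the UMA protocol itself: the class and method names (`AuthorizationManager`, `set_policy`, `request_token`) are invented for this sketch, and a real deployment would involve OAuth endpoints, scopes, and signed tokens.

```python
# Hypothetical sketch of an UMA-style authorization manager deciding
# whether a requester earns an access token for a resource at some host.
# All names here are illustrative, not from the UMA spec or any library.
import secrets

class AuthorizationManager:
    def __init__(self):
        # user-authored policies: resource -> claims the requester must present
        self.policies = {}

    def set_policy(self, resource, required_claims):
        """The authorizing user's wishes form the initial terms."""
        self.policies[resource] = set(required_claims)

    def request_token(self, resource, presented_claims):
        """Grant an OAuth-style access token only if the requester's
        claims satisfy the user's policy for this resource."""
        required = self.policies.get(resource)
        if required is None or not required.issubset(presented_claims):
            return None  # requester does not qualify
        return secrets.token_urlsafe(16)

am = AuthorizationManager()
am.set_policy("https://host.example/photos", {"over-18", "agrees-to-terms"})
token = am.request_token("https://host.example/photos",
                         {"over-18", "agrees-to-terms"})
```

The point of the sketch is that the user's policy, not the host's terms of service, is the first thing the requester encounters.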

Liability is the place where all federated identity seems to fall apart! As Tom Smedinghoff has pointed out to us, we need to figure out what legal theory of liability should be in play. E.g., there's contract, tort, negligence… Heather explains that contract law doesn't see the "I Agree" clicking process as an example of a contract. It's just terms of service (clickwrap). The OIX approach is going in the direction of contracts, which is much stronger. The Computer Fraud and Abuse Act is what has been used most often to prosecute TOS violations, same as for prosecuting hackers -- and TOS is simply not very strong.

However, if UMA were used to ask for positively asserted claims vs. just a user interface that asks for "click to agree", or if UMA were deployed within a trust framework that is contract-based, it's possible to apply a contract theory of liability.

Iain's Information Sharing work at Kantara, and his work at (?), focuses in part on "volunteered information". His lawyers have said that volunteered information offered with an advertisement of the terms would fall under contract law. Europe has the eight principles of privacy protection, and if these principles are part of the offered contract terms, the protection becomes quite strong.

Contract theory is the only one that scales internationally. Various boundaries are relevant to this question: domestic, treaty-member nations vs. non-treaty nations, etc. Again, certified parties to a trust framework are a strong way to get some level of contract protection.

Eve sketches two areas of UMA that seem likely to affect which liability theory applies.

One is the particular user experience on the requesting-party end. E.g., if it's Bob (person-to-person or Alice-to-Bob sharing), we don't necessarily want to make him agree to the same privacy policies, to the same "strength", as if it's a company (person-to-service sharing where the service is run by a company acting on its own behalf). And you might have an "I Agree" button for Bob, but not the company.

The other is the particular method that Alice uses to provision the requesting party with knowledge of the resource. The Data Dominatrix method has Alice pasting (or whatever) a URL in the recipient's interface, and the latter discovers the constraints on trying to get the information. The Hey Sailor method has Alice advertising something like a personal RFP, and Iain notes that if the advertisement also includes the offered terms, that would use standard contract law.

Brian points out that the UMA proposition is a lot like DRM for personal information. Alex observes that the power imbalance between people and companies seems to make that okay. :-)

If you want to have a contract between the authorizing and requesting parties, but the intermediary parties only have pairwise TOS's, the TOS's can weaken the contract at the ends.

The UMA protocol uses the technical notion (not the legal notion) of "claims" for finding out more about the requesting party to figure out if they qualify to get access. What if the authorization manager demanded a claim saying (in a verifiable way) that the requesting party is a certified member of a trust framework? This could mitigate the risk of having any part of the ecosystem using TOS's versus a contract.
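The "verifiable claim of trust-framework membership" idea can be sketched as follows. This is purely illustrative: the HMAC signature stands in for whatever signed-claim format a real trust framework would define, and the function names and shared key are assumptions of this sketch, not part of UMA.

```python
# Illustrative sketch: an authorization manager demanding a verifiable
# membership claim before granting access, instead of trusting a bare
# "I Agree" click. The HMAC construction is a stand-in for a real
# signed-claim format; all names here are hypothetical.
import hmac
import hashlib

# Assumption: the AM shares this secret with the trust framework operator.
FRAMEWORK_KEY = b"shared-secret-of-the-trust-framework"

def sign_membership_claim(party_id):
    """The trust framework certifies a member by signing its identifier."""
    return hmac.new(FRAMEWORK_KEY, party_id.encode(), hashlib.sha256).hexdigest()

def claim_is_valid(party_id, signature):
    """The AM verifies the claim cryptographically rather than trusting it."""
    expected = sign_membership_claim(party_id)
    return hmac.compare_digest(expected, signature)

sig = sign_membership_claim("requester.example")
```

A requester that cannot produce a valid signature for its own identifier is turned away, which is how the AM keeps TOS-only parties out of the contract-based ecosystem.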

Heather points out the example of the Veterans Administration, which has been incredibly successful with its e-health records because the entire ecosystem is under very strong contractual privacy protections; this makes the system easy for ordinary humans to understand and consistently applicable by other parties in the ecosystem. Could it be that more stringent but more consistent privacy controls would be good for the growth of the commercial market? However, since "privacy" and "security" generally aren't good selling points, it's hard to use this rationale to argue business benefits.

What would incentivize hosts to accept an authorization manager's policy decisions, and all the liability implications thereof? The theory of the UMA folks is that most websites offer terrible (or no) selective sharing options; if a host could offer selective sharing as a value-add simply by adding an "UMAnizing" module, that would be attractive.

How would the changes in various contracts in the ecosystem be handled? UMA could help an authorizing user decide to revoke access, but what if other entities in the system change their policies? If there were a standard for giving notice of changes (maybe using Atom feeds?), that could be used.
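The "notice of changes via Atom feeds" suggestion can be sketched briefly. The element names follow the Atom syndication format (RFC 4287), but the idea of a per-party "policy change" feed and the function below are assumptions of this sketch, not anything standardized.

```python
# Sketch of the "notice of policy changes via Atom" idea: each policy
# change becomes an Atom entry that other parties in the ecosystem can
# poll. Element names follow Atom (RFC 4287); the feed's content model
# (a date plus a summary per change) is invented for illustration.
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def change_notice_feed(changes):
    """Build an Atom feed from (timestamp, summary) pairs."""
    ET.register_namespace("", ATOM)
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = "Policy change notices"
    for when, summary in changes:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}updated").text = when
        ET.SubElement(entry, f"{{{ATOM}}}summary").text = summary
    return ET.tostring(feed, encoding="unicode")

xml = change_notice_feed([("2010-05-18T00:00:00Z",
                           "Host terms of service updated")])
```

An authorizing user's software could subscribe to such feeds and prompt the user to reconsider (or revoke) access when an intermediary's policies change.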