Personas and Privacy
Session Topic: Personas & Privacy
Wednesday 2A
Convener: Annabelle Richard
Notes-taker(s): Dave Sanford
Because some attributes need to be shared across personas, attribute firewalls between personas were discussed, along with the need to allow "poking holes" through those firewalls as needed.
There was some pushback against the initial discussion's assumption that these personas are used in a federated space.
A distinction was made between personas, attributes and context. One definition of privacy put forth was privacy = 'contextual integrity'.
There was also discussion of separating three threads of the conversation:
a) user behavior required to maintain separation of personas (hard to maintain consistency)
b) conceptual frameworks to allow definition and implementation of personas
c) tools that actually allow users to have and manage multiple personas
There was discussion of big data business models hoovering up data and being able to break persona separation. For most people this ability to de-anonymize them doesn't matter; for a few it is a matter of life and death.
There was discussion of not tying personas to account IDs. One thought was the idea of mapping personas to times of day or calendar attributes. Various exceptions to this were identified in normal human behavior (personal interruptions during work, etc.).
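The time-of-day idea could be sketched roughly as follows (an illustrative Python sketch only; the persona names, hours and the persona_for() helper are assumptions for the example, and a real tool would need manual overrides for the exceptions noted above):

    from datetime import datetime

    # Hypothetical schedule mapping hours of the day to personas.
    SCHEDULE = [
        (range(9, 17), "work"),       # 09:00-16:59 -> work persona
        (range(17, 23), "personal"),  # 17:00-22:59 -> personal persona
    ]
    DEFAULT_PERSONA = "personal"

    def persona_for(now: datetime) -> str:
        # Pick a persona from the hour of day; real behavior has exceptions
        # (personal interruptions during work) a fixed schedule cannot capture.
        for hours, persona in SCHEDULE:
            if now.hour in hours:
                return persona
        return DEFAULT_PERSONA

    print(persona_for(datetime.now()))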
The claim was made that if a common payment method or credential (bank account, credit card) is used across personas, those transactions will be linked in the cloud.
There was a continuing discussion of whether we want to assume good actors (Relying Parties, Identity Providers) in the cloud – or protect against them as bad actors (big data aggregators not honoring boundaries). The consensus was that we need to support and assume both to some extent, but that these are different problems. There was some discussion of the UMA authorization manager and its ability to support multiple personas.
There was discussion of two different ways in which context is created: either context is inferred from transactions, or the persona owner declares it.
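The two paths to context could be contrasted in a short sketch (illustrative only; the transaction fields and the inference rule are assumptions, not anything specified in the session):

    # Context inferred from transaction metadata (what aggregators do).
    def inferred_context(transaction: dict) -> str:
        if transaction.get("merchant_category") == "pharmacy":
            return "health"
        return "general"

    # Context declared by the persona owner (what user-controlled tools do).
    def declared_context(owner_choice: str) -> str:
        return owner_choice

    tx = {"merchant_category": "pharmacy", "amount": 12.50}
    print(inferred_context(tx))          # "health" - inferred, possibly wrong
    print(declared_context("shopping"))  # "shopping" - declared by the owner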
Not all personas are created equal. Some need to be strongly authenticated, others not so much.
It was pointed out that it only takes one mistake by the user to allow the big data mash-up parties to break persona separation – and once that separation is broken, it is impossible to repair without discarding the personas and creating new ones.
We want to encourage as many good-actor business decisions as possible (e.g. Amazon not sending email recommendations for LGBT products even if your history suggests you want them, because who knows who will read that email).
Our main job is to facilitate the building of tools, given the assumption that there are some external bad actors. The question was asked: where was the ethics review at Target that allowed the pregnancy prediction to be made and acted upon?
Providers need to recognize the ethical dimension of these decisions.
It is still an open issue whether separating context by ID is the only, or the best, approach. If we want to limit or question the linkage between context and IDs, we would start from what we want personas to do, identify how to call out contextual awareness, and then figure out how and when to link that to identifiers and authentication.
Personas can be thought of as sets of ‘public’ attributes.
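Read that way, a persona is little more than a whitelist of attributes it is willing to disclose, with everything else staying behind the firewall. A minimal sketch, assuming hypothetical attribute and persona names (note that sharing an attribute such as a credit card across personas is exactly what lets transactions be linked, as discussed above):

    # Illustrative only: each persona is a set of attributes it treats as public.
    FULL_PROFILE = {
        "legal_name": "...",
        "work_email": "...",
        "personal_email": "...",
        "credit_card": "...",
    }

    PERSONAS = {
        "work":     {"legal_name", "work_email"},
        "shopping": {"personal_email", "credit_card"},
    }

    def disclose(persona: str) -> dict:
        # Release only the attributes this persona treats as 'public'.
        allowed = PERSONAS[persona]
        return {k: v for k, v in FULL_PROFILE.items() if k in allowed}

    print(disclose("work"))  # legal_name and work_email only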
There was some discussion of the old pre-Internet models:
a) paying cash in the physical world (anonymous)
b) buying a drink after showing a driver's license – the transaction information never goes back to the DMV
There was a claim that targeted advertising is being found to be not that effective – however, at the moment more money is going into it.