Unintentional Consequences of What We Build

From IIW

Day/Session: Tuesday 5C

Convener: Annabelle Backman

Notes-taker(s): Annabelle Backman


Tags for the session – technology discussed/ideas considered:

Ethics


Discussion notes, key understandings, outstanding questions, observations, and, if appropriate to this discussion: action items, next steps:


Two modes:

  • Negative human behavior applied to/within technology, i.e., badly behaving users
  • Technology that behaves badly itself, i.e., biased AI algorithms
    • This is technology codifying negative human behavior


Online disinhibition leads to worse human behavior


Do we have a responsibility to protect users from themselves? No consensus.

  • Users can’t be responsible for something they don’t understand, e.g., massive Terms of Use.
    • Aggregating forces and snowball effects create pseudo-coercion; “opt-in” may not truly mean “opt-in”
  • Users should be held accountable for the contracts they agree to.
    • Users happily click through consent prompts, then complain when their data is used in unexpected ways.
  • Regulation is expensive and inhibits startups and new competition.
  • Need to actually fix the way people think or don’t think.
  • Are we always responsible? Or never responsible?
    • Responsibility to advise and inform.

What happens when things go wrong? Example: Aadhaar being misused, leading to fraud

  • Similar to the SSN: it has many useful properties, so it is natural for people to use it widely
  • Need to consider if all the properties of a system/solution are desirable.