WebID and DNSSEC - combined session
Issue/Topic: WebID & DNSSEC – combined session
Convener: Henry Story (WebID) & Esther Makaay (DNSSEC)
Contact: Henry.firstname.lastname@example.org / email@example.com
Notes-taker(s): Esther Makaay
Tags for the session - technology discussed/ideas considered:
How WebID works (‘foaf’ + SSL) – a new way to construct trusted social webs. How DNSSEC can enable and strengthen identity use cases from the core of the Internet.
Discussion notes, key understandings, outstanding questions, observations, and, if appropriate to this discussion: action items, next steps:
For background information and slides, see: http://esw.w3.org/Foaf%2Bssl (and see the FAQ there)
The status of social networks these days: users are prisoners of the network, seeing only a small part of the information, while the ‘owners’ of the network see everything. Furthermore, all communication requires the communicating parties to be on the same network. With the telephone and e-mail, people can communicate across service providers.
The WebID protocol enables browser-based, one-click login to any server without the user needing to remember either a username or a password. It works in the great majority of desktop browsers as-is, using the SSL/TLS stack on which HTTPS and the whole of web e-commerce are based.
Of course there is a small twist in how it is used, since client-side certificates never took off. The trick is not to rely on Certificate Authorities, and to make certificates cheap to create and replace. This requires changing the TLS authentication procedure at the Relying Party’s server. First, the Social Web CMS should make it easy for a user to create any number of compliant X.509 certificates, one for each of their browsers. This is easy to do using the now-documented HTML keygen tag. Using this, the browser creates a key pair and sends the public key to the server, which then creates a certificate containing a URI identifying the user (e.g. http://bblfish.net/#hjs ) in the X.509 Subject Alternative Name field. The certificate is returned to the browser and automatically added to the keychain. By simultaneously placing at the document location ( http://bblfish.net/ ) a machine-readable description tying the WebID to the public key in the certificate, the client side is set up.
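The server-side certificate minting step can be sketched roughly as follows, using the third-party Python `cryptography` package (an assumption; the session described the browser generating the key pair via the keygen tag, so generating it server-side here is purely to keep the sketch self-contained):

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The user's WebID URI -- minted by their Social Web CMS and pointing at
# their profile document.
WEBID = "http://bblfish.net/#hjs"

# With the keygen tag the browser creates the key pair and only sends the
# public key; generating it here is an illustration-only shortcut.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Henry Story")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-issued: no Certificate Authority involved
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    # The WebID goes into the Subject Alternative Name as a URI entry.
    .add_extension(
        x509.SubjectAlternativeName([x509.UniformResourceIdentifier(WEBID)]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
print(san.get_values_for_type(x509.UniformResourceIdentifier))
```

The certificate carries no trust by itself; it only becomes meaningful once the matching public key is also published at the WebID document location.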
Second, in order to allow users to log in, the Relying Party need only make an HTTPS endpoint available that requests a client certificate. The browser will then ask the user to select one of their certificates, which will be sent to the server. The Relying Party’s HTTPS server then dereferences the WebID found in the certificate and checks that the profile document published there does indeed list the certificate’s public key, establishing that the user identified by that WebID controls the corresponding private key.
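Once the profile document has been fetched and parsed, the Relying Party’s check reduces to a set membership test. A minimal sketch in plain Python, with the fetching/parsing step stubbed out as a list of (modulus, exponent) pairs (the numbers below are made-up toy values):

```python
def webid_verified(cert_key, profile_keys):
    """Return True if the public key presented in the TLS client certificate
    appears among the keys published in the user's WebID profile document.

    cert_key     -- (modulus, exponent) pair taken from the certificate
    profile_keys -- pairs parsed from the document fetched at the WebID URI
                    (fetching and RDF parsing omitted in this sketch)
    """
    return (cert_key[0], cert_key[1]) in set(profile_keys)


# Toy numbers for illustration only -- real RSA moduli are 2048+ bits.
profile = [(0xCB24ED85, 65537)]
print(webid_verified((0xCB24ED85, 65537), profile))  # True: key is listed
print(webid_verified((0xDEADBEEF, 65537), profile))  # False: unknown key
```

Note that TLS itself has already proven possession of the private key during the handshake; the profile lookup only binds that key to the claimed WebID.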
The document to put at the WebID is defined semantically. It could be a list of simple PEM files, open-contact documents, or foaf files annotated with the cert ontology. To get feedback on the best way to do this, it is worth participating in the discussion on the foaf-protocols mailing list.
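As one possible shape for such a profile document, a foaf file annotated with the W3C cert ontology could look like the following Turtle fragment (the modulus value is a made-up placeholder):

```turtle
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

<http://bblfish.net/#hjs> a foaf:Person ;
    foaf:name "Henry Story" ;
    cert:key [
        a cert:RSAPublicKey ;
        cert:modulus "cb24ed85d64d794b"^^xsd:hexBinary ;  # placeholder value
        cert:exponent 65537
    ] .
```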
Question: Does WebID suffer the same problems as MS Infocard? (Moving it between different devices, using it on mobile devices.) Answer: Creating WebID certificates is cheap, so you can create one for each device on the fly.
Question: What if you lose your key? Answer: If you lose your key, your Social Web CMS - aka Personal Data Store - need only remove the public key from the WebID profile document. How would they know you are the owner of the account? You would of course need other secure methods of authentication, such as one-time passwords sent via SMS perhaps.
Question: Did PGP not show the web of trust to be a failure? Answer: PGP requires users to sign each other’s keys, which is cumbersome. Instead of placing information in the certificate, WebID places it on the web, where it can easily be changed without reissuing certificates.
Question: How do you authorize? Answer: Now – if you receive someone’s business card, you can add them to your profile as someone you know. Their e-mail can tie them back to their WebID using WebFinger, or their home page could be their WebID profile...
Question: How does that tie into the Web of Trust? Answer: If your friends/contacts link to other friends and contacts, then you can gain some assurance that someone you don’t know who is connecting to you is at least somewhat known, or trackable via your friend.
Question: If I hand over my laptop, do I hand over my certificates? Answer: That depends on how you handle your accounts. On OSX one can have a guest account that just deletes all information when the user logs out. No need to hand them your browser with your cookies and passwords available.
The WebID protocol relies on DNS and CAs for security. With DNSSEC re-inforcing DNS and potentially reducing the need for CAs, deployment of WebIDs will be even easier. For more information about DNSSEC itself, see the notes on the session “DNSSEC explained” at IIW10: http://iiw.idcommons.net/DNSSEC
DNS: the navigation protocol of the Internet. Enter a name (manually or by machine), together with the chosen protocol, and it either directs you to a location or provides the needed information. Using DNS for IdM solutions has been looked into in the past, but was discarded because of security issues. While DNS is very scalable and robust, it used to be untrustworthy. Until now: DNSSEC is being deployed world-wide. The root zone and many TLDs already use it or have announced deployment within the next year.
What does it mean to have DNSSEC? It means verifiable DNS answers. DNSSEC provides for origin authentication, data integrity and authenticated denial of existence. There’s a metaphor describing it as a sealed, transparent envelope around the (DNS-) message. Anyone can still read the message. The seal is attached to the envelope and applied by the sender of the message.
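The sealed-envelope metaphor can be made concrete with a toy signature over a DNS answer. The sketch below uses textbook RSA with tiny numbers and only the Python standard library; real DNSSEC uses DNSKEY/RRSIG records and far larger keys, so this is illustration only:

```python
import hashlib

# Toy RSA key -- hopelessly insecure, chosen so the arithmetic is visible.
n = 61 * 53   # modulus (3233)
e = 17        # public exponent: the "published" half, cf. a DNSKEY record
d = 2753      # private exponent, held only by the zone signer

def digest(record: bytes) -> int:
    """Hash the record down to a number smaller than the modulus."""
    return int.from_bytes(hashlib.sha256(record).digest(), "big") % n

def seal(record: bytes) -> int:
    """The zone signer attaches this seal (cf. RRSIG) to the answer."""
    return pow(digest(record), d, n)

def check_seal(record: bytes, seal_value: int) -> bool:
    """Any resolver can verify the seal; the record itself stays readable."""
    return pow(seal_value, e, n) == digest(record)

answer = b"www.example.com. IN A 192.0.2.1"
sig = seal(answer)
print(check_seal(answer, sig))            # True: record and seal match
print(check_seal(answer, (sig + 1) % n))  # False: a tampered seal is rejected
```

The message travels in the clear (anyone can read the A record); the seal only guarantees who sent it and that it was not altered in transit, which is exactly origin authentication plus data integrity.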
DNSSEC is a real game-changer when it comes to using DNS for identity-related use cases. WebID is a good example of this, where self-signed certificates need to be verified by browsers. A certificate is most commonly used for TLS (SSL), where information sent to and from a web server is encrypted for confidentiality. Browsers use their stored keys from the CAs (certificate authorities) to verify the certificate. CAs are third parties, providing these certificates with various degrees of both encryption and validation. The lowest level of validation basically verifies the domain name the certificate is issued to, whereas the much more elaborate (and costly) ‘Extended Validation’ certificates (which turn your browser URL bar green or blue) also identify the party that registered the domain name (and even identify the person applying for the certificate as a valid representative of this party).
There are quite a few situations where I want TLS to provide confidential server communications, but don’t really need a third party validating that the certificate belongs to this domain name. This happens, for example, when I’m using my own servers at home (under my own domain name, with a certificate I put on the server myself), or when I’m connecting to my employer’s mail server. These certificates are usually ‘self-signed’, and any browser will go through the well-known ‘security risk’ warning procedure before allowing you onto a site that uses such a certificate.
DNSSEC offers a solution to bootstrap these certificates into DNS, allowing for a scalable, self-manageable (and cheap) way to use TLS with self-signed certificates. Any validating resolver can verify all information in any DNSSEC-signed domain zone file, using the public key for the root zone as a trust anchor. If you include a certificate or a public key in the zone file for a domain, that information can also be validated. This would tie the (self-signed) certificate to the domain zone, obviating the need for a third party (CA) to validate this information. Of course, there’s no identification of the person or party that has registered the domain name; you’d still need EV for that. People in the IETF are working to standardize the inclusion of keys and certificates in the zone file for different purposes. (Is it RFC 4398?)
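Assuming the resolver has already DNSSEC-validated a record carrying the certificate’s fingerprint (the record format here is invented for illustration; the actual standardized record types were still an open question in the session), the client-side comparison is trivial:

```python
import hashlib

def cert_matches_zone(server_cert_der: bytes, zone_fingerprint_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of the certificate presented by the
    server in the TLS handshake against the fingerprint published in the
    (signed and validated) zone.  A match ties the self-signed certificate
    to the domain with no CA involved."""
    return hashlib.sha256(server_cert_der).hexdigest() == zone_fingerprint_hex

# Stand-in bytes playing the role of a DER-encoded certificate.
presented = b"fake-der-bytes-for-illustration"
published = hashlib.sha256(presented).hexdigest()
print(cert_matches_zone(presented, published))           # True: same cert
print(cert_matches_zone(b"some-other-cert", published))  # False: mismatch
```

All of the trust in this comparison comes from DNSSEC: without a validated chain from the root zone down, the published fingerprint would be no more trustworthy than the certificate itself.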
Question: Using DNS requires tooling for updating information in the domain zone file. Answer: True. A lot of registrars already offer tooling to manage common redirects for mail and websites. We hope they adapt this tooling to support new usage, like managing keys and certificates. Looking at the current tooling, though, it’s usually not very flexible or user-friendly; we could certainly do with more sophisticated ways of managing zone file information.
Comment: Work needs to be done to create low-level APIs. People need to work towards this from different layers: upwards from the DNS layer and downwards from the application layer. Open standards on the client side are needed!
Comments/clarifications: DNS is a public protocol. It’s not meant to act as a store for public data. It can be used to point towards data stores though (similar to the way it now points to websites, mail servers or SIP servers). DNS is not a P2P model. Delegation gives it a lot of strength, but also limits usage. E-mail is weird, going through a lot of hops instead of directly P2P. Maybe the model for newsgroups is better? But the delegation in DNS allows for discovery services. There’s no working, scalable alternative to DNS.
We’re talking about the information in the zone file, not the Whois information. The zone file tells where the services, hosts and sub-delegations for a domain name can be found. The Whois shows contact information about the parties involved in the domain registration (such as the contact persons and the registrar).