Magic Sandwiches

From IIW


Thursday 20E

Convener: Justin Richer

Notes-taker(s): Neil Thomson

Tags for the session - technology discussed/ideas considered:

Discussion notes, key understandings, outstanding questions, observations, and, if appropriate to this discussion: action items, next steps:


Normative vs. non-normative

Normative (examples) - you have to pay attention to them

Non-normative (examples) - an illustration or example, not a specific requirement

The non-normative text can have unexpected consequences (and influence), despite the “musts” and “shalls” in the specification.

In the DID working group, text referred to a separate document in a way that implied it was normative (e.g. “requirements can be found in this other document”).

It should have said the other document was for related details of interest (only).

“Sandwich” comes from DID <magic> DID Document, where magic is the “meat”

The working group was averse to specifying the magic. The consequence was that the community disagreed about where the boundaries were, arguing that “stuff” had to be added to the DID or DID Doc, when it should have been defined as a contract for how to get from DID (input) to DID Doc (output).

The problem of where the abstraction lines are drawn is key to all the standards discussions.

If you don’t specify enough about the implementation in the specification, you get problems.

Yet the implementation provides key context which changes how you achieve security, etc.

Do you put that in the contract requirements or in the implementation notes?

Because implementations are wildly different, what happens is that separate groups form, each with their own implementation guidelines.

Should be able to abstract that as inputs, behavior, and outputs (interface contract use cases).

Blog post (need link from Justin Richer) on Sandwich concept.

Any arguments on implementations were about the boundaries with the magic.

Pointed out that this applies in many areas, including UI/Interfaces.

But isn’t this what architecture (decisions) is all about?

Look at OAuth - many ways to get tokens, use them, refresh them, etc.

That it is opaque to the applications was the real abstraction, not that there can’t be several different implementations (under the hood).

Different levels of abstraction are required (and possible)

Suggested - do you need both the abstractions for the contracts AND the implementation guidelines? (Multiple inputs are needed on how to think about the solution.) More guidance is “considered helpful”.

Usually hard to get agreement on the details (even given specific use cases); each participant perceives differently where the implementation needs to support variability (in the future).

“Too many cooks” problem where people want to be compliant with standards. But it is not mandatory to support/implement all the possible applicable standards.

Why use standards at all? (Interop.) DIDComm doesn’t specify a transport layer - utility to whom? As you add more and more specification (moving away from the abstraction), you can shut out groups whose needs actually comply with the abstraction’s intent.

A standard example of where over-specifying killed it (as it didn’t actually fit any real-world situation):

SOAP as an example (an incredible level of detail) for many things. Figuring out how to do the few simple things that were actually useful was made very difficult.

Complex client, sending complex message over complex protocol to a complex server.

SOAP was hurt by that complexity for simple operations, which resulted in the much simpler REST, which essentially replaced it.

DID Resolvers are unspecified - can’t possibly support all the details of all the DID methods.

Don’t specify how DID resolvers work, only the core interactions/interfaces (in/out). The pushback is that it is simultaneously overspecified and underspecified.

Alan offers condolences for everyone working on standards. Disagreements based on different understandings of the facts and what they were trying to build.

Misunderstanding of specifying inputs/outputs/interfaces vs. specifying network protocols.

When writing specifications, treat it like programming. Normative statements are like lines of code that have to run on the most unreliable execution platform - software engineers.

Overspecifying that a DID needs a private key.

There can be many different levels of interoperability.

DIDComm is not a protocol. It is a set of ideas that does not define boundaries - an abstract model of 10,000-foot behavior.

DID operations are at multiple levels of abstraction, with variability (HTTP, etc., for comm channels).

You have a DID and get back a DID Document - but what if more information needs to be passed in than the DID, and more comes back than just the DID Document? Implementations all work in their own space, and work, but they cannot agree on what they are specifying.

Like passing around an unspecified variable vs. a numeric variable

DID Document (byte stream) = Resolve(DID, Options)

where the byte stream includes metadata

If you are a resolver you must implement the above function. You can do other things, but that is the minimum.

If you can tell me how to send you the string that represents the DID, plus a string-to-string map for the input options, that is enough to generate the DID Doc. The goal is a generic interface - only strings, not even JSON (it could be XML).
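The minimal contract discussed above can be sketched as a single function over plain strings. This is only an illustration of the interface shape, not any spec’s actual API; the names and the stub body are made up for the sketch.

```python
from typing import Dict, Tuple

def resolve(did: str, options: Dict[str, str]) -> Tuple[bytes, Dict[str, str]]:
    """Return (did_document_bytes, resolution_metadata).

    Inputs are plain strings: the DID and a string-to-string options map.
    The output is an opaque byte stream plus metadata ABOUT the document.
    """
    # A real resolver's "magic" (method-specific lookup) would go here;
    # this stub only extracts the method name and echoes the DID back.
    method = did.split(":")[1] if did.startswith("did:") else ""
    metadata = {"contentType": options.get("accept", "application/did+json"),
                "method": method}
    document = b'{"id": "%s"}' % did.encode()
    return document, metadata

doc, meta = resolve("did:example:123", {"accept": "application/did+json"})
```

If you are a resolver, this one function is the minimum; everything behind it is unconstrained.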

Standards tend to act like something is not complete until there is nothing more to add. The goal really should be that a standard is complete when there is nothing left to take away.

Link to the DID Core spec PRs: http://github.com/w3c/did-core/pulls

Especially 253, which is now 262-265

A dereferencer is different from a resolver in that it (potentially) has different inputs, outputs, and behavior, even though the core input and output (DID, DID Document) are the same.
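One way to see the distinction: a dereferencer accepts a DID URL, which can carry more than the bare DID (e.g. a fragment), so its inputs and behavior differ even when it reuses resolution underneath. A rough sketch, with all names and the stub resolver invented for illustration:

```python
from typing import Dict, Tuple

def resolve(did: str, options: Dict[str, str]) -> Tuple[bytes, Dict[str, str]]:
    # Stub standing in for the resolver's method-specific "magic".
    return b"{}", {"did": did}

def dereference(did_url: str, options: Dict[str, str]) -> Tuple[bytes, Dict[str, str]]:
    # A DID URL may carry a fragment (or path/query) on top of the bare DID.
    did, _, fragment = did_url.partition("#")
    document, metadata = resolve(did, options)
    if fragment:
        # A real dereferencer would select the fragment's resource from
        # within the document; here we just surface it in the metadata.
        metadata["fragment"] = fragment
    return document, metadata
```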

There is metadata about the document. The metadata is data ABOUT the document that is separate from data IN the document.

Response Headers are one area of contention.

Example DID vs DID URL

What is a matrix parameter? A different form of query specification - it turns out to have been a bad idea.

The idea behind the contracts discussion is that the contract could map to HTTP or other mechanisms (the map of strings could be HTTP headers). Exactly how you do this is up to your implementation.

People are biased in thinking about their implementation vs. the abstraction.

This is also about architecture and design partitioning, with the example of partitioning interfaces

The goal of abstraction (of interfaces) is to be able to call a function without consideration of the underlying implementation.

What is the part we want variability on, and what are the limits on that variability? Some level of input, output, and behavior is required.

OAuth device flow (for set-top boxes): two implementations. An assumption was made about an optional field - the time between multiple polling requests. One implementation put in a default of time=0, which swamped the system; the other side assumed that not specifying the time would result in something reasonable, not that 0 would be reasonable.

This is effectively a bug in the spec - it wasn’t specific enough (e.g. “if not specified, the value cannot be 0; the default is <non-zero default behavior>”), which is needed in order to have a sensible default on both sides (client/server).
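A defensive client-side fix for the default problem above might look like this. The 5-second default matches the device flow spec’s stated default for a missing “interval”; the 1-second floor is an illustrative choice here, not something the spec mandates:

```python
DEFAULT_INTERVAL_S = 5   # spec default when the "interval" field is absent
MIN_INTERVAL_S = 1       # illustrative floor, chosen here (not from the spec)

def polling_interval(response: dict) -> int:
    """Pick a safe polling interval from a token-endpoint response.

    An absent field gets the documented default, and a zero (or negative)
    value is clamped so a gap in the spec cannot cause a busy-loop.
    """
    interval = response.get("interval", DEFAULT_INTERVAL_S)
    return max(int(interval), MIN_INTERVAL_S)
```

With this, `polling_interval({})` yields 5 and `polling_interval({"interval": 0})` yields 1, so neither side’s assumption can swamp the server.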

We’re good at specifying behavior for mandatory values with specific ranges; optional values are left ambiguous as to what to do if they are not specified (e.g. a default value).

Clarity for humans is vitally important.

Need to explain it for humans, but specs should use formal language to specify (and be unambiguous).

Are there people on the standards committees who have the skills to do formal analysis?

[Standards] People tend to be at one of two extremes - too abstract (with no understanding of implementation) or too implementation-centric (with no understanding of abstraction).

Standards can be created too soon - before there is sufficient practice to create a pragmatic standard.

The Rust core (standard library) is very small, with everything else as external libraries, which are pulled into the core as they are proven. However, this approach allows different competing libraries; one wins and is pulled into the core, which breaks apps built on the other, equivalent libraries.

There is tension in both directions: over- vs. under-standardizing.

The reality is iterative and incremental - get it as right as possible right now and then fix as used.

Get a complete set of use cases - a core set of high-use cases to drive the initial implementation, plus a strong set of edge cases for thinking about the breadth of what the standard will have to solve at some later level.