Schema Mapping Using Personal Data Model
See this blog post for details: Incontextblog: Schema Mapping session at IIW
Convener: Paul Trevithick
Notes-taker: Joe Andrieu
Tags: Attributes, claims, schema mapping, semantics, persona
We used to think that we could figure out a common schema, but realized that is too hard. Human nature is such that we want the power to mint the names and titles of the terms we use in /our/ systems.
So Paul has been working on an open source schema for information about human beings: first name, etc. It's easy to do the dumb thing and keep a schema simple; it's hard to capture the richness of reality without taking on significant complexity. This schema mixes and matches from many sources and is intended to capture EVERYTHING, even if no one uses it directly. But you can build schema mappings in and out of this schema for whatever the input and output need to be.
When working on a schema, it is typically done with a specific purpose in mind, which leads to many different schema. So, let's embrace that and have a vehicle for mapping in and out of each of these.
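The "map in and map out" idea can be sketched in a few lines. This is a minimal illustration, not the actual personal data model: the schema names ("vcard", "ldap") and the central field names (full_name, email) are assumptions chosen for the example.

```python
# Sketch: mapping external schemas through a central pivot schema.
# Each external schema only needs a mapping to/from the central model,
# not a direct mapping to every other schema.

# Hypothetical inbound mappings: external field -> central field.
INBOUND = {
    "vcard": {"FN": "full_name", "EMAIL": "email"},
    "ldap":  {"cn": "full_name", "mail": "email"},
}

def to_central(schema: str, record: dict) -> dict:
    """Map a record from an external schema into the central model's terms."""
    mapping = INBOUND[schema]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def from_central(schema: str, record: dict) -> dict:
    """Map a central-model record out to an external schema's terms."""
    reverse = {v: k for k, v in INBOUND[schema].items()}
    return {reverse[k]: v for k, v in record.items() if k in reverse}

# vCard -> central -> LDAP, with no direct vCard-to-LDAP mapping defined.
central = to_central("vcard", {"FN": "Alice Example", "EMAIL": "a@example.org"})
ldap_record = from_central("ldap", central)
```

With N schemas, this needs N mappings instead of N×(N-1) pairwise ones, which is the payoff of embracing many purpose-built schemas plus one mapping vehicle.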
Question: Doesn't that bring up issues about language discovery? An RP wants a claim "X" and asks the IdP. "X" must be globally unique. If the IdP doesn't have "X", it can try to find a transformation path to produce "X" from the data it does have available.
Note that a given transformation could take multiple steps drawn from multiple different transformation rules. And if we have a big, rich central transformation ruleset (Y), then for most transformations all you need is to map in and map out. Also, the more granular the base Y, the easier it is to map in and out: more granular data admits more possible transformations.
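Finding a multi-step transformation path is a graph search over the rules. A minimal sketch, assuming a hypothetical ruleset where each rule derives one claim type from another (the claim names here are invented for illustration):

```python
from collections import deque

# Hypothetical transformation rules: source claim -> derivable claims.
RULES = {
    "birth_date": ["age"],      # birth_date can be transformed into age
    "age": ["over_18"],         # age can be transformed into over_18
    "postal_code": ["region"],  # postal_code can be transformed into region
}

def find_path(have: set, want: str):
    """Breadth-first search for a chain of transformations that
    produces `want` from the claims the IdP already holds."""
    queue = deque((claim, [claim]) for claim in have)
    seen = set(have)
    while queue:
        claim, path = queue.popleft()
        if claim == want:
            return path
        for derived in RULES.get(claim, []):
            if derived not in seen:
                seen.add(derived)
                queue.append((derived, path + [derived]))
    return None  # no transformation path exists

print(find_path({"birth_date"}, "over_18"))  # ['birth_date', 'age', 'over_18']
```

The richer and more granular the ruleset, the more such paths exist, which matches the point above about granularity.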
It's faster for me to figure out how to do it on my own rather than to go learn some other ontology. This fuels the cacophony.
(What are the rules? Inference rules?)
Not clear what that means. Is it the role a person is in? Perhaps that's just a claim?
Uses some RDF and leverages interesting SPARQL stuff, but in the end, it doesn't need complicated SemWeb tech.