Martin Nally: IBM, CTO of Rational brand. My group came to LD looking for solution to app programming model. Wanted a suite of apps that would work better together than in the past. Discovered in LD a different way of looking at the problem that is promising.

Arnaud Le Hors: IBM, Software Stds group. Current role is standards lead for LD.

Ted Slater: Merck, interested in integrating lots of data from lots of places.

Cornelia Davis: EMC, corporate CTO office. Data int is a constant challenge. Systems are siloed. Challenge is going to the product groups saying we want them to think about LD, and they don't understand what it means. Need to think about how to achieve a LD vision w/o overwhelming the developers.

John Arwe: IBM, sys mgmt and operations. Installations at telcos, etc. Looking at operationalizing or linking the data across apps. Our large customers have this odd notion that they should be able to enter the data once and not care where it is.

Martynas Jusevicius: from Denmark, working w an instrument company building websites. Also side projects using RESTful APIs. Now blended those things into one and it went so well we made a new project for it. Similar to Callimachus.

Bradley Allen: Elsevier. Perspective of large ent w info silos. Also as a publisher want to understand how to scale for customers across content sources across the web.

Martin: IBM Rational. Life looked good, but trying to evolve toward the web. Saw pressure for more global dev, and customers were tired of separated tools. Their processes go from one end to the other, and they want their tools working together that way. This is our story, but there's nothing very special about this one.
... Security is also important -- what's shared and what's not. All the normal enterprise concerns.
... People have been trying to integrate these tools for a long time, and the predominant most basic one is glue code within the tool to do point-to-point connections.
... It's worked for a long time, but it's endured because when you open an API others can script to it. But it is tightly coupled.
... What are the integration functions that people need?
... Create a link between artifacts in different tools, create an artifact in another tool. Share common concepts across tools, e.g., people, team, project, release.
... Or be able to query across information in multiple tools.
... People have been working on this problem for a long time. Typical attempt is to try to use a single common repository. Some of our competitors are still trying to do this, which makes me happy because I know how it will turn out -- the same as it's turned out all the other times we tried it.
... Or another is the ESB approach. It's a bit more structure than the n^2 approach, but it still has the same basic disadvantages. And the ESB becomes the bottleneck. Hasn't worked out well for us.
... We've been stuck with these approaches for 20 years.

scribe: Linked data allows groups to work independently, loosely coupled, tech neutral, minimalist, etc.
... So in 2005 or 2006 we started adopting this style. But on the cons: it was unproven when we started; big paradigm shift; lots of invention required.
... People do not know how to do this. Most of the answers are out there, but knowing where they are and how to find them and how to put them together -- that's what's missing.
... Big orgs, trying to get these people to move in this direction is a huge issue.
... We did this to implement our own stuff, but most customers do not only use Rational. So we started to share this w our partners. OSLC and open community. http://open-services.org/
... Some of what's on that site is pretty good, and a few pieces make me cringe.
... Due to lack of guidance or ignorance.
... There are some mini-ontologies. I hope that part endures, it has some value. There's another part called core.open-services.org and I hope that part goes away. That's an attempt to capture best practices. Want to find the right home for it.
... The day you stop using XML is like ending a bad relationship. You realize how much it was screwing up your whole life.
... I need help leading an org down the right path.
... problem 1: creating data on the web. This is a read-write paradigm. We're creating linked data on the web, so we need creation and update protocol as well.
... And how do I find the things that already exist?
... And the core.open-services.org part is where we got it wrong.
... If you start w RDF you need to start w a basic resource that you can POST to, and then be able to do a GET to find out what's been posted before.
... And this is how we create collections -- things w the same subject and predicate.

(Showed RDF example)

Martin: POSTing to this creates a resource w the same subject.
... If you start from RDF, and you can POST to and GET from them, then that's all you need!
... Very simple. But you need something like this, otherwise everyone invents their own thing.
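A minimal sketch of the container idea Martin describes: POSTing a resource to a container URL creates the resource and, as a side effect, adds a membership triple to the container, and a GET returns everything posted so far. All names here (`BasicContainer`, `rdfs:member`, the example URIs) are illustrative, not taken from any spec.

```python
# Sketch of the "basic container" idea from the talk: POST creates a
# member resource and adds a membership triple; GET lists the result.
import itertools

class BasicContainer:
    def __init__(self, uri):
        self.uri = uri
        self.triples = set()            # (subject, predicate, object)
        self._ids = itertools.count(1)  # for minting member URIs

    def post(self, resource_triples):
        """Create a new member resource; return its URI."""
        member = f"{self.uri}/{next(self._ids)}"
        # Store the posted triples under the new resource's URI.
        for p, o in resource_triples:
            self.triples.add((member, p, o))
        # Side effect: a membership triple is added to the container.
        self.triples.add((self.uri, "rdfs:member", member))
        return member

    def get(self):
        """GET on the container returns everything posted so far."""
        return set(self.triples)

c = BasicContainer("http://example.org/testCases")
tc1 = c.post([("dcterms:title", "testCase1")])
assert (c.uri, "rdfs:member", tc1) in c.get()
```

This is the whole "spec on a napkin": a URL you can POST to, which mints member URLs and records membership as ordinary triples.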

<timbl> RWLD

Martin: I first POST testCase1

<SteveBattle> Callimachus containers give you a very similar construct.

Lee Feigenbaum: Have you looked at the SPARQL 1.1 Graph Store Protocol? It does this sort of thing.

Martin: No, we don't do SPARQL

<martynas> what about using sioc:Container / sioc:has_container? we're using it for a similar purpose (I think)

Lee: It has SPARQL in the title, but isn't SPARQL

TimBL: In the client space domain, you could also PUT that graph. Have you looked at the R/W LD thing?

Martin: Then why does it have SPARQL in the title?

TimBL: If you POST as media type Turtle then it will be appended to the graph.

Martin: I POSTed testCase1, and as a side effect a triple got added to this graph.

<sandro> LeeF, Andy and I (the reviewers) both said the current title is okay, FWIW. (And I said we should take the word SPARQL out of the title.)

<sandro> er, NOT OKAY

<LeeF> sandro, ***sigh***

<sandro> (and this interchange with Martin is why. I've had this conversation a couple times, myself.)

TimBL: How did someone know that this is the way to POST there?

Cornelia: How do you know what you can POST to, and what format you can POST to?

<sandro> ericP, can/do we have a way to postpone/list issues like this?

Martin: I have these special URLs, and if you POST to them then it adds a resource and adds the triples. I could write the whole spec on a napkin.

<ericP> sandro, sure, but i feel like this gets people into a useful mode

Martin: I would like a base spec for this.

TimBL: We have various people coding up the RW web stuff and they just got to this. If something ends with a slash, should it have this property, like a directory on a file space?

<Cornelia> Cornelia: AtomPub addresses what you can POST to and what it is you are POSTing.

Martin: There are lots of domain models. File system is one.

TimBL: In a secure env w access control, when you POST, that implies things about the access.

<LeeF> Elias Torres will be glad to know that 6 years later IBM is still working on matching up RDF with Atom/APP :-)

Martin: We have access control, but not a universal design that could become a std.

<julius> 30mins, but we started earlier

Martin: It took us 5 years to get to the harder parts. Most of it wasn't inventing anything, but finding the right things to do.
... So if you take this approach, then you end up w thousands or millions of triples, so you need to paginate.
... So for every URL we have another URL w "?nextPage" added to it.
... This is ok if you don't care about the order of the triples.
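The "?nextPage" scheme can be sketched as slicing an ordered view of the container's triples, with each page carrying a link to the next. The function name and URL suffix below are illustrative; the caveat in the code comment is the one Martin raises -- paging only behaves if some order is fixed.

```python
# Sketch of "?nextPage" paging over a container's triples. Pages are
# only stable if an order is imposed; here we simply sort the triples.

def page(triples, page_size, page_num=0):
    """Return (triples_for_page, next-page URL suffix or None)."""
    ordered = sorted(triples)            # fix an order so paging is stable
    start = page_num * page_size
    chunk = ordered[start:start + page_size]
    more = start + page_size < len(ordered)
    return chunk, (f"?nextPage={page_num + 1}" if more else None)
```

Under this sketch, successive pages never overlap because each page is a distinct slice of the same sorted sequence.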

EricP: How does "?nextPage" avoid serving the same triples as the first page?

Martynas: Seems natural to map these things to things we have in SPARQL.

Martin: This is pre-SPARQL. Very simple.

TimBL: But using the same terms will make it easier for people to grok.

<SteveBattle> The Linked Data API allows you to set number of resources per page as a configuration parameter. Clearly, pagination is a key issue with (HTML views of) linked-data.

Martin: we often have metadata about the resources, and we only want to GET the metadata.
... We have also proposed "Basic Profile for LD"
... All of our resources use RDF.

<sandro> Martin: The big problem with RDF/XML is that it gets in the way of trying to help people understand how RDF is different from XML.

Martin: RDF/XML is bad not only because it is so ugly, but because it makes it harder to get people away from XML.

(Martin outlines 12 rules for Basic Profile for LD)

Martin: Open World Assumption (OWA) is one of the hardest things for people to get used to.

<scribe> ... Closed world is what leads to monstrosities like UML.

UNKNOWN_SPEAKER: Programmers want to test everything. But then nobody can create anything anymore.
... People also get very elaborate in making their links. Instead, have people represent links as predicates.
... And if you need to add qualifiers or annotations they should be more triples.
... And sometimes they even reify RDF or make their own reified-like node.

Ora: I am very happy to see rule #9 and want it as a bumper sticker, but it's a hard sell.

Martin: Don't infer that Mary is the same person as Jane, tell me there's an error.
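Martin's point here can be sketched as a validation pass: when two resources share a value of a property that is supposed to be unique, report a conflict rather than silently inferring they are the same individual. The function and property names below are illustrative.

```python
# Sketch of "tell me there's an error" instead of inferring sameness:
# find subjects that share a value of a supposedly unique property.

def check_unique(triples, unique_predicate):
    """Return (subject1, subject2, value) conflicts, not sameAs links."""
    seen = {}      # value -> first subject seen with it
    errors = []
    for s, p, o in triples:
        if p == unique_predicate:
            if o in seen and seen[o] != s:
                errors.append((seen[o], s, o))   # report, don't merge
            else:
                seen[o] = s
    return errors

data = [("ex:Mary", "ex:ssn", "123"), ("ex:Jane", "ex:ssn", "123")]
# check_unique(data, "ex:ssn") reports the Mary/Jane conflict.
```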

<timbl> There are two models, one in which you trust your apps and you must give them access to write arbitrary stuff into the storage, and the other in which you have an unauthenticated feed to public notifications and you want them to be limited to exclude spam and filtered to allow a simple announcement triple.

<bheitman> just FYI, the current speaker is now starting to cut into the timing of the original schedule

Martin: PUT doesn't work well for updating data.
... SQL has no equivalent of PUT.
... Proposed solution: PATCH
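The PUT/PATCH contrast can be shown on a triple set: PUT replaces the entire resource state with the new representation, while a PATCH carries only a diff (triples to delete, triples to insert), much like a SQL or SPARQL UPDATE. This is a conceptual sketch, not any particular protocol's wire format.

```python
# Sketch of why PATCH suits triple data better than PUT: PUT replaces
# the whole state, PATCH applies a small delete/insert diff.

def put(graph, new_graph):
    """PUT semantics: the new representation replaces everything."""
    return set(new_graph)

def patch(graph, deletes, inserts):
    """PATCH semantics: apply a diff to the existing graph."""
    return (set(graph) - set(deletes)) | set(inserts)
```

With PUT, a client updating one triple must round-trip (and risk clobbering) the whole resource; with PATCH it sends only the two lists.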

TimBL: Have you looked at the design issues note on RW data? In some cases you're using a WebDAV server and you don't have an option. But if you have a smart server, a SPARQL Update resource is POSTed

martynas: Giving overview of comic Danish website and content on the site as http://heltnormalt.dk
... look to rebrand the website in 2011 for all the various supported content types
... overview of old code base, model contained many details of data represented on website
... model became very bloated over time
... reference of EJB model as leaky abstractions in comparison
... issues with many incompatible APIs, similar to what was referenced in previous session
... many different data (model) conversions, everything needed to connect to everything including model mismatch
... highlight the fact that if data sources were based on standard model (linked data), there would be 0 data conversions
... linked data picture gives a nice picture but doesn't explain how data is managed inside the bubbles
... REST+RDF: built off layers of abstractions and existing work in JAX-RS, as well as concepts from Jena...using PHP so can't use Jena
... defined an abstract RDFResource class
... in the end came up with a simple model for the site and platform, using data store to host the model (or any other supporting data)

<John> sandro, it appears they are always building the DOM themselves; RDF is their lingua franca, so presumably they'd import into an RDF store, represent that as an RDF/XML DOM and then XSLT the DOM they built

<Sumalaika (Susan Malaika)> Is there a formal W3C group working on JSON-LD?

martynas: look to reuse as much as possible, even if not directly PHP (like JAX-RS)

Sumalaika, there is a community group on it

<Arnaud> the RDF WG is officially in charge

<Arnaud> but the work is currently taking place in a community group

<Arnaud> at some point the RDF WG will look at the possibility of taking their spec and turning it into a Working Draft

<Sumalaika> Thank you Arnaud and SteveSpeicher

martynas: codebase comparison, summary new platform working with 15 content types and old one handled just 1

<ericP> i note that XSLT is way more verbose than PHP

martynas: code trimmed down an order of magnitude, not many bugs as a result
... server load after deploying new model (draws an ahhh sound from room) showing much lower load
... provided some caching details (which I did not capture)
... sources of RDF come from a number of places
... when blending generic linked data, UI and various RDF sources can start creating interesting mashups

bheitman: started with Scientific American article
... survey of over 100 applications over 10 years
... source from Semantic Web Challenges 2003-2009, ESWC 2006-2009, 12 questions asked, own analysis of paper
... analyzed paper by self/bheitman (not own application author)
... 65% validated entries (responses to email validation), problem perhaps based on short life of academic email addresses
... highlights class of applications, standards and vocabs used (summary in slides)
... conceptual arch has community consensus with 3 main components: RDF handling, data integration, UI
... large number of apps only show data (read-only) vs creating/updating

ericP: what classes of apps are these? for example for enterprise use

bheitman: use a class of components in their applications used in enterprise apps (if I captured that right)

bheitman: summarized typical cases of having to manually export some files out of apps to then have to import the file into others
... LD gaps, writing RDF data is hard and 71% of apps don't support update/write
... distribution of app logic: many comps and standards, distributed hard to coordinate
... typically 3 data models in apps (graph, relational, oo): results in many roundtripping issues resulting in loss of data

Linked Data Standards and Infrastructure for Scientific Publishing

bheitman: need more guidelines, best practices and design patterns... researchers don't receive this well because sw engineering processes don't apply
... need more sw libs beyond RDF storage, good libs can help reinforce guidelines/patterns
... another sw eng solution is sw factories: need the components and patterns to enable this
... full article at http://tinyurl.com/semweblessons

LeeF: Out of apps surveyed, did you look at how many dereferenced URIs vs SPARQL?

bheitman: no

dbooth: Any observations that are not part of this survey which seemed focused on isolated cases, like enterprise?

bheitman: no clear class of these apps, can see some patterns that are typical patterns in enterprise apps in the apps here

We are told about the POWDER spec which allows the definition of URLs of a particular shape

John tells about the "URL Oracle" concept that knows about all the real current URLs

John tells about servers that want to be moved (and don't mess up their own URLs) - a discussion ensues as to whether relative URLs solve the problem ... the concept of URL groups ... the concept of internal infrastructure URLs that are not externally visible was also discussed

There was a discussion about Oracle and Sun documentation - and maintaining the Sun documentation under the Sun URL

<sandro> sandro: sometimes I think it's best to use URLs like ns4343.com so they can be managed separately.

A discussion about PURLs : It is pointed out that LD products ship - it is the customers that have to manage the PURLs

<sandro> tim: Oracle should keep the sun.com URLs intact, where they are used

A long discussion about the Linked Data discipline that Linked Data products impose on their users

Now John covers the scenario where someone re-uses an old URL or server name

John talks about standards and their role with URLs

A discussion ensues about the need for JSON RDF for UI components

SPARQL query results is a table - JSON

A discussion about scraping JSON feeds - language simpler than GRDDL
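The "SPARQL results as a JSON table" point refers to the SPARQL 1.1 Query Results JSON Format, where "head.vars" names the columns and each entry in "results.bindings" is one row. A small flattener of that shape for UI components might look like this (the function name is illustrative):

```python
# Flatten SPARQL 1.1 JSON results into plain row tuples for a UI.
import json

def rows(results_json):
    """Yield one tuple per binding, columns ordered by head.vars."""
    data = json.loads(results_json)
    vars_ = data["head"]["vars"]
    for b in data["results"]["bindings"]:
        # A variable may be unbound in a given row; emit None for it.
        yield tuple(b[v]["value"] if v in b else None for v in vars_)

example = json.dumps({
    "head": {"vars": ["name"]},
    "results": {"bindings": [{"name": {"type": "literal", "value": "Mary"}}]},
})
assert list(rows(example)) == [("Mary",)]
```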

<martynas> I've actually started making a generic XSLT transformation for RDF/XML > JSON-LD

<ted_> David Wood presents 3 Round Stones : diverted URI patterns.

<ted_> ...motivated by mirroring of LD.

<ted_> scribenick: ted_

scribe: routing of URIs to handlers, four different cases explained
... "diverted;" used as a token in these patterns
... multiple implications discussed, positive and negative
... Callimachus web patterns at callimachusproject.org
... another pattern in use is embedding SPARQL queries in Turtle, so that the queries can be named.
... useful for driving Google Chart widgets.
... Spin suggested, as was the Linked Data API from the UK.

<martynas> SPIN vocab can be used to construct SPARQL queries from RDF fragments

scribe: demo of a mash-up of US nuclear power plant data
... demo.3roundstones.net
... Cambridge Semantics and Revelytix have come up with similar solutions.
... this pattern is about assigning a URI to the interface that is different from the one for the data
... Tim wants to empower the user to quickly choose from within the browser the particular view of the data they want
... "ensure the data doesn't die in the browser"

<Sumalaika> is uri opaqueness essential to Linked Data?

<martynas> try to look at this pattern as a Linked Data browser/proxy running as Web application in a normal Web browser: what you "type" in after /diverted is what you would type into the address bar; there is also caching involved, possibly history etc

<dbooth> To clarify: there is a fire alarm in the building, but not in this *part* of the building. This part of the building was not directed to evacuate at present.

many different query URLs can yield the same resource
... Ryan is presenting this bit
... Internal identifiers and "RESTish" queries don't help, nor do URL queries with URI or URN identifiers
... Problem 2: missing or ambiguous identity (Ora again)
... Problem 3: versioning of data and identity, largely ignored by the W3C
... should version info be part of the identity of an object?
... problem 4: lack of stable identity
... "cool URIs do not change" except they do
... conclusions: confusion in matters of identity hinders interoperability. No particular solutions here
... lots of discussion RE versioning
... Ralph Hodgson has apparently created a versioning ontology (according to ericP)
... "identity crisis" should be a breakout topic

<sandro> dbooth: Put a unique string in a document, so you can find it.

<sandro> timbl: Important to be able to 'follow your nose', start with JUST a URI and find everything you need. If you do it that way, then you're always getting authoritative stuff, and immune to spam.

scribe: trendiness of the term "REST" and educating customers
... describing the principles of REST (slide 4)
... highlighting hypermedia as one of the biggest hurdles in REST
... talking about the state of existing frameworks (slide 5)
... no framework assists developers with hypermedia
... talking about ATOM (slide 6)
... (slide 7) ATOM already has support for links
... a links rel value indicates the semantics of a particular link
... (slide 8) detailing an example of ATOM

relating ATOM elements to RDF

(slide 10) suggesting we leverage patterns that relate to the semantic web but avoid using RDF
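The slide-7/8 idea of reading Atom links as RDF-like statements can be sketched directly: treat the entry's id as the subject, the link's rel value as the predicate, and its href as the object. The rel URI and entry id below are made-up examples; only the Atom namespace and element names are real.

```python
# Sketch: read each atom:link in an entry as an (entry-id, rel, href)
# triple, which is roughly the mapping the talk describes.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def links_as_triples(entry_xml):
    entry = ET.fromstring(entry_xml)
    subject = entry.findtext(f"{ATOM}id")
    for link in entry.findall(f"{ATOM}link"):
        yield (subject, link.get("rel"), link.get("href"))

entry = """<entry xmlns="http://www.w3.org/2005/Atom">
  <id>urn:example:testCase1</id>
  <link rel="http://example.org/validates" href="http://example.org/req1"/>
</entry>"""
# yields ('urn:example:testCase1', 'http://example.org/validates',
#         'http://example.org/req1')
```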

open for questions

<sandro> tim: People ask why RDF/XML, well, this is there to make this stuff not so strange for folks coming from some angle.

<sandro> tim: the damage is, you make something look like XML, and then it doesn't actually make sense as XML.

<sandro> tim: people use Atom for RDF, then you can't actually load it into a triplestore. :-( I tried this with oData.

<sandro> tim: There were a lot of people for whom JSON was easy because they knew Javascript;.... stuff looks weird from different perspectives.

<sandro> martin: I'm very sympathetic. We went through all of this. We couldn't sell RDF, so we spent 2 years in hybrid land, trying to please both sides. If you write your XML this way, both sides can kind of handle it.

<sandro> ... But it did not work out.

ditto, hybrid solutions don't work

<sandro> ... People would continue to invent things that ended up making no sense. RDF already did those things,.

<sandro> martin: So, 2 years of transition, 1 year of cleanup, but I'm not sure there was a shortcut. Maybe that's the just the price to pay for switching a lot of people over.

<sandro> martin: Atom is a decent spec if XML is your starting point.

Xml gurus try to parse RDF/XML as XML

<dbooth> FWIW, I have seen XML gurus try to parse and understand RDF/XML as XML and it was a disaster.

<sandro> dbooth: I've seen XML gurus trying to consume RDF/XML and it was a disaster, because of the wrong mindset

<sandro> brad: A lot of what we've tried to do has been the same -- go to things people are comfortable with, and make the bridge. That being said, I'm not sure I agree.

<sandro> brad: What's important is RDF as a model, not RDF as a serialization. If there's a way to leverage infrastructure, ...

<sandro> brad: How do we leverage ATOM, etc, to do things RDF doesn't do, like pagination? We've struggled with that.

<sandro> cornelia: Where did the transformation from oData to RDF fail?

<sandro> timbl: There were pieces of Halo that were cleanly properties of a table, and I knew how to map those. Then there were some links between web pages which stand for pieces of the table, in a way which I could not figure out how to map those, with a consistent semantics, repeatable, in the payload data. There was not a clean boundary between the relational payload and the links in the outer piece.

<sandro> timbl: maybe I just hadn't got it, but after spending a day or so, I figured there wasn't a clean model. But maybe someone else can do it.

<sandro> timbl: Clearly they were exposing an RDBMS, and I know how to do that in RDF, so as long as their mapping was reversible, it's doable, but I couldn't figure out their mapping.

if we can't succeed on the political problem, we can't succeed on the technical problems

<sandro> martin: I'm most concerned about Cornelia's problem. How can i convince the 400K people at IBM?

RDF has been around for over a decade, no explosive growth yet

david: fought the fight as a consultant from the outside in

the people who have been most successful with it don't attempt the mapping to RDF at all

<sandro> davidw: I've fought this fight as a consultant. As an author, I get around to fight it, too. The people who have been most successful dont try to ease the path or do the mapping, as you just showed. The problem is that there are lots of ways to solve any given problem. You can take any given example application and the DB guy will do it one way, and the Web Services guy will do it, and the RDF guy will do it.

everyone wants to solve the same problem with a different hammer

<sandro> davidw: Where RDF really shines is in crossing silos, connecting things where traditional approaches have left off.

<sandro> davidw: Some orgs that have succeeded well (DoD, O'Reilly), they built a new team and hire ontologists if they need them, they get consultants in, they build a skunk works to do that bit between the silos. They leave the DBAs in place, because the DBA stuff still needs to get done.

don't take silo people and make them into data integration guys

<sandro> davidw: And they have consultants/new team to build out that bridging infrastructure. You're not going to convert your silo folks -- really good at silos -- into data integration folks.

<sandro> Allen: That's what we're doing, with a startup group, showing we can solve this interop problem.

<sandro> Allen: When people see this, they perk up, and want to know more.

Silo developers are the ones implementing the atom interface on the top

<sandro> martin: Stick with the very simple stuff. RDF triples and REST resources.

<sandro> martin: Even though this is very simple, the consequences are not. Linked Data 101 -- complete enough to write real software, but keep complex stuff out.