To: Ontology Summit 2014 discussion <ontology-summit@xxxxxxxxxxxxxxxx>
From: Amanda Vizedom <amanda.vizedom@xxxxxxxxx>
Date: Sun, 2 Feb 2014 11:33:42 -0500
Message-id: <CAEmngXtrkxU0ynhxrrPe=w4NB3j1--_djHiNmm0j0HgUm0RENQ@xxxxxxxxxxxxxx>
On Sun, Feb 2, 2014 at 9:41 AM, Eric Prud'hommeaux <eric@xxxxxx> wrote:

> Ideally, someone will have the resources to compare development and

I agree, and beyond: past a Gartner business case, I'd love to see some real empirical study of all of this and more. I wish that we could begin to build data (even anecdata, to begin with, if well-sourced and carefully documented) about so many of the ontology features we talk/argue about. Examples include nearly everything that was raised in last year's summit as a characteristic that varies among ontologies and might be subject to evaluation. By "anecdata", I mean specific cases, the features of the ontologies in use and, as you say, the process and outcome characteristics, as best they can be documented. Patterns and correlations might then be suggested for more rigorous exploration (dare I say, experiments resulting in actual data).

> I agree, it's a tooling problem, but I think it's a tooling problem

Agreed.

> In the short term, we

Well, in the short term, I think that the choice is unacceptable to many in business and industry. Thus, those with the funding and understanding, but neither the time to wait for standards & tools to evolve nor the motivation to share their competitively advantageous innovations with the community, sometimes just roll their own tools to fit their needs. Especially if their projects involve many different types of developers touching the ontology, or many user communities (each with at least their own jargon and/or focus, as in business units or functional areas within the same enterprise). Also especially if their processes and/or systems operate cross-lingually.
And of course, what I've just said is just more anecdote, since I'm not in a position to speak about, or on behalf of, anything I've seen of this nature that isn't already public. :-( Nevertheless I feel obligated to say *something*, as I see the open software and standards communities (and research) so often stuck on a problem and without the benefit of the insights and innovations that have made real, production applications successful in the private sector.
> Protégé's ability to customize the interface to use e.g. rdfs:label

Agreed. This isn't enough. For one thing, few existing, public ontologies have real, substantive lexicalization, so the labels that exist aren't sufficient to enable even manual setting of the right labels for some user. For another, there are few connectors built into IDEs enabling good concept OR suitable-label-set autocomplete. And here's another great feature I've only seen in an in-house tool suite: while composing OWL, SPARQL, XML, or other content in a developer's text editor, or while looking through query, test, auto-complete, or indexing results, one can right-click or hover over a concept ID and see, optionally, parts of the documentation, other concepts with similar labels, or other labels, for example. This makes more human-readable info available right there in the developer's tools.
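To make that concrete, here is a rough sketch of the kind of lookup such a hover or right-click feature could fire off behind the scenes. The property choices (rdfs:label, skos:altLabel, rdfs:comment) and the example IRI are purely illustrative; any given ontology might lexicalize quite differently:

    # Hypothetical "hover" lookup for the concept ID under the cursor.
    # Everything is OPTIONAL so the query degrades gracefully when the
    # ontology carries little or no lexical material.
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

    SELECT ?label ?altLabel ?doc
    WHERE {
      VALUES ?concept { <http://example.org/onto#C_0042> }   # opaque ID under the cursor
      OPTIONAL { ?concept rdfs:label    ?label }
      OPTIONAL { ?concept skos:altLabel ?altLabel }
      OPTIONAL { ?concept rdfs:comment  ?doc }
    }

Nothing exotic; the point is only that this information is already queryable with completely standard machinery, so the missing piece is the editor integration, not the language.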
Is this kind of functionality so unreasonable to expect in standard tooling, when it (I assert) helps so much with both efficiency and accuracy? Is it so far from the kinds of support programmers reasonably expect? It seems not to me, given that the standards languages in the OWL orbit give us the hooks on which to hang such tool features already, if only we are able to take seriously the need to keep our ontological relations ontological and our lexical relations lexical, and spend the effort on each.
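By "hooks" I just mean the ordinary labeling and annotation properties we already have. A toy illustration, with made-up IRIs, of keeping the two kinds of relation apart while still hanging both on standard properties:

    # Toy example (made-up IRIs): ontological assertions and lexical
    # annotations kept distinct, each on its own standard hook.
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX ex:   <http://example.org/onto#>

    INSERT DATA {
      ex:C_0042 rdfs:subClassOf ex:C_0007 .                  # ontological relation
      ex:C_0042 rdfs:label "purchase order"@en ,
                           "bon de commande"@fr ;            # lexical: what humans (in two languages) see
                skos:altLabel "PO"@en ;
                rdfs:comment "An order issued by a buyer to a seller."@en .
    }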
And then, I'd even say, whether the unique IDs for concepts are human-interpretable or not matters less. I still think that human-interpretable = more human-misinterpretable, but when the tools make good use of the distinctly lexical features of the ontology languages, as well as those optimized for other purposes, the result is support for correct human understanding and finding, and the mistakes are significantly mitigated.
> I'm quite interested in your experience with these tools and whether

But the opaque identifiers wouldn't matter so much if you also had the lexical resources ready-to-hand in your SPARQL-writing/debugging environment. Which could, indeed, be a text editor with the ability to access and use your ontologies.
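Even something as small as the following habit (or, better, an editor that adds it for you) already goes a long way. The ex:hasPart property and the IRIs are, again, just placeholders, not from any real ontology:

    # Pull human-readable labels alongside the opaque identifiers, so the
    # result set stays legible even when the IRIs aren't.
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.org/onto#>

    SELECT ?part ?partLabel
    WHERE {
      ex:C_0042 ex:hasPart ?part .
      OPTIONAL {
        ?part rdfs:label ?partLabel .
        FILTER ( langMatches(lang(?partLabel), "en") )
      }
    }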
> Despite my skepticism above, I'm certainly happy to be proven wrong. I

I think we agree? I also want to re-emphasize your point at the top. We really do need studies and data. And I don't know how to solve that, if no one in the public/open sphere builds the kinds of tools I'm talking about, and those who build them for in-house use don't see it as in their interests to put their tools or their cases out there for study. :-(
Apologies if my frustration occasionally shows through. :-/

Best,
Amanda