OntologySummit2013: (Track-D) "Software Environments for Evaluating Ontologies" - Synthesis (3KA0)
Track Co-champions: MichaelDenny, KenBaclawski & PeterYim (3KAG)
Mission Statement: (3KA1)
Through this track, we aim to coordinate the following: (3KCY)
- providing a venue to bring together individuals and communities who can help define and advance the state of the art in software and systems for evaluating ontologies (3KD0)
- collecting and enumerating software environments and tools for evaluating ontologies (with emphasis on those that are open efforts and those that are publicly available) (3KCZ)
- carrying out investigations and development work (software prototyping and implementation) focused on the ontology evaluation theme, leading to interim presentations at the symposium, and possibly continuing after this Ontology Summit ... (this bullet, which was in our original mission statement, is now handled by the Hackathon-Clinics Activities champions - see: OntologySummit2013_Hackathon_Clinics) (3KD1)
see also: OntologySummit2013_Software_Environments_For_Evaluating_Ontologies_CommunityInput (3KA3)
Work-products from this Track: (3RFK)
- Our approach: (3RFL)
- Introduction of our track mission and approach - slides from the OntologySummit2013 Launch Event on 2013.01.17 (3RFM)
- Introduction of our approach to the survey on software support for ontology quality and fitness - slides from the OntologySummit2013 Synthesis-I session on 2013.02.21 (3RFN)
- The two panel discussion sessions in which we invited stewards of some exemplary ontology software tools and environments to share their work, experience, and insights ... (3RFO)
- 2013_02_14 - Thursday: OntologySummit2013 session-05: "Software Environments for Evaluating Ontologies - I" - Co-chairs: PeterYim & MichaelDenny - Panelists: MichaelGruninger, JeanneHolm, GavinMatthews - ConferenceCall_2013_02_14 (3RFP)
- 2013_03_21 - Thursday: OntologySummit2013 session-10: "Software Environments for Evaluating Ontologies - II" - Co-chairs: MichaelDenny & PeterYim - Panelists: AdamPease, TillMossakowski, TaniaTudorache, MichelDumontier, KingsleyIdehen - ConferenceCall_2013_03_21 (3RFQ)
- The Survey on "Software Support for Ontology Quality and Fitness" - see: http://ontolog-02.cim3.net/wiki/Category:OntologySummit2013_Survey (3RFV)
- we also provided support to the Hackathon-Clinics program team ... (3RFR)
- at their introduction - slides (also) from the OntologySummit2013 Synthesis-I session on 2013.02.21 (3RFS)
- and launch - slides from the OntologySummit2013 Hackathon-Clinics Program Launch session on 2013.02.28 (3RFT)
- Lastly, some thoughts and insights gathered through the course of the Track-D discourse - slides from the OntologySummit2013 Synthesis-II session on 2013.04.04 (3RFU)
The following is initial input from Track D for the Summit Communique. Not all of these points will necessarily be addressed and included; they are provided for comment. --MikeDenny /2013.03.28 (3Q3Z)
Track D, as "Software Environments for Evaluating Ontologies", falls under the following item of the current Communique outline: (3Q40)
C. The State of the Art of Ontology Evaluation (4) What tool-support is currently available to support the evaluation of the characteristics (identified in C-2) and the best practices (identified in C-3)? (3Q41)
In this vein, some preliminary Track D concepts that may be developed for inclusion in the Summit Communique are, in no particular order: (3Q42)
- The notion of tool support for quality is broader than this track's title suggests and should include "guidance" as well as "evaluation" of those ontology characteristics that determine an ontology's quality and fitness. Ontology tools and software environments may intentionally constrain, or recommend to the user, proper ontology structure and content. (3Q43)
- Tools may contribute this "evaluation" or "guidance" function at different points along the ontology life cycle, and for a given characteristic, some tools may perform better in one life cycle phase than in another, where a different tool may be better suited. Generally, an appreciation of the full life cycle of an ontology is not yet well established within the ontology community. (3Q44)
- There are central aspects of ontology quality that may not be amenable to software control or assessment. For example, the need for clear, complete, and consistent lexical definitions of ontology terms is not presently addressed by software beyond identifying where lexical definitions are missing entirely (a minimal sketch of such a check appears after this list). Another area of quality difficult for software to determine is the semantic fitness of an ontology to its world domain (reality) or to its application domain. Software guidance may be available for assessing the fitness of candidate ontologies for import and reuse, but not for the novel content of a new ontology. (3Q45)
- The design, implementation, and use requirements of an ontology may affect how quality and fitness with respect to a particular ontology characteristic are determined, as well as how they are interpreted and valued. Perhaps all quality and fitness assessments by software should be traceable to stated ontology requirements. (3Q46)
- Significant new ontology evaluation tools are currently becoming available to users. Establishing a link between such tools and existing IT architecture and design tools (e.g., EA and SA) remains a future possibility for integrating ontology into mainstream application software development within enterprise or more narrowly focused IT environments. This capability could offer a definitive means of connecting ontology quality/fitness characteristics and measures to use case and application software requirements. (3Q47)
- Approximate lexical and structural matching of a new ontology or ontology component against the content of a repository of known ontologies (a simple label-based form of this is sketched in the second example after this list) may offer an effective means of identifying comparable ontology content for: (3Q48)
- Given sufficient results from the Ontology Quality Software Survey, an assessment of the degree to which current tool capabilities align with the ontology quality priorities expressed by Tracks A-C. (3Q49)
- Discoveries about the state of ontology evaluation stemming from the Hackathon and Clinic experiences. (3Q4A)
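The point above about missing lexical definitions can be made concrete with a small check over an ontology's classes. The following is a minimal sketch, assuming Python with the rdflib library and an illustrative input file name; it only reports classes that carry neither an rdfs:comment nor a skos:definition, which is roughly the limit of what software can determine about definitions without human review.

```python
# Minimal sketch, assuming Python with the rdflib library; the input file
# name "example.owl" is illustrative, not tied to any tool discussed here.
from rdflib import BNode, Graph
from rdflib.namespace import OWL, RDF, RDFS, SKOS

g = Graph()
g.parse("example.owl", format="xml")  # hypothetical RDF/XML ontology file

# Flag every named class that carries neither an rdfs:comment nor a
# skos:definition, i.e. a term whose lexical definition appears to be
# missing entirely.
for cls in g.subjects(RDF.type, OWL.Class):
    if isinstance(cls, BNode):
        continue  # skip anonymous class expressions
    if g.value(cls, RDFS.comment) is None and g.value(cls, SKOS.definition) is None:
        print(f"Missing lexical definition: {cls}")
```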
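A very simple form of the approximate lexical matching mentioned above can likewise be sketched. In the sketch below, the file names, the token-overlap (Jaccard) score over rdfs:label values, and the 0.6 threshold are all illustrative assumptions rather than a recommended metric, and structural matching is not attempted.

```python
# Minimal sketch of approximate lexical matching between a new ontology and
# one ontology from a repository; file names and the threshold are assumed.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS


def class_labels(path):
    """Return {class URI: set of lower-cased label tokens} for one ontology."""
    g = Graph()
    g.parse(path, format="xml")
    labels = {}
    for cls in g.subjects(RDF.type, OWL.Class):
        for label in g.objects(cls, RDFS.label):
            labels.setdefault(cls, set()).update(str(label).lower().split())
    return labels


def jaccard(a, b):
    """Token-set Jaccard similarity between two label token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0


new = class_labels("new-ontology.owl")          # hypothetical file names
known = class_labels("repository-ontology.owl")

# Report repository classes whose labels approximately match a new class.
for new_cls, new_tokens in new.items():
    for known_cls, known_tokens in known.items():
        score = jaccard(new_tokens, known_tokens)
        if score >= 0.6:
            print(f"{new_cls}  ~  {known_cls}  (score {score:.2f})")
```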
-- maintained by the Track-D co-champions: MikeDenny, KenBaclawski & PeterYim ... please do not edit (3KA4)