OntologySummit2012: (X-Track-A1) "Ontology Quality and Large-Scale Systems" Community Input    (32CT)

Track Co-Champions: Dr. AmandaVizedom & Mr. MikeBennett    (338V)

Mission Statement:    (32CU)

This cross-track aspect will focus on the evaluation of ontologies within the context of Big Systems applications. Engineers, architects, designers, developers, and project owners who create, develop, use, reuse, or search for ontologies for use in big systems will encounter questions about ontology evaluation and quality. How should those questions be answered? How do we know whether an ontology is fit for use in (or on) a large-scale engineered system or a large-scale systems engineering effort? This cross-track aspect ties together the evaluation-related discussions that arise within the Summit Tracks and individual sessions, providing a context in which to take up and address the issues generally. Specific focus will evolve with recurring themes, potentially including such topics as ontology quality characteristics, fitness for purpose, requirements, metrics, evaluation methodologies and resources.    (32DV)

see also: OntologySummit2012_Quality_Synthesis    (32EG)

General Discussion    (333F)


2012.01.25, AmandaVizedom:    (333G)

Some initial thoughts on the scope of this cross-track topic and potential threads within it:    (333H)

Already, after the first events of OntologySummit2012, a variety of quality-related issues have come up. More are likely, in the judgement of your humble co-champions. Here, we begin to gather these issues under one umbrella.    (333I)

2012.01.31 MikeBennett:    (3414)


2012.01.26 AmandaVizedom:    (33AA)

I made an attempt during today's call (Session 03, ConferenceCall_2012_01_26) to note some of the quality-related issues raised and remarks made. I'm sure I didn't get everything, or get everything quite as the speaker intended. Add and correct!    (339O)

JackRing, slide 8:    (339P)

AnatolyLevenchuk, slide 5:    (339S)

GiancarloGuizzardi:    (33A7)


2012.02.02 AmandaVizedom:    (3463)

During today's telecon (http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2012_02_02), the following quality-related exchange took place in the chat:    (3464)

AmandaVizedom: I am unsure how much of the thread described on Henson's slide 5 intersects the Quality cross-track. I think some but not all. I'd be happy to take suggestions, comments, thoughts regarding issues under the slide 5 topic that are at least significantly issues of ontology quality, metrics, and evaluation. We can use those suggestions to prioritize issues to cover in that cross-track.    (3465)

SimonSpero: [how to tell when an ontology is complete enough] - This fits in to the quality and metrics cross-track    (3466)

PeterYim: @Amanda - I captured that - http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2012_BigSystemsEngineering_CommunityInput#nid345Z    (3467)

AmandaVizedom: @Simon and @David: I agree, and will make "how to tell when an ontology is complete enough" into the quality cross-track focus.    (3468)

Via the above discussion, AmandaVizedom committed to using the quality cross-track to cover some of what HensonGraves emphasized in his slides and comments, especially the question of how to tell when an ontology is complete enough for its intended use.    (3469)

It was noted that we should coordinate this with Track 4, Large-scale domain applications. SteveRay noted that this line of interest has been emphasized with the planned speakers for Track 4.    (346E)


Working towards synthesis:    (38JJ)

Quality in its most formal sense refers to the rigorous use of requirement specifications, requirements-centric design, multi-stage testing and revision, and other risk-management and quality assurance techniques. This is a hallmark of systems engineering, distinguishing it from less rigorous systems creation activities and essential to success in developing large-scale and complex systems and managing them throughout their life-cycles. Various sub-domains within systems engineering apply these risk- and complexity-management techniques to systems overall, to system components, to component interfaces, and to engineering, interface, and other processes. Quality at any of these levels is defined in terms of the degree to which the system, component, process, etc., meets the specified requirements. Analysis and specification of requirements and functions at each of these levels, along with identification and application of relevant quality measures, is an essential part of good systems engineering. In this cross-track we focused on this formal definition of quality as applied to ontologies, while also considering more informal definitions of quality. In particular, we explored the management of ontology quality within large-scale systems engineering projects, looking for both lessons learned and areas needing better support.    (38JK)
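
To make the requirements-centric reading of quality concrete for ontologies, the following is a minimal sketch (not drawn from any Summit presentation) in which requirements are captured as competency questions expressed as SPARQL ASK queries, and the "degree to which specified requirements are met" is reported as the fraction of questions the ontology can answer. It uses the Python rdflib library; the file name and the example questions are hypothetical.

    # Minimal sketch: requirements-coverage check for an ontology, assuming the
    # project's ontology requirements have been written as competency questions
    # (SPARQL ASK queries). File name and questions below are hypothetical.
    from rdflib import Graph
    from rdflib.namespace import OWL, RDFS

    g = Graph()
    g.parse("my-ontology.ttl", format="turtle")

    competency_questions = {
        "Is there a class labelled 'Component'?":
            "ASK { ?c a owl:Class ; rdfs:label 'Component'@en }",
        "Is there an object property labelled 'hasInterface'?":
            "ASK { ?p a owl:ObjectProperty ; rdfs:label 'hasInterface'@en }",
    }

    answered = sum(
        1 for query in competency_questions.values()
        if g.query(query, initNs={"owl": OWL, "rdfs": RDFS}).askAnswer
    )
    coverage = answered / len(competency_questions)
    print(f"Requirements coverage: {answered}/{len(competency_questions)} ({coverage:.0%})")

In practice the competency questions would be derived from the project's requirement specifications, and a coverage figure like this would be only one input to an overall quality assessment.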

It emerged that projects involving ontologies as part of engineered systems, or ontologies as part of systems engineering processes, tend to have few or no ontology quality assurance measures in place. Notably, even in systems engineering projects in which rigorous attention is paid to the identification and specification of other components and aspects of systems, the identification and specification of ontology requirements receives little to no attention.    (38JL)

Reasons for this exception to otherwise rigorous methodology vary, but can include a belief that ontologies are non-technical artifacts, not subject to engineering methodologies; a lack of necessary resources; an absence of concerns for related areas of accountability; a belief that variations in ontology do not affect end functionality or performance; or a belief that, however desirable quality assurance measures for ontologies might be, no such implementable, usable, reliable measures exist.    (38JM)

We considered two kinds of scenario in which ontologies may be used in a big systems project. The ontology may be an integral part of the solution (ontology as application), or it may be a part of the development of some system (ontology as business conceptual model). Many of the same quality assurance parameters may apply in both cases, but the requirements, and the use to which the ontology may be put, will be very different.    (38JN)

The academic literature contains many approaches to the formal quality of ontologies, many of them very mathematical. There is less coverage in the literature of what it is that makes the terms in an ontology truly meaningful, or of how to link formal ontology requirements, as such, to implementation. That is, there are many approaches to ontology quality but few quality assurance measures. This was addressed in a number of presentations at our cross-track session, with a strong focus both on quality factors for ontologies and on how these fit into the role of the ontology in various types of project.    (38JO)

The findings in the Federation and Integration tracks focus on one of the two usage scenarios for ontology: using the ontology as a common conceptual model. However, many of the available ontology quality measures focus on ontology as an application, with mathematical considerations like decidability and so on.    (38JP)
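
As an illustration of the kind of formal, structural measures that dominate the "ontology as application" literature, here is a minimal sketch (again using Python rdflib; the file name is hypothetical and the metrics are generic, illustrative proxies rather than measures endorsed by any Summit speaker) that reports a few purely structural statistics. Note how such measures say nothing about whether the terms are meaningful or fit a particular purpose.

    # Minimal sketch: purely structural ontology metrics (class count, object
    # property count, maximum rdfs:subClassOf depth). These are generic,
    # illustrative proxies; the file name is hypothetical.
    from rdflib import Graph
    from rdflib.namespace import RDF, RDFS, OWL

    g = Graph()
    g.parse("my-ontology.ttl", format="turtle")

    classes = set(g.subjects(RDF.type, OWL.Class))
    object_properties = set(g.subjects(RDF.type, OWL.ObjectProperty))

    # Build a child -> parents map from rdfs:subClassOf assertions.
    parents = {}
    for child, parent in g.subject_objects(RDFS.subClassOf):
        parents.setdefault(child, set()).add(parent)

    def depth(cls, seen=frozenset()):
        # Longest upward path; 'seen' guards against cycles.
        if cls in seen or cls not in parents:
            return 0
        return 1 + max(depth(p, seen | {cls}) for p in parents[cls])

    max_depth = max((depth(c) for c in classes), default=0)
    print(f"classes={len(classes)}, object properties={len(object_properties)}, "
          f"max subClassOf depth={max_depth}")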

Where do techniques like ontology design patterns (Guarino, Gangemi and others) fit in with this picture? Are these quality measures that run across both usage scenarios, to the extent that the best application of these patterns leads to ontologies which respect semantics? Are there measures for meaning as well as measures for ontology as application? What measures apply to one, the other, or both? Is there research targeted at these distinct usage scenarios, or is it just targeted at "ontologies" in the round?    (38JQ)

This last may point to an interesting direction for possible research: given an understanding of the different roles for ontologies as articulated in many of the tracks in this year's Summit, we wonder whether there are new directions for research into quality measures that are targeted to the intended purposes for which ontologies are developed.    (38JR)

Note also that at least one of the presentations in our cross-track session gave the impression that, for the usage scenario in which the ontology is used as a formal conceptual model (and therefore has its role within the quality assurance process for other artifacts in a project), it is not appropriate to also apply quality measures to the ontologies themselves. This is an idea worth unpacking and challenging, with implications for large-systems quality assurance either way.    (38JS)

To further explore and challenge these assumptions, a survey is being assembled which aims to identify the precise scenarios in which people are using ontologies in the context of big systems, and to get some idea of whether quality issues were considered and, if so, how they were addressed. [During the initial weeks of the Summit, the Ontology Quality for Big Systems Co-Champions closely attended to, and asked questions about, ontology quality experiences in big systems engineering projects. Based on prior experience, there was some expectation of reports regarding difficulties with ontology quality and ontology quality assurance, and of resulting problems for projects. Such reports were indeed forthcoming. However, they were fewer than the reports of projects conducted without any sense of the quality of the ontologies used, or indeed any idea of how to get such information. Following up on this revelation, the Co-Champions developed a survey to elicit more detailed information about experiences related to ontology quality and big systems projects, without relying on the respondent, or indeed the project team, having thought about ontology quality explicitly, or having knowledge of, or agreement on, the factors contributing to such quality. The survey was designed to be as neutral as possible with respect to varying theories about ontology quality, and to elicit enough potentially-relevant information to let patterns emerge. The results of this survey are reflected in this text. Details of these results can be found at <insert link>. ]    (38JT)

The good news here is that these issues are comparatively easy to address through better documentation and dissemination of approaches to ontology quality management that are already in use. There is work to be done, but a substantial improvement can be accomplished via wider attention to ontology quality and broader sharing of existing knowledge. Summit exchanges demonstrated that systems engineers are open to deploying that knowledge and that ontologists are open to creating resources that systems engineers can use.    (38JU)

A greater challenge lies in understanding ontology requirements as they derive from usage characteristics. While the bulk of the literature on ontology quality has historically addressed quality in a different sense, the trend is toward greater attention to factors related to fitness for purpose, and the relevance and usability of this research to systems applications is consistently increasing. To the extent that broadly-experienced ontologists develop a sense of what kind of ontology is needed for what kind of application, this sense remains largely in the heads of those ontologists. Broadly-experienced ontologists can be difficult to identify, while less experienced ontologists may be skilled at developing certain kinds of ontologies but unaware of variations across application types.    (38JV)

Furthermore, to the extent that ontologists disagree, principles of variation in requirements should be not only explicit, but based on more than individual, anecdotal experience. This basis does not yet exist, and it is needed in order to provide resources, including tools and methods, that reliably support ontology quality management in big systems engineering contexts. That is a forward challenge for ontology researchers. The survey developed for the Summit is designed, in part, to provide some initial information to stimulate such research, and to suggest particular areas likely to be worth investigating.    (38JW)


Enter your input below ... (please identify yourself and date your entry)    (32CV)