OntologySummit2012: (X-Track-A1) "Ontology Quality and Large-Scale Systems" Community Input (32CT)
Track Co-Champions: Dr. AmandaVizedom & Mr. MikeBennett (338V)
Mission Statement: (32CU)
This cross-track aspect will focus on the evaluation of ontologies within the context of Big Systems applications. Whether creating, developing, using, reusing, or searching for ontologies for use in big systems, engineers, architects, designers, developers and project owners will encounter questions about ontology evaluation and quality. How should those questions be answered? How do we know whether an ontology is fit for use in (or on) a large-scale engineered system or a large-scale systems engineering effort? This cross-track aspect ties together the evaluation-related discussions that arise within the Summit Tracks and individual sessions, providing a context in which to take up and address the issues generally. Specific focus will evolve with recurring themes, potentially including such topics as ontology quality characteristics, fitness for purpose, requirements, metrics, evaluation methodologies and resources. (32DV)
see also: OntologySummit2012_Quality_Synthesis (32EG)
General Discussion (333F)
2012.01.25, AmandaVizedom: (333G)
Some initial thoughts on the scope of this cross-track topic and potential threads within it: (333H)
Already, after the first events of OntologySummit2012, a variety of quality-related issues have come up. More are likely, in the judgement of your humble co-champions. Here, we begin to gather these issues under one umbrella. (333I)
- Meta-topic: Questions about the Quality Cross-track (333J)
- Question: Is this about the quality of ontologies for large-scale systems, or about the quality of such systems themselves? (333K)
- Response (AmandaVizedom): This track is specifically focused on the quality of the ontologies. This does not rule out discussion of the quality of systems incorporating, or engineered using, ontologies, insofar as dependencies exist between the two. The focus, however, is on ontology quality specifically. (336W)
- Question: Is it possible to say anything meaningful about ontology quality within the limits of the cross-track, given how much debate there is, and how little settlement, about ontology quality? (333L)
- Response (AmandaVizedom): Indeed, the unsettled state of the question, in contrast to the significant effect ontology quality has on systems that incorporate ontologies, is precisely why the track was suggested. No argument, then, as to whether this is a reasonable question. Here's why I think we *can* make useful progress: because we are limited by the specific focus of the track, and the summit itself, on ontologies for large-scale systems and systems engineering. We are therefore obligated to confine ourselves to discussing ontology quality 'as it makes a difference to' large-scale systems and systems engineering. We will exercise some restraint on ourselves, and focus within this motivating context. This practical focus takes a considerable amount of potential discussion out of scope. It also gives us an agreed reference direction for the discussions we do have: the direction of big systems and systems engineering use cases, and the ways in which characteristics of ontologies support or fail to support those use cases. (336X)
- Topic: Elements, Dimensions, or Degrees of Ontology Quality. When we speak of ontology quality, even when focusing on what makes a difference to large-scale systems & systems engineering, we may be thinking of many different things. The importance of considering these different things separately has been raised. How to usefully frame and identify these quality-related things, however, is neither clear nor standardized. Within the summit discussion, at least these ways of analyzing ontology quality have been suggested: (336Y)
- in terms "how much quality is needed," (336Z)
- in terms of types or dimensions of ontology quality, (3370)
- in terms of distinct characteristics (or features, or properties) that ontologies may have, fail to have, or have in degrees. How should we understand ontology quality? What manner of breaking down this complex notion is most useful? (333N)
- Topic: Metrics and Measurement. What metrics are available for ontology quality? What methods of measurement? What's missing? For what elements of ontology quality are metrics and measurements needed but missing? How easily might such metrics and measurements be developed? (An illustrative sketch of simple structural metrics appears after this list.) (3371)
- Topic: Ontology Evaluation. How are ontologies evaluated? How should they be evaluated? How much ontology evaluation is general (use-independent)? How much ontology evaluation is use-specific? Are currently used methods of ontology evaluation any good? Good enough? (3372)
- Topic: Specification of Ontology Requirements. How are ontology requirements for systems specified? How should they be specified? What guidance or assistance is available for specifying ontology requirements? What guidance or assistance is needed? (336V)
- Topic: Use Cases. What about past and present Big Systems that incorporate ontologies? How do (did) they manage ontology quality? How well does (did) that work? What do these use cases contain by way of issues, solutions, lessons learned, and challenges regarding ontology quality? (339D)
- Topic: Relating Use Cases, Requirements, and Ontology Quality. What are the relationships between use cases, ontology requirements, and elements, dimensions, or degrees of ontology quality? (333O)
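As a hedged illustration of the Metrics and Measurement topic above, the sketch below computes a few simple structural and documentation-coverage measures over an OWL ontology using the Python rdflib library. The file name "example.owl" and the particular counts are assumptions made purely for illustration; they are candidate metrics for discussion, not endorsed indicators of ontology quality.

    # A minimal sketch (assumptions: a local RDF/XML file named "example.owl" exists,
    # and these structural counts are candidate metrics only, not validated quality measures).
    from rdflib import Graph, RDF, RDFS, OWL

    g = Graph()
    g.parse("example.owl", format="xml")  # assumed RDF/XML serialization

    classes = set(g.subjects(RDF.type, OWL.Class))
    obj_props = set(g.subjects(RDF.type, OWL.ObjectProperty))
    data_props = set(g.subjects(RDF.type, OWL.DatatypeProperty))

    # Documentation coverage: how many classes carry a human-readable label and comment.
    labeled = sum(1 for c in classes if (c, RDFS.label, None) in g)
    commented = sum(1 for c in classes if (c, RDFS.comment, None) in g)

    # Subsumption placement: how many classes sit under at least one named superclass.
    subclassed = sum(1 for c in classes
                     if any(o in classes for o in g.objects(c, RDFS.subClassOf)))

    print(f"classes: {len(classes)}, object properties: {len(obj_props)}, "
          f"datatype properties: {len(data_props)}")
    if classes:
        print(f"label coverage:   {labeled / len(classes):.0%}")
        print(f"comment coverage: {commented / len(classes):.0%}")
        print(f"classes with a named superclass: {subclassed / len(classes):.0%}")

Measures of this structural kind are easy to automate; the harder, open question for this track is which of them, if any, actually track fitness for use in a big systems context.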
2012.01.31, MikeBennett: (3414)
- Background: What is Quality? (340E)
- There are really two distinct usages of the term 'Quality' which are in circulation: (340F)
- We could think of these as qualitative quality and quantitative quality (the Q word was not really a good choice for QA - it's really about having processes that demonstrate control over deliverables) (340I)
- These are different. For example, McDonald's Golden Arches arguably have the best quality assurance of any restaurant chain, but they do not have the best burgers. It is the consistency of production which is monetizable to them. (340J)
- There is some cross-over: (340K)
- to the extent that you can quantify "what makes a good ontology?", you can build the answers into the formal specifications against which an ontology is verified and validated (a minimal sketch of such a check appears at the end of this entry) (340L)
- you can also build into the development process the design reviews and other activities required to ensure that those who have some understanding of those intangibles are able to provide input to the process, and to correct and/or veto deliverables - just as in code development you would have design reviews for coding, application of agreed design conventions, and so on. (340M)
- Background: Applying Quality to Big Systems (340N)
- Not all big systems are engineered, and not all engineering is big systems. (340O)
- Here we are looking at ontology quality both for engineered and for non-engineered big systems (340P)
- Non-engineered systems (big or otherwise) introduce a new dimension into what an ontology is required to do (340Q)
- and therefore, for quantitative quality, how one is to demonstrate that these requirements are met. (340R)
- Example: Engineered systems are designed by intelligent agents (people) and so are designed to operate within clearly defined and formally specified parameters (340S)
- for instance (simplifying wildly), a feedback control system is designed to operate within one stable quadrant of all the available behaviors of the system (the unstable quadrants are where you will find behaviors like unstable oscillations, things that fly apart, etc.). The designer knows not to cross these mathematically defined boundaries (340T)
- non-designed, i.e. emergent, systems are subject to no such constraints. Complex dynamic systems have no predefined bounds on their behavior (340U)
- What does this mean for ontology? (340V)
- An ontology for an engineered system has a reasonably well defined set of ontological commitments: (340W)
- For emergent systems (or any non engineered system) the choices for ontological commitment must be made by the ontologist (340Z)
- what granularity of descriptions of kinds of 'thing' is appropriate to adequately describe the system for whatever are the purposes for which it is to be described? (this depends on what is the use case for the ontology itself) (3410)
- What features or aspects of the system need to be described for the purposes to which the ontology is to be put? (3411)
- What theoretical or descriptive framework is appropriate? (3412)
- Much of this may be intangible, qualitative quality. How and to what extent can elements of this be quantified, or experts brought into the review process who understand these questions? (3413)
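As a minimal sketch of the cross-over point above (building whatever can be quantified into the formal specification against which an ontology is verified and validated), the Python fragment below treats competency questions as SPARQL ASK queries with expected answers and runs them as automated checks using rdflib. The file name, the ex: namespace, and the two checks are hypothetical placeholders; real checks would be drawn from a project's own requirement specification and design reviews.

    # A minimal sketch (assumptions: "system-ontology.ttl", the ex: namespace, and the
    # competency questions below are hypothetical; real requirements would come from
    # the project's own specification and design reviews).
    from rdflib import Graph

    PREFIXES = """
    PREFIX ex:   <http://example.org/system#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    """

    # Each competency question is a SPARQL ASK query plus the answer required
    # for the ontology to count as meeting that requirement.
    COMPETENCY_CHECKS = [
        ("No component class falls outside the ex:SystemElement hierarchy",
         "ASK { ?c rdfs:subClassOf* ex:Component . "
         "      FILTER NOT EXISTS { ?c rdfs:subClassOf* ex:SystemElement } }",
         False),
        ("At least one interface between subsystems is modeled",
         "ASK { ?i a ex:Interface }",
         True),
    ]

    g = Graph()
    g.parse("system-ontology.ttl", format="turtle")

    failures = [desc for desc, query, expected in COMPETENCY_CHECKS
                if bool(g.query(PREFIXES + query).askAnswer) != expected]

    if failures:
        print("Ontology verification FAILED for:")
        for desc in failures:
            print(" -", desc)
    else:
        print("All competency checks passed.")

Checks of this style only cover the quantifiable side of quality; the design-review activities described above remain the place where the intangible, qualitative judgements are exercised.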
2012.01.26, AmandaVizedom: (33AA)
I made an attempt during today's call (Session 03, ConferenceCall_2012_01_26) to note some of the quality-related issues raised and remarks made. I'm sure I didn't get everything, or get everything quite as the speaker intended. Add and correct! (339O)
- Is the ontologists' goal (iteration stop rule) Proof of Correctness, Fit for Purpose, or something else? (339Q)
- How to ensure the continual integrity of any resulting ontology? (339R)
AnatolyLevenchuk, slide 5: (339S)
- For ontology-based formalization of systems engineering, the ontologies used must not be "folk" or "common sense" ontologies, but must be counter-intuitive, based in "engineering state-of-the-art", "knowledge about things, not about descriptions of things." (339T)
AnatolyLevenchuk, slide 7: (339U)
- The needed type of ontology, supporting "engineering artifact...processing... needs combined usage of terminology/semiotics and ontology." (339V)
AnatolyLevenchuk, slide 8: (339W)
- The needed ontology must evolve, and include: (339X)
- less formal semantics, more formal pragmatics (339Y)
- multi-agent belief revision theory (339Z)
- separation of administrative and ontology domains (units of ontology maintaining/editing/communication/library granularity and units of belief revision) (33A0)
AnatolyLevenchuk, slide 10: (33A1)
- The needed ontology must support: (33A2)
- Many examples of failure to understand the relations between concepts in different models, especially examples in which there was a failure to understand that the concepts do not refer to the same thing. (33A8)
- Need to incorporate both formal ontological theories and conceptual modeling principles (33A9)
2012.02.02, AmandaVizedom: (3463)
During today's telecon (http://ontolog.cim3.net/cgi-bin/wiki.pl?ConferenceCall_2012_02_02), the following quality-related exchange took place in the chat: (3464)
AmandaVizedom: I am unsure how much of the thread described on Henson's slide 5 intersects the Quality cross-track. I think some but not all. I'd be happy to take suggestions, comments, thoughts regarding issues under the slide 5 topic that are at least significantly issues of ontology quality, metrics, and evaluation. We can use those suggestions to prioritize issues to cover in that cross-track. (3465)
SimonSpero: [how to tell when an ontology is complete enough] - This fits in to the quality and metrics cross-track (3466)
PeterYim: @Amanda - I captured that - http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2012_BigSystemsEngineering_CommunityInput#nid345Z (3467)
AmandaVizedom: @Simon and @David: I agree, and will make "how to tell when an ontology is complete enough" into the quality cross-track focus. (3468)
Via the above discussion, AmandaVizedom committed to using the quality cross-track to cover some of what HensonGraves emphasized in his slides and comments, especially (3469)
- slide 5: "Success And Relevance Of Semantic Issues In Engineering" in which it is noted: (346A)
- There has been push back on this topic on the grounds that it was covered last year (346B)
- However, marketing ontology is not the same as establishing where there are successes, analyzing failures, and identifying conditions that might drive success (346C)
- this is of great concern to engineering decision makers (346D)
It was noted that we should coordinate this with Track 4, Large-scale domain applications. SteveRay noted that this line of interest has been emphasized with the planned speakers for Track 4. (346E)
Working towards synthesis: (38JJ)
Quality in its most formal sense refers to the rigorous use of requirement specifications, requirements-centric design, multi-stage testing and revision, and other risk-management and quality assurance techniques. This is a hallmark of systems engineering, distinguishing it from less rigorous systems creation activities and essential to success in developing large-scale and complex systems and managing them throughout their life-cycles. Various sub-domains within systems engineering apply these risk- and complexity-management techniques to systems overall, to system components, to component interfaces, and to engineering, interface, and other processes. Quality at any of these levels is defined in terms of the degree to which the system, component, process, etc., meets the specified requirements. Analysis and specification of requirements and functions at each of these levels, along with identification and application of relevant quality measures, is an essential part of good systems engineering. In the quality cross-track we focused on this formal definition of quality as applied to ontologies, as well as considering more informal definitions of quality. In particular, we explored the management of ontology quality within large-scale systems engineering projects, looking for both lessons learned and areas needing better support. (38JK)
It emerged that projects involving ontologies as part of engineered systems, or ontologies as part of systems engineering processes, tend to include few or no ontology quality assurance measures. Notably, even in systems engineering projects in which rigorous attention is paid to the identification and specification of other components and aspects of systems, identification and specification of ontology requirements is given little to no attention. (38JL)
Reasons for this exception to otherwise rigorous methodology vary, but can include a belief that ontologies are non-technical artifacts, not subject to engineering methodologies; a lack of necessary resources; an absence of concerns for related areas of accountability; a belief that variations in ontology do not affect end functionality or performance; or a belief that, however desirable quality assurance measures for ontologies might be, no such implementable, usable, reliable measures exist. (38JM)
We considered two kinds of scenario in which ontologies may be used in a big systems project. The ontology may be an integral part of the solution (ontology as application), or it may be part of the development of some system (ontology as business conceptual model). Many of the same quality assurance parameters may apply in both cases, but the requirements, and the use to which the ontology may be put, will be very different. (38JN)
The academic literature contains many approaches to formal quality of ontologies, many of them very mathematical. There is less coverage in the literature of what makes the terms in an ontology truly meaningful, or of how to link formal ontology requirements, as such, to implementation. That is, there are many approaches to ontology quality but few quality assurance measures. This was addressed in a number of presentations at our cross-track session, with a strong focus both on quality factors for ontologies and on how these fit into the role of the ontology in various types of project. (38JO)
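To make the contrast concrete: one widely automatable example of the formal, mathematical style of measure is logical consistency and class-satisfiability checking with a description-logic reasoner. The sketch below uses the Python owlready2 package and its bundled HermiT reasoner (which requires a Java runtime); the file path is a placeholder. Passing such a check says nothing, of course, about whether the ontology's terms are meaningful for the intended use, which is exactly the gap noted above.

    # A minimal sketch (assumption: the path below is a placeholder for a local OWL file;
    # owlready2 ships the HermiT reasoner, which requires a Java runtime on the PATH).
    from owlready2 import (get_ontology, sync_reasoner, default_world,
                           OwlReadyInconsistentOntologyError)

    onto = get_ontology("file:///path/to/example.owl").load()

    try:
        # Run the reasoner; unsatisfiable classes are made equivalent to owl:Nothing.
        with onto:
            sync_reasoner()
    except OwlReadyInconsistentOntologyError:
        print("The ontology as a whole is logically inconsistent.")
    else:
        unsatisfiable = list(default_world.inconsistent_classes())
        if unsatisfiable:
            print("Unsatisfiable classes found:")
            for cls in unsatisfiable:
                print(" -", cls)
        else:
            print("No unsatisfiable classes detected by the reasoner.")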
The findings in the Federation and Integration tracks focus on one of the two usage scenarios for ontology: using the ontology as a common conceptual model. However, many of the available ontology quality measures focus on ontology as an application, with mathematical considerations like decidability and so on. (38JP)
Where do techniques like ontology design patterns (Guarino, Gangemi, and others) fit into this picture? Are these quality measures that run across both usage scenarios, to the extent that the best application of these patterns leads to ontologies which respect semantics? Are there measures for meaning as well as measures for ontology as application? What measures apply to one, the other, or both? Is there research targeted at these distinct usage scenarios, or is it just targeted at "ontologies" in the round? (38JQ)
This last may point to an interesting direction for possible research: given an understanding of the different roles for ontologies as articulated in many of the tracks in this year's Summit, we wonder whether there are possible new directions for research into quality measures that are targeted to the intended purposes for which ontologies are developed. (38JR)
Note also that at least one of the presentations in our cross-track session gave the impression that for the usage scenario in which the ontology is used as a formal conceptual model (and therefore has its role within the quality assurance process for other artifacts in a project) it was not appropriate to also apply quality measures to the ontologies themselves. This is an idea which is worth unpacking and challenging, with implications for large systems quality assurance either way. (38JS)
To further explore and challenge these assumptions, a survey is being assembled which aims to identify the precise scenarios in which people are using ontologies in the context of big systems, and to get some idea of whether quality issues were considered and, if so, how they were addressed. [During the initial weeks of the Summit, the Ontology Quality for Big Systems Co-Champions closely attended to, and asked questions about, ontology quality experiences in big systems engineering projects. Based on prior experience, there was some expectation of reports regarding difficulties with ontology quality, and ontology quality assurance, and of resulting problems for projects. Such reports were indeed forthcoming. However, they were fewer than the reports of projects conducted without any sense of the quality of the ontologies used, or indeed any idea of how to get such information. Following up on this revelation, the Co-Champions developed a survey to elicit more detailed information about experiences related to ontology quality and big systems projects, without relying on the respondent, or indeed the project team, having thought about ontology quality explicitly, or having knowledge of, or agreement on, factors contributing to such quality. The survey was designed to be as neutral as possible with respect to varying theories about ontology quality, and to elicit enough potentially-relevant information to let patterns emerge. The results of this survey are reflected in this text. Details of these results can be found at <insert link>. ] (38JT)
The good news here is that these issues are comparatively easy to address with better documentation and dissemination of approaches to ontology quality management that are already used. There is work to be done, but a substantial improvement can be accomplished via wider attention to ontology quality and better documentation and dissemination of existing knowledge. Summit exchanges demonstrated that systems engineers are open to deploying that knowledge and ontologists are open to creating resources that systems engineers can use. (38JU)
A greater challenge lies in understanding ontology requirements as they derive from usage characteristics. While the bulk of the literature on ontology quality has historically addressed quality in a different sense, the trend is toward greater attention to factors related to fitness for purpose, and the relevance and usability of this research to systems applications is consistently increasing. To the extent that broadly-experienced ontologists develop a sense of what kind of ontology is needed for what kind of application, this sense remains largely in the heads of those ontologists. Broadly-experienced ontologists can be difficult to identify, while less experienced ontologists may be skilled at developing certain kinds of ontologies but unaware of variations across application types. (38JV)
Furthermore, to the extent that ontologists disagree, principles of variation in requirements should be not only explicit, but based on more than individual, anecdotal experience. This basis does not yet exist. To provide resources, including tools and methods, that reliably support ontology quality management in big systems engineering contexts, this basis is needed. That area is a forward challenge for ontology researchers. The survey developed for the summit is designed, in part, to provide some initial information to stimulate such research, and to suggest particular areas likely to be worth investigation. (38JW)
Enter your input below ... (please identify yourself and date your entry) (32CV)