Quality in its most formal sense refers to the rigorous use of
requirement specifications, requirements-centric design, multi-stage
testing and revision, and other risk-management and quality assurance
techniques. This rigor is a hallmark of systems engineering, distinguishing
it from less rigorous systems-creation activities, and is essential to
success in developing large-scale and complex systems and managing them
throughout their life-cycles. Various sub-domains within systems
engineering apply these risk- and complexity-management techniques to
systems overall, to system components, to component interfaces, and to
engineering, interface, and other processes. Quality at any of these
levels is defined as the degree to which the system, component,
process, etc., in question meets its specified requirements.
Analysis and specification of requirements and functions at each of
these levels, along with identification and application of relevant
quality measures, is an essential part of good systems engineering. In
the ontology cross-track session we focused on this formal definition of
quality as applied to ontologies, while also considering more informal
definitions of quality. In particular, we explored the management of
ontology quality within large-scale systems engineering projects,
looking for both lessons learned and areas needing better support.
It emerged that projects involving ontologies as part of
engineered systems, or ontologies as part of systems engineering
processes, tend to apply few or no ontology quality assurance
measures. Notably, even in systems engineering projects in which
rigorous attention is paid to the identification and specification of
other components and aspects of systems, identification and
specification of ontology requirements is given little to no attention.
Reasons for this exception to otherwise rigorous methodology
vary, but can include a belief that ontologies are non-technical
artifacts, not subject to engineering methodologies; a lack of
necessary resources; an absence of concern for related areas of
accountability; a belief that variations in ontology do not affect end
functionality or performance; or a belief that, however desirable
quality assurance measures for ontologies might be, no such
implementable, usable, reliable measures exist.
We considered two kinds of scenario in which ontologies may be
used in a big systems project. The ontology may be an integral part of
the solution (ontology as application), or it may be a part of the
development of some system (ontology as business conceptual model).
Many of the same quality assurance parameters may apply in both cases,
but the requirements, and the use to which the ontology may be put, will
be very different.
The academic literature contains many approaches to formal
quality of ontologies, many of them very mathematical. There is less
coverage in the literature of what it is that makes the terms in an
ontology truly meaningful, or of how to link formal ontology
requirements, as such, to implementation. That is, there are many approaches to ontology
quality but few quality assurance measures. This was addressed in a
number of presentations at our cross-track session, with a strong focus
both on quality factors for ontologies and on how these fit into the
role of the ontology in various types of project.
The findings in the Federation and Integration tracks focus
on one of the two usage scenarios for ontology: using the ontology as a
common conceptual model. However, many of the available ontology quality
measures focus on ontology as an application, with mathematical
considerations such as decidability.
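As an illustration of what such an application-oriented measure can look like in practice (this sketch is not drawn from the summit material itself), one widely implementable check is an automated logical consistency test run by an OWL reasoner. The Python package (owlready2) and the ontology file path used below are assumptions made purely for the purpose of the example.

    # Minimal sketch of an "ontology as application" quality check:
    # ask an OWL reasoner whether any classes in the ontology are
    # logically unsatisfiable. The owlready2 package and the ontology
    # file path are assumptions made for this illustration only.
    from owlready2 import get_ontology, sync_reasoner, default_world

    # Load a hypothetical ontology file (placeholder path).
    onto = get_ontology("file://example-ontology.owl").load()

    with onto:
        # Runs the bundled HermiT reasoner; unsatisfiable classes are
        # reclassified as subclasses of owl:Nothing.
        sync_reasoner()

    unsatisfiable = list(default_world.inconsistent_classes())
    if unsatisfiable:
        print("Unsatisfiable classes found:", unsatisfiable)
    else:
        print("No unsatisfiable classes; ontology passes this basic check.")

Checks of this kind address only the formal, application-oriented sense of quality discussed above; they say nothing about whether the terms in the ontology are meaningful or fit for the intended purpose.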
Where do techniques like ontology design patterns (Guarino,
Gangemi and others) fit in with this picture? Are these quality
measures that run across both usage scenarios, to the extent that the
best application of these patterns leads to ontologies which respect
semantics? Are there measures for meaning as well as measures for
ontology as application? What measures apply to one, the other, or both?
Is there research targeted at these distinct usage scenarios or are
they just targeted at "ontologies" in the round?
This last question may point to an interesting direction for possible
research: given an understanding of the different roles for ontologies
as articulated in many of the tracks in this year’s Summit, we wonder
whether there are new directions for research into quality measures
targeted to the intended purposes for which ontologies are
developed.
Note also that at least one of the presentations in our
cross-track session gave the impression that for the usage scenario in
which the ontology is used as a formal conceptual model (and therefore
has its role within the quality assurance process for other artifacts
in a project) it was not appropriate to also apply quality measures to
the ontologies themselves. This is an idea worth unpacking and
challenging, with implications for large-scale systems quality assurance
either way.
To further explore and challenge these assumptions, a survey
is being assembled which aims to identify the precise scenarios in
which people are using ontologies in the context of big systems, and to
get some idea of whether quality issues were considered and, if so, how
these were addressed.
The good news here is that these issues are comparatively easy
to address with better documentation and dissemination of approaches to
ontology quality management that are already used. There is work to be
done, but a substantial improvement can be accomplished via wider
attention to ontology quality and better documentation and
dissemination of existing knowledge. Summit exchanges demonstrated that
systems engineers are open to deploying that knowledge and ontologists
are open to creating resources that systems engineers can use.
A greater challenge lies in understanding ontology
requirements as they derive from usage characteristics. While the bulk
of the literature on ontology quality has historically addressed
quality in a different sense, the trend is toward greater attention to
factors related to fitness for purpose, and the relevance and usability
of this research to systems applications is consistently increasing. To
the extent that broadly-experienced ontologists develop a sense of what
kind of ontology is needed for what kind of application, this sense
remains largely in the head of the ontologists. Broadly-experienced
ontologists can be difficult to identify, while less experienced
ontologists may be skilled at developing certain kinds of ontologies
but unaware of variations across application type.
Furthermore, to the extent that ontologists disagree,
principles of variation in requirements should be not only explicit,
but based on more than individual, anecdotal experience. This basis
does not yet exist. To provide resources, including tools and methods,
that reliably support ontology quality management in big systems
engineering contexts, this basis is needed. Providing it is a forward
challenge for ontology researchers. The survey developed for the Summit
is designed, in part, to provide some initial information to stimulate
such research, and to suggest particular areas likely to be worth
investigating.
--
maintained by the X-Track-A1 champions: AmandaVizedom & MikeBennett