Amanda,
One challenge with requirements is their source. Single-source requirements, whether from a system sponsor, some authoritative entity within an enterprise, or some trans-enterprise entity (e.g., a virtual enterprise or a government body), result in ontologies, and evaluations thereof, with very limited utility in other contexts with different scope dimensions. Typically the sponsor of such requirements has little concern for the requirements of “adjacent” domains or enterprises, unless some formal (i.e., contractual or legal) relationship exists with those other entities. This is the root cause of most interoperability problems that are not simply errors in implementation. Of course, if there is a “point source” of requirements, and presumably sponsorship to see those requirements implemented, by all means capture them as completely, objectively, and accurately as possible. But increasingly even such sponsored requirements have to operate in a more “global” context, whether we like globalization or not, and whether we want to minimize exposure/interaction outside our “intranet” or not. There may be contexts where outside exposure is undesirable for the purposes at hand, but I don’t think this forum is focused on such contexts.
So the challenge is how to broaden the source of requirements considered for ontologies intended to be used across enterprise/sponsor boundaries and to support interactions with adjacent domains, while still keeping those requirements affordable and manageable. How can we avoid “gratuitous” requirements constraints (overly narrow/specific ontologies and ontology assessment contexts), while at the same time avoiding the overly broad and expensive “do everything for everybody” ontology and assessment contexts that arise when no single requirements sponsor exists? And even if you have a single requirements sponsor, has that sponsor made some implicit, yet unwarranted, requirements assumptions? Or has that sponsor/source overlooked some important ontology requirements (often through being overly familiar with, or invested in, a specific domain, paradigm, or enterprise)?
The SCOPE model doesn’t answer these questions, but it does offer a structured way for an interested party or group to explore the possible requirements “space”, much as Goldilocks explores the home of the three bears and decides which porridge and which bed are “just right” for her. Only in this situation, what is “just right” is more likely a range of requirement attribute values than a single value. Having a diverse set of interested parties participate in a SCOPE workshop on a target ontology’s requirements is a good way to develop requirements that transcend the needs of a single sponsor, yet explicitly reject scope creep that the group considers “excessive”, and to document that the rejected requirements were overtly considered and rejected rather than simply overlooked.
You will also note that my earlier post on “assessment context” explicitly includes the life-cycle phase in which an assessment is conducted as one of several possible assessment-context dimensions. A similar consideration applies here: what range of ontology assessment contexts does the group want to consider/support, and which will it explicitly reject and leave for others to address, should they choose to do so at some future time?
Hans
From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Amanda Vizedom
Sent: Saturday, December 08, 2012 8:19 AM
To: Ontology Summit 2013 discussion
Subject: Re: [ontology-summit] Ontology Summit 2013
+1
And it is that focus that allows evaluation to be both a narrow enough topic and one that brings in other key topics, but only insofar as they make a real difference to evaluation.
The summit can start from a position that is neutral with respect to many subjects that are bottomless cans of worms when approached without such a framework. We can work to support discussion that puts *requirements* at the center.
I'd like to add that since evaluation of quality, in this sense, encompasses whatever requirements have been identified, it can encompass both validation and verification, and both global and local requirements. What matters is whether they have been identified, through careful gathering and analysis, as requirements in a particular case. That's why "evaluation across the lifecycle" is a better-defined topic than "evaluation" or "quality" without qualification.
It will be important to pay attention to that requirements identification and analysis early and often, and to think in terms of a range of lifecycle exemplars.
On Sat, Dec 8, 2012 at 4:29 AM, Matthew West <dr.matthew.west@xxxxxxxxx> wrote:
Dear John,
This is the right question.
> What is the purpose of an ontology?
MW: My answer to this is that an ontology (the computer-science artefact) is an
information system: it is both a repository of information and something you
can query to get answers to questions.
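MW: As a minimal sketch of that view, here is an example in Python using
rdflib; the ontology file, namespace, and properties (equipment.ttl, ex:Pump,
ex:pressureRating) are made up purely for illustration, not taken from any
real resource.

    import rdflib  # common Python toolkit for RDF/OWL data

    # Load a (hypothetical) ontology; equipment.ttl and the ex: vocabulary
    # are illustrative assumptions.
    g = rdflib.Graph()
    g.parse("equipment.ttl", format="turtle")

    # Ask a decision-supporting question: which pumps are rated above 100?
    results = g.query("""
        PREFIX ex: <http://example.org/equipment#>
        SELECT ?pump ?rating
        WHERE {
            ?pump a ex:Pump ;
                  ex:pressureRating ?rating .
            FILTER (?rating > 100)
        }
    """)
    for row in results:
        print(row.pump, row.rating)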
MW: So now we can consider the purpose of information, which in business is
to support decision taking (information can also be entertainment, or some
combination of the two). Better information leads to better decisions, and
decisions appear throughout the processes of an enterprise.
MW: So now we have a basis for evaluating our ontologies: do they provide
(some of) the information that meets the requirements to support a decision?
Of course there may be more than one decision that the ontology supports,
and this answers the question of scope: the (intended) scope of an ontology
is the set of decisions it is designed to support.
We can now evaluate an ontology in terms of the quality of the information
it provides. Here quality does not mean better or worse, but meeting agreed
requirements. An evaluation would then address the effectiveness and
efficiency with which those requirements are met.
Some properties you might consider, which determine the effectiveness of
information, include:
- Relevance - is the information relevant to the decision at hand?
- Clarity - is the meaning of the information clear?
- Accuracy - how close to the truth is the information?
- Completeness/timeliness - is all the information available when the
decision needs to be made?
Some properties that affect efficiency include:
- Cost - how much does the information cost to provide?
- Consistency - is the same thing referred to in the same way? (Otherwise
there are cost and timeliness issues from reconciliation.)
> But focused on what? Any kind of comparison, including evaluation,
> implies a criterion of some kind: "X is better than Y according to
> some criterion Z."
>
> What criterion Z makes one ontology better than another?
MW: Therefore one ontology is better than another if it meets the
information requirements where the other does not. On the other hand, one
ontology is not better than another merely because it exceeds the
information requirements. And if two ontologies both meet the information
requirements, then one is better than the other if it does so more
efficiently, which usually means at lower cost.
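MW: As a toy sketch of that comparison rule, in Python (the requirement ids
and costs below are made up for illustration; this is not a standard
algorithm, just the rule above written out):

    # Toy encoding of the rule above: fitness is judged only against the
    # stated requirements, then by cost; exceeding them earns no credit.

    def better(a, b, requirements):
        """True if ontology a is better than ontology b.
        a and b are dicts: {'covers': set of requirement ids, 'cost': number}."""
        a_meets = requirements <= a['covers']
        b_meets = requirements <= b['covers']
        if a_meets and not b_meets:
            return True                   # a meets requirements where b does not
        if a_meets and b_meets:
            return a['cost'] < b['cost']  # both meet them: efficiency decides
        return False

    reqs = {'r1', 'r2'}
    onto_a = {'covers': {'r1', 'r2'}, 'cost': 10}
    onto_b = {'covers': {'r1', 'r2', 'r3'}, 'cost': 25}  # broader, but dearer
    print(better(onto_a, onto_b, reqs))   # True: both meet reqs; a is cheaper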
MW: One of the things that falls out from this is that an ontology is not of
poor quality if it fails to meet requirements that were not part of its
specification (you cannot fail to meet unstated requirements). This is a
common criticism of ontologies ("it's a bad ontology because it was designed
to do this, but it can't meet my requirement, which is that"). There is no
reason to suppose an ontology will meet wider requirements than it was
designed to meet, so if you intend it to meet broad requirements, that had
better be part of its design.
MW: So in essence it is all a matter of quality (fitness for purpose).
Regards
Matthew West
Information Junction
Tel: +44 1489 880185
Mobile: +44 750 3385279
Skype: dr.matthew.west
matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx
http://www.informationjunction.co.uk/
http://www.matthew-west.org.uk/
This email originates from Information Junction Ltd. Registered in England
and Wales No. 6632177.
Registered office: 2 Brookside, Meadow Way, Letchworth Garden City,
Hertfordshire, SG6 3JE.