
Re: [ontology-summit] [quality] Some Terms and Definitions

To: michael.uschold@xxxxxxxxxxxxxxxx, Ontology Summit 2012 discussion <ontology-summit@xxxxxxxxxxxxxxxx>
From: Mike Bennett <mbennett@xxxxxxxxxxxxxxx>
Date: Wed, 22 Feb 2012 13:24:55 +0000
Message-id: <4F44ECA7.10702@xxxxxxxxxxxxxxx>
That is a great summary and analysis of the different aspects of quality for ontologies. I hope we can build on this as some kind of formal framework.

I hope you will be able to make it to this week's Summit session.

Mike

On 21/02/2012 02:56, Michael F Uschold wrote:
Interesting discussion.  What I get out of this is the following:

Q1: How 'good' something is
      (by some objective and/or subjective criteria).

Q2: How consistent and repeatable the process for making something is
      (by measurable criteria).

Q3: How well something that is made meets its design specification
      (by measurable, often objective criteria).

Q4: How well something that is made is received by its intended users/audience
      (by objective and subjective criteria).

These four things are quite different, but work together synergistically. For example:

A process that ensures high Q2 will likely include any number of applications of Q3 to the different steps and/or components used to make the thing; for example, design and code reviews.

Combining high Q2 with high Q1 is powerful.

Combining high Q2 with low Q1 may be commercially successful, but below a certain threshold it makes no sense (e.g. Dilbert's 'crap' cartoon).

With respect to ontologies:

Q1 is a bit nebulous; many have tried to set out criteria for this. Some criteria are generic, some should be specific to purpose. Measuring the quality of an ontology with respect to its purpose should be what Q4 is all about.

Q2 is good practice for ontology engineering, and should also be linked to ontology purpose.

Q3 is not too relevant today, because most people just build ontologies; they are not specified first. An exception is Gruninger's requirement that his ontologies contain all and only those things required by formally stated competency questions (a sketch of how such a check could be mechanized follows below).

Q4 is in its early days; there is not a lot of work in this area. There are many kinds of stakeholders, and many possible purposes for an ontology. Often the stakeholders are the programmers and system architects, not end users in the traditional sense. Q4 should assess Q1 measures related to ontology purpose (repeated for emphasis).
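To make the Q3 point about competency questions a bit more concrete, here is a minimal sketch (my own illustration, not anyone's established tooling) of how a formally stated competency question could be checked mechanically: write the question as a SPARQL query and require that the ontology can answer it. The file name, namespace and example question are assumptions made up for the illustration; the sketch uses Python with rdflib.

    # Sketch: a competency question expressed as an executable check (illustrative only).
    from rdflib import Graph

    g = Graph()
    g.parse("my-ontology.ttl", format="turtle")   # hypothetical ontology file

    # Assumed competency question: "Which classes are defined as kinds of Agreement?"
    competency_question = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.org/contracts#>
    SELECT ?cls WHERE { ?cls rdfs:subClassOf+ ex:Agreement . }
    """

    answers = list(g.query(competency_question))
    # The ontology meets this requirement only if the question is answerable.
    assert answers, "Competency question cannot be answered by the ontology"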


Michael

On Sun, Feb 12, 2012 at 8:07 AM, Mike Bennett <mbennett@xxxxxxxxxxxxxxx> wrote:
The term "quality" may mean different things to different people.
I want to draw out two of those possible senses and kick off some
discussion about the possible connections between them.

Here I am not trying to introduce anything new, I'm trying to
describe what's in the relevant literature. So these definitions
are not really discussion points, but people are welcome to
clarify and correct any vagueness with reference to the literature.

The two senses I would characterize as:

1. Quality in the dictionary sense of the word: how good
something is;
2. Quality in the sense used in the industrial term "Quality
Assurance".

In the second sense, Quality Assurance (QA) simply describes an
approach whereby one can ensure (and demonstrate that one has
ensured) what the qualities of some deliverable are. QA is not
about making better things; it is about better making things.
It's about metrics and measures.

The basic definitions and parameters of QA are defined in QA
standards such as ISO 9000. They are also described in national
standards such as BS 5750 or the German TÜV system.

Quality Assurance is about being able to "demonstrate control".
That is, a firm has formal processes in place by which it
manages its deliverables. These processes are designed and
optimized to ensure consistency in the process of creating
whatever those deliverables are.

We could refer to (1) and (2) as "Qualitative Quality" and
"Quantitative Quality" (you see why the choice of the word
Quality wasn't a good one!)

To illustrate the difference: McDonald's (the Golden Arches) has
arguably the best QA system of any restaurant chain. However, few
would argue that they make the best burgers. Qualitative quality
doesn't necessarily follow from quantitative quality. What does
follow is the business value of consistency. Anyone who finds
themselves in a strange city with hungry children in tow will not
take them to an unknown burger chain where the food may or may
not be excellent; they will take them to the place where they
know exactly what they will get. This is the business value of a
good QA system.

An accurate, if tongue-in-cheek, characterization of QA is given
by Scott Adams in the Dilbert cartoons: "Say you're going to
produce crap; produce crap; prove you produced crap" (please
excuse the language). In fact, if you substitute anything at all
(any quality at all) where Adams has the word 'crap', you have a
reasonable summary of how QA works. You formally specify the
things you are going to produce; you produce those things
according to a well-defined (and auditable) set of formal
processes; and you end up with an auditable set of records
demonstrating exactly how those things were produced. This is why
the phrase "demonstrate control" is at the heart of QA.

Now you could of course specify that you are going to produce
excellent things. Rolls Royce for example has a reputation for
doing this. Once the firm knows where its market is, and what
sort of things it wants to and is able to produce for that
market, then it simply formalizes the qualities that it wants to
produce in its products or services.

How might we apply this to ontologies?

To the extent that different qualities of an ontology can be
formally specified, these can be input to a formal, industrial QA
process.

To the extent that one can formalize Quality (1), that is, "what
makes a good ontology?", one can apply Quality (2) QA to ensure
that the ontologies produced comply with those requirements.
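As a small illustration of what "formalize and then QA" could look
like in practice (a sketch under assumptions, not a statement of how
anyone currently does it): one very simple Quality (1) requirement
might be "every class must carry a label and a textual definition".
Once stated that way, the requirement can be checked mechanically as
part of the QA process. The file name and the choice of
skos:definition as the definition property are assumptions made for
the example.

    # Sketch: mechanically checking a formalized quality requirement (illustrative only).
    from rdflib import Graph, RDF, RDFS
    from rdflib.namespace import OWL, SKOS

    g = Graph()
    g.parse("my-ontology.ttl", format="turtle")   # hypothetical ontology file

    failures = []
    for cls in g.subjects(RDF.type, OWL.Class):
        if (cls, RDFS.label, None) not in g:
            failures.append((cls, "missing rdfs:label"))
        if (cls, SKOS.definition, None) not in g:   # assumed documentation convention
            failures.append((cls, "missing skos:definition"))

    for cls, problem in failures:
        print(cls, problem)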

There are other things one might want to ensure about the
deliverable ontologies. For example if one is extending an
ontology which is built according to certain microtheories, one
would want to ensure that those microtheories are consistently
applied in the new material. Sometimes this can be detected by
simple measures such as consistency checking; sometimes it may
not be.
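For the consistency-checking case, one possible way of mechanizing
the check (purely a sketch; it assumes the extended ontology is
available as a local OWL file and that the owlready2 package, with
its bundled HermiT reasoner, is installed) would be to run a DL
reasoner over the extended ontology and report any unsatisfiable
classes.

    # Sketch: automated consistency check of an extended ontology (illustrative only).
    from owlready2 import (get_ontology, sync_reasoner, default_world,
                           OwlReadyInconsistentOntologyError)

    onto = get_ontology("file://extended-ontology.owl").load()   # hypothetical file

    try:
        with onto:
            sync_reasoner()   # runs the bundled HermiT reasoner
    except OwlReadyInconsistentOntologyError:
        print("The extended ontology is globally inconsistent")
    else:
        # Classes inferred to be unsatisfiable (equivalent to owl:Nothing)
        for cls in default_world.inconsistent_classes():
            print("Unsatisfiable class:", cls)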

Validation and Verification:

These terms also have specific meanings when applied to formal,
industrial QA systems, which are narrower than their dictionary
sense. In software deliverables these translate to:

Verification: Testing the deliverable against its formal
specification. Test cases verify, against each functional
requirement in the specification, that that requirement is met.
Validation: Ensuring that the system as specified and delivered
actually meets the customer's expectations (typically via User
Acceptance Testing). Here we find out whether the
specify-build-test set of activities actually resulted in what
the customer wanted.

How these are implemented depends very much on the technology and
the architecture of course. There may be interesting challenges
in applying both of these for ontologies.
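To make the verification side slightly more concrete for ontologies
(again a sketch under assumptions, not established practice): if the
specification lists required classes and properties with their
intended domains and ranges, each such requirement could be turned
into an automated test, much as functional requirements become test
cases for software. The names and the specification item below are
invented for the illustration; the test would be run with a test
runner such as pytest.

    # Sketch: a verification test derived from a (hypothetical) ontology specification.
    from rdflib import Graph, RDFS, URIRef

    EX = "http://example.org/contracts#"
    g = Graph()
    g.parse("my-ontology.ttl", format="turtle")   # hypothetical ontology file

    def test_has_issuer_property_as_specified():
        """Assumed spec item: hasIssuer relates FinancialInstrument to LegalEntity."""
        prop = URIRef(EX + "hasIssuer")
        assert (prop, RDFS.domain, URIRef(EX + "FinancialInstrument")) in g
        assert (prop, RDFS.range, URIRef(EX + "LegalEntity")) in g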

Design review / peer review

The word "Peer review" has a slightly different meaning in
industry, to that which it has in academia. A QA process does not
just consist of tests; frequently one designs into the process
some kind of review activity (typically a meeting), often called
a design review, also referred to as a form of peer review. This
is where some deliverable item in the QA process (usually a
design specification) is presented to a group of people who would
be in a position to critique it and verify that it is fit for
purpose. Part of the design of the QA process is to consider the
knowledge requirements for various forms of design review (e.g.
system architects, the QA department, coders in the appropriate
language), and ensure that at the relevant point in the process,
the right knowledge is gathered around the table for that
deliverable to be signed off. Another form of this is the code
walk-through.

Again, there would be similar but different applications of this
thinking at the various stages of development of an ontology.
Perhaps you might review the taxonomy (or apply some of the
structural tests described in the literature) before developing
the whole ontology. Or not. A QA process has to be designed; it
doesn't just happen.
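As one example of the kind of structural test that might be applied
to the taxonomy at such a review (my own illustrative sketch; the
literature describes many others), one could check that the asserted
subclass hierarchy contains no cycles. The file name is an
assumption.

    # Sketch: a structural test on the taxonomy -- no cycles in asserted rdfs:subClassOf.
    from rdflib import Graph, RDFS

    g = Graph()
    g.parse("my-taxonomy.ttl", format="turtle")   # hypothetical taxonomy file

    # Map each class to its asserted parents.
    parents = {}
    for child, parent in g.subject_objects(RDFS.subClassOf):
        parents.setdefault(child, set()).add(parent)

    def ancestors(start):
        """All classes reachable upward from 'start' via subClassOf."""
        seen, stack = set(), [start]
        while stack:
            for p in parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    for cls in parents:
        if cls in ancestors(cls):
            print("Class participates in a subclass cycle:", cls)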

People often think that the imposition of a formal QA regime on a
technical development implies that some "waterfall" development
model must be applied. This is not the case. If you look at other
formal processes such as the Rational Unified Process (RUP), you
will see the same linkages between deliverables, applied such
that the deliverables can be changed and delivered to a faster
time scale. Similarly the "Agile" approach, when examined in
detail, shows that those linkages are still in place.

For ontologies, if these are to be updated frequently and in
real time, even more imaginative ways of applying the basic
parameters of QA might be required. For example, one might choose
to use a "gardening" type of approach, whereby the QA is applied
after the event by making or proposing updates or corrections.
Here the QA process followed by Wikipedia is a good example of
how this may be done.

Anyway, I realise that not everyone has been thinking about
Quality (2) / quantitative quality, and this is just one part of
the bigger picture. But I hope that, for those who have spent
less time in an industrial environment, this is a useful guide to
what those of us from such an environment usually mean when we
talk about Quality Assurance.

The key point is that the more things about quality in the
natural-language, qualitative sense can be formalized, the more
they can be made use of in formal, industrial QA.

Hope this is helpful,

Mike







 


-- 
Mike Bennett
Director
Hypercube Ltd. 
89 Worship Street
London EC2A 2BF
Tel: +44 (0) 20 7917 9522
Mob: +44 (0) 7721 420 730
www.hypercube.co.uk
Registered in England and Wales No. 2461068

_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/   
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/  
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2012/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2012  
Community Portal: http://ontolog.cim3.net/wiki/