I failed to mention a couple of important factors affecting the evaluation of these systems.
One is the ability to eavesdrop on complex systems in a way that
yields meaningful information. To the extent possible, the system
should be able to tell you what is going on. This is one of my pet
gripes with connectionist systems: they don't even know themselves
what they are doing and certainly can't describe any internal state.
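As an illustration of the kind of introspection meant here (a sketch with invented rules and facts, not any particular system): a toy forward-chaining rule engine can record which rule fired on which premises, something a connectionist model cannot readily report about itself.

```python
# Hypothetical sketch: a rule engine that can "tell you what is going
# on" by keeping a trace of every inference it makes. All rule and
# fact names are invented for illustration.

rules = [
    # (name, premises, conclusion)
    ("R1", {"has_fever", "has_cough"}, "possible_flu"),
    ("R2", {"possible_flu", "recent_travel"}, "recommend_test"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    trace = []                       # the introspection record
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {sorted(premises)} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"has_fever", "has_cough", "recent_travel"})
for step in trace:                   # the system explains itself
    print(step)
```

The trace is exactly the "eavesdropping" channel: every conclusion can be tied back to the rule and premises that produced it.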
The other is the judicious use of dashboards. The most effective
ones split the display between standard information you always want
to see and an area that lets you scan the system and bring up more
localized information in detail. This is the 'drill-down' function
that allows close examination. It is somewhat related to
'paraphrasing', though that is a difficult area of study: some work
has produced paraphrasing based on 'breadcrumbs' of processes
(verbs), and some relies on journaling. The use of 'info' files also
helps new system users. These can be sprinkled amply throughout the
ontology, with a switch to turn them on/off as desired; they can
even be given -v (verbose) levels.
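A minimal sketch of how such switchable 'info' annotations might look, with invented term names and levels (verbosity 0 turns them off entirely, higher -v levels select more detailed notes):

```python
# Hypothetical sketch: 'info' strings attached to ontology terms,
# with a verbosity switch. Term names and note text are invented.

ontology = {
    "Vulnerability": {
        "subclass_of": "Weakness",
        "info": {1: "A flaw that can be exploited.",
                 2: "A flaw that can be exploited; distinguish a "
                    "vulnerability from an exposure."},
    },
    "Exploit": {
        "subclass_of": "Artifact",
        "info": {1: "Code or technique that uses a vulnerability."},
    },
}

def describe(term, verbosity=0):
    """Return the most detailed info note at or below the requested
    verbosity level; verbosity 0 switches info off entirely."""
    notes = ontology[term].get("info", {})
    levels = [lvl for lvl in notes if lvl <= verbosity]
    return notes[max(levels)] if levels else ""

print(describe("Vulnerability", verbosity=2))
print(describe("Exploit", verbosity=0))   # switched off
```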
On 5/21/2013 12:31 AM, Osorno, Marcos wrote:
Thank you for the insights.
Great question and one
that has had my interest for quite a while.
Testing usually centers on two critical elements: (1) what
are the requirements that must be met by the system, and
(2) is the testing to the spec done by (a) inspection,
(b) demonstration, or (c) test?
Those certainly seem like good categories. It seems like I
encounter two general categories of systems: (1) generalized
knowledge systems (like Wikipedia) and (2) domain specific
applications of ontologies/schemas within other applications
(like Yelp or Google Maps). In the first case, I'd be curious
about general fundamentals for evaluating the requirements and
specifications of a generalized KR system. I think the second
case is more difficult because it requires analysis of the role
of the ontology within the context of the domain.
The questions you ask
are answered differently depending on these two primary
issues. For example, your question about KR is best
answered by addressing KR as a derived requirement. The
design approach addresses the problem and may select one
of a number of different types of KR. (A solid reason I
prefer to talk about data structures and methods, rather
than implementation languages.)
Lately, I've been thinking about it in similar ways but more
closely related to engineering cost: availability of libraries,
complexity of supporting code, complexity of join operations,
availability of support IT staff/developers, complexity of
backend support systems, etc. However, that still doesn't really
help me nail down how well the model performs as a possible
representation of the world for the system nor does it help make
any sort of case for using anything more esoteric like OWL or CL
in lieu of simple one-off JSON/XML or DB ER representations.
The world of NoSQL makes delaying ontological decision making
even easier, since I'm burdened less by the persistence layer
(though I still have to map the business logic). I'm drawn to
the concept of A|B and usability testing for models, to see
whether different models help users better answer questions about
the domain or derive deeper insights. We often tweak the UI, but
how do we capture similar KR feedback and tweak the model? How
do we test various alternative representations? Also, I would
argue that often the representation isn't the derived
requirement, but rather that the representation is fairly
central to many systems while the UI and other implementation
details are actually the derived requirements. I believe that
many of our newer web-based systems are effectively knowledge
systems helping represent the world for us to aid in our daily
lives and decision making more than they are brick-and-mortar,
meatspace applications. Yet, while we talk about usability quite
a bit, we don't really focus on the evaluation of
representativeness, insightfulness, etc.
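One way to make the A|B idea concrete, sketched here with invented data: encode the same facts in two candidate representations, run the same competency question against both, and compare not just the answers but how directly each model yields them. The hosts, services, and predicates below are all hypothetical.

```python
# Hedged sketch of A|B-testing two representations of the same facts.
# The competency question ("which services on a given host have a
# known vulnerability?") and all data are invented for illustration.

# Representation A: flat one-off records (JSON-style dicts).
records_a = [
    {"host": "h1", "service": "sshd", "vulnerable": True},
    {"host": "h1", "service": "httpd", "vulnerable": False},
    {"host": "h2", "service": "sshd", "vulnerable": True},
]

def query_a(host):
    return sorted(r["service"] for r in records_a
                  if r["host"] == host and r["vulnerable"])

# Representation B: explicit triples, closer to an ontology, where
# vulnerability is a fact about the service rather than a flag
# duplicated on every host record.
triples_b = [
    ("h1", "runs", "sshd"), ("h1", "runs", "httpd"),
    ("h2", "runs", "sshd"), ("sshd", "hasVuln", "CVE-XXXX"),
]

def query_b(host):
    vulnerable = {s for s, p, o in triples_b if p == "hasVuln"}
    return sorted(o for s, p, o in triples_b
                  if s == host and p == "runs" and o in vulnerable)

# Both must answer the competency question; the A|B comparison is
# then about join complexity, redundancy, and ease of update.
assert query_a("h1") == query_b("h1") == ["sshd"]
```

Note the trade-off the test exposes: representation A answers in one pass but duplicates the vulnerability flag per host, while B needs a join but records the fact once.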
The problem statement
starts the design process and shapes the subsequent stages
of the process. If you have an idea of the types of
problems you have in mind, it will aid in the discussion.
Right now I'm dealing
with projects where the model is the product which is what's
making things complicated from an evaluation perspective.
The use case includes a variety of new schemas and standards
for sharing computer security information. The schemas are
well thought out and comprehensive. But since a generalized
model is the product, I'm not sure I have the tools/approach
to evaluate the model without re-inventing it as I go. This
means that I have to sort out theoretical use cases and
requirements for the evaluation of the schema which is fine.
The trickier part is populating the model with anything
resembling realistic data or a real use case. This is
troubling because at that point am I evaluating the model as
a possible representation of the world or am I evaluating
how difficult the model is to populate?
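A hedged sketch of one way to separate those two questions: populate the schema with synthetic data whose incompleteness you control, so that low field coverage reflects your generator rather than the model itself. The schema and field names below are invented, not drawn from any real standard.

```python
# Hypothetical sketch: populate a candidate schema with synthetic
# instances and measure per-field coverage, separating "is the model
# a good representation?" from "is the model hard to populate?".
import random

schema = {"event_id": "required", "timestamp": "required",
          "source_ip": "required", "attack_pattern": "optional",
          "campaign": "optional"}

def synth_event(rng):
    """Generate one synthetic record; optional fields are filled only
    half the time, mimicking realistically incomplete source data."""
    event = {"event_id": rng.randrange(10**6),
             "timestamp": "2013-05-21T00:00:00Z",
             "source_ip": f"10.0.0.{rng.randrange(256)}"}
    for field in ("attack_pattern", "campaign"):
        if rng.random() < 0.5:
            event[field] = "synthetic-value"
    return event

rng = random.Random(0)          # seeded, so runs are reproducible
events = [synth_event(rng) for _ in range(100)]

# Population-difficulty signal: how often does each field get a value?
coverage = {f: sum(f in e for e in events) / len(events) for f in schema}
assert all(coverage[f] == 1.0
           for f, kind in schema.items() if kind == "required")
print(coverage)
```

Because the generator's fill rates are known, any gap between expected and observed coverage points at the schema (awkward required fields, hard-to-derive values) rather than at the data source.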
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J