John,
Thank you for the insights.
Great question, and one that has had my interest for quite a while.
Testing usually centers on two critical elements: (1) what are the requirements that must be met by the system, and (2) is the testing to the spec done by (a) inspection, (b) demonstration, or (c) test.
Those certainly seem like good categories. It seems like I encounter two general categories of systems: (1) generalized knowledge systems (like Wikipedia) and (2) domain-specific uses of ontologies/schemas within other applications (like Yelp or Google Maps). In the first case, I'd be curious about general fundamentals for evaluating the requirements and specifications of a generalized KR system. I think the second case is more difficult because it requires analysis of the role of the ontology within the context of the domain.
The questions you ask are answered differently depending on these two primary issues. For example, your question about KR is best answered by treating it as a derived requirement. The design approach addresses the problem and may select one of a number of different types of KR. (A solid reason I prefer to talk about data structures and methods rather than implementation languages.)
Lately, I've been thinking about it in similar ways but more closely related to engineering cost: availability of libraries, complexity of supporting code, complexity of join operations, availability of support IT staff/developers, complexity of backend support systems, etc. However, that still doesn't really help me nail down how well the model performs as a possible representation of the world for the system, nor does it help make any sort of case for using anything more esoteric like OWL or CL in lieu of simple one-off JSON/XML or DB ER representations. The world of NoSQL makes delaying ontological decision making even easier, since I'm burdened less by the persistence layer (though I still have to map the business logic).

I'm drawn to the concept of A/B and usability testing for models to see if different models help users better answer questions about the domain or derive deeper insights. We often tweak the UI, but how do we capture similar KR feedback and tweak the model? How do we test various alternative representations? Also, I would argue that often the representation isn't the derived requirement, but rather that the representation is fairly central to many systems, while the UI and other implementation details are actually the derived requirements. I believe that many of our newer web-based systems are effectively knowledge systems, helping represent the world for us to aid our daily lives and decision making, more than they are brick-and-mortar, meatspace applications. Yet, while we talk about usability quite a bit, we don't really focus on evaluating representativeness, insightfulness, etc.
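One half-baked way I've been picturing it (a purely hypothetical sketch: the data, model shapes, and competency question below are all made up, not from any real project): encode the same facts under two candidate models, pose the same competency question against each, and see which model lets the question be answered more completely. Roughly:

# Hypothetical sketch: "A/B testing" two candidate representations of the
# same facts against a shared competency question. All data is invented.

# Model A: flat, one-off JSON-style records (one attribute-value row per host)
model_a = [
    {"host": "web01", "vuln": "CVE-XXXX-0001", "severity": "low"},
    {"host": "db01",  "vuln": "CVE-XXXX-0002", "severity": "high"},
]

# Model B: graph-style triples (subject, predicate, object), closer to an
# ontology, where relationships between entities are first-class
model_b = [
    ("web01", "hasVulnerability", "CVE-XXXX-0001"),
    ("CVE-XXXX-0001", "hasSeverity", "low"),
    ("web01", "dependsOn", "db01"),   # a relation the flat model can't express
    ("db01", "hasVulnerability", "CVE-XXXX-0002"),
    ("CVE-XXXX-0002", "hasSeverity", "high"),
]

# Competency question: "Which hosts are exposed to a high-severity
# vulnerability, directly or through a dependency?"

def answer_model_a(records):
    # Flat model: can only see direct vulnerabilities
    return {r["host"] for r in records if r["severity"] == "high"}

def answer_model_b(triples):
    # Graph model: also follow dependsOn edges one hop
    severity = {s: o for s, p, o in triples if p == "hasSeverity"}
    direct = {s for s, p, o in triples
              if p == "hasVulnerability" and severity.get(o) == "high"}
    depends = {(s, o) for s, p, o in triples if p == "dependsOn"}
    indirect = {s for s, o in depends if o in direct}
    return direct | indirect

print("Model A:", answer_model_a(model_a))  # {'db01'}
print("Model B:", answer_model_b(model_b))  # {'db01', 'web01'}

Obviously toy-sized, but that's the flavor of "A/B for models" I keep imagining: the same question scored against both shapes. The hard part is getting realistic data and real users' questions into the loop rather than my own.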
The problem statement starts the design process and shapes its subsequent stages. If you have an idea of the types of problems you have in mind, it will aid the discussion.
Right now I'm dealing with projects where the model is the product, which is what's making things complicated from an evaluation perspective. The use case involves a variety of new schemas and standards for sharing computer security information. The schemas are well thought out and comprehensive. But since a generalized model is the product, I'm not sure I have the tools/approach to evaluate the model without re-inventing it as I go. This means that I have to sort out theoretical use cases and requirements for evaluating the schema, which is fine. The trickier part is populating the model with anything resembling realistic data or a real use case. This is troubling because, at that point, am I evaluating the model as a possible representation of the world, or am I evaluating how difficult the model is to populate?
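To make that last part concrete (again purely hypothetical: an invented schema fragment and a made-up observation, not the actual standards I'm working with), when I try to instantiate even a small slice of one of these schemas from the data I actually have, the exercise starts to measure population effort as much as model fit:

# Hypothetical sketch: populating a made-up slice of a security-sharing
# schema from a raw observation, and noticing how much of the "evaluation"
# is really about how hard the schema is to fill in.

# Invented schema fragment: required fields for an "indicator" record
required_fields = ["id", "observable", "indicator_type", "valid_time",
                   "confidence", "producer", "related_campaign"]

# Made-up raw observation, the kind of thing I can actually pull from logs
raw_observation = {
    "observable": "198.51.100.7",
    "indicator_type": "ip-watchlist",
    "first_seen": "2013-05-01T12:00:00Z",
}

def populate(raw):
    # Map what we have; everything else must be invented or left empty
    record = {f: None for f in required_fields}
    record["observable"] = raw.get("observable")
    record["indicator_type"] = raw.get("indicator_type")
    record["valid_time"] = raw.get("first_seen")
    return record

record = populate(raw_observation)
filled = sum(1 for v in record.values() if v is not None)
print(record)
print("coverage: %d of %d required fields" % (filled, len(required_fields)))
# coverage: 3 of 7 -- is that a gap in the model, or just a gap in my data?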
Cheers,
Marcos

=-=-=-=
Marcos Osorno
JHU/APL