Ontology for Big Systems

What are Ontologies?

Here’s a surprising fact: we use ontologies all the time. In fact, we’re all unwitting ontologists. The mental models we use to interact with our world are a type of highly internalized, implicit ontology. Our mental models serve to organize and exploit the assumptions we hold about the world: the things that exist in it and how they’re related to one another.

For example, when we go to see a doctor, the visit is governed by a set of conventions and shared assumptions:
  • patients wait in a waiting room; 
  • the doctor examines patients, patients don’t examine the doctor; 
  • nurses assist doctors, etc.
Yet, while we might use such ontologies all the time, rarely do we notice, let alone care. Of course, mental models are not the only type of ontology; most explicit conceptualizations of the world can be considered a type of ontology.

While we might not consciously think about it, whenever we do anything, we’re acting on a vast body of implicit knowledge and belief about what we think exists and is real. If this is true, and ontologies are so pervasive, then why haven’t you heard of them? Until recently, ontology was primarily of interest only to philosophers. Ontology, a branch of metaphysics, is concerned with answering big questions such as “What exists?”

Pondering the nature of being, while interesting, is not a priority for most people. Sure, we all answer “What exists?” every day, but we do so pragmatically, often intuitively and almost always implicitly. Our personal ontologies are so internalized that we are rarely aware that we use them.

Ontologies are also embedded in many of our everyday objects and systems. The forks we use are designed based on assumptions about human mouths, hands and the types of foods we eat. A transit system is designed according to assumptions about population density, growth, usage and rates. The objects and systems that pervade our lives carry with them an imprint of the beliefs of their designers.

Ontology engineering arose as an answer to a “problem” in computing. When humans interact with one another, we rely on a large body of assumed, shared belief about the context, including what kinds of things are in it and how they interact. The fact that we are human already means that we share a vast body of broadly similar experiences and knowledge. When we interact with computers that lack this knowledge, they do or conclude things we find bizarre. But we can’t, in every interaction, spend time identifying and formulating the contextual background knowledge the computer needs in order to understand what we say, do, or represent as data.

So, the idea arose of trying to make this background knowledge explicit for computers. This means that for a given context, we make explicit those background assumptions that humans use to reason with. With such an ontology, a computer would now be able to “understand”, or at least make assumptions and inferences about, the part of the world we made available to it.

Over the past 30 years, as we’ve come to rely on an ever-increasing web of socio-technical systems, we’ve encountered a slew of new problems. Organizations found that as employees retired or left, their knowledge would leave with them, and in many cases maintaining or evolving the systems they built would cost large amounts of money. Similarly, people found that combining two systems was no trivial task. Often, implicit assumptions made by the different designers would contradict one another, making integration impossible. When machines need to talk to one another, or when we want to understand or use a system designed by another person (who might no longer be around), those implicit assumptions suddenly matter a lot.

A fundamental task for ontology today is to make explicit the implicit assumptions that people or systems make about their relevant portion of the world. This can range from users independently, yet collaboratively, creating tag clouds, to search engines providing directories or taxonomies, to organizations developing controlled vocabularies, deploying thesauri and creating logical models of the world. This makes what we believe accessible to others in a clear, precise way, forcing us to consider our basic assumptions and bringing to light any subtle disagreements or indeed errors.

In this sense, we engineer ontologies that represent aspects of reality for a particular purpose. The word “ontology” has been used to refer to a wide range of computational artifacts of varying complexity, ranging from folksonomies (tag clouds), controlled vocabularies, taxonomies (Yahoo! directory) and thesauri (WordNet) to logical theories of reality (Basic Formal Ontology, DOLCE).

As Leo Obrst explained in the 2007 Ontology Summit:

An ontology defines the terms used to describe and represent an area of knowledge (subject matter).
An ontology also is the model (set of concepts) for the meaning of those terms.
An ontology thus defines the vocabulary and the meaning of that vocabulary.

One of the most successful applications of ontologies has been in Apple’s Siri. When you ask Siri “find me a restaurant”, it activates a “Restaurant Ontology” which defines what a “restaurant”, “reservation” and “rating” are and how they’re related to one another. Siri uses this information to interact “intelligently” and book you that reservation. IBM’s Watson also used a number of lightweight ontologies to distinguish between “people,” “places,” “times” and other categories when playing Jeopardy!
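To make the idea concrete, such an ontology can be pictured as a set of explicit statements about what kinds of things exist and how they relate. The sketch below models this in Python as subject–predicate–object triples; every class and relation name here is an illustrative invention, not Siri’s or Watson’s actual internal model.

```python
# A minimal sketch of a "restaurant" ontology as subject-predicate-object
# triples. All class and relation names are hypothetical, chosen only to
# illustrate the structure of an ontology.
ontology = [
    # Class hierarchy: what kinds of things exist
    ("Restaurant", "is_a", "Business"),
    ("Reservation", "is_a", "Event"),
    ("Rating", "is_a", "Evaluation"),
    # Relations: how those kinds of things connect
    ("Reservation", "made_at", "Restaurant"),
    ("Rating", "evaluates", "Restaurant"),
    ("Restaurant", "serves", "Cuisine"),
]

def related(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in ontology if s == subject and p == predicate]

# Once the assumptions are explicit, a program can "understand" that a
# reservation is something made at a restaurant, and that a restaurant
# is a kind of business:
print(related("Reservation", "made_at"))  # ['Restaurant']
print(related("Restaurant", "is_a"))      # ['Business']
```

Real systems typically express this in a dedicated language such as RDF or OWL rather than in application code, but the principle is the same: the background knowledge lives in explicit, queryable statements instead of in the programmer’s head.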

As our world becomes more complex, ontologies are a vital piece of a solution addressing the problems of Big Systems and Big Data. Depending on the intended use, ontologies can:
  • make implicit yet vital assumptions about our systems explicit and accessible
  • enable integration among systems and data through semantic interoperability
  • improve model design, adaptability and reuse
  • reduce development and operational costs
  • enhance decision support systems
  • aid in knowledge management and discovery
  • provide a basis for more adaptable systems

Finally, as we move into the knowledge age there is a growing expectation that our systems will be more self-describing and intelligent. Engineering such systems, allowing intuitive use and meeting the expectations of all stakeholders will require a more consistent and complete use of ontologies and ontological analysis. The 2007 Ontology Summit provides a more thorough (and somewhat more technical) perspective on the exact nature of ontologies.

Why are ontologies relevant for Big Systems?