OntologySummit2012: Media Kit (draft) (32CX)
This is the workspace for preparing the OntologySummit2012_MediaKit. (32CY)
... (this page is undergoing significant revision (Apr 5 2012)!) (32EI)
What is the Ontology Summit? (38YP)
Started by Ontolog and NIST in 2006, the Ontology Summit is an annual series of events that involves the ontology community and other communities related to each year's theme. It was founded by experienced, senior leaders in the community, each driven by a desire to collaborate across companies, projects and subfields to address issues relevant across the field and to advance the technology and practice generally. The summit is largely a self-organizing, bottom-up, volunteer-driven effort that solicits contributions from participants around the world in both industry and academia. Moreover, the community successfully exploits a variety of communication platforms to share and develop ideas and work towards a consensus. (39QS)
Each year's Summit consists of a series of events and continued discourse spanning three months, culminating in a free, two-day face-to-face workshop and symposium at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, USA. Discussions employ email, chat and teleconferences, including presentations given by experts on issues relevant to each year's theme. A Summit Communique, capturing the most important findings, is developed throughout the summit, finalized at the symposium and endorsed by participants. (39QT)
Bringing together a diverse and dispersed group of volunteers to self-organize and contribute toward a common goal is a particularly difficult collective action problem. A unique attribute of the Ontology Summit is that it accomplishes this outside the traditional conference framework. Its open IP policy, coupled with continued commitment and support by Ontolog, ensures that all generated material, including ideas, stories, arguments, presentations, audio recordings and transcripts, is archived and available in the public domain. (39QU)
A group of initial volunteers organizes and facilitates the summit activities, while seeking panelists and soliciting participation from leading companies and labs. Additionally, the community maintains low barriers to participation, welcoming both well-established professionals and those new to the field. Consequently, personal or corporate agendas are largely set aside in this framework, allowing discussions to focus on the challenges associated with each year's theme. (39QV)
The Ontology Summit program is now co-organized by Ontolog, NIST, NCOR, NCBO, IAOA and NCO_NITRD, along with the co-sponsorship of other organizations that are supportive of the Summit goals and objectives. (39QW)
Ontology for Big Systems (38YS)
The Cambrian explosion occurred 530 million years ago, as life on earth experienced a sudden increase in diversity and in the rate of evolution. Over the past 100 years, we've entered a Cambrian age for information, knowledge and systems, coupled with a constantly evolving technology landscape. The amount of knowledge that is produced, published and shared by humanity has been growing exponentially each year. In the past decade, more data has been collected, more video has been produced and more information has been published than in all of previous human history. (39QX)
At the same time, with the advent of the computer, digital representations and the Internet, it has become possible to model more of the complexity of systems and to connect more people. Moreover, an increasing number of people and organizations are driven to connect their systems to one another. With all this new information (aka Big Data) and all these new systems (aka Big Systems), there has also been an attendant growth in the complexity of the systems that model physical phenomena and handle information, as well as in their size, scale, scope and interdependence. (39QY)
To address the problems that have arisen during the current period of information and knowledge, we need novel tools and approaches. Some of the major challenges facing Big Systems stem not only from their scale, but also from their scope and complexity. At the same time, there are novel challenges for Big Systems when different, dispersed groups work together toward a common goal, for instance in understanding Climate Change. This leads to a need for better solutions for interoperability among federated systems and for fostering interdisciplinary collaboration. For example, we would expect systems developed to help evacuate a city in anticipation of a tsunami to operate seamlessly with our navigation systems. (39QZ)
This year's Ontology Summit is titled "Ontology for Big Systems" and seeks to explore, identify and articulate how ontological methods can benefit the various disciplines required to engineer a "big system." The term "big system" is intended to cover a large scope that includes many of the terms encountered in the media, such as big data, complex techno-socio-economic systems, intelligent or smart systems, cloud computing, net-centricity and collective intelligence. Established disciplines that fall within the summit scope include (but are not limited to) systems engineering, software engineering, information systems modelling, and data mining. (39R0)
The principal goal of the summit was to bring together and foster collaboration among the ontology community, the systems community, and stakeholders of Big Systems. Together, summit participants exchanged ideas on how ontological analysis and ontology engineering might make a difference when applied in these "big systems." We produced recommendations describing how ontologies fit into Big Systems, as well as examples of where ontological techniques have been, or could be, applied in domains such as bioinformatics, electronic health records, intelligence, the smart electrical grid, manufacturing and supply chains, earth and environmental sciences, e-science, cyber-physical systems and e-government. As is traditional with the Ontology Summit series, the results will be captured in the form of a communique, with expanded supporting material provided on the web. (39R1)
What Are Ontologies? (38ZZ)
Here's a surprising fact: we use ontologies all the time. In fact, we're all unwitting ontologists. The mental models we use to interact with our world are a type of highly internalized, implicit ontology. Our mental models serve to organize and exploit the assumptions we hold about the world - the things that exist in it and how they're related to one another. (39R2)
For example, when we go to see a doctor, the visit is governed by a set of conventions and shared assumptions: patients wait in a waiting room; the doctor examines patients, patients don't examine the doctor; nurses assist doctors, etc. Yet, while we might use such ontologies all the time, rarely do we notice, let alone care. Of course, mental models are not the only type of ontology; most explicit conceptualizations of the world can be considered a type of ontology. (39R3)
While we might not consciously think about it, whenever we do anything, we're acting on a vast body of implicit knowledge and belief about what we think exists and is real. If this is true, and ontologies are so pervasive, then why haven't you heard of them? Until recently, ontology was primarily of interest only to philosophers. Ontology, a branch of metaphysics, is concerned with answering big questions such as "What exists?" (39R4)
Pondering the nature of being, while interesting, is not a priority for most people. Sure, we all answer "What exists?" every day, but we do so pragmatically, often intuitively and almost always implicitly. Our personal ontologies are so internalized that we are rarely aware that we use them. (39R5)
Ontologies are also embedded in many of our everyday objects and systems. The forks we use are designed based on assumptions about human mouths, hands and the types of foods we eat. A transit system is designed according to assumptions about population density, growth, usage and rates. The objects and systems that pervade our lives carry with them an imprint of the beliefs of their designers. (39R6)
Ontology engineering arose as an answer to a problem in computing. When humans interact with one another, we rely on a large body of assumed, shared belief about the context, including what kinds of things are in it and how they interact. The fact that we are humans already means that we share a vast body of broadly similar experiences and knowledge. When we interact with computers that lack this knowledge, they do or conclude things we find bizarre. But we can't, in every interaction, spend time identifying and formulating the contextual background knowledge the computer needs in order to understand what we say, do, or represent as data. (39R7)
So, the idea arose of trying to make this background knowledge explicit for computers. This means that for a given context, we make explicit those background assumptions that humans use to reason with. With such an ontology, a computer would now be able to understand, or at least make assumptions and inferences about, the part of the world we made available to it. (39R8)
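To make this concrete, here is a minimal sketch in Python of what explicitly stated background knowledge can look like; the facts and names are invented for illustration (building on the doctor's-visit example above) and are not written in any standard ontology language.

    # Illustrative toy example; all names and facts are invented.
    # Background assumptions stated explicitly as (subject, relation, object) facts.
    FACTS = {
        ("Nurse", "is_a", "ClinicalStaff"),
        ("Doctor", "is_a", "ClinicalStaff"),
        ("ClinicalStaff", "is_a", "Person"),
        ("Patient", "is_a", "Person"),
        ("Doctor", "examines", "Patient"),
        ("Nurse", "assists", "Doctor"),
    }

    def is_a(kind, ancestor):
        """Infer class membership by following the explicit is_a links."""
        if kind == ancestor:
            return True
        return any(is_a(parent, ancestor)
                   for (child, rel, parent) in FACTS
                   if rel == "is_a" and child == kind)

    # Nothing states "a Nurse is a Person" directly, but a program can now
    # infer it from the explicit background knowledge:
    print(is_a("Nurse", "Person"))    # True
    print(is_a("Patient", "Doctor"))  # False

Real ontology languages (OWL, Common Logic and others) express far richer assumptions than this, but the principle is the same: once the assumptions are written down, a machine can reason with them.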
Over the past 30 years, as we've come to rely on an ever-increasing web of socio-technical systems, we've encountered a slew of new problems. Organizations found that as employees retired or left, their knowledge would leave with them, and in many cases it would cost large amounts of money to maintain or evolve these systems. Similarly, people found that combining two systems was no trivial task. Often, implicit assumptions made by the different designers would contradict one another, making integration impossible. When machines need to talk to one another, or when we want to understand or use a system designed by another person (who might no longer be around), then those implicit assumptions suddenly matter a lot. (39R9)
A fundamental task for ontology today is to make explicit the implicit assumptions that people or systems make about their relevant portion of the world. This can range from users independently, yet collaboratively, creating tag clouds, to search engines providing directories or taxonomies, to organizations developing controlled vocabularies, deploying thesauri and creating logical models of the world. This makes what we believe accessible to others in a clear, precise way, forcing us to consider our basic assumptions and bringing to light any subtle disagreements or indeed errors. (39RA)
In this sense, we engineer ontologies that represent aspects of reality for a particular purpose. The word ontology has been used to refer to a wide range of computational artifacts of varying complexity, ranging from folksonomies (tag clouds), controlled vocabularies, taxonomies (the Yahoo! directory) and thesauri (WordNet) to logical theories of reality (Basic Formal Ontology, DOLCE). (39RB)
As Leo Obrst explained in the 2007 Ontology Summit: (39RC)
- An ontology defines the terms used to describe and represent an area of knowledge (subject matter). (39RD)
- An ontology also is the model (set of concepts) for the meaning of those terms. (39RE)
- An ontology thus defines the vocabulary and the meaning of that vocabulary. (39RF)
One of the most successful applications of ontologies has been in Apple's Siri. When you ask Siri to "find me a restaurant", it activates a Restaurant Ontology which defines what a restaurant, a reservation and a rating are and how they're related to one another. Siri uses this information to interact intelligently and book you that reservation. IBM's Watson also used a number of lightweight ontologies to distinguish between people, places, times and other categories when playing Jeopardy! (39RG)
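For a flavor of what such a vocabulary looks like when written down for a machine, here is a minimal, hypothetical sketch in Python using the rdflib library; the classes and properties are invented for illustration and are not Siri's actual ontology.

    # Illustrative toy example; the classes and properties are invented.
    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/dining#")
    g = Graph()
    g.bind("ex", EX)

    # The vocabulary: the kinds of things we care about...
    for cls in (EX.Restaurant, EX.Reservation, EX.Rating):
        g.add((cls, RDF.type, RDFS.Class))

    # ...and how they are related to one another.
    g.add((EX.hasReservation, RDF.type, RDF.Property))
    g.add((EX.hasReservation, RDFS.domain, EX.Restaurant))
    g.add((EX.hasReservation, RDFS.range, EX.Reservation))
    g.add((EX.hasRating, RDF.type, RDF.Property))
    g.add((EX.hasRating, RDFS.domain, EX.Restaurant))
    g.add((EX.hasRating, RDFS.range, EX.Rating))

    # Print the vocabulary in Turtle, a human- and machine-readable form.
    print(g.serialize(format="turtle"))

Even this tiny fragment pins down both the vocabulary (Restaurant, Reservation, Rating) and part of its meaning (a rating is something a restaurant has, not the other way around), which is exactly the combination the definition above describes.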
As our world becomes more complex, ontologies are a vital piece of a solution addressing the problems of Big Systems and Big Data. Depending on the intended use, ontologies can: (39RH)
- make explicit and accessible, implicit yet vital assumptions about our systems (39RI)
- enable integration among systems and data through semantic interoperability (39RJ)
- improve model design, adaptability and reuse (39RK)
- reduce development and operational costs (39RL)
- enhance decision support systems (39RM)
- aid in knowledge management and discovery (39RN)
- provide a basis for more adaptable systems (39RO)
Finally, as we move into the knowledge age, there is a growing expectation that our systems will be more self-describing and intelligent. In order to engineer such systems, allow intuitive use and meet the expectations of all stakeholders, a more consistent and complete use of ontologies and ontological analysis is needed. The 2007 Ontology Summit provides a more thorough (and somewhat more technical) perspective on the exact nature of ontologies. (39RP)
Challenges for Big Systems (3900)
As the term Big System covers a broad scope and includes many different communities, the summit focused on a set of challenges where ontology can provide value. Ontology can help not just to model Big Systems and their components, but also to guide how they can be connected to one another and combined to create new systems and services. (39CM)
Big Data (39D1)
One topic that has seen a lot of attention over the past couple of years, especially as the cost of computing has continued its exponential decline, is Big Data. While much of the buzz surrounding the Big Data wave has focused on figuring out how to handle its large volume, there are several other problems that must be addressed. (39CO)
What happens when a molecular biologist wants to combine their big data with that of a geneticist? What if we want to combine several maps, say by adding Wikipedia entries or user-uploaded pictures to landmarks? (39CP)
To do any of these, we need to understand what the data is; that is, we need semantics. To successfully combine Big Data, we need meaning, and this is where ontology comes in. Ontology provides a way of capturing the relevant meaning of the data, transforming it into knowledge and adding value. (39CR)
Designing Big Systems (39RQ)
Another topic that looms large for Big Systems is their design and construction. As we engineer Big Systems, we increasingly rely on developing models of the system and its components. A large engineering project, say the development of an airplane, nuclear power plant or car, involves many people, often dispersed over large geographic areas, working together on the same system. Similarly, groups of people striving to understand Big Systems such as our Earth, its climate, our bodies and so on also rely on collaboratively building and combining models of both the entire system and its parts. (39CS)
To do so effectively means that we need to build an ontology of Systems. What exactly is a system and how are its components related to one another and to the whole? The 2012 Ontology Summit dedicated a track to investigating this question, with close participation of the Systems Engineering and Modeling Languages communities. (39CT)
Integrating Systems (39RR)
A recurring theme in the two topics above is that of integration and federation. In a decentralized system such as the Internet, it is rarely possible to design from the top down. Rather, many organizations seek to build systems, applications and services that follow a federated model. They strive to integrate multiple, largely independently created data or systems into a coherent whole. (39CV)
To do so, it is essential that the intended meaning of a model or a system component is communicated; otherwise costly errors may arise. One extreme case is the billion-dollar mistake made by the European teams building the Airbus A380, where two contradictory interpretations of holes caused thousands of kilometers of pipes to be recalled and halted production midstream. (39CW)
Interdisciplinary Collaboration (39D2)
Lastly, a significant side-effect of all this ontologizing is that, through explicit semantics, one of the great challenges facing humanity today can be addressed. As the volume of our knowledge has grown so quickly and fields have become so specialized, we're losing a lot of time and money to siloed fields. Many of the problems faced by humanity today cut across multiple fields, and very rarely does any one person have the adequate skills, background or knowledge to tackle a problem on their own. (39CY)
More than ever, the ability to connect people and teams in disparate fields, with disparate ways of looking at the world, is of utmost importance. One promising approach has been to develop ontologies to help negotiate the different specialized languages used by different communities. Many exciting examples abound today, from the Open Biological and Biomedical Ontologies, to Sage Bionetworks, to the European Union's FuturICT project. Each of these projects aims to provide the necessary tools and infrastructure to connect people with different skill sets and different fields of knowledge to meet the challenges we face today. (39CZ)
The 2012 Ontology Summit provided a unique forum where many of these challenges for Big Systems were discussed and progressed. Engineers talked about their problems and needs, while ontologists provided suggestions for better modeling or understanding of system components. Big Data projects shared how they were successfully using ontologies, and where more effort was needed. Various communities presented different approaches to tackling the challenge of integration on the web, while others discussed how a meaningful notion of quality could be developed. (39D0)
Stories (39D3)
The following is a collection of some of the interesting stories that were developed through the course of the 2012 Ontology Summit. (39D4)
Story for Big Data (39D5)
A key component of the current explosion of knowledge is the proliferation of vast amounts of data. With greater computing power, we're able to encode any one person's DNA, track our internet usage and credit usage, capture the experiments at the Large Hadron Collider, and so on; each of these activities creates a staggering amount of data. (39D6)
While the sheer size and scale of these data sets present a challenge, knowing how to intelligently combine the data means that we must accurately understand the world that this data represents. If we want to combine data from multiple sources, then it becomes all the more important that we understand what each source intended by the publication of the data. (39D7)
To do this, we need theory. There are limits to blind statistical analysis; we need theory and statistical analysis together. Data publishers need to make explicit what their data represents, as do the systems that consume and transform it. To intelligently use this data and combine it for useful ends involves developing theories about the relevant parts of the world, especially if we want successful data reuse and adaptability. (39D8)
There are a variety of groups working towards this vision. For example, the Linked Open Data (LOD) cloud (URL) seeks to connect distributed data across the net. While there are many data sources available online today, that data is not readily accessible. The LOD cloud aims to create the requisite infrastructure to enable people to seamlessly build mash-ups by combining data from multiple sources. (39D9)
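As a rough sketch of the kind of mash-up this enables, the following Python snippet combines three invented data sets that agree on a shared identifier; real Linked Data uses URIs, RDF and SPARQL rather than plain tuples.

    # Illustrative toy example; the identifiers and values are invented.
    map_data = {
        ("landmark:eiffel_tower", "location", "48.8584, 2.2945"),
    }
    wiki_data = {
        ("landmark:eiffel_tower", "description", "Wrought-iron lattice tower in Paris"),
        ("landmark:eiffel_tower", "opened", "1889"),
    }
    photo_data = {
        ("landmark:eiffel_tower", "photo", "http://example.org/photos/1234.jpg"),
    }

    # Because the sources agree on what they are talking about, combining
    # them is simply a union of their statements.
    merged = map_data | wiki_data | photo_data

    def describe(entity, data):
        """Collect everything the combined data says about one entity."""
        return {prop: value for (subj, prop, value) in data if subj == entity}

    print(describe("landmark:eiffel_tower", merged))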
Similarly, there has been a surge of work in bioinformatics, including the Open Biological and Biomedical Ontologies, the Gene Ontology and other sources which annotate big data with explicit semantics. These initiatives allow research groups to publish findings on genes, gene expression, proteins and so on in a standardized, consistent manner. (39DA)
Another example is the FuturICT project funded by the European Union. Its ultimate goal is to understand and manage complex, global, socially interactive systems, with a focus on sustainability and resilience. FuturICT will build a Living Earth Platform, a simulation, visualization and participation platform to support decision-making of policy-makers, business people and citizens. (39DB)
Story for Model Driven Engineering (39DC)
One way to express a theory of (a part of) the world is to build a model. Engineers and designers have always built a variety of models to represent parts of their disciplines. Designing a car, a power plant, a transportation system or even the climate relies heavily on creating a computer model of the system. In the computing age, it has become far easier to share these models, and model reuse has become a desired goal. (39DD)
Different fields have models of varying sophistication, though in many the semantics - the meaning - of the parts of the model are governed by implicit or inconsistent convention. (39DE)
First in engineering, and slowly in other fields, we're witnessing a gradual shift to explicit semantics. The various sub-disciplines within engineering have evolved from using informal modeling, to using formal languages to model their systems, to underpinning said languages with explicit semantics, to recognizing the importance of understanding the underlying ontology of the elements of the languages. (39DF)
Current cutting-edge research in ontology engineering involves teasing out the ontological status of a system component. What does it mean to say that a car has a headlamp as a component? What happens to the component if the headlamp is broken or replaced? Is it the same headlamp, is it the same component? (39DG)
Various standardization efforts are underway as well, from the development of ISO 15926, to providing formal semantics for the Unified Modeling Language. Similarly, groups are working to build repositories of ontologies, or libraries of ontology patterns - snippets that formalize important aspects of reality such as part-of or is-a. (39DH)
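One deliberately simplified way such a pattern can separate a component from the particular part that currently fills it is sketched below in Python; the names are invented, and this is only one of several modelling choices discussed in the ontology literature.

    # Illustrative toy example; names and the modelling choice are invented.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Part:
        """A particular physical object, e.g. one specific headlamp."""
        serial_number: str

    @dataclass
    class ComponentSlot:
        """A component of the system design; it persists even when the
        physical part that fills it is repaired or replaced."""
        role: str
        filler: Optional[Part] = None

    @dataclass
    class System:
        name: str
        components: list

    car = System("my car", [ComponentSlot("left headlamp", Part("HL-001"))])

    # Replacing the broken headlamp swaps the filler; on this reading the car
    # still has the same component, now filled by a different part.
    car.components[0].filler = Part("HL-002")
    print(car.components[0].role, "->", car.components[0].filler.serial_number)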
Story for Interoperability (39DI)
The Internet means that it is far easier for different people in different parts of the world to share and combine data, information and knowledge. If we want to realize the true potential of this interconnected world, it means that we need to be able to combine not just our data, but also our models. (39DJ)
An initiative like Sage Bionetworks might allow a doctor in China to integrate diverse molecular mega-datasets and reuse a predictive bionetwork built by a team in the United States that incorporates new insights into human disease biology from a team in France. Each different community views and prioritizes parts of the world according to their own viewpoints and interests. (39DK)
Similarly, within a single enterprise, the same product may be viewed differently by each of the marketing, engineering, manufacturing, sales and accounting departments. Making sure that these views are, if not harmonized, then aligned so that information can be successfully shared entails solving interoperability. (39DL)
Semantic analysis is a fundamental, essential aspect of federation and integration. Building value by combining the views of different communities means solving interoperability, and that means negotiating the implicit meaning used by each of these groups. (39DM)
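A minimal sketch of the kind of mapping involved is shown below in Python; the department vocabularies are invented, and a production system would express such an alignment in an ontology language rather than a hard-coded dictionary.

    # Illustrative toy example; the vocabularies and records are invented.
    # The same product described in two departments' local vocabularies.
    marketing_record = {"productName": "Road Bike 3000", "targetSegment": "commuters"}
    engineering_record = {"item_id": "RB-3000", "mass_kg": 9.8}

    # An explicit alignment: each local term is mapped to a shared term
    # whose meaning both departments have agreed on.
    SHARED_VOCAB = {
        "productName": "product:name",
        "item_id": "product:identifier",
        "targetSegment": "product:intended_market",
        "mass_kg": "product:mass_in_kilograms",
    }

    def align(record):
        """Translate a department-local record into the shared vocabulary."""
        return {SHARED_VOCAB[k]: v for k, v in record.items()}

    merged_view = {**align(marketing_record), **align(engineering_record)}
    print(merged_view)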
The Object Management Group has recently put out a request for proposals to create a standard to address such issues. Similarly, within the systems engineering community, one example is ISO 15926, which aims to federate CAD/CAM/PLM systems at industry, business and ecosystem-wide (beyond the boundary of the enterprise) scales. (39DN)
Stories for Interdisciplinary Collaboration (39DO)
Similarly, as knowledge has become more specialized, different communities have developed their own bodies of knowledge. Bridging these gaps can unleash a lot of potential, foster innovation, reduce the reinvention of the wheel and accelerate the development of better tools. (39DP)
While each specialization may use its own jargon and technical language, the underlying reality is the same. Ontologies, in the form of explicit statements of the assumptions in each sub-field, can help identify points of overlap and interest between different communities. They can serve as tools to facilitate search and discovery. (39DQ)
The Linked Science effort is a project that aims to create an executable paper. It hopes to combine publication of scientific data, metadata, results and provenance information using Linked Data principles, alongside open-source and web-based environments for executing, validating and exploring research, using Cloud Computing for efficient and distributed computing, and deploying Creative Commons for its legal infrastructure. (39DR)
Another project, the iPlant Collaborative, is building the requisite cyberinfrastructure to help cross-disciplinary, community-driven groups publish and share information, build models and aid in search. The vision is to develop a cyberinfrastructure that is accessible to all levels of expertise, ranging from students to traditional biology researchers and computational biology experts. (39DS)
People (39DT)
- Summit General Co-chairs ... Dr. NicolaGuarino & Dr. LeoObrst (39DU)
- Symposium Co-chairs ... Dr. RamSriram & Professor MichaelGruninger (39DV)
- Ontology for Big Systems and Systems Engineering - Co-Champions: MatthewWest, HensonGraves (39DW)
- Challenge: Ontology and Big Data - Co-Champions: ErnieLucier, MaryBrady (39DX)
- Large-Scale Domain Applications - Co-Champions: SteveRay, TrishWhetzel (39DY)
- Ontology Quality and Large-Scale Systems - Co-champions: AmandaVizedom, MikeBennett (39DZ)
- Ontology for Federation and Integration of Systems - Co-champions: CoryCasanave, AnatolyLevenchuk (39E0)
- Communique Co-Lead Editors - ToddSchneider, AliHashemi (39E1)
- Co-editors: all other champions (39E2)
- Public Relations - champion: AliHashemi (39E3)
-- maintained by the OntologySummit2012 Public Relations Champions: AliHashemi ... please do not edit (32CZ)