Transcript of Ontolog Forum Panel Discussion November 3, 2005:    (I7A)

(Transcriber's note: Where the narratives of individual speakers leave out antecedents and references, I have tried to supply them as sensed from the context. It is recommended that readers and listeners have the relevant Microsoft PowerPoint slide decks at hand for reference. Also, where names are provided, I have done limited searches to provide URLs that are useful for giving some context to the reference.)    (HZT)

PY: This is the Ontolog Scheduled Discussion Session on November 3rd of the year 2005. Our topic today is Healthcare Informatics Landscapes, Roadmaps and Blueprints: Towards a Business Case Strategy for Large Scale Ontology Projects, and this is take 2 of the same topic, a follow-up to the highly successful session we had on August 25th, 2005. Originally Rex Brooks, who had proposed the sessions and co-organized them with Bob Smith, was supposed to be moderating for us. Very unfortunately, Rex has been hospitalized as of last Saturday, so we should all wish him that he gets well real soon, because we need him back. I would also like to thank Dr. Bob Smith, who agreed to take on the moderation of the session at very short notice. So it is all yours, Bob.    (HZU)

BobS: I like that you called it moderation, as opposed to Moderator; there's a subtle semantic difference there... basically my role is to be the optimist, looking at the positive side of the world, and I have a virtual gavel here, kind of like a hammer. I've got a timer, and the timer goes from five minutes to ten minutes. Hopefully, since we don't have a large number showing up today, and probably a lot of people listening in later to the audio once we get to the prime time presentations, we just want to make sure that we get to the key ideas ... the absolute moderator, I guess. The major objectives are to identify the key landscapes in healthcare standards, to clarify semantic interoperability issues among and between some of the key standards, and to identify an appropriate ontology strategy and commitment to a business valuation effort. What I would like to do is start with Mark Musen's presentation and comments. The way Peter has been able to structure this, if you scroll down to the item in the material, the first hour we will catch up on the events that have occurred since the last presentation on 8-25-05 and take up some of the unanswered questions and issues. The second hour we'll be dealing with more general questions and issues, divided up into international issues and intergovernmental business issues. And with that, uh.. Mark Musen... Can we have your comments so far?    (HZV)

MM: Sure, Thanks Bob. I want to apologize in advance. I wanted to be on the whole call this morning, but I'm going to be called away in about twenty minutes to a meeting I have to attend. But I wanted to let everyone know that there actually have been a number of things that have happened since the last time we spoke on this topic. One is generic, and one is specific to biomedicine.    (HZW)

The generic issue is that there has been, as many of you know, as many of you were actually present, we had the kickoff meeting for the National Center for Ontological Research last week in Buffalo. And we're looking forward to an opportunity to use this organization as an umbrella that can provide a means for seeking funding for work in ontologies, a way of interacting with ontology work in government and industry and academia, and generally a way of working to make ontological research more prominent in the US and to provide a means for bringing together people who are working in this area.    (HZX)

One of the things that all of us recognize is that a lot of us who do work in semantic integration in ontologies are in different niches and what we need is a means for getting critical mass and we're hoping that the NCOR will provide that opportunity.    (HZY)

That's the first piece of good news over the last six weeks or so. The other piece of good news is something I couldn't disclose when we were talking last time on this subject, but I'm really very excited that the National Institutes of Health has awarded a grant to create one of the three new National Centers for Biomedical Computing, specifically in the area of Biomedical Ontologies. So you'll hear from Chris Chute in a little bit, since Chris is one of the major participants in this initiative, along with myself, people at Berkeley, Barry Smith at Buffalo, as well as driving biological projects that involve people at UCSF, the University of Oregon and Cambridge. (Name? Peggy Storey?), who's at the University of Victoria in Canada, is also involved with the technology we'll be developing.    (HZZ)

So, we're about to create the National Center for Biomedical Ontology. I don't have slides that accompany this discussion, but our website, which is bioontology.org (bioontology is all one word, without a dash), provides a lot of background information about our center.    (I00)

I think what is probably most remarkable to those of us who are involved in this work is not only that NIH has created this national resource for Biomedical Ontology; just the idea that NIH would have such enthusiasm for funding anything with the word 'ontology' in the title seems absolutely wonderful. In fact, we have heard, indirectly, that we are considered a hot property, in that there are just a large number of personnel at NIH and a lot of Institutes that are particularly optimistic about the kind of work that our center will do.    (I01)

So I should let folks know that we are not a center that's going to create the next generation of controlled terminologies in medicine. We're not actually chartered to develop new content, but we are chartered to create technology to make that content much more accessible. And that means, not only clinical ontologies and controlled terminologies that will be accessible through the Mayo technology that you'll hear about later, but also biological ontologies. We are really going to run the gamut from the biological to the clinical and I think that's going to be very exciting.    (I02)

And so, we are obviously just getting started. We don't have anything to show except an organization website, a logo and an acronym that we fiercely debate, but we are very excited about our next five years of funding and our ability to bring together, I think, some of the most important groups in biomedical ontology in the country in an effort to create a structure that will lead not only to technology to make ontologies more useful in healthcare but also to some outreach activity that I think will encourage the healthcare community to build better ontologies and to use ontologies in a more productive way. So I'll stop there, and I'll obviously take questions from anyone.    (I03)

BobS: You mentioned outreach activities?    (I04)

MM: Yes, one of the things that's very exciting about these national centers is that not only are they chartered to create technology, but also to disseminate that technology so that people use it. And so, primarily through the direction of Barry Smith's group, with lots of contributions from the rest of us, we will be holding workshops that ultimately will be designed to help people learn this center's technology and how to use it, and in the meantime we'll have the major thrust of teaching about ontology, about ontological principles, and best practices for building good ontologies. In fact, in May, there's going to be an outreach activity that's actually in Germany, at the Dagstuhl Conference Center, the primary information technology center for academic conferences in Germany, to bring together a lot of our European colleagues and hopefully some folks from North America as well. And we're planning the kinds of events down the road where we hope to be able to invite people who are building ontologies and provide an opportunity to have people in the same room examine some of the modeling decisions made, critique the choices and talk about alternatives.    (I05)

CC: Perhaps you should mention our one option as well?    (I06)

MM: Thanks (Chris?) As is the case with all of the National Centers for Biomedical Computing, NIH has a mechanism for collaboration with all centers, and obviously our Center for Biomedical Ontology is eagerly soliciting people who will want to collaborate with us and use NIH funds as a way of doing that. There is as well a mechanism in place whereby we would want people who have a project that might build on the center's resources or take advantage of the center's personnel to contact us as soon as possible. There's an RFA (Request For Application) that's out, and if you go to the website for the center you'll be able to get a link to RFA from NIH. Essentially, NIH is requiring a letter of intent in December and proposals that would be due January 17, at least for this round and there'll be future rounds we anticipate. And so, this is going to be a wonderful opportunity for people, we hope, to come out of the woodwork, and propose ways in which they could interact with the whole center's staff. Obviously, we're trying to really stress outreach, but not only in terms of these workshops but also in terms of our ability to collaborate with as many of our friends and colleagues as possible. And our new friends and colleagues, too, so that's a major focus of our work.    (I07)

Pat Cassidy (PC): Mark, a question.    (I08)

MM: Yes, Pat?    (I09)

PC: In any of the existing or contemplated projects, do any of them include building an application that involves inferencing and reasoning beyond just the question of querying the ontological database itself?    (I0A)

MM: Well, I think all of us, or most of us in the center, use ontologies in our other research activities, but our particular center grant does not involve building systems that use ontologies for reasoning or decision support. We would certainly welcome people who would want to collaborate with us and use ontologies for those kinds of applications, certainly to complement other research activities center personnel are engaged in.    (I0B)

PC: Well, I think it would be helpful to have multiple applications using the same ontology for comparison purposes.    (I0C)

MM: Absolutely, Pat, and I think one of the things we want to be able to do is not only to have applications that demonstrate the importance of ontologies, but, as you suggest, to have ways in which we can begin to evaluate ontologies in some sort of concrete way. I think one of the things that worries us is that we see ontologies blossoming as a way of dealing with decision-support problems or doing natural language processing or data integration, but at the same time we don't really have good metrics for ontology evaluation other than, sort of, face validity. And any way that we can do that, either through new kinds of peer review mechanisms or through reference models or standard applications, would be wonderful.    (I0D)

PC: And in the event that perhaps (grants are?) given for applications that use ontologies, is it contemplated that it would be required that those applications be made public, so others can examine them to see how they work ... the details...    (I0E)

MM: Well, NIH, as far as the center is concerned, has very strict requirements, which are consonant with the way we have worked in the past, that all of our software be made available in some sort of open framework. I don't know whether those requirements apply to collaboration projects, but it would certainly be in the interest of collaboration projects to use that kind of mechanism, where people can actually get their hands on the work and use it. I should also mention that the means by which these collaborative grants would be issued is really under the control of NIH, so the center obviously needs to evaluate projects and make sure those projects are consonant with the center's goals and that the center's personnel will be available to contribute and to help out with the collaborative projects, but ultimately, the awards that are made will be based on review groups at NIH and the programmatic needs of NIH Institutes.    (I0F)

PY: Mark, one more question. You mentioned a program in Germany. Could you repeat that, please?    (I0G)

MM: I thought you would ask me that. I don't have the date in front of me...    (I0H)

PY: The month?    (I0I)

MM: The end of May.    (I0J)

CC: May 21 through 24.    (I0K)

MM: Darn, you're good...    (I0L)

BobS: Collaboration at work.    (I0M)

(Chuckles)    (I0N)

PY: May 21 through 24, 2006.    (I0O)

MM: 2006, right. The information can be found on our website.    (I0P)

BobS: Excellent. Thank you very much Mark.    (I0Q)

What we would like to do now is go through the attendees, and simply ask the question: Who are you? Where are you from? And what do you want to get out of this session in terms of expectations, interests, or questions? And I would like to start with Mark, Mark? Since you're listed in the attendees near the top of the wiki page?    (I0R)

PY: Before we start with Mark, can I ask: is there anyone already online who cannot see your name up there, under the Attendees List? Did anyone join us since we started?    (I0S)

Caller: Tim Cook,    (I0T)

PY: Tim Cook, All right, Tim's the only one. Mark?    (I0U)

MW: Marc Wine? ... I think he hung up.    (I0V)

MM: No, I'm here.    (I0W)

MW: Oh, I'm sorry.    (I0X)

MM: Trying to get rid of me quickly, aren't you, heh heh? (chuckles all around) I'm on the faculty in the Department of Medicine at Stanford, also with a courtesy assignment in the Department of Computer Science. I've done work for a long time in the area of decision support and knowledge-based systems, which has led me to the area of ontologies as a way of dealing with a number of issues, including structuring of knowledge-based decision support, building electronically based patient records, and data integration. These are some of the major uses of ontologies, and as you just heard, I'm moving in a direction to start contributing to a national infrastructure that will make the use of ontologies particularly valuable to people in biomedicine.    (I0Y)

BobS: Thank you Mark. I'm Bob Smith, the moderator, and Professor Emeritus at California State University Long Beach. Right now I'm a patient, in bed, and have been in bed and hospitalized, and I desperately would like to see these information systems and ontologies evolve and get through HL7 to RIM 4 or (?), and I sure would like to see ICD-11 as opposed to ICD-10 or, as I understand it, what we're dealing with now, ICD-9. I'm involved with triage, medical triage, and obviously there's a huge opportunity and a need, after Katrina and recent emergency responses, to have groups on the ground that know how to triage and that are well informed by health standards, on up into the work that Mark Musen is doing and describing.    (I0Z)

Chris Chute?    (I10)

CC: I'm an internist and an epidemiologist, and presently I'm chair of biomedical informatics here at Mayo Clinic. I've been sort of living at the intersection of clinical epidemiology research studies and the advent, if you will, of genomic medicine. We're recognizing it's the data representation problem that really underpins this whole thing, and as a consequence, I moved into the standards world some time ago. I'm presently co-chair of the HL7 Vocabulary Committee, and in fact we are hosting a joint meeting as we speak here in Rochester of HL7's Vocabulary and Modeling Groups. I'm also a board member on the new Health Information Technology Standards Panel formed under one of (HHS) Secretary Leavitt's grants to ANSI, in this case for data harmonization, and a whole bunch of other standards committees that I won't bore you with.    (I11)

BobS: You're right at the cortex of this vortex. All right, Peter?    (I12)

PY: Peter Yim, I'm one of the co-conveners of the Ontolog Forum, along with Kurt Conrad and Leo Obrst; the forum has been around for three and a half years now. In my commercial life, I am CEO of CIM3, the ISP and infrastructure company that is trying to host collaborative work environments to help people solve complex and urgent problems by enabling distributed collaboration and collective intelligence ... okay.    (I13)

BobS: Thanks Peter and we certainly appreciate all the fantastic work you've done... Conrad Bock?    (I14)

CB: Hi, I'm Conrad Bock of NIST. I'm in Ram's group, actually; he was on the panel last time, and I'm sitting in for him. I'm primarily interested today in just learning more about the healthcare informatics landscape. Although I don't have a lot to contribute to understanding it, I'm waiting to hear what people have to say about it.    (I15)

BobS: Thanks. We really appreciate your present activities in learning about the landscape. Let's move on to Kurt Conrad (KC)?    (I16)

KC: Thanks Bob. I'm an independent consultant here in the Bay Area, and one of the areas I have a lot of interest in is identifying systematic approaches to articulating and dealing with the management issues associated with ontology-based solutions. I find all these conversations to be eminently and immediately useful. Thanks.    (I17)

BobS: Thank you, Kurt. Sath? (Sathyaprasad Bandhuvula, NIST) Sathyaprasad from NIST?    (I18)

PY: Uh, yeah, maybe we missed him.    (I19)

BobS: Marc? (Marc Wine - MW)

MW: This is Marc Wine, and good afternoon. I'm with the GSA Headquarters Office of Intergovernmental Solutions, and my role is coordination of Intergovernmental Health Information Technology. I'm focusing on knowledge sharing, coordinating and supporting Health IT Communities of Practice, and identifying and supporting coordination of collaborative partnerships for Health IT initiatives, activities and projects that are in line with the nationwide missions, goals and practices.    (I1A)

I'm participating as a member of the FHA CHI Council (Federal Health Architecture Consolidated Health Informatics Council http://www.whitehouse.gov/omb/egov/c-3-6-chi.html ) of HHS. I've been participating going all the way back to the beginnings of the Consolidated Health Informatics standards development. I'm participating in one of the work groups of the ANSI Health Information Standards Panel. I'm involved in coordinating with RHIOs (Regional Health Information Organizations), including the RHIO Federation that the Health Information Management Systems Society (HIMSS) has begun to develop and implement, and I've been conducting liaison activity with various significant stakeholders such as Medicaid, which will be having its annual State Medicaid Directors Conference next week. I'll participate in a roundtable discussion there, with an objective of listening and learning about the Medicaid Directors' interests and, in particular, their needs or gaps where support and coordination across governments in building partnerships could help them cost-effectively develop health information systems, records and interfaces.    (I1B)

That's a little bit about my background. My overall, long-term focus is federal health systems policy planning and development. I'm also, and I underscore this, coordinating the federal Health IT Ontology Project group, the federal HITOP, and I'll be making a presentation a little bit later in this discussion on the first meeting that HITOP held to outline its approach to a mission and strategies.    (I1C)

BobS: Very good, Marc. Brand Niemann? Is Brand with us?    (I1D)

BN: Yes, I'm Brand Niemann with the Environmental Protection Agency and I'm also chair of the federal CIO Council Semantic Interoperability Community of Practice and last week I was asked to serve on the NCOR (National Center for Ontological Research) Executive Committee and I'm looking forward to that.    (I1E)

BobS: Thank you. Pat Cassidy?    (I1F)

PC: I'm Pat Cassidy. I work at Mitre, and I'm concerned with ontologies, mainly upper ontologies. My current focus outside of Mitre is that I currently serve as chairman of the Ontology-Taxonomy Coordinating Working Group, which is a working group of the Semantic Interoperability Community of Practice. My concern, generally as well as with health, is that any ontologies being developed coordinate very tightly on the higher-level concepts they use, which will enable them to interoperate with each other, and to that extent, anything that I can do, I will be happy to do in coordination with anyone else who is working on these things.    (I1G)

BobS: Good. Thank you, Pat. Gary Vecellio? (GV)    (I1H)

GV: Hi, I'm also from Mitre Corporation. I actually work in the Technical Center in a DOD FFRDC and I'm just trying to spin up on healthcare informatics, thanks.    (I1I)

BobS: Thank you, Gary, Pat Heinig? (PH)    (I1J)

PH: I'm Pat Heinig. I'm with the Environmental Protection Agency, also. I'm a Senior Enterprise Architect, working at the departmental level. Also, I'm consulting with one of the offices here on emergency response architecture, and I belong to the CIO Council, trying to represent semantic technologies and ontology in general to that body, much as Brand is doing also. I'm getting ready to participate as the co-chair of the Federal Health Architecture's Data Architecture Working Group, since that working group is just now starting up. And, of course, from an EPA health strategy standpoint, I try to keep up with where the ontology field is going and how it would apply in this area. Thank you.

BobS: Thank you, Pat. And, Tim Cook? (TC)    (I1K)

TC: Yes, Thanks. I'm a principal with CHASE Health Informatics Incorporated. I'm here today representing the PATH Research Group at the University of British Columbia and the Architectural Review Board of the OpenEHR Foundation.    (I1L)

BobS: Okay, thank you very much, Tim, and it looks like we have a very good representation of the health information landscape, or at least a good chunk of it. Any other questions at this opportunity... otherwise we will go into Chris Chute's presentation.    (I1M)

If you go up towards the top, in the first hour, click on Chris Chute's presentation    (I1N)

Chris, It's all yours, sir.    (I1O)

CC: Thanks a lot. What I wanted to focus on here is this whole question of interlocking reference terminologies and how ontologies can really cross share.    (I1P)

If you look at the second slide, Understanding the Clinical Process: this is simply the notion that, gosh, everything begins with patient information, we go to medical knowledge, and then, if we're very clever, we can reinsert that knowledge back into practice, so it creates a wheel, if you will, of continuous knowledge. The question invariably is, "What holds that wheel together?" And we've got a bunch of junk there in the middle: Shared Semantics, Ontology, Vocabularies, and it's all the same thing at the end of the day. It's consistent content representation, consistent concept representation, and that's what ontologies are good at, at least if we use them thoughtfully.    (I1Q)

The third slide ... the whole continuum thing ... that's what I call the chasm of despair. It's essentially an illustration, a fairly cartoonish one, of activity and productivity at the two poles of the informatics spectrum: that is to say, basic science, with things like the Gene Ontology and the efforts that Mark Musen's new center is very tightly connected with, and on the right-hand side, in the green space, clinical medicine, the SNOMEDs and the like, which Mark's center, of course, is also embracing. But the question is how you make these two sides work together, so that as you introduce, say, genomic concepts into clinical practice, the clinical side doesn't go and, say, rewrite the Gene Ontology or something equally silly.    (I1R)

So, this is the whole question, on slide 4, of science and communication as language, and it gives you a few examples here of invoking concepts that are out of domain, on that third bullet. An example is LOINC. LOINC, as many of you may know, is a system for clinical tests and test naming and the like. Well, you might have drug sensitivities or you might have other references to drug levels, and it begs the question: if you're going to build an ontology that references drug concepts, where did the drugs come from? Right now, of course, they come from whole cloth; they don't cross-reference to the NDF-RT or to RxNorm or to other emerging ontologies that might have drug names. And by analogy, in SNOMED or, for that matter, MeSH, we refer to anatomy concepts and we reinvent them. So the whole premise of cross-sharing ontologies is that there would begin to emerge something akin to the Foundational Model of Anatomy out of Seattle, Cornelius Rosse's effort, which would define, say, a canonical representation of anatomy that would be used by SNOMED, that would be used by the Gene Ontology, that would be used by these other things, and then you get to this whole question of composition and small information models.    (I1S)
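(Editor's note: a minimal sketch, in Python, of the cross-referencing Dr. Chute is describing. All of the codes, names and registries below are invented for illustration; the only point is that a lab-test entry can point at drug concepts by reference to a shared drug terminology instead of restating them from whole cloth.)

    # Hypothetical illustration: a LOINC-style test entry that references drug
    # concepts by code from a separate, shared drug terminology (RxNorm-like),
    # instead of re-inventing drug names inside the lab vocabulary.

    DRUG_TERMINOLOGY = {          # stand-in for an external drug terminology
        "RX:0001": "gentamicin",
        "RX:0002": "vancomycin",
    }

    LAB_TESTS = {                 # stand-in for a lab-test terminology
        "LAB:5001": {
            "name": "Drug susceptibility panel",
            "drug_refs": ["RX:0001", "RX:0002"],   # cross-references, not copies
        },
    }

    def describe_test(test_code):
        """Resolve a test's drug references against the shared drug terminology."""
        test = LAB_TESTS[test_code]
        drugs = [DRUG_TERMINOLOGY[ref] for ref in test["drug_refs"]]
        return test["name"] + " covering: " + ", ".join(drugs)

    print(describe_test("LAB:5001"))
    # -> Drug susceptibility panel covering: gentamicin, vancomycin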

Another dichotomy that we recognize in spades in clinical ontologies is this issue of granularity and detail ... there are all kinds of ways you can carve up the universe, but is the DRG term bad for you, or is the SNOMED term actually better? The answer, of course, is that they are both useful and both have their use cases, but the point of slide five is to show that they exist along a continuum. And, with great cleverness, if we look at the next slide, number six, we see a field of observations, all those little plus signs, triangles and circles. Imagine that they are patient observations of Mr. Smith or Mrs. Jones or Ms. Somebody, and the idea is: how do we re-aggregate those events into decision categories, public health bioterrorism categories, reimbursement categories, ICD-9 types, or whatever? This is the whole question of how you apply logic rules and aggregation rules, accumulating or joining or operating on more detailed, granular (we call it) representations in a clinical space, to get to a high-level summary or classification space, without saying that "classifications are bad, or terminologies are bad"; the two simply coexist. Content versus Structure, number seven, simply points out that we live in a world where concepts and their meaning can be profoundly altered by their context. The illustration here is, on the bottom side, an Information Model in blue with Heart Disease in a box, and the label on that box, of course, is Family History. The point is that this is a trivial information model that profoundly changes the semantics of what is in the box. It is simply recognizing that when we talk about ontologies, particularly in a clinical information space, we have to recognize that they occur in a larger context and that their semantics and meaning are often determined by the Information Model in which they exist. The top side of the chart simply shows that, gosh, you get the same kinds of compositional issues where you're dealing with compositional phrases that modify the semantics.    (I1T)
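(Editor's note: a minimal Python sketch of the two ideas just described, with invented codes and categories: aggregation rules that roll granular observation codes up into coarser reporting categories, and an information-model context such as 'Family History' that changes how the same code should be read.)

    # Hypothetical aggregation rules: granular (SNOMED-like) observation codes
    # roll up to coarser (ICD-9-like / DRG-like) reporting categories.
    AGGREGATION_RULES = {
        "SCT:111": "CAT:ISCHEMIC_HEART_DISEASE",
        "SCT:112": "CAT:ISCHEMIC_HEART_DISEASE",
        "SCT:220": "CAT:DIABETES",
    }

    def aggregate(observation_codes):
        """Map each granular code to its summary category, keeping counts."""
        summary = {}
        for code in observation_codes:
            category = AGGREGATION_RULES.get(code, "CAT:OTHER")
            summary[category] = summary.get(category, 0) + 1
        return summary

    def interpret(code, context):
        """The surrounding information model changes what the code asserts."""
        if context == "FAMILY_HISTORY":
            return code + " reported in family history (not a patient diagnosis)"
        return code + " recorded as a patient diagnosis"

    print(aggregate(["SCT:111", "SCT:112", "SCT:220"]))
    print(interpret("SCT:111", context="FAMILY_HISTORY"))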

The eighth slide simply shows some real world examples drawn from the HL7 Reference Information Model and the SNOMED Terminology Information Model and they are both talking about the same darn thing, in one case using an Information Model Paradigm, in the other case using a terminology paradigm or ontology paradigm. Which is right? Well, they're both right, but if we're going to have consistency in interoperable semantics it's useful to get that concept straight.    (I1U)

So, then, the ninth slide, and I think this is nearly my last slide, is the whole notion of saying, "Well, if we're going to have these reference ontologies, where do we stop?" The example I use here is the name of an enzyme, Sulfotransferase, though it doesn't matter what it is. It's simply recognizing that that enzyme exists in all mammals and they're slightly different, and that even in a human being, the work we've done in pharmacogenomics here at Mayo has illustrated that there are probably 12 or 13 Sulfotransferases in the human genome. And then you get into the whole problem of SNPs, or Single Nucleotide Polymorphisms, where, you know, a Sulfotransferase in me may not be exactly the same as a Sulfotransferase in Mark Musen. So what is truth? It gets to hurt the brain.    (I1V)

The whole question of this last slide is promoting the notion of a common interchange format; the LexGrid Model, which is referenced on the bottom, is very tightly partnered with Mark Musen's center, the cBIO center I referenced. We've got all these vocabularies and ontologies kicking around, and how do we cross-link them, how do we inter-link them, how do we deal with versions over time, and how do we deal with standard software calls and common terminology services, which is the HL7 specification I reference here, now an ANSI standard? The idea is building a web of terminologies, so that the very notion of cross-linked, cross-walked, integrated terminologies is sustainable in a grid-like format on the internet, and the terminologies are there, sorta kinda, when you need them. So we can begin to think of an ontology as a web resource in the same way we think of Google, to really begin to have harmonization, so that the Semantic Web can effectively be brought forth in a scalable way. I think that's five minutes.    (I1W)
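(Editor's note: the class below is not the LexGrid or HL7 Common Terminology Services API; it is a hypothetical Python sketch of the kind of version-aware, cross-linked terminology service calls being described, where a terminology is treated as a queryable web resource.)

    # Hypothetical sketch of a version-aware terminology service with cross-links.
    class TerminologyService:
        def __init__(self):
            self._content = {}   # {(vocabulary, version): {code: preferred name}}
            self._links = {}     # cross-links between codes in different vocabularies

        def load(self, vocabulary, version, codes):
            self._content[(vocabulary, version)] = dict(codes)

        def link(self, source, target):
            self._links.setdefault(source, set()).add(target)

        def lookup(self, vocabulary, version, code):
            """Return the preferred name for a code in a specific version."""
            return self._content[(vocabulary, version)].get(code)

        def crosswalk(self, source):
            """Return codes in other vocabularies linked to this code."""
            return sorted(self._links.get(source, set()))

    svc = TerminologyService()
    svc.load("DemoVocabA", "2005", {"A:1": "myocardial infarction"})
    svc.load("DemoVocabB", "2005", {"B:9": "heart attack"})
    svc.link(("DemoVocabA", "A:1"), ("DemoVocabB", "B:9"))

    print(svc.lookup("DemoVocabA", "2005", "A:1"))
    print(svc.crosswalk(("DemoVocabA", "A:1")))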

BobS: That's great. I'd like to point out that I heard Mark Musen describe a logo as a discussion item?    (I1X)

CC: Well, that's actually the cBIO logo he was talking about, and there was some banter on the email about whether to throw the LexGrid logo up here, and I opted not to do that, just not to confuse the universe. You can see it at lexgrid.org if you're dying to see it. But the premise behind LexGrid is really this question of the inter-linkage of standards and content and information. And I think the discussion Mark Musen was referencing was the discussion of the cBIO logo, a slightly different logo.    (I1Y)

BobS: Yes, it was the LexGrid logo with the 3D Venn diagram of standards, content and what's the other one?    (I1Z)

CC: I'm blanking. I suppose I could go look at it. Tools, it's tools. Yes thank you.    (I20)

BobS: Tools, yes, I saw a huge connection with other presentations as an important part of the landscape and the development of standards. But, let's move on. Do we have any questions for Dr. Chute?    (I21)

DW: By the way, this is David Whitten. I've finished with my problems, so I'm available now to be here.    (I22)

BobS: Great David. We'll have an introduction for you a little bit later.    (I23)

Any other questions for Dr. Chute?    (I24)

Okay! How about Conrad Bock?    (I25)

CB: I'm not entirely sure this is on topic, but it does concern roadmaps and that sort of thing we're talking about, an area I think is important somewhere along here. And it's generally around what sort of language for subject matter experts is suitable for them. So, going on to the second slide: I was looking at the transcript from the last session and I noticed Mark Musen's comments about NCI (National Cancer Institute) infrastructure and the Arden Syntax, and the message I got from that, while it may not have been what was intended, was that there is a bottleneck getting the information from the expert with the knowledge into the machine. And that NCI has a bottleneck around highly trained specialists, and Arden, while widely adopted in some circles, actually doesn't have enough content to be useful.    (I26)

And, to me, this is the key obstacle, or one of the key obstacles, for successful ontologies. And it's the same one faced by expert systems since their heyday in the eighties, and, to my knowledge, it was never actually overcome. Now, having spent a few years here in Manufacturing Technology, I've really come to think that Subject Matter Experts (SMEs) don't really adapt well to existing Knowledge Languages (KLs), even when I've spent my time in training them. Even when they're trained, it's a bit of a square peg in a round hole. And so my general perception is that Knowledge Languages really are based in computer programming and that they were adapted for Subject Matter Experts to use, but not really with a lot of attention to the mental models that experts really have.    (I27)

PC: Can I ask a question at this point? Have you attempted to use, for example, some of the controlled English dialects?    (I28)

CB: There is some work going on that I'm aware of, but I'm not sure what the results are yet. I guess I'm thinking of the kinds of Knowledge Languages that are typically called that.    (I29)

PC: OWL is quite hopeless from the point of view of perspicuity ...    (I2A)

CB: I guess I should refer not necessarily to specific languages as they are shown on the screen, but to the set of concepts they're organized around. Maybe there are, quote, 'user-friendly' interfaces, but you've got to look at that interface, look at that specific syntax. It's the set of concepts around them that I think is not suited... But I'll go on, and maybe this will become a little more clear.    (I2B)

Next slide...    (I2C)

BobS: Uh, slide five?    (I2D)

CB: That would be slide four. For example, the notion of classes, which, regardless of the syntax in which that concept is presented, I find very confusing to Subject Matter Experts, even though it is quite natural for computer scientists. My perception of it is, and I'm only here representing myself, that this really originated in allocating and parsing blocks of memory: as you look at a computer program and the declaration of classes, that's what they are for, to determine the shape of memory and how you access it.    (I2E)

You don't find the notion of class in conventional logic. I think this might surprise a lot of people, but the sets and predicates which you see, say, in the semantics of OWL don't have properties. In fact, OWL can be looked at as its own sort of ontology, separate from logic, that just happens to be mapped into logic and refers to logic as its semantics.    (I2F)
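(Editor's note: for reference alongside this point, the usual first order reading of OWL-style constructs, in which classes appear only as unary predicates over individuals and properties as binary predicates. The symbols C, D and P are placeholders, not anything from the slides.)

    C(x)        % x is an instance of class C
    P(x, y)     % x is related to y by property P
    \text{SubClassOf}(C, D): \quad \forall x\,\big(C(x) \rightarrow D(x)\big)
    \text{AllValuesFrom restriction on } C: \quad \forall x\,\Big(C(x) \rightarrow \forall y\,\big(P(x, y) \rightarrow D(y)\big)\Big)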

PC: Wouldn't the Aristotelians disagree with you? I mean, the whole notion of Aristotelian classes, which have properties and attributes, I thought was its origin, not so much parsing blocks of memory?    (I2G)

CB: Maybe that's a matter of history, of this particular history. I think that was layered on afterwards. Well, this is actually a matter of the interpretation of history, not so much ... regardless, this is the content of what I'm trying to say here. My perception is that these things were invented by computer scientists who were familiar with computer languages, and later on it was sort of adapted for Subject Matter Experts. Maybe we should come back to that one.    (I2H)

I found that a number of colleagues here focus on instances and relations between instances, defined by navigating properties, that is, roles. This means really thinking in terms of roles. And to explain what that means, let's go on to the next slide.    (I2I)

Slide five. This is an example that just happens routinely, and that everyone deals with when trying to build models and classes of any significance, that have any structure. In this case, I'm talking about engines, mechanically engineered artifacts that could be electro-mechanical, or they could be anatomical, they could be molecular, and in fact, later I will give an example about someone who came here working on molecules, but this is about relationships. This picture is in UML, but it could be in OWL, and ... it could be in any class-based language. And it shows a typical breakdown of parts into a whole, so you want to assemble an engine and wheels in a car, or a propeller and an engine in a boat.    (I2J)

Well, when you try to do that in a class diagram, or any kind of class-based language, you get this situation where you see all these things happen, the stuff in blue at the bottom, all the things that happen that you kind of didn't expect. You find that, because of the way this has been set up, you have wheels that are powered by engines in boats, you have power in one car being delivered to the wheels in another car, or from the engine in one car to wheels in another car. And the point is you have the same thing in boats, where the boat's propellers are being powered by engines in cars. And you have wheels on boats.    (I2K)

So, when anyone tries to apply class-based modeling alone to this kind of structured object modeling, you will run into this problem. And if you look at the sixth slide, this is kind of what you might find if you really insist on trying to figure a way around this in class modeling. But it still doesn't work. You kind of specialize all these things through property restrictions in OWL, trying to make boat engines only power propellers and that sort of thing. You still end up ... it doesn't cover all the constraints you wanted it to, and even this complicated structure still allows the engine in my car to power the wheels in another car.    (I2L)
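(Editor's note: a minimal Python sketch, with invented classes, of the failure being described: even when every engine, wheel and powers link is correctly typed at the class level, nothing ties the engine and the wheels to the same car, so the cross-car link below is accepted.)

    # Class-level typing alone: an Engine may power a Wheel, but nothing says
    # "the engine of THIS car powers the wheels of THIS car".
    class Engine: pass
    class Wheel: pass

    class Car:
        def __init__(self):
            self.engine = Engine()
            self.wheels = [Wheel(), Wheel()]

    powers = []   # (engine, wheel) links, constrained only by type

    def add_powers(engine, wheel):
        # the only constraint available is a class-level membership check
        assert isinstance(engine, Engine) and isinstance(wheel, Wheel)
        powers.append((engine, wheel))

    my_car, your_car = Car(), Car()

    # Passes every class-level restriction, yet links my engine to your wheels:
    add_powers(my_car.engine, your_car.wheels[0])
    print(len(powers))   # 1 -- the unintended cross-car link was accepted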

PC: Is that because they are not doing any individual level, instance level reasoning? They're only doing class level reasoning?    (I2M)

CB: Right. The essential problem is that the classes and instances aren't bound together in any way. Which is where I come back to classes as a memory map. When you look at an ordinary computer program and you see the class definitions, of course there are no instances, because the time when you're writing your computer program and its runtime are completely separate. In the definition of the class, and the things about it, you can't refer to the instances.    (I2N)

PC: In the sense that you can't refer to instances when you're writing the class.    (I2O)

CB: Right, and so, to illustrate that, let's move on to slide seven. Actually, that'll be illustrated in the next two slides, but slide seven is an example of what I would think of as the SME's view of the point. In this case it happens to be a UML 2 composition diagram. This embodies all the constraints that eliminate all those weird things that were happening in the class diagram. Because, basically, it contextualizes all that in the little box that has the engine on it. That means an engine as used in each particular instance of a car. It doesn't mean engines in general, it doesn't mean car engines, it means each engine as it is used in an instance of a car. And what this diagram then ends up meaning is that, for each car, that engine is hooked to the two wheels in that car, and then all the constraints that you wanted out of the class diagram you actually get here, and in a rather simple way.    (I2P)

If you go to slide eight, I show a little mapping between the SME's view and the computer scientist's view. You see this red mark is supposed to indicate what's missing. That is, you're trying to show that there's a constraint on what happens when you navigate from an individual car to an engine: what's the object that plays the engine role, what's the object that plays the wheel role in this particular car, and you're talking about a link between those two things you found by navigating in the same car. Those relations between sets of instances are, from a computer science point of view, kind of hard to say. It's kind of long, and it involves a rule. But it's actually quite intuitive and simple to a Subject Matter Expert; when you look at the diagram on top, you know what that means without much more explanation.    (I2Q)
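(Editor's note: continuing the hypothetical Python sketch above, this is roughly the per-instance reading of the UML 2 composition diagram: each car wires its own engine to its own wheels when it is built, so "the engine powers the wheels of the same car" holds by construction rather than by a class-level restriction.)

    # Per-instance (composite-structure style) wiring: the constraint lives
    # inside each Car instance rather than being stated about classes in general.
    class Engine: pass
    class Wheel: pass

    class Car:
        def __init__(self):
            self.engine = Engine()
            self.wheels = [Wheel(), Wheel()]
            # powers links are created per car, binding parts of the SAME car
            self.powers = [(self.engine, w) for w in self.wheels]

    my_car, your_car = Car(), Car()

    # Every powers link stays inside one car; the structure gives no way to
    # say "my engine powers your wheels".
    assert all(e is my_car.engine for e, _ in my_car.powers)
    assert all(w in my_car.wheels for _, w in my_car.powers)
    print(len(my_car.powers))   # 2 -- one link per wheel of this particular car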

And this little diagram took me, working with systems engineers who work on large projects like rockets, it took me a couple of years to get this mapping across, because they were making a UML profile. And if they'd stayed with the top-level view, and hadn't wanted to know everything about UML, they would have been okay, but they wanted to understand everything, so, well, we went from a class diagram and we had to explain all this mapping stuff.    (I2R)

DW: Is this the same basic difference between saying, "Every cat has its own tail," and saying "Every cat has the same meow?"    (I2S)

CB: Yeah, it is sort of like that, like the second thing you said would be "Class: cat is related to class: sound: meow," while the first thing you said is more like... let's go to slide nine. It's more like what you just said.    (I2T)

It turns out that the SME view is actually quite close to the first order logical view, because when you write out a first order logic statement, you find you can't write it without talking about the instances and the classes or the predicates at the same time. So, if you look at this mapping, it's much more straightforward. In the box at the top, there's this car, which is a class, but what we're really talking about is not the class but each prototypical instance, and you see the little arrow that's pointing down to the car predicate essentially grabs one of those instances, which is like every instance, a prototypical instance, and starts navigating down to find the instances that are engines for that prototypical instance. Then you get down to the powers statement, which is the thing that's missing in the class diagram, where you are actually linking the things that play a role in each individual car instance ...    (I2U)
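(Editor's note: one way to write out the first order reading Conrad Bock is pointing to, using the car example from the slides; the variables stand for the prototypical instances, the unary predicate for the class, and the binary predicates for the navigated roles, all bound in a single statement.)

    \forall c\, \forall e\, \forall w\; \Big( \mathit{Car}(c) \land \mathit{engine}(c, e) \land \mathit{wheel}(c, w) \rightarrow \mathit{powers}(e, w) \Big)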

So I've actually found that these class-based models are actually sandwiched between two groups that have existed for much longer and have their own ways of thinking about things that bind concepts together rather than take them apart: one is first order logic, which has been around for quite some time, and the other is the engineering disciplines, which have also been building things forever. It's only more recently that there's been this injection of these sort of single-minded classes, and I'm not against classes in general, it's just that single-minded classes shouldn't have been injected in between them, and I think it doesn't work so well.    (I2V)

PC: Have you had an opportunity to see how Subject Matter Experts use Concept Maps? I've heard that some like those diagrams.    (I2W)

CB: No, I haven't. I'll have to look at that.    (I2X)

PC: What I recall of Concept Maps is that they are still doing class-level kinds of logic instead of Wittgenstein-level logic. I guess (Pat Hayes?) down in Florida is working with Concept Maps, translating them into OWL; I've heard he understands first order logic really well.    (I2Y)

(Chuckles) I think it's Zeta Language.    (I2Z)

CB: I think you're probably right. There are languages that might be considered visualizations of logic, and ways of binding instances and classes together. I'm sort of referring to the mainstream KL/KR languages more than the outliers. We should look more at the outliers. That's a good idea.    (I30)

PC: And I think the contrarian folks, the first order logic controlled Englishes, make this more intuitive, as in slide 1.    (I31)

CB: Yeah, the ones I've seen do this interesting thing where, when they use the same word more than once, and in each case it refers to what in logic would be considered a different variable, they put a little number next to it. So, if they use that same word again, and what they mean is the object they referred to before, or when they use the same word in the same way in some other place, they use the same number. So there are tricks like that to disambiguate English.    (I32)

PC: And when there's only one, you can use those little things for back reference    (I33)

BobS: Gentlemen, could I interrupt and suggest Pat, could you hold the questions a little bit? We're under a bit of a time constraint.    (I34)

CB: Okay, I'm almost done. The next slide was on rules, which I think fall into the same kind of difficulty. My sense, and I guess this is a side comment, more historical ... to me, they arose to simplify procedural control structures, and they introduced a lot of their own control problems. But in any case, they tried to deal with complicated procedural control and to break it up. Again, they aren't present in conventional logic; conventional logic has logical implication, which is a very different thing than, say, Prolog rules or logic programming efforts. They're also not present, as far as I can tell, in SME discussions, at least not by themselves; they're always embedded in some kind of procedure, where there's a particular context in which to make a decision.    (I35)

And if we were to look at stuff like this, my suggestion would be to look at rules and processes that are comparable, or at least in the same syntax. Planner-like languages, for example, allow queries and assertions in any order, so essentially a particular rule becomes a procedure which happens to have all the queries in front and all the assertions at the back. And also, process constraint languages like PSL basically put processes and rules on the same footing. And even if particular process or rule languages may not be good for SMEs, their conceptual structure, I think, would be better, because they don't make such a rigid distinction between rules and processes.    (I36)
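(Editor's note: a minimal Python sketch of the planner-like reading of a rule just mentioned: a procedure over a simple set of facts whose queries all come first and whose assertions all come last. The fact vocabulary is invented for illustration and is not PSL or any particular planner language.)

    # A rule treated as a procedure: query the fact base up front, assert at the end.
    facts = {
        ("engine", "e1"),
        ("wheel", "w1"),
        ("in_car", "e1", "c1"),
        ("in_car", "w1", "c1"),
    }

    def rule_same_car_power(facts):
        """If an engine and a wheel are in the same car, assert powers(engine, wheel)."""
        # --- queries, all in front ---
        engines = {f[1] for f in facts if f[0] == "engine"}
        wheels = {f[1] for f in facts if f[0] == "wheel"}
        in_car = {(f[1], f[2]) for f in facts if f[0] == "in_car"}
        # --- assertions, all at the back ---
        new_facts = set()
        for e in engines:
            for w in wheels:
                if any((e, c) in in_car and (w, c) in in_car for (_, c) in in_car):
                    new_facts.add(("powers", e, w))
        return facts | new_facts

    print(rule_same_car_power(facts))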

(advance slide)    (I37)

So the question is ... I was very intrigued by Mark's presentation of the grassroots movement going on around OWL, and I think that's really great and I hope it overcomes these obstacles, but there's some question whether that will actually happen, and whether ontologies are going to run into the same problems that other systems have.    (I38)

I think it's important somewhere on the roadmap of this work to have something addressing extensions and development of ontology and rule languages from the SME's viewpoint and with attention to their mental model, and especially to the way that Subject Matter Experts think of things together, whereas computer languages seem to take things apart. And I have a few more references that document these things on slide 12.    (I39)

BobS: Weber and Shanks ACM models, is that one of the 80s?    (I3A)

CB: No, that's more recent. Actually, some colleagues of mine suggested these papers, and actually they're not so relevant, but they might be interesting, and of course, I could have had a long list here. There's all the prototype-based software development that happened at Sun, before Java overtook the prototypical, instance-based languages that exploited the middle ground between instances and classes, where roles really sort of are, and where I think the Subject Matter Experts are. And those, also, I think would be important sources.    (I3B)

BobS: Where's this work in prototypical instances?    (I3C)

CB: Well, the most well-known work is in a language called "Self", but in general the prototypical instance-based stuff is in the 80s and early 90s, and you can google that sort of thing. But really, sometimes, when, say, an SME is designing a rocket ship or a car, they actually think in their head as if they are working on an actual car, an actual instance of a car, an actual instance of an engine, and they're plugging these things together. They don't think, "Oh, I'm plugging the engine to the wheels here, so, gee, that's going to affect a boat, because boats don't have wheels." And if there is something class-like in what they are saying, for instance if the weight of the engine has to be two times the horsepower ... they have some equation that is more of a constraint, which is a class-like thing to say (it's not about this instance, it's about all instances), and they'll attach it to that, so that they have more flexible ways of merging the class and instance information together ... and dealing with it. Sometimes I've heard that called 4D, in that down the line there's a real car with a real part number attached to it, and they want to track the individual part for wear, for a recall, that sort of thing. They'll think of that as a sort of 4D thing over time: the design is an instance, you know, the refinement of that for manufacturing is an instance, and the actual car that's made is an instance. Classes are just so foreign to that way of thinking that, you know, you tell them that there's going to be these usages of properties and linking of properties and values, values for properties, and it's so confusing.    (I3D)

So, I think that if we're going to reach that goal of the open directory kind of thing that Mark was talking about, where SMEs are just logging in and talking in their own languages about engineering stuff, talking TO each other, and through a knowledge engineer, for something that would scale to, you know, millions or hundreds of millions of concepts, we have to have, somewhere on the roadmap, a way to deal with what I think is a mismatch between the Knowledge Languages and the SMEs.    (I3E)

BobS: Excellent, excellent. And it sounds like you're somewhat optimistic, but you want to see some really strong NIST-related formalisms that you can use to leverage these sorts of better languages and better class definitions for UML efforts and related HL7 efforts, for example the workshop that was held last week.    (I3F)

Your presentation leads up really nicely to Marc Wine's presentation, which looks at HITOP and some efforts to set some standards, and some questions taking your concerns into the grassroots through a large number of intermediary organizations. Marc, are you ready?    (I3G)

MW: Yes, thank you very much, and the previous presentations were insightful and engaging. The Health IT Ontology Project group met first on September 23rd. And I'll say there's a pillar of membership, in the significance of the knowledge and experience they bring, and in particular, it should be noted, the agencies and the responsibilities within them that they bring to the Health IT Ontology Project group. The members of the group are a fully federal membership ... and I think it's worth noting the current participants; as I say, their experience and the relationships they bring to the project areas and resources supporting the nationwide health information strategy are significant for the ontology and semantic web tools communities. Mike Fitzmaurice, the senior advisor at the Agency for Health Research and Quality of HHS, is with us on the HITOP, and, of course, Brand Nyman ... uh, Niemann, excuse me Brand...    (I3H)

BN: I didn't recognize my name there ...

MW: It's Brand Niemann, the one and only...    (I3I)

PY: Are you still on slide 2?    (I3J)

MW: I'm leading into my slides.    (I3K)

PY: Okay...    (I3L)

BobS: We're at slide zero...    (I3M)

MW: Nancy Orvis, from the Department of Defense, brings a wealth of worldwide DoD Health IT systems data management experience. Tom Rhodes and Ram Sriram are from NIST, and Ram's office, of course, is providing leadership in the testing and validation of the major initiatives, projects, actions and models that are directed by Dr. Brailer's office. Also, David Whitten, of the Department of Veterans Affairs and the VA Medical Center in Houston, brings that corner of experience. So I call this a pillar of membership for the federal Health IT Ontology Project group.    (I3N)

The purpose of the introductory meeting was to identify the goals and expected results of the federal HITOP group, to explain the experiences and interests related to innovations in the use of ontology software tools as they would relate to actual Health IT applications, and then to decide the next steps to take toward effectively recommending tests of ontology software in key Health IT projects.    (I3O)

I'm going to move to slide number 10 of the formal slide presentation that you may be looking at.    (I3P)

In the first HITOP meeting, the group reviewed some key ontology actions supporting Health IT project development. And I've noted here that AHRQ at HHS is funding four million dollars ($4M) with the FDA and one point five million dollars ($1.5M) to the National Library of Medicine to move critical drug safety information from the manufacturers to FDA, using an HL7 standard format of content, for FDA approval. Once approved, this information would be posted publicly on the DailyMed website for this function. And the project will continually improve the standardization of drug vocabulary and lead to improved patient safety and quality. So, to summarize, this is the effort to standardize pharmaceutical drug terminology.    (I3Q)

On slide 11, the Agency for Health Research and Quality is funding the National Library of Medicine with two point four million dollars ($2.4M) to undertake mapping of ICD-9 diagnostic codes and CPT-4 procedure codes to SNOMED, and to undertake a compilation of HL7 standard terminologies and incorporate them into NLM's Metathesaurus.    (I3R)
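(Editor's note: a minimal Python sketch of the kind of code-to-code mapping being funded here. The pairing shown is only illustrative; the real ICD-9-CM and CPT-4 to SNOMED maps are large curated tables maintained through NLM's Metathesaurus, not a dictionary literal.)

    # Hypothetical mapping table from a legacy diagnostic code to a reference
    # terminology code (the single entry below is illustrative only).
    ICD9_TO_SNOMED = {
        "ICD9:410.9": "SCT:22298006",   # example pairing for acute myocardial infarction
    }

    def to_reference_code(icd9_code):
        """Translate a legacy code to the reference terminology, if a map entry exists."""
        return ICD9_TO_SNOMED.get(icd9_code)

    print(to_reference_code("ICD9:410.9"))
    print(to_reference_code("ICD9:999.9"))   # unmapped -> None, flag for curation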

So, if we can go back in summary and look at the evolution of standards approval and adoption by the Consolidated Health Informatics initiative led by HHS, which is now incorporated into the Federal Health Architecture Council's efforts, on the next slide, the Agency for Health Research and Quality is funding the National Library of Medicine with two point four ... oops, I went the wrong direction...    (I3S)

Slide number 12, excuse me. AHRQ is funding CMS (Medicare/Medicaid) with $300,000 to build and maintain a metadata registry of terms from the Consolidated Health Informatics standards that have been adopted by HHS, VA and DoD as the principal partners, the first primary partners of the CHI. Again, spinning off the CHI eGov project, AHRQ will fund NIST with $300,000 to build and populate a web-based landscape that shows who is doing what in health data standards in the U.S. So, moving forward,    (I3T)

BobS: Could I ask you a quick question?

MW: Yes, please...    (I3U)

BobS: Conrad?    (I3V)

CB: Yes, I'm here.    (I3W)

BobS: Is AHRQ funding NIST for a $300,000 landscape within MELS or is that eHealth?    (I3X)

CB: I expect this would be IT. I think this is a Ram sort of question. Yes, these are more Ram level questions.    (I3Y)

BobS: Also, the University of Maryland has a Center for Health Information and Decision Systems contract to build a similar roadmap of funding in seven (7) categories, and they signed a contract with HIMSS to sell, on an annual subscription basis, this landscape of digital resources across the United States in great detail. Ah ... who's doing what? Where, with what resources, and with what results? And it sounds like there's a lot of overlap between those, one being free and one being for sale. This is an issue I'm sure Marc Wine is aware of.    (I3Z)

MW: I'm very happy that you brought that up, to note the effort there of that mapping between the University of Maryland and bringing that as a tool through the HIMSS RHIO Federation. The head of the RHIO Federation is setting up an online dashboard of around eight different functions that will help engage potential partners and healthcare entities in RHIO development across the country, and knowledge sharing for being able to learn about the latest innovations in health information technology and to make use of those in their organizations, as well as technical project planning for building electronic health records and health information exchange systems that would be interoperable and standards-based. I'll be working more with the RHIO Federation to facilitate, help coordinate and support these developments.    (I40)

So, it's important that we note this relationship to the ontology development. That would be an opportunity, perhaps, for the ontology community to further engage as a channel, the connection network with the RHIO communities.    (I41)

On slide 13: as NIST pointed out in our first federal HITOP meeting, for long-term robust solutions, ontology tools should be able to classify standards and to describe how standards are related to each other, so the mapping and interoperability functions that we referenced in the previous slide are essential here. These interoperability features relate standards to one another and build upon the work that CHI has accomplished, further refining standards, or further identifying and building consensus on standards that were left for further evolution and development by the standards organizations community.    (I42)

Understanding semantics will be essential to long-term solutions for interoperability. And as semantic and interoperable ontology software becomes ready for testing in major health IT priority projects, NIST has proposals for a major interoperability testbed for healthcare ontologies, the HITOP group learned. Sharing ontology tools, aligning different technologies, comparing ontology tools for overlap and dovetails ...    (I43)

BobS: Do you know how that's being accomplished? The overlaps and the gaps?    (I44)

MW: Pardon me?    (I45)

BobS: How the gaps and overlaps are being dealt with in NIST?    (I46)

MW: If our representative on the panel here from NIST ... has any insight into that, yes. At the time, Ram was only able to highlight the goals at a fairly basic level, it being so early.    (I47)

BobS: Oh, excellent.    (I48)

MW: Do we have any insider comment on that or any early progress on that?    (I49)

CB: Presumably that would be achieved through some translation into some interlingua that covered all languages or logics or something like that. You could get them into an equal footing and then identify which parts were overlapping.    (I4A)

MW: Finally in the Actions, on supporting Health IT Project development, the Federal Health Architecture working group is identifying sets of data standards. Further work needs to be done, as I mentioned, to address mapping and CHI, regarding implementation and relationships between standards.    (I4B)

And, therefore, mapping is going to be essential as a driver, a leading edge driver, of the most accessible, affordable and exemplary approaches to the use of standards for advancing interoperability in products such as health record systems.    (I4C)

Okay, are there any questions about that or would anybody like to add any further insights about ongoing actions between ontologies, citing ontologies to health IT projects?    (I4D)

Okay, slide 15, the federal HITOP group identified four major goals for its work.    (I4E)

Number one: develop a statement of mission for the HITOP. I'm beginning to plan and schedule the next HITOP meeting; I've recommended November 22nd for that meeting, and I'll be preparing an agenda and plans for it.    (I4F)

The second major goal for HITOP is to communicate collective knowledge, supporting usage of ontologies.    (I4G)

Number three: the HITOP group is interested in communicating to a larger audience the importance of promoting and raising awareness of the initiative, for understanding how ontology software would benefit interoperability, not only across locations of health systems and databases within the healthcare sector, but also, longer range, to advance interoperability and standards across lines of business among the different sectors of the economy.    (I4H)

Finally, the fourth major goal for HITOP is to develop a roadmap on the state of the art for the use of ontology tools to achieve semantic interoperability for the high priority Health IT applications. Ones that have been mentioned include clinical decision support systems and electronic health records systems. I would also want to add the application of electronic prescribing, ePrescribing.    (I4I)

PC: Marc, if I could interrupt. This is Pat.    (I4J)

MW: Hi Pat.    (I4K)

PC: The emphasis of this program on ontology tools, in number four, seems to be almost a contradiction in terms, that emphasizing ontology tools will allow you to achieve semantic interoperability. Semantic interoperability will be achieved only when you have a standardization of ontology content, regardless of tools. It's sort of like saying you can achieve communication by standardizing your word processors. You need a language that people understand. It doesn't matter what the tools are.    (I4L)

MW: I understand your comment.    (I4M)

DW: One of the things I think Marc is trying to do by bringing up the tools as a component is this: communication can't happen if you don't have people speaking the same language and, so to speak, using the same speech apparatus. There are a lot of cases where you have software running in various kinds of environments that don't communicate because they lack not just a common text format, but also a common online protocol. If you focus solely on the static communication methods, and you don't speak of the dynamic ones as well, you're missing a significant component of a realtime system.    (I4N)

PC: Two points. I agree with what you're saying, okay. You need the bitwise standardization, for communication between computers, in order to get at the semantics, but that's the easy part, the format. The difficult part is standardizing on content. And the really, really difficult part is, once you have your knowledge represented, figuring out how to use it effectively in order to solve problems. That's the tough part, and we haven't even gotten to number two yet, which is what concerns me.    (I4O)

BobS: But in the larger sense, ONCHIT spent a two year effort at formalizing and defining the problem, using (?Meyer's) Catechism to some extent, so it's more about semantic interoperability, about meaning, than an effort to formalize tools that don't address the problem. And the people who understand the problem understand the issues and the technical aspects of what we heard from Conrad and Chris Chute and Mark Musen earlier. So I don't think it is simply tools for the sake of tools; I think it's tools for the sake of interoperability, to achieve goals of national importance, largely defined, or attempted to be defined, by Dr. David Brailer's proselytizing.    (I4P)

PC: As the analogy goes, if all you have is a hammer, the world looks like a nail. But you still need nails to hang up a particular picture in a particular room. I'm totally in agreement with what was said before. I just wanted to make sure that we deal with issues in terms of our binary communication methods and then dynamic communication.    (I4Q)

MW: Absolutely. That's the groundwork.    (I4R)

CC: Yes. This is Chris. This whole question between tools and content, of course, rages continuously. I think those of us that have been grappling with this problem for the past twenty (20) years, have recognized that at the end of the day, you really need both. And let me give you two examples.    (I4S)

The variation in human language prior to Gutenberg and the whole notion of printing and press-produced books was actually quite large, and it was reduced, it didn't go away. The web is actually homogenizing language even further, so you get this interplay between tools that allow semantic interoperability at some level, and ontology authoring tools are in that category. I mean, part of the problem is, you can't create a language or standardize on a language absent tools that permit it to scale and disseminate.    (I4T)

It's very easy to design a language that no one can speak.    (I4U)

DW: Exactly. But you still need the language or you can't communicate.    (I4V)

BobS: Alan Rector's two separate illuminations: it has to be useful and it has to be usable. I kind of like that. I'm sorry to interrupt, Marc.    (I4W)

MW: Everybody, that was a useful additional set of comments.    (I4X)

BobS: I recognized Mark's voice and Chris Chute, but who else was speaking?    (I4Y)

DW: David Whitten.    (I4Z)

BobS: Thank you, David.    (I50)

MW: The conversation you just had there Bob, was... It's essential to repeat that to keep in front of the ontology community. It's the groundwork that must surge ahead.    (I51)

DW: Especially when we're talking about health information. That's not to say it's all in the same bodies, but with the variety of problems that people have in terms of healthcare and biosemantic information, you require tools to keep track of the huge amount of information, as I think Chris can elaborate on far more than I can.    (I52)

BobS: Thank you, thank you Mark.    (I53)

MW: Uh, I'm going to go to the final slide of my presentation, moving from the four major goals of the HITOP group to the four step strategy for identifying high-priority health IT projects that would be coupled with ontology software.    (I54)

Number one, HITOP seeks to present a description of the goals for using ontology in health IT applications to the National Center for Ontological Research, NCOR. I'm going to be interested in learning what might have transpired at the NCOR meetings on this.    (I55)

Strategy number two is to pick up from CHI vocabulary work, referenced earlier in my comments in my presentation.    (I56)

Strategy number three will seek to bring together Subject Matter Experts on the Consolidated Health Informatics body of standards with NCOR's leading Subject Matter Experts.    (I57)

Number four will seek to coordinate public-private partnerships for recommending, planning and developing actual work testing the use of ontology software tools in high-priority health IT applications such as decision support systems, with evidence-based support running through electronic health records, interoperable with health data repositories and other support functions.    (I58)

That's my address, and I hope it added value to the meeting today.    (I59)

BobS: Thank you very much, Marc. Any questions for Marc Wine before we move to David Whitten's comments?    (I5A)

Thank you again, Marc. We look forward to further reports on progress in HITOP and other issues.    (I5B)

David Whitten, sir?    (I5C)

DW: Well, Marc, Brand and myself, David Whitten, are all connected to these governmental and intergovernmental groups, talking about, from my perspective, a lot of diverse technology that already exists for health information ... for health information technology.    (I5D)

I'm not someone who understands a lot of policy as Marc does, but I can talk about some of the specific issues having to do with how you tie ontologies into practice in terms of a health information system. I think this is one of the things that everybody knows, but I'm going to just repeat it again ... Until you understand your audience, communication really doesn't occur. You've got to decide where the audience has needs, and where you're trying to draw the conversation. In the context of this, it's a very large system. It has several different ways that terminologies and ontologies are tied together in it.    (I5E)

One of the things I'm personally interested in is some of the issues about how you tie these ontologies into a best practices approach to providing the computer system to people. One of the things that's been discussed earlier has been this idea of organizing information in selection lists in such a way that people can make a decision about a diagnosis, or they can make a decision about a particular procedure that's being performed, and have it organized in a hierarchy of ideas, specifically to make sure that when we record information about healthcare, the information recorded is the information that's necessary for the care of people, for recording the history of that person's care, and also for side issues that I'm sure insurance companies care about, which is to make sure they're reimbursing appropriately for the kinds of care given and received.    (I5F)

So this specificity is in a system called a lexicon, which is a variation of the view on the left. In the lexicon, there's a lot of information about concept maps and major concepts. Anybody who's used the view on the left is familiar with this, but it's generally a large categorization of different kinds of procedures with various different kinds of problems that people may have. We are also tied to the CPT codes that are available from the American Medical Association, so that we can keep track of the kinds of procedures that are performed within the level of granularity of the CPT code system.    (I5G)

There are different ways that health information systems, and the people doing research on them, can be involved. One of the concerns is that you have to understand the granularity level as well as understanding the formality of it. With the CPT system, a lot of people are frustrated by it because it does not have a very detailed level of granularity for some of the procedures. In the same way, you don't necessarily want all of your computer systems to keep track of things at such a low level of granularity that all you have is instance-based reasoning. For example, if you have a record that somebody is taking aspirin, you have the particular dosage and the particular route and a particular form: is it a tablet? do you take it orally? is it an ointment? The different ways a medicine can be administered may, on the one hand, be very significant, for instance if it's an ointment that's caused an allergic reaction. If it's a rash, then an ointment is a whole lot more likely to cause such a skin allergic reaction than something that they took orally. So, depending on your particular need, and depending on your particular audience's need, the level of detail and granularity matters.    (I5H)
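(Transcriber's note: to make the granularity point concrete, here is a minimal Python sketch. The record fields, drug names and the skin-reaction rule are illustrative assumptions only, not taken from any actual VA or VistA data structure.)

    # Illustrative only: hypothetical medication records carrying route and form,
    # so that a skin reaction can be weighed against topically applied drugs.
    from dataclasses import dataclass

    @dataclass
    class MedicationRecord:
        drug: str    # e.g. "aspirin"
        dose: str    # e.g. "325 mg"
        route: str   # e.g. "oral", "topical"
        form: str    # e.g. "tablet", "ointment"

    def possible_culprits(records, reaction_site):
        # If the reaction is on the skin, topically applied forms are more suspect.
        if reaction_site == "skin":
            return [r for r in records if r.route == "topical"]
        return list(records)

    meds = [MedicationRecord("aspirin", "325 mg", "oral", "tablet"),
            MedicationRecord("hydrocortisone", "1%", "topical", "ointment")]
    print(possible_culprits(meds, "skin"))   # only the ointment is returned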

So, again, in this whole process of trying to tie an ontology into a technical system, if you don't understand the level of granularity you need, and you don't understand the categories you're trying to use, you can spend an awful lot of time doing things inefficiently, when just changing the representation would dramatically or radically affect the complexity of the problem.    (I5I)

One of the ways that the VA uses this specifically is ... they have what they call drug-drug interactions, but they also have what they call drug-drug class interactions so that if you need to say that particular kinds of drugs interact with anything that is going to have a diuretic effect, then simply by using the drug class you can simplify a problem that involves taking one drug and comparing it against hundreds of different drugs and you simplify it to comparing one drug to a different class.    (I5J)
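(Transcriber's note: a small Python sketch of the drug-class interaction idea just described. The drug names, class assignments and the single interaction rule are hypothetical illustrations, not the VA's actual tables.)

    # Illustrative only: one rule stated against a drug class covers every drug
    # in that class, instead of hundreds of separate drug-to-drug entries.
    DRUG_CLASS = {
        "furosemide": "diuretic",
        "hydrochlorothiazide": "diuretic",
        "amoxicillin": "antibiotic",
    }

    CLASS_INTERACTIONS = {
        ("lithium", "diuretic"): "diuretics may raise lithium levels",
    }

    def check_new_order(new_drug, current_drugs):
        warnings = []
        for d in current_drugs:
            rule = CLASS_INTERACTIONS.get((new_drug, DRUG_CLASS.get(d)))
            if rule:
                warnings.append((new_drug, d, rule))
        return warnings

    print(check_new_order("lithium", ["furosemide", "amoxicillin"]))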

This is tied to the whole idea of organization, classification and categorization, which we're all familiar with in an ontology system. But, in the same way, instance-level reasoning sometimes does make a difference, as was pointed out earlier when we were talking about cars: each car has its own wheels and the wheels are driven by an engine, but you notice that even in his example, he didn't go into the details about the right wheel versus the left wheel, he just grouped them into front wheels and back wheels.    (I5K)

If you understand what you are trying to produce for somebody, and you make sure that your representation matches the need, you still have to place that, you have to situate that logic, into an existing ontology, or you're going to be doing reasoning that doesn't apply, or that is fragile, or that is simply not going to be available for other processes that need to do that same kind of work.    (I5L)

The other problem with categorization is a purely human issue. Any time you have a categorization, you're going to have to have somebody who understands what the categories are, and when you have a new element coming in, you have to categorize it properly. Is this kind of categorization something where the relationship is only understood by somebody who's a logician, or is this kind of organization something that only makes sense if you are actually doing it, such as particular ways of drawing blood or particular ways of doing a lab test? You've got to make sure that your details don't overwhelm the person who needs to use the information.    (I5M)

One of the typical ways of transferring information in hospital systems and clinical systems is a method called HL7. HL7 stands for Health Level 7. It's the application level, the top tier of the seven tier OSI architecture. When you have two different instruments, both of which are communicating information about a lab test that's been performed, they may send HL7 information that is significantly different, because the target audience, what they think the person needs to know, is different. Well, granted, if they're both doing CBCs, Complete Blood Counts, red blood cell counts or white blood cell counts, or something like that, those counts may actually be coming in the same message in the same place, but one instrument might tell more about the methodology used to do the count: is it laser counting? is it by weight? There are lots of different ways that you can actually count red blood cells, by volume and density and so forth, so the HL7 messages may actually reflect all the complexity of this particular machine's way of doing it, or they may just be at a very high level and say this is the name of the lab test and this is the result of the test.    (I5N)

Of course, when we talk about lab tests, we also have to talk about normal ranges, which is to say: this particular value is coming in, and according to this machine the normal range is between 30 and 75, the high range that needs to be paid attention to is from 75 to 100, and the low range is between 0 and 23. These numbers are not specific to any lab test I'm talking about; I'm just trying to give the idea that along with your data values you have this auxiliary information being sent, and depending on the particular physician and depending on the particular need, some of that information is relevant and some of it is not. So, one of the things we have to do when we talk about translating health information is that we have to understand the audience, the communication language we just finished talking about, we have to understand the tools being used to transmit this information, and we have to have some kind of consistency. If it's not an upper ontology, then a consistent general ontology that we fit these things into. Because if we don't have a general ontology that is understandable by most people in this field, even if it is a very specialized ontology, then we're tying ourselves to one particular representation that's coming from one particular device. So these methods are part of the tools that are used in tying an ontology into the day to day processing of a hospital's health care records. These methods start to become important.    (I5O)
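(Transcriber's note: a simplified Python sketch of reading one HL7 version 2 OBX (observation) segment of the kind discussed above, including the reference range that travels with the value. The message content, codes and ranges are invented for illustration; real interfaces carry far more detail.)

    # Illustrative only: a single pipe-delimited OBX segment with a made-up result.
    obx = "OBX|1|NM|6690-2^WBC^LN||12.5|10*3/uL|4.5-11.0|H|||F"

    fields = obx.split("|")
    test_name = fields[3].split("^")[1]   # OBX-3: observation identifier (text part)
    value     = float(fields[5])          # OBX-5: observation value
    units     = fields[6]                 # OBX-6: units
    ref_range = fields[7]                 # OBX-7: reference range sent by the instrument
    flag      = fields[8]                 # OBX-8: abnormal flag (e.g. H for high)

    low, high = (float(x) for x in ref_range.split("-"))
    print(f"{test_name}: {value} {units} (normal {low}-{high}), flag={flag}")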

Is what I'm talking about clear, or should I continue?    (I5P)

BobS: Uhm, any questions?    (I5Q)

DW: Okay. So one of the significant things we have to do, when communicating information and tying it into an ontological framework, is to deal with these questions: What are our constraints? How are the constraints being represented, not only in the high level classification system, but also in the day to day processing?    (I5R)

There are times in the processing when just having an ontology available, or just having a limited vocabulary available, can help you significantly in your processing of the information that's stored. One of the classic situations is: I've got a list of terminologies, perhaps, I don't know, a list of 3700 drug names that are known to be related to fall risk. If I have all of those in one list, and I just have an index into the list, I can actually store that index in my system rather than storing the full names, without losing any information. All I'm doing there is the classic trade-off between space and time. You know, is it better to store the number, because I can use the number to retrieve the full name? A number from zero to thirty thousand is still just five digits, so I'll only store five characters at most. If I'm storing the actual names of those drugs, those names may be sixty, seventy or eighty characters long, depending on whether they include dosage forms and medication administration routes, all the classic naming issues which occur when actually naming the drug.    (I5S)
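(Transcriber's note: a toy Python illustration of the space-versus-lookup trade-off described above. The drug list and record layout are hypothetical.)

    # Illustrative only: store a small integer index in each record instead of the
    # long drug name; the full name is always recoverable from the shared list.
    FALL_RISK_DRUGS = [
        "diazepam 5 mg oral tablet",
        "zolpidem 10 mg oral tablet",
        "furosemide 40 mg oral tablet",
        # ... in practice, thousands of fully specified names
    ]

    def encode(drug_name):
        return FALL_RISK_DRUGS.index(drug_name)

    def decode(index):
        return FALL_RISK_DRUGS[index]

    record = {"patient": 1234, "drug": encode("zolpidem 10 mg oral tablet")}
    print(record, "->", decode(record["drug"]))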

So, I can do this trade-off of storing an index instead of the name, but what am I trying to do here? I'm not trying to use this index just as a way of saving space; I'm also trying to use it as a way of precisely identifying which particular drug has been administered, so that I can do things like look at interactions between the incidence of falls and the dosage form that was used when a particular drug was received by a patient. I may be using this information as a way of trying to track the environment in which the particular care takes place. If, for example, I do some analysis and I find that a person is not taking any drugs that particularly increase fall risk, but there's still a large percentage of falls, then either (a) I've got to change my classification, because there are some drugs that are not marked in my classification system as having a risk of falling, or (b) there are some issues with procedures in a particular area that are increasing falls. Some of the obvious things: do you have slip resistant surfaces on the floor? do you have trained nurses who are taking people into and out of the beds, or are you depending on untrained aides? These kinds of things are part of the real bottom level issue of how you provide care, and how you provide it in a way that is consistent and lets you determine what's going on in terms of that kind of environment.    (I5T)

So, one of the things that was mentioned earlier that I want to reiterate is this idea that, when you're talking about Subject Matter Experts in general, people know the particular language they're using to describe something in their particular subject area, but they may not actually be able to deal with other people who have different ways to classify. If you talk to your typical pharmacist and you say, okay, I need to know the categories drugs fall into, one of the natural categories may not be that these drugs are a fall risk. That's an operational definition about drugs. The reasons why a particular drug may increase a patient's risk of falling may actually be quite varied: maybe it restricts blood flow so the blood doesn't get to the brain as well, while another may just weaken muscles. It may be that it increases the need to go to the restroom, and as a result of that the person is getting up and out of the bed more often than they would otherwise.    (I5U)

Any ontological system that does not provide methods to have these ad hoc operational categorization systems is going to fail in the medical environment, is going to fail in a healthcare recording type of environment.    (I5V)

BobS: Could you repeat that please? Any ad hoc...    (I5W)

DW: Any system that provides an ontological categorization system, right? So if you're able to create categories, you need to be able to handle ad hoc categories. If you don't handle ad hoc categories, then what's going to happen is you're going to have to augment that system with some other mechanism to be able to group things together, just because a particular application needs it.    (I5X)
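(Transcriber's note: a brief Python sketch of an ad hoc, operational category layered on top of a fixed classification, as just described. The codes, classes and groupings are invented for illustration.)

    # Illustrative only: the formal classification knows drugs by pharmacologic class ...
    DRUG_CLASS = {"D001": "benzodiazepine", "D002": "loop diuretic", "D003": "antibiotic"}

    # ... while the application needs an operational grouping the formal system
    # never anticipated: "drugs associated with fall risk", maintained ad hoc.
    FALL_RISK = {"D001", "D002"}

    def flag_fall_risk(active_drug_codes):
        return [code for code in active_drug_codes if code in FALL_RISK]

    print(flag_fall_risk(["D002", "D003"]))   # -> ['D002']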

CC: That's the whole premise behind what we call aggregation logics.    (I5Y)

DW: Okay.    (I5Z)

BobS: And who is this?    (I60)

CC: Chris.    (I61)

MW: Excuse me, David?    (I62)

DW: Yes?    (I63)

MW: Your point there about Subject Matter Experts, relating that to the body of knowledge, the interpretation perspective, the degrees of experience in your particular area or category of expertise, medical or otherwise: what you highlighted was the challenge, the large challenge, not only technically, but in relating the technical to the human factor, in communicating common definitions, in interpreting terminology, and then building that up, stepping that up, to development of software tools and actual integration into health information technology applications ...    (I64)

DW: If you don't take into account your audience, your tool will never be as effective as it needs to be. I agree with you entirely.    (I65)

MW: That's my point, that these are multi-layered challenges, the depth of each one impacting the next phase of this development toward the end game: advancing interoperability and standards and software that will help improve, will help empower, people's ability to handle their own healthcare.    (I66)

DW: One of the classic issues, and I want to go into computer science for a minute ... one of the classic issues that comes up all the time in healthcare environments and healthcare computing, and that's not addressed very well in training people in computer science or programming or computer engineering, is the quantitative scale difference in the information flow and the amount of information detail you are required to keep track of in a health environment, as opposed to something ... oh, I don't know, something like the stress analysis for architectural stability. For a building itself, there may be a model, there may be some way of describing how that building is going to handle environmental conditions when it is actually built, but the huge variety of things involved in the human body increases your scale dramatically. I don't remember the numbers, but I know that Chris does. The Foundational Model of Anatomy, which I think of as a kind of ontology, is huge, with thousands of terms and thousands of relationships between those terms. Chris, how big is that again?    (I67)

CC: It's in the tens of thousands of concepts, and the hundreds of thousands of relationships.    (I68)

DW: Exactly ... and it's not like they are just proliferating these concepts and relationships because they feel like making a large setup. This is trying to be an accurate representation of what's going on in terms of human anatomy.    (I69)

In the same way, when you're trying to model what's going on in a healthcare institution, the institution may not have hundreds of thousands of people working there, but it would have quite a few thousands of drugs and different lab tests, and many ways that those things interact. The effort of trying for some kind of common language to communicate what's already known is worthwhile. The issue, and this has been described with LOINC, which is L-O-I-N-C, the national classification system for laboratory tests, is that tying LOINC numbers to the laboratory tests done in a particular environment, by a particular company or a particular hospital, is important, because when you make those ties you can have a standardized lab test and then compare it to your standard drugs. Then you can express the idea that when you take certain drugs it's going to affect certain lab tests. As an example, if you take one of those drugs that increases the number of white blood cells in your body, and you do a lab test for white blood count and it's abnormally high, that's generally understood to say that there's an infection going on in the body. But if you actually know that they're taking a drug that increases white blood counts, that can be an alternate explanation.    (I6A)
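(Transcriber's note: a minimal Python sketch of the point above, that once drugs and lab tests are expressed in standard terms you can state rules such as "this drug raises this lab value" and use them when interpreting a result. The drug set, threshold and messages are illustrative assumptions only, not clinical guidance.)

    # Illustrative only: drugs assumed, for this toy example, to raise white counts.
    WBC_RAISING_DRUGS = {"prednisone"}

    def interpret_wbc(value, normal_high, active_drugs):
        if value <= normal_high:
            return "within normal range"
        if WBC_RAISING_DRUGS & set(active_drugs):
            return "elevated, but an active medication is a possible alternate explanation"
        return "elevated, consider infection"

    print(interpret_wbc(14.2, 11.0, ["prednisone"]))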

One of the things we have to be aware of, especially when we're talking about computer systems that interact with healthcare professionals, is that we don't want the computer to be making decisions, but we do want the computer to be able to provide information to the healthcare professionals so that the decisions they make are made with the proper information.    (I6B)

MW: David, if I may comment and highlight your point there, that's highly essential for educating and supporting new adopters of electronic health record systems and other information technologies, especially as the sophistication and integration of functions with clinical support, evidence-based for example, comes into play in the field. This kind of issue was of supreme importance five years ago, in the earlier days of implementation and training for VA systems. Not to speak for the VA, but to relate a perspective from my experience then.    (I6C)

BobS: I can also chime in: for many in the HIMSS CIO community, five years ago they saw technology as a major solver without necessarily involving Subject Matter Experts and patient (?)s. I think they've become much more mature.    (I6D)

MW: Leaps and bounds.    (I6E)

BobS: And hopefully, we'll continue. My clock says we have about four minutes left in our two hour period. Is Brand still available?    (I6F)

BN: Yes, I have one slide and I've posted it. I knew coming in at the end, I would have to be very declarative.    (I6G)

DW: Sorry, Brand, I didn't even look at the time.    (I6H)

(chuckles)    (I6I)

MW: We couldn't have a better man for the job.    (I6J)

BobS: Where is the slide, Brand?    (I6K)

BN: It's linked to the wiki there, at the bottom where Peter posted it. By Marc's comments, I guess. Do you see it there?    (I6L)

BobS: Brand Niemann's comments?    (I6M)

BN: Yes.    (I6N)

So, do I have the floor, Bob?    (I6O)

(interlude)    (I6P)

DW: I'm quite in favor of Brand taking over.    (I6Q)

BobS: Go for it, Brand.    (I6R)

BN: Thank you    (I6S)

(interlude)    (I6T)

I'm associating with all these professional ontologists, so I'm learning to become more declarative and spare in my expressions. I declared October as Ontology Month. I had this government computer press conference. It didn't make the headlines, but they're coming out with a nice story.    (I6U)

Anyway, the ONTAC support group is making very good progress under Pat and Dagobert's leadership. So, we have a draft work plan, which will be discussed next Thursday afternoon. We presented it to the Informatics Solutions Forum, Marc and I and Susan, and I proposed there, an Avian Flu Ontology and Information System Pilot on which Olivier and Marc have done some follow-up, I understand. They looked at what was in UMLS-SN for Avian Flu and then we discussed it at the NCOR Meeting.    (I6V)

And it dawned on us that, actually, it of course wouldn't change what they have for an Avian Flu ontology, unless new concepts appeared, associated with all the discussion going on in relation to a pandemic. So we discovered that what we might need is a Concept Alert, looking for new concepts that come up if there is an information explosion, if there is a pandemic. I got a proposal from Michael Bellinger from JARG. He'll present that at the December 6th workshop that Susan and I are doing.    (I6W)

At the GCS Conference there were two things that I learned. One was Eric Petersen doing a very elegant presentation on an ontology for the Data Reference Model (DRM), where he said, "You know, we really would have to use OWL-Full to achieve 5th Order Normalization of the way we treat structured data." You can look at that and get some feedback from it. That was a revelation to me.    (I6X)

More excitingly, we have cast about for totally ontology-driven applications, and we found two of them, BioCAD and VisualOWL. Connor Shakey participated in the NCOR event, and we refer you to his work.    (I6Y)

Most importantly, coming out of the NCOR event, I learned from Barry that there are ontological problems in HL7. I think we would need more information, Marc and I, if we're expected to take any action or do any more in terms of bringing that to the attention of people who are funding networks so, I toss that out for more discussion. I really appreciated what Barry Smith introduced in terms of principles for building Biomedical Ontologies and I refer you to his slides, from Genome Biology.    (I6Z)

So I think I get the message about how Barry and others are going to build higher quality ontologies and do higher quality assurance on them. And I'm certainly all for that. I would just put out the question to this group and others: are we going to have to apply, and I think we are going to have to apply, these same kinds of principles to other areas?    (I70)

Finally, I was the winner of the giveaway for the book "Ontologies for Bioinformatics". I couldn't put the book down after I got it, and we invited Ken Baclawski to present at our October 6th (December 6th?) Workshop. Pat and others were asking not only about building ontologies, but about actually using them, and what Ken and his co-author have put in their book is reasoning with ontologies under uncertainty in actual systems applications following from that, the Bayesian Web, and I wanted to call your attention to that.    (I71)

And, that's it!    (I72)

BobS: Fantastic! What a tremendous two hour session. We certainly have covered the landscape, from concepts to tools to even responses, and very optimistically. I obviously didn't perform my moderator role sufficiently, but that's life here in ontology city. I would like to open the floor to anyone who has some concluding comments.    (I73)

PY: Can I make one comment?    (I74)

BobS: Yes, Peter.    (I75)

PY: Like quite a few of us, I was at the NCOR event also, and I would like to echo one of the inaugural speakers, John Walker from NSA, who was telling us sort of his whole life story with ontology. Essentially, the thirty second version of it started out with nobody understanding what ontology means, moved to people maybe knowing the term but not caring, and is approaching the time when everybody and his mother-in-law is coming in and saying that they are providing ontology applications or doing ontologies. Obviously, people will take advantage of any buzz word to do marketing or advance their ulterior motives, but one thing I would love to see from this group, which I think has prided itself on its integrity and the expertise we have on ontologies, is that we keep disambiguating these claims and tell people exactly what we mean when we say ontology, especially now that people are fairly familiar with the whole spectrum; that we don't go in and just say "I'm doing an ontology," but rather say whether it is a formal ontology, informal, semi-formal, how we are representing it, the context, and so on and so forth. And I hope that this community as a whole tries to sort of keep people straight.    (I76)

BobS: Clarity, Consistency, Disambiguation. Sounds like a good entrée to our next several presentations of the Ontolog Forum.    (I77)

Again, I would like to thank everyone for attending and I look forward to continuing this dialog and introspection on this important topic.    (I78)

(Thanks and goodbyes all around)    (I79)