Dear David,
Thanks for your input.
I’m curious about the kind of standard you’re speaking of here. Although the medical community has a variety of information standards, and HHS/NIST have standards around Meaningful Use, there is little in the way of true standardization in the market (as you point out later in your post). In fact, outside of the use of certain taxonomies and vocabularies which aren’t all necessarily required to exist in EHRs (CPT, LOINC, ICD, SNOMED, etc.), there is little in the way of anything approximating an ontology-based standard in EHRs.
But if you require an ontology to be perfectly abstract, with purely conceptual nodes that only a few philosophers agree on, and rife with implications drawn from theoretical sources rather than actual data, then I suppose a case could be made that the EHR ontology is much less rigorous than it could be. But the reality is that it is used extensively, dozens of times a day, by nearly every practicing doctor in the US health care system.
I evaluate ontologies based on what can be done with them, what they cost versus what they produce, and how widely they fill a need. On that basis, the EHR ontology is by far the BIGgest one I know.
HHS developed a specification for the XML rendering of EHRs. It is called the “Continuity of Care Record” if you want to google it. That XML spec is VERY precise regarding the names of the columns, their domains and ranges, their medical significance, and the various kinds of notes that doctors used to scribble into their paper records. It’s far more precise than the great majority of XML specs I have read and used. Yet it has unstructured columns in it as well that can be exploited when databases full of EHRs are available for data and text mining.
The CCR spec document rigorously defines the XML forms and symbols. It’s a dense, 100-page PDF; even browsing it quickly, you will find XML used in a Lisp-like way to structure object-attribute-value paths for reaching a “logical” diagnosis. The constraints on the provider force answers to questions in a sequential way, guided in part by previous answers from higher up the tree.
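That guided, sequential question-answering pattern can be sketched roughly as a tree walk, where each answer constrains which question is asked next. This is only an illustration of the idea; none of the questions, answers, or names below are taken from the actual CCR spec:

```python
# Illustrative sketch of tree-guided, sequential question answering,
# where answers higher up the tree determine which questions follow.
# All questions and diagnoses here are hypothetical, not CCR content.

# Each interior node is (question, {answer: child_node}); each leaf is
# a plain string standing in for the "logical" endpoint reached.
QUESTION_TREE = (
    "Chest pain present?",
    {
        "yes": (
            "Pain radiates to left arm?",
            {
                "yes": "Evaluate for acute coronary syndrome",
                "no": "Evaluate for musculoskeletal pain",
            },
        ),
        "no": "Continue routine review of systems",
    },
)

def walk(tree, answers):
    """Follow the tree using a sequence of answers; return the leaf reached.

    The provider cannot skip ahead: each answer selects the branch that
    determines the next question, which is the constraint the forms impose.
    """
    node = tree
    for ans in answers:
        if isinstance(node, str):   # already at a leaf; extra answers ignored
            break
        _question, branches = node
        node = branches[ans]
    return node

print(walk(QUESTION_TREE, ["yes", "no"]))
# → Evaluate for musculoskeletal pain
```

The point of the sketch is only that the data entry is a forced traversal, not a free-form narrative.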
That standard CCR spec is enforced through product testing per the CCHIT (Certification Commission for Health Information Technology) testing documents – every EHR product must be certified by testing its XML against the spec, and must follow it exactly. There is a suite of certification tests (produced, I suppose, by HHS) that each and every EHR system must pass. Without certification, it can’t be used by the docs – uncle won’t pay them to buy it and use it. So ONLY certified EHR products that produce spec’d, readable EHR XML data are available to health providers.
Also, names of presenting conditions, physiological objects, lab tests, prescription meds, and many other medical objects are part of the CCR spec and the CCHIT test specs.
CCHIT certification has been passed by at least 93 EHR products, all of which are certified against the standard CCR-documented EHR XML specifications.
That is why EHR standardized ontologies are BIG in the sense that lots and lots of providers are using them right now. I don’t have the URLs, but if you request them from me by email, I can attach the CCR, the CCHIT criteria specs and test scripts used in certification.
I have found, in my previous experience, that the benefits you mention (e.g. improved collections, reduced costs) are brought about through systems that are far from ontological (especially when Meaningful Use Stage 1 requirements were coming into force).
Yes, those benefits result from the integration of EHR products with billing, time accounting, and other cost tracking programs. They aren’t the result of the EHR itself, but of office integration. It is a historic coincidence that the financial tracking software that was needed by docs in the mid 2000s couldn’t interface with the older medical records databases, which were much more diverse, and much less rigidly defined than the new EHRs. So in that sense, docs get benefits from that integration.
For example, see my write-up on VistA medical databases, and on how each installation used different conventions. Request it from me by email if you would like to receive a copy. See US Patent 7,209,923 B1 for that write-up, which is also available from Patent2PDF.com after you type in the patent number.
For an example from VistA: Boolean fields were answered T and F in some places, Y and N in others, 0 and 1 in others, and 1 and 2 in yet other installations. That is one of thousands of inconsistent fields in the older VistA databases which have to be reconciled if a standardized EHR vocabulary is to be used. And it gets a lot more complicated than Booleans. Drug names can have synonyms, generic equivalents, trademarked names, and lots of synonymy and antonymy.
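A minimal sketch of what that reconciliation looks like in practice, assuming a per-site mapping table (the site names and override table are my own illustrative assumptions, not actual VistA configuration):

```python
# Sketch of reconciling per-installation Boolean conventions into one
# standard vocabulary. Site names and mappings are hypothetical examples.

TRUE_TOKENS = {"T", "Y", "1"}    # T/F at some sites, Y/N at others, 0/1 at others
FALSE_TOKENS = {"F", "N", "0"}

# Installations that used 1/2 instead of 0/1 collide with the global
# mapping, so those sites need explicit overrides checked first.
SITE_OVERRIDES = {
    "site_d": {"1": True, "2": False},
}

def normalize_boolean(value, site=None):
    """Map a site-local Boolean token to True/False, or raise if unknown."""
    token = str(value).strip().upper()
    if site in SITE_OVERRIDES and token in SITE_OVERRIDES[site]:
        return SITE_OVERRIDES[site][token]
    if token in TRUE_TOKENS:
        return True
    if token in FALSE_TOKENS:
        return False
    raise ValueError(f"Unrecognized Boolean token {value!r} from site {site!r}")

# Drug names are the harder case: trade names and international synonyms
# must collapse to one canonical (generic) name before records compare.
DRUG_SYNONYMS = {"tylenol": "acetaminophen", "paracetamol": "acetaminophen"}

def canonical_drug(name):
    """Return the canonical generic name for a drug, if one is known."""
    key = name.strip().lower()
    return DRUG_SYNONYMS.get(key, key)
```

The overrides-first lookup is the essential design point: a token like “1” cannot be interpreted globally, because its meaning depends on which installation wrote it.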
The EHR products I have worked with follow that tree-shaped plan for question answering very rigidly. At this time, no EHR known to me lets the doctor cover his specialty questions flexibly without going through that entire tree. Only a few fields are really suited for the health provider to put in less rigid observations; any product that allowed more wouldn’t pass the CCHIT certs. So there will have to be system-level evolution to make progress beyond this first EHR rollout.
The original motivating reason behind all this EHR orthodoxy was to minimize health costs by standardizing all the paperwork, which, as you know, takes a lot more doctor, nurse, and tech time than treating the patient. You can’t easily data mine a database where every provider puts in his own terminology – and medicine is rigidly categorized, defined, and named by the long history of health care development. So compared to most fields, I would expect it to be easier, not harder, than the usual vocabulary acquisition problem. Not so to date.
Junior (W) Bush’s HHS wanted to develop a uniform representation of health conditions, with the thought that government standardization would eventually bring down health costs through economies of scale. Obamacare is Bushcare on steroids. Whether it works is still an open question, but the cost to health care providers has been far more than anticipated. That is why they hate filling out the EHR answers to every question ever considered possibly relevant to a disease state by some insurance clerk.
The problem not yet solved is exactly how to reap those economies of scale. The first phase requires docs to do a lot of data input, and they aren’t used to that when treating patients. If the plan works as conceived, there will be lots of medical knowledge gleaned from mining the zillion cases that go through the US health system every year. Otherwise, it will be a case of over-technologizing a problem out of Pollyannaish zeal.
So EHRs comprise an experiment with higher costs now, for the possibility of better, cheaper treatments in the future. It may or may not turn out to be a good bet, but the docs are suffering through it for now. We will likely see a lot of early retirements among those docs who haven’t yet made the change to EHRs, so they don’t have to reinvest in their practices and learn more bureaucratic procedures to get paid. Patients will lose a significant number of doctors that way.
So all in all, I consider EHRs well enough specified to be a true ontology, though not the best example. Whether the Bush/Obama experiment works or not, we will learn something from this experience, and not all of the knowledge will be welcome.
Being a little Pollyannaish myself, I expect things will eventually go well, though expensively, and will need to be reshaped to fit market requirements before the costs can be controlled and kept below the benefits. That is a market for ontologists to watch over the next five or ten years of EHR integration.
Sincerely,
Rich Cooper
EnglishLogicKernel.com
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2