Transcript for the [ontolog] Invited Speaker session - August 11, 2005    (OHV)

[ontolog] 2005.08.11 Session Transcript    (OHY)

This is a draft of the session transcript. Help is solicited from the community to clean it up. Please edit below by repeating entire paragraphs, and then inserting the proposed updated version right below the original one. ... Please bulletize your update (with "*"), highlight your changes (say, by putting changed phrases in square brackets; or leave a "[]" where phrases have been deleted), and identify your input at the end of the proposed paragraph with your name and a date in parenthesis.    (OHZ)

(Draft) Session Transcript of the Ontolog Invited Speaker session - Professor Jim Hendler - “Semantic Web Q&A” - Thu 2005-08-11    (OI0)

PeterYim: Let’s start the session now with good morning and good afternoon, everyone. This is the Ontolog Invited Speakers Session, Thursday, August 11, 2005. We are glad to have Professor Jim Hendler from the University of Maryland with us, and the subject of this session today is Semantic Web Q&A.    (OI1)

First of all, we’d like to thank Professor Mark Musen from Stanford for extending the invitation on behalf of our community to Professor Hendler and let’s ask Mark to introduce our speaker today… Mark.    (OI2)

MarkMusen: Sure, thanks Peter. It’s really a pleasure to introduce Jim Hendler, whom many of you know, at least virtually. Jim is a Professor at the University of Maryland and directs activities related to the Semantic Web and agent technology at the Maryland Information and Network Dynamics Laboratory.    (OI3)

Jim is prolific in so many ways, and is someone who I think truly represents the best of what DARPA was intended to do. I think many of you recognize that for many years the idea of putting semantics up on the web was sort of a gleam in people’s eyes, not really achievable, something that people talked about very informally, and Jim really had a vision for how to make this happen. I think it’s really a testament to Jim and his ability to go to DARPA, get a few million dollars, and turn the world around by recruiting a large number of really brilliant people who worked with him for a number of years, and created all the fundamental technology that is now coming together to make the idea of semantics and knowledge processing on the internet really a reality. Jim has come back to academic life, though he continues to work very heavily with various government agencies, and for many people, Jim is the Semantic Web; certainly he is that vision. It is really a great pleasure to introduce him to the Ontolog Forum and to have the opportunity to give him the chance to talk to you directly.    (OI4)

PeterYim: Thank you very much, Mark. So before we pass the floor to Jim, may I request the rest of the audience to put your phones on mute unless you are speaking, and also to make sure you don’t put us on hold, because that way you might send music into the line. We are recording this session; once again, Ontolog has an open IPR policy, so this session will be recorded, archived, and made available on the internet.    (OI5)

Professor Hendler, your floor.    (OI6)

JimHendler: Thanks. You know, when I was first asked if I would do this, I started to prepare the usual slides and stuff, and then I saw who was on the Ontolog Forum most of the time and realized at least half of you have seen one or more of my talks; so, having also looked over some of the past speakers, it seemed to me that just a Q&A session might be productive. I also know that I’ve been on a lot of calls like these where, you know, you get to the end and there’s only a minute or two for questions, and I know many of you have burning questions.    (OI7)

Let me just try to set two boundaries on this. One is sort of an upper boundary. There are a lot of things that get published with respect to the Semantic Web and what one might call philosophy. You know, there was recently somebody trying to defend Clay Shirky by talking Wittgenstein, something or other. You know, I’m perfectly happy to go one-on-one on that stuff, but I don’t think that’s the most useful thing to do, so I would try to avoid deep philosophy. And then the other thing is that a lot of you, I know, have questions that might be better answered one-to-one. So if you have very narrow questions about a particular application or way of doing something, I’m perfectly happy to take those questions by email; just mention to me that you were on the call so I will be able to place what the context was.    (OI8)

Attendee: Could you state then in the affirmative what you would like to focus on here?    (OI9)

JimHendler: Well, everything else – issues that you think are generally interesting. When I say your own concern, I don’t mean, hey, I’m doing a migration, tell me about migration. I mean something like, if I’m using SCOs (Shareable Content Objects) and I want to represent the such-and-such field of the so-and-so; or, you know, in OWL, the inverse functional property when applied to a datatype has the following funny subset. I’ll feel free on this call to give a very fast answer to those and suggest they go elsewhere. So I’m really looking more for conceptual discussion, discussion about, you know, tools and techniques and whatever you guys want to hear.    (OIA)

Attendee: You open for questions now?    (OIB)

JimHendler: Yeah, I’m ready.    (OIC)

MichaelUschold: I sent you an email before and I don’t know, did you get it, I’ll read it out, you may have had a chance to look at it?    (OID)

Here is the question. Currently I need to build an ontology to do reasoning and general functional computation, so OWL DL is not adequate for this purpose; there’s no standard rule language, etcetera, that does computation; there’s a lot of work going on but there’s nothing standard. So what I can do is use a deductive database tool, such as those produced by Ontoprise or Ontology Works, and that would get my job done for a point solution. However, due to the differences between OWL DL and F-logic, for instance closed-world vs. open-world and the unique name assumptions, I’m kind of forced to choose. I need to build my ontology in one or the other language, and then it’s not going to be as interoperable. I really don’t want to commit now to using one approach and then have to redo the ontology later. I don’t want to have to maintain two separate ontologies; I really want to develop and maintain one, and then export as necessary to different uses and applications. So really, am I asking for too much? What would you recommend that people do in this situation in the short term, and how are these issues gonna be addressed in the long term?    (OIE)

JimHendler: So, not surprisingly, you’re asking good questions, Mike. You know, there are a few different parts to it, right? And, of course, a lot depends on application issues and things like that; if I asked this question in some other area, you know, you might have similar issues of, do I use a relational or an object-oriented database when I need some of each. So obviously, some of the answer in a given project is gonna be project dependent: how key is it to use standards, things like that.    (OIF)

What I would say is that the current deepest issue in the area, I think, is coming up with how the rules and the ontology, as a declarative structure, are going to interoperate and live together happily on the web; many of us believe we have to do this. The problem is that there are really two different things you want to do with ontologies. One is sort of the declarative framework aimed at being a standard domain model for people to use across one area, and the other is the processing of that, using it to do things; and those two are different. So, OWL is very much designed for the former, and F-logic and some of these other rule approaches really go for the latter, and there is some overlap between the two, so it’s a tricky issue what to do now. What I would say is that currently, I think OWL, because of its nature as a standard, gives you a very good lowest common denominator from which to start on a lot of this stuff, so put the declarative stuff that you can put in OWL, in OWL.    (OIG)
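As a concrete illustration of "put the declarative stuff that you can put in OWL, in OWL": the following is a minimal sketch, not from the session, of keeping the class-and-property part of a model in OWL while leaving computation to a rule or deductive-database layer. It assumes the Python rdflib package, and the namespace and class names are hypothetical.

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/orders#")   # hypothetical ontology namespace
    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    # Declarative, shareable part: class hierarchy and property structure in OWL.
    g.add((EX.Order, RDF.type, OWL.Class))
    g.add((EX.RushOrder, RDF.type, OWL.Class))
    g.add((EX.RushOrder, RDFS.subClassOf, EX.Order))
    g.add((EX.hasLineItem, RDF.type, OWL.ObjectProperty))
    g.add((EX.hasLineItem, RDFS.domain, EX.Order))

    print(g.serialize(format="turtle"))

    # The computational part (e.g. "an order's total is the sum of its line items")
    # has no OWL DL encoding; it would stay in the rule or deductive-database layer,
    # and only the vocabulary above would be shared.

The point is only the split: the shared, standard piece travels as OWL, and the computation stays wherever the application does its reasoning.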

Now there’s a lot of work going on for automated mappings to OWL for various kinds of logic and things; those things are still a little bit more researchy. Some of the applications you work on, Mike, are very happy living in that research space, some of them aren’t. The problem is when you talk about going to a particular tool, or a particular technique, or a particular way of doing F-logic, then you have the problem of interoperability; that is exactly what we built OWL to avoid, you know, we wanted something that would let us move on.    (OIH)

You know, in the medium and the long term, you know that the W3C is starting to look at rule stuff. I suspect they’ll come up with something in the space that’s in the middle: on one hand it won’t be the world’s best rules language, on the other hand it will extend OWL. What I am not a big fan of is jumping right to something as complicated as common logic, or something like that, and expecting the commercial world to accept that in one piece… one step. So using it for academic work, using it as the basis on which you formalize things, that makes a lot of sense to me. Expecting somehow that non-proprietary tools sharing that across a large space are likely to happen, that’s a little harder.    (OII)

ChrisMenzel: Just to clarify, you’re exactly right, but common logic wouldn’t be designed to do the thing you’re worried about it doing. It really is a framework for specifying languages, and it can be used to define languages much weaker than full first-order logic (FOL), so just to throw that in as a clarification. Nobody working on common logic is thinking that it would be used as a framework that proprietary tools should embrace, and so on; it does have this much more theoretical role that you’re describing.    (OIJ)

AdrianWalker: I’d like to pick up on this if it’s okay. Sorry, I interrupted somebody there. The rules interoperability workshop had a number of accepted papers, and one of them, the last paper, suggested that this question of interoperability across rule systems could be addressed at a much higher and less ambitious level, simultaneously higher and less ambitious. That is, to recognize that there are already rule systems out there with heavy commercial investments behind them, and they’re not compatible, but they are going to have to work together on the Semantic Web. That particular proposal was that basically one sets up a message framework for different rule systems and OWL to interoperate across the web. My question associated with this, to Mike and to Jim, is: do they see that as a bad short-term approach, or a good short-term bridge, or something that could get things working and later on lead to stricter standards?    (OIK)

JimHendler: I’m not sure that’s an exclusive-or, in my opinion. In the sense, Adrian, that there will always be proprietary applications for some things where people need high-end performance or things like that. There are things you do on the internet where you don’t use your browser, where you use some other tool for file exchange because it’s more appropriate to some particular task, but there’s also a lot you do on the web.    (OIL)

I think there’s a tendency in some of these questions to expect too much of what will be common. You know, if you look at HTML, for example, the first version of HTML is missing an awful lot of what we use today. A good standard evolves and changes and grows as it needs to; it doesn’t just get done and then stay static. I truly hope that OWL will be used enough that people will develop enough experience with it to help answer some of these questions. The same thing with the rules question: we’re still working on exactly what the use case for rules on the web is, and how to do that.    (OIM)

For exchanging rules the use case is obvious, there are many use cases, but does that rule exchange require an RDF-type framework? With OWL, there was a strong argument for being able to link terminology in different ontologies; that was a design goal from day one. With rules, we don’t yet know how important this is. So, we’re still working through a lot of these issues; I expect things to change over time.    (OIN)

MichaelUschold: So I can take that kind of as a yes, it would be a good idea to get rule systems interoperating at a message exchange level, without any standardization, because you know there are always big commercial rule systems out there, and it would be great if they would interoperate. If they interoperate loosely coupled with messages, that could begin to get people thinking in very concrete terms about how to move gradual standardization down into the rule systems, so it gradually evolves toward something that is more closely coupled than just messaging; is that something you would look for?    (OIO)

JimHendler: There’s part of what you said I would agree with, parts of what you said I’m not sure I would agree with, and parts that I haven’t really thought hard about. Let me see if I can separate those.    (OIP)

So, let me go back to OWL for a second. See, I have a very different view of OWL than some people, which may sound funny to some people. I don’t view OWL as an ontology language per se, or as a very good KR language. OWL is a language for exchanging ontologies much more than a language for a specific application; so if I’m doing my own application, I view OWL as the way I export and import knowledge from other people, I don’t view it as the only thing I’m allowed to use in the whole world for knowledge. So by analogy I would say the same thing with some of the rule stuff, that I expect different people to work with different rule systems and to have some kind of interchange. But now, the message version vs. some kind of ‘let me publish my rules’ version, that I have less personal experience with. I have some random thoughts about it, and so I’m not sure I can agree with you in one step that a message-based rule interchange is therefore the right way to go.    (OIQ)

In fact, we tried to do message-based ontology interchange for a long time; look how much success OWL has had in the short time it has been out, compared to some of the things that preceded it that were more message-interaction based.    (OIR)
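To make the "export and import knowledge from other people" view of OWL concrete, here is a small hypothetical sketch (again assuming rdflib; the file names and URL are made up): parse an ontology someone else published, merge it with a local model, and write the result back out as an OWL file for exchange.

    from rdflib import Graph

    mine = Graph()
    mine.parse("my_local_model.ttl", format="turtle")          # your own working model

    theirs = Graph()
    theirs.parse("http://example.org/published/ontology.owl")  # someone else's published OWL

    merged = mine + theirs   # rdflib graphs support set-union via +
    merged.serialize(destination="for_exchange.owl", format="xml")  # export as RDF/XML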

RexBrooks: [Interrupting] Can I bring in a real-world problem for a second? I’m sorry to have cut you off, sir, but the reason I am asking right now is because there is a simultaneous meeting going on for the Naming and Design Rules for federal XML usage. I know that the specifics of messaging are important, but one thing that’s coming up in terms of their rules, and that’s a ‘rules question’, is that with an XML schema, if you want to have a large schema, then you pay a price to have it parsed and processed, and if you include it or import it you’re stuck with including the whole thing; you just can’t pick and choose. And I’m wondering if Dr. Hendler would have an idea about how feasible it might be to provide some way to use RDF and perhaps OWL to pull those resources out of a schema, and to assign a namespace and use a particular term from that namespace without having to go through all that problem; because they’re trying to figure out how you get enough flexibility so that, if you want to, you can have a schema for every element, and in that way you only have to import the element that you need vs. having a ton of elements where you have to import all of them to get the one you need.    (OIS)

JimHendler: Let me quickly say that that was actually probably the single biggest differentiator between RDF and XML: the assignment of your URIs and how that’s done. XQuery and XPath are attempts to fix some of that, and I expect sooner or later we’ll see some kind of XPath naming meets RDF naming, but I don’t know when. I know people who are working or thinking hard about it, but there are two very different approaches in the world at the moment.    (OIT)

One is very document oriented. So XSD inherits from XML the notion that you’re looking at a document and you’re checking that document; that your driver’s license has a photo on it, and a name on it, etcetera, but it doesn’t really assume that that name will somehow be related to the concept of a name. Then when you start using XSDs with databases, a lot of that stuff becomes implicit, cause you’re sort of using XSD as if it were a real schema language, a database schema language; and when you’re doing that you really want to be able to name the individuals and not view your whole database as one document. And that’s where suddenly the XML world has started to come into what RDF was designed to do. RDF has given everything its own URI from day one for exactly this reason; so there are a number of things going on.    (OIU)

Now, currently within the World Wide Web Consortium, if you read the charter for the XML Schema datatypes, it says explicitly that they must come up with a unique URI naming scheme. There have been many efforts on the part of the RDF community to push them a little on that, and some of that’s happening. One of the reasons is when you have a complex schema… so people often ask me, how come OWL can’t use complex schema types, and the answer is that OWL, by definition, can’t use anything that doesn’t have a URI. By charter, the XML Schema group had the charter to come up with the naming process for XSDs, and they have been dragging their feet. I’m told stuff is happening in that space; I haven’t checked in recently. So what I’d say for now is you’re probably looking at something that requires putting some kind of procedural device in between the XSD and the RDF; we do that in my group all the time, we write little Perl scripts, things like that. I’ve been pushing my students to try and standardize some of that, you know, in our lab work, come up with some little tool that makes it easier to do that, and we’ve had limited success there because, of course, it’s not our primary concern; but it’s definitely the case that that’s an important need, and that it’s not that hard to do in an individual case. What’s really the problem is that you have different expressivities at the different places, so you can’t build a perfect two-way mapper, and you have to decide where you’re willing to take loss.    (OIV)
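A toy illustration of the kind of "little scripts" bridge described here, written in Python rather than Perl and with entirely hypothetical element names: mint a URI for each XML record and turn its child elements into RDF triples. As noted above, this direction is lossy; it makes no attempt to round-trip the schema's expressivity.

    import xml.etree.ElementTree as ET
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/people#")   # hypothetical target namespace
    g = Graph()
    g.bind("ex", EX)

    doc = ET.fromstring("""
    <people>
      <person id="p1"><name>Ada</name><email>ada@example.org</email></person>
      <person id="p2"><name>Grace</name><email>grace@example.org</email></person>
    </people>
    """)

    for person in doc.findall("person"):
        subject = EX[person.get("id")]             # give each individual its own URI
        for field in person:
            g.add((subject, EX[field.tag], Literal(field.text)))

    print(g.serialize(format="turtle"))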

RexBrooks: Thank you.    (OIW)

Attendee: (no name announced) This is Nicholas Roquet, by the way. I would like to follow up on a comment that somebody made about being able to pick and choose from various ontologies so that you could perhaps construct a new ontology. You also made a comment earlier about your view of OWL, that it’s really made for exporting and importing knowledge, and that OWL is really the representation language for the knowledge that is exported and imported. If you combine these, then you could really view OWL as the representation for, in the end, some kind of workflow of operations on ontologies, which you could describe using OWL, for example, where the processes would do calculus-like operations on ontologies and the inputs and outputs of these processes would be the ontologies being manipulated. For such a calculus, we’ve kind of seen some examples of the operations that you might have in there from the Semantic Web Best Practices documents, like the "classes as values" document that Natasha is editing. There are a lot of examples about using annotations as a way of making statements in one ontology about things in another ontology, while remaining, for example, in DL, so that we can do some interesting reasoning with it. But there is not really much of an inverse operation: if I have now made annotations about an ontology all over the place, for example picking and choosing things, I would like to project all of that into a new ontology that is DL, so that I can reason about the things that I have picked and chosen, and leave everything else out so that I remain in DL. Then producing that ontology and saying, well, this is what you’ve got; could we wrap that into some kind of OWL process? The operations that it seems to me would be interesting to do in these kinds of processes would be things like, of course, annotation; embedding, which is kind of the inverse of annotating, as I just mentioned; and even things like reasoning, like classifying and inferring into other things, and whatnot. Is something like this, or any of those ideas, just off the wall, or is there something happening that might perhaps help get there?    (OIX)

JimHendler: So, they’re certainly not off the wall. On the other hand, you know, again let’s be careful what space we’re in, cause when we talk about OWL or about ontologies in general… so the better thing is talking about ontologies in general, with OWL as a particular instance of a language for getting some commonality, a standard. Right, that’s what it is. I didn’t mean to say before that you can’t use it for knowledge rep or modeling, it was designed for that, but it was designed to be something that formalized those things that within the community we had the most consensus about. Right, that’s very different from most of the hard stuff, so there’s a very active research agenda that OWL creates. What to me is most exciting about it is that it makes it… well, two things. So putting on my ‘researcher hat’ now, why at DARPA I was pushing some of this stuff: one was to get it out the door cause people need to start using it. The other was because I felt that a lot of the research in this community had gotten focused on the ontology as an end rather than a means. And exactly the kind of thing you’re talking about, where you’re talking about constructing an ontology out of pieces of other ontologies, when you’re talking about putting things together and checking somehow whether things you’ve taken from a bunch of different places are consistent. All of that strikes me as exactly the right kind of research we need to do to keep the Semantic Web growing, to get into new things. It’s not like OWL is done and now we move away from ontology, right. OWL V.1 is done, and someday I hope there’ll be OWL V.10, and in OWL V.10 I would hope there is a lot more support for things that people who have been working on these things have done… Just as a simple example, the import mechanism in OWL is necessary but it is not sufficient. You must have ways of taking pieces of things and putting them together, you must have ways of referring to things outside your ontology in DL that don’t somehow make it automatically become an annotation property. But we don’t have good consensus on how to do that; now people are starting to do research in that area.    (OIY)

I have a doctoral student, he’s actually a Spanish student working in my lab, who’s just done a doctorate on exactly these kinds of things, so I think it pushes new research. Now again, that research has to transition into tools, those tools have to show their use in real use cases, and that has to get out there, so…    (OIZ)

Attendee: (no name announced) I agree completely with you on this, but to me it sounds like, well, you know, it would make sense to have something, you know, mighty powerful that you could actually say these things in. Like, for example, common logic, so you could actually say, well, at least we have a way of describing what these things are, or making different proposals for describing what you mean when you pick different bits and pieces and build a Frankenstein ontology, and make sure with a reasoner that, well, it’s not Frankenstein, it’s human. Right now it’s kind of up in the air as to how we can do this, and there’s really not much of a ground we can build on top of to say these things.    (OJ0)

JimHendler: I guess I agree with everything up until that very end there. Well, let me give you an example. When I was at DARPA, someone came into my office and said, listen, now that you are putting all these ontologies out, you need ontology merging stuff; look at my great work I’ve been doing for all these years on ontology merging. I looked at him and said, here’s a website with 200 ontologies; they are written in a syntax you understand completely and totally, and it is all within the coverage of your language; please go show me how your stuff works on that. And they came back to me very upset and said, well, all those people did stuff wrong. Well, you know, my answer to that person was, okay, then how can you claim generality for your technique outside your own research group, if everybody you tried it on has been trained by you, worked with you, etcetera? So there are always tradeoffs in this space, right. The most powerful thing probably requires some fairly complex knowledge that you can’t expect someone else to have across the world. So common logic, and things like that: I’m not against it, and in fact I’ve played in that community and I’m very much in favor of it going on, but I see it primarily as stuff to help keep the research moving on the important topics, so we can develop things like extensions to OWL, or a new rule language, once we understand better what they need to do. OWL worked because we had 30 years of experience fighting with each other about the details of ontologies.    (OJ1)

Attendee: (no name announced) Right, but now that we’re exploring ideas that we haven’t had much experience with, I believe it becomes more important to find a way where we can keep on exploring these ideas in ways that are practical because they are tool-supported, and not because there’s a tool that’s going to cost an arm and a leg because there’s only one vendor on the planet who has that kind of technology, but something that you can reasonably find, you know, not perhaps twenty different vendors, but maybe a few of them who have some reasonable implementations of this thing. And so far what I see is that it’s happening more or less in the open source movement, where you find fairly interesting new tools out there that help us nibble at the edge of exactly how practical some of these ideas are, but it’s a bit of a hit-and-miss game, and difficult to really make good guesses about exactly which combinations of these things might actually work.    (OJ2)

JimHendler: I agree with you completely, but that’s why we have created conferences, that’s why we have created journals, that’s why, you know, these things happen. I mean, you don’t get to a standard in one step, right? Ten people can get together and publish a standard, right? There have been many ontology standards before. What we did with OWL is we tried to use a very open process with a lot of people, government investments on both sides of the ocean, lots of people playing. We got enough out of that that we were able to bring it to a major consortium, you know, the W3C, which has a model of very, very high impedance. Most things that come into the W3C never get out, right; we needed a lot of consensus coming in to get there. Now again, we started with some research languages, we put together this thing called DAML, we got a lot of people working on it. There was the competing effort, OIL; we got the people together. So again, a lot of it is that the way you break chickens and eggs is by making proto-chickens that lay proto-eggs and sort of evolve to what you need. You know, you have to go through open and closed first, so somebody who’s a tool vendor can build a special-purpose tool in this area, and certainly some of those are out there. You know, things need to prove their worth, either in the market or the research community, etcetera, and they tend to move between these things. I don’t think I’m saying anything interesting in that, that’s just very high level, but what I think is the tricky part here is we tend to forget the narrowness of our own community with respect to our capabilities. OWL, if I could make any change to OWL, I would probably drop about half of it. I would make it a much, much simpler language, even though it would feel to me like cutting off my left arm because I use so much of it, but the reason is because people have to be able to pick up this thing and start going; and then they get motivated to do more. I often say, imagine if you had tried to build the web out of XML: you couldn’t copy and paste, the browser would have been way harder to do, creating your own webpage would have required creating document frameworks and DTDs for your organization and stuff. Instead, something much simpler came out and motivated people to work hard to get to the point where they could understand why you needed something better. Now SGML in the form of XML, which is much closer to the original SGML than HTML is, you know, is moving very well. I think in the logic world and the semantic world we’re seeing similar type stuff; we have to walk before we can run, we have to get other people crawling before they can walk, and that is starting to happen. Companies like Oracle are now talking about supporting RDF. They are not yet talking about supporting OWL, but if they support RDF they make it a lot easier for those people who are doing OWL things to base their stuff on a firm foundation. They will eventually be doing OWL, and I hope at that point I will be doing stuff in languages that will be OWL V.6 someday, or in rule languages, or rules in OWL, things like that. So very much I’m agreeing with what you are saying, but I don’t think you can just ask for something to be mandated in one step, which I know you are not saying, but we do have a very active research community.    (OJ3)

PeterYim: Thank you Nicholas for the great question. Let’s maybe take inventory on who has questions so we can call on people one at a time.    (OJ4)

AdrianWalker: Peter, if I may just jump in. Jim was sort of two-thirds of the way through answering a question when he got sidetracked by another question. I wonder if Jim wants to complete that.    (OJ5)

JimHendler: Jim would love to complete that, remind me what I hadn’t answered yet.    (OJ6)

AdrianWalker: Okay, so I had sort of put on the table the idea of existing rule systems that are out there with heavy commercial investment beginning to communicate via messages, and you, Jim, had a three-part answer, and you got to parts one and two, but you never got to part three.    (OJ7)

JimHendler: Right, so the part I agreed with was, I like the idea of trying to bring those rule systems together. The part I wasn’t sure about is the message passing, and I guess that is also the part I was disagreeing with a little bit, because you’re being kind of categorical about message passing being the right way to do it; so it’s really not that I disagree with that, it’s more that I’m saying that needs to be explored.    (OJ8)

AdrianWalker: Only in the sense, Jim, that it’s something simple to start with, just in the spirit of what you’re saying.    (OJ9)

JimHendler: That’s fine, but again, my one caution on that is, look at something like KQML, a very powerful message framework. Or, I’m blanking on the name… there was a DARPA standard for knowledge exchange… K-something, anyway, not KIF, the, hmm, it’s going to drive me crazy now that I can’t remember it because I worked on it for years…    (OJA)

AdrianWalker: KQML?    (OJB)

JimHendler: No, not KQML, it was the… it was the middleware approach to having, you know, ontologies able to say things to each other, so I could send a message to another ontology saying please assert that this and this is true, and here’s the language I talk, and things like that.    (OJC)

AdrianWalker: And it wasn’t KADF either, was it?    (OJD)

JimHendler: But again, there were several things in that space, XOL and some others, that were all kind of pushing around that same area, and they were all very much focused on a message exchange approach. But with the message exchange approach we tended to have this problem that was sort of like having one telephone, or, you know, ten people with fax machines. You had to get a certain critical mass going in the message world before you could really get people to put the critical effort into converting their stuff; that’s getting a lot easier now, you know, with some of the web service standards and things. So I think message-style approaches are a lot more affordable than they were, but exactly how to do it and what they look like and things like that, people need to get some proposals out there, people need to start using them.    (OJE)

AdrianWalker: A possible plus of this proposal would be that the messages would be human readable, as well as obviously executable.    (OJF)

JimHendler: Again, Adrian, I don’t disagree, I just say asserting something is different than getting something out there, getting people using it, getting… you know, someone mentioned open source tools before, I think it was Nick. One of the reasons why the open source tools are having more success at the moment is exactly that people aren’t quite sure what they want to do with this stuff, it’s fairly new. They’d rather grab a few open source tools and fool around for a while, and then when they think they understand what’s going on, then they’re ready to go shopping, or to look for commercial things; and we’re seeing that happening at both the RDF and the RDF Schema levels and a little bit at the OWL level.    (OJG)

So I think that very much with this rule stuff we’ve had many… I think standardizing a format for rule exchange is something where the community is fairly similar to where the OWL community was; it’s missing some of that focused investment, but that may or may not matter. But I think with respect to things like how do you really use each other’s rules, how do you take something that has procedural attachment and use it against someone else’s… so again, I can’t read someone else’s OPS5 and just plug it into my system, because it tended to be hardwired into the rest of the application.    (OJH)

AdrianWalker: Also the inference engines are different.    (OJI)

JimHendler: Etcetera. But if somebody starts producing some open source toolkits in this space, I promise you a lot of people will use them. I mean, the few rule things that have been released open source that I know of have user communities who are fooling around with them, and playing, and doing some of this exchange; and in fact a lot of the people who would be involved in any kind of standards effort in this space have probably primarily learned how to use this stuff from fooling around with those things. They’re not trained in this, and that’s part of what makes it…    (OJJ)

AdrianWalker: There’s still a difference in the situation, which is the huge previous investment by big commercial companies in rule systems. It doesn’t look as though that’s going to be replaced by something that somebody developed open source, you know, and…    (OJK)

JimHendler: I agree, but we’re talking about the interchange languages, we’re not necessarily talking about the rule languages themselves. So to get that to happen, one is we need better use cases for interchange, two is we need to explore the properties of those and develop things that work in that space, and three is we’ve got to get some cheap-and-dirty, easy-to-use tools for toolmakers out there.    (OJL)

AdrianWalker: But these would be basically messaging support tools rather than…    (OJM)

JimHendler: Well, that certainly is one way to go and you know, if you get those things out, people will start using them and if someone else comes out with a different non-message oriented one, that’s something to compare to.    (OJN)

AdrianWalker: So as in the other areas, start simple and let tools pull development toward tighter standards.    (OJO)

JimHendler: You know, a very, very smart person, Tim Berners-Lee, once told me his secret of success; he said, build small but viral, right. His definition of viral was, roughly speaking, these are my own words now… your friend sees you using it and says, hey, I’ve gotta get one of those, and your competitor sees you using one of these and says, oh my God, I’d better get one of those.    (OJP)

AdrianWalker: Yeah, and in six degrees of separation you’ve got the world. Yes it’s good. Chuckle    (OJQ)

JimHendler: That’s how he did it. But simple is the thing we tend to forget. Right, we want to get it right, and there’s only a certain amount of getting things right you can do. Let’s let some other people get questions in, cause….    (OJR)

PeterYim: I was trying to suggest that maybe we get people lined up, then we can go through the questions.    (OJS)

EMichaelMaximilien: I have a question.    (OJT)

PeterYim: Let’s take names first… Michael Maximilien from IBM, Marc Wine, and Peter Yim; I have a question too. Michael, your turn.    (OJU)

EMichaelMaximilien: My question is, I am wondering whether organic ontology construction may not be the right approach for the web; just to make that case, I’m wondering what your thoughts on this are. And just to give you an example, I was at a talk yesterday at XXXXX in Silicon Valley, and we had a presentation from Yahoo, Ted XXXX (sound went very faint), and one person showed their Flickr work, and I’m sure you know about Flickr and del.icio.us. What they’ve done is essentially used the community to create, if you will, an ontology of the pictures that they have, and using that they were able to find different pictures that represent, say, a parakeet as a bird, and turkey from Thanksgiving versus Turkey the country. I’m wondering if, instead of trying to build formal ontologies using tools like OWL, shouldn’t we try, for the web especially, because of the volume of people and the intelligence of the people using the web, an organic approach; is that not the better way to go, or maybe a better way than what we’re trying to use right now?    (OJV)

JimHendler: Yeah, I’m a little confused about which one you’re referring to as the organic approach, the folksonomy type approach or the…    (OJW)

EMichaelMaximilien: Exactly.    (OJX)

JimHendler: So here’s the thing, you know, here’s my quick thought experiment on that stuff, okay. Number one is, if folksonomy really works, won’t it just become keyword recognition, i.e. isn’t the logical extension of folksonomy Google, and not something better than Google; because in fact, you’re not putting anything in that the machine can know about how these terms relate to each other, what you’re doing is getting a lot of human agreement. Now, I happen to believe that the human readability of terminology is absolutely crucial; I fought bitterly to get as much of that as possible into OWL and OWL tools. The fact that I can read your ontology in my tool lets me as a human do a lot of things, so I’m a firm believer in that. Also, if you look at OWL, OWL is built in a way that ontologies can point at other languages and other things. So think about something like this, right… I’ve had this talk with the Technorati guys, I find them foolish, personally. I mean they’re doing cool stuff; I’ve had the talk with the Flickr guys, I think they understand it a little better, and companies like Asemantics are trying to move into the space, which is okay. So you get a bunch of people to agree to a particular language, that’s fine, but you’ve just become XML Schemas, right? I mean, you’re not really different, except you have an easier way to write them, if you argue that CSS is easier than XML, but that’s another whole argument.    (OJY)

Let’s say you have easy tools for folksonomies, okay, so now you have ways people can mark up information about which people are involved in things. Well now, what’s the first thing you want to do with that? Well, you’d like to have a rich vocabulary of people. You’d like to be able to say what property of people might be a useful property for reverse indexing, so that two guys with the same email address could be assumed to be the same person. Two people with the same name probably shouldn’t be assumed to be the same person. You know, where your homepage is, is a useful thing to know about a person. Well, now you’re in FOAF. FOAF is exactly the middle ground between a folksonomy and a formal ontology, and it uses some OWL to do the things it needs formality for: the inverse functionals, the transitives, etc. And then, by the way, you have people now, my group does it, lots of people do it, who build their more formal ontologies of the people in their organization with back pointers to FOAF, which means you could collect information the folksonomy-type way, import it, and bring it into your formal ontology where you can actually reason about who’s related to whom, or which are the same people, or who works for what organization, which you can’t do at the folksonomy level. So I don’t see these things as competing, I see them as cooperating, and what I see really going on, in my mind, in this capital-S Semantic Web vs. small-s semantic web argument is, one, a lot of people who don’t understand semantics, but understand that Tim Berners-Lee is a smart guy, want to claim his term; and two, people like Shirky who have argued that everything Tim has ever done wasn’t going to work, including the web. You know, I can’t tell you how many people have tried to make the 404 go away. The problem is you can’t, because you can’t have the web without it; you can’t get a scalable distributed hypertext system otherwise.    (OJZ)
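A small sketch of the FOAF point made above, using rdflib and the owlrl reasoner (the two individuals are hypothetical): foaf:mbox is declared inverse-functional, so sharing a mailbox licenses an owl:sameAs inference, while sharing a name does not.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, OWL, RDF
    import owlrl

    EX = Namespace("http://example.org/folks#")
    g = Graph()

    # FOAF itself declares mbox inverse-functional; restated so the local reasoner sees it.
    g.add((FOAF.mbox, RDF.type, OWL.InverseFunctionalProperty))

    g.add((EX.a, RDF.type, FOAF.Person))
    g.add((EX.a, FOAF.name, Literal("Jim Smith")))
    g.add((EX.a, FOAF.mbox, URIRef("mailto:jim@example.org")))

    g.add((EX.b, RDF.type, FOAF.Person))
    g.add((EX.b, FOAF.name, Literal("Jim Smith")))               # same name: no conclusion
    g.add((EX.b, FOAF.mbox, URIRef("mailto:jim@example.org")))   # same mbox: same person

    owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
    print((EX.a, OWL.sameAs, EX.b) in g)   # True under OWL-RL semantics

This is the "formal only where it pays off" middle ground: a couple of OWL axioms buy the identity reasoning, and everything else stays as loose FOAF-style descriptions.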

So, you know again, I think we have to find the use cases for the different levels of semantics. I think we have to find tools… I just recently had this argument with Marty Cunningbam, and I showed him some of the tools we were building in my lab. I said, why is it harder to use these tools than to use the folksonomy tools, and he said, actually it is easier to use these tools; and he and I are meeting in a couple of weeks with some of the Technorati guys and things like that to talk about how we put these things together. So, you know, the problem is that we in the ontology world are a little too focused on creating the ontology and not on what you do with it. When you start doing a tool where you show that you read an ontology from somewhere, mark up photos with it, and then they can be indexed correctly against those things, then you don’t have to use a folksonomy to tell Turkey the country from turkey the food. Right, if I’m linked to the food ontology it’s unambiguous. A lot of it is how do we harness all this stuff together, that is my belief, and OWL was designed from day one, DAML, OIL, DAML+OIL, and the SHOE stuff that I was doing in the 90s, all based on the assumption that linking things together, getting that network effect, was the crucial thing.    (OK0)

EMichaelMaximilien: The viral effect that you mentioned.    (OK1)

JimHendler: It’s the viral effect, but it’s also the network effect. How do I get someone to care about the fact that my ontology exists? Well, you know, one way is they want one and mine is 80% of what they need and they just steal it, because I’m publishing it openly, and other people are copying and some people are pointing and things like that, and now what you have is the network effect.    (OK2)

In a talk I gave recently I showed a very simple ontology that basically says feline leukemia is leukemia as defined in the National Cancer Ontology, where the organism that gets the disease is cat as defined in physic. Well, I’ve just pointed at 87,000 formally defined classes in my one little assertion there. How do we harness that, how do we use that, that’s another issue, but I believe I have a reasonably good definition of feline leukemia using those other two things. So I see some big skeletal ontologies being very useful. I see a lot of things that will be in a kind of medium form, this SKOS language, which has problems in the way they did the KR when you come at it from the perspective of those of us who like things formal, but which has the very nice feature that it maps very cleanly to a lot of people’s ontologies. So I know the Library of Congress is going to release some of their stuff in it, the Library of Agriculture is going to release their stuff in it. Well, now you’ll have those things on the web, linkable to, and by the way, now your OWL ontology can use that stuff as annotation and gloss; so is that a bad thing? And you can bet that the folksonomy guys will be pointing at those the way they already point to WordNet. So I think what we’re gonna see is that this stuff naturally grows together, and that the OWL and RDF Schema stuff is a little bit more the high end of that story, and the low end doesn’t compete.    (OK3)
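A hedged sketch of the "feline leukemia" style of definition described above: one small ontology that does little more than point at classes maintained elsewhere. The external URIs here are hypothetical stand-ins, not the real identifiers of the National Cancer Ontology or any species ontology; assumes rdflib.

    from rdflib import Graph

    turtle = """
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/vet#> .
    @prefix nci:  <http://example.org/nci#> .      # stand-in for the cancer ontology
    @prefix spc:  <http://example.org/species#> .  # stand-in for a species ontology

    ex:FelineLeukemia a owl:Class ;
        rdfs:subClassOf nci:Leukemia ;
        rdfs:subClassOf [ a owl:Restriction ;
                          owl:onProperty ex:affectsOrganism ;
                          owl:someValuesFrom spc:Cat ] .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")
    print(len(g), "triples; the heavy lifting lives in the ontologies being pointed at")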

EMichaelMaximilien: It’s just amazing that in one year they were able to gain so much popularity and so many people actually using it…    (OK4)

JimHendler: Yeah, you know, I heard that and I was actually very upset about it, until I happened to be at a conference recently where we actually looked at the numbers, and in fact, more people are using RDF than are using their stuff, by a large margin. If you live in the Bay Area you don’t notice that. So, you know, one of the Technorati guys was asked, what about internationalization, and he said, well, you know, at the moment there’s almost nobody using this stuff outside the US, and in fact, there are not that many people using this stuff outside the Bay Area, except for the blogging stuff, and most of them aren’t really doing anything other than, you know, what’s automatically built into the blogging tool. Flickr’s doing better, but even Flickr has the problem that, you know, you put in the Chinese characters for China and I put in the word China and they don’t go anywhere near each other; those pictures don’t find each other. And how do you solve that problem? Well, you need to start adding some KR, and what are those guys doing? In fact the Flickr guys are talking to the RDF guys quite a lot these days.    (OK5)

BrandNiemann: Yes, hi Jim, thanks for dialoguing with us. Just a slight update: I’ve been working with the Oracle people, and besides being able to do RDF now, which we’ll show next week in our workshop, they’ve announced planned support for OWL in V.11 of the database, and I can post a link. They announced that they will allow OWL ontologies to be stored in a set of Oracle tables, and querying for it is already in prototype; I asked them to show it to me, and they plan to release it in V.11 of the database. They’re up to V.10 now. But what I want to get your comment on is this: I’m very impressed with the way they can provide storage for RDF, and the fact that Oracle is so ubiquitous that in the federal context all anybody has to do is upgrade to the next version, 10g Release 2, and every agency that’s using Oracle now has access to RDF capabilities. So it seems to me that if we can just get enough people creating RDF, or converting to RDF, it will be a big jump up, and I wonder what you thought about that strategy.    (OK6)

JimHendler: I’m a firm believer in it. You know, I was happy that companies like Tucana and the Kowari stuff were starting to show that it was getting acceptance within the contractor community. Again, Oracle having this stuff still means that the defense contractors who build the larger systems have to learn how to integrate with it to use it. But if you actually go back to the very first slides I ever gave at DARPA about this stuff, most of which are not public, unfortunately, a lot of what I said in selling the program was, you know who builds the government and DOD systems? It is not really the contractor; the contractor does the integration, right, and if you don’t get this stuff into Microsoft or IBM or Oracle, or companies like that, then you haven’t broken this thing, and you’ll always be fighting ‘not invented here’. Why isn’t XML ‘not invented here’? Well, because lots of people support it. With the RDF stuff it was pretty clear to me that the government need for integration of these things was not unlike the outside need, but was much more pronounced, and I really felt the government investment… this is one of those cases where I don’t think this stuff would have happened without DARPA, NSF, the European Union investment, etc.; or rather, it would have taken a lot longer, because companies will use whatever works when it’s in a useful form and it’s nearby and they can see a business case, but it’s crossing that chasm. I really see the Oracle stuff, and, you know, I certainly hope they’ll stick to it. Adobe has a commitment to RDF, if they stick to it, you know, things like that. We really see the infrastructure happening because of those companies doing these things, and so I think that’s great.    (OK7)

Oracle… You know, again, there will always be two problems with Oracle being sort of the only game in town. One is it’s expensive if you’re a university or a small company, and two is no big company can be ahead of the game as an innovator. So for example, the fact that Oracle can store your OWL still means you’re gonna need to look for somebody who can build an OWL ontology for you, or whose tools export OWL, or things like that. But boy, it’s a major… I can’t tell you the sigh of relief that could be heard among the gurus of the Semantic Web community when Oracle announced…. [laughter…]    (OK8)

SusieStephens: I think I should say hello because I’m Susie Stephens.    (OK9)

JimHendler: You’re Susie? Oh, hi Susie. Yes, we love you, you’re wonderful. Seriously, I think a convergence of a lot of things is going on here. I think the other thing that’s happening is the services community. You know, at DARPA we started the OWL-S stuff a long time ago. I got that going I think 6 or 7 years ago now, and one of my predictions was, you know, you can look out into the future and see where companies will need this as more of the services start to get out there in use. While we can’t exactly predict how the services will be used, you can see some of this coming. So I think that’s also nice; as we see the need to put services together with data, you see some kind of format starting to form in the middle that works nicely, and you see the corporate things starting to happen; so I’m actually pretty, pretty happy.    (OKA)

SusieStephens: It’s kind of a fun time at Oracle at the moment, because there’s so much excitement about RDF and we’re being contacted by so many different people. Originally I just had to focus on life sciences, but I started working with Brand and PETRA and people in the government, and we’re getting contacted by banks now as well, so it’s actually elating to watch it evolve.    (OKB)

JimHendler: For me it’s scary.    (OKC)

Attendee: It would be interesting to know how much Dave Nickull had to do with Oracle supporting OWL, or whether it just turned out to be a synergistic interest of his now.    (OKD)

SusieStephens: I have to confess I don’t even know who Dave Nickull is, so…    (OKE)

Attendee: You mean DuaneNickull?    (OKF)

Attendee: DuaneNickull, I’m sorry. Yeah.    (OKG)

JimHendler: For Adobe you’re talking about, correct?    (OKH)

Attendee: Oh, I’m sorry, Adobe-Oracle.    (OKI)

Attendee: Well, Adobe is very committed. I heard from Jim the comment, you know, “if Adobe continues with this,” and I can tell you we are very committed to ontology work and the continuation of RDF.    (OKJ)

JimHendler: Yes, sorry, I didn’t mean to in any way imply that I thought Adobe wasn’t. I was just using them as one example among all of the above, I mean, you know. I’m working with Adobe, you know, talking to them a lot about various aspects of this stuff. You know, I know they’re committed, that wasn’t…    (OKK)

Attendee: Are you talking about doing things like SPARQL and embedding it into, say, Adobe Acrobat? You know, like when you want to search something in Acrobat, right now it is just words [talking over each other]    (OKL)

JimHendler: Can I, can I deflect that question, because I notice we had a couple more questions queued up. You can’t ask me that question… I’m sure the Adobe guys would be happy to tell you.    (OKM)

Attendee: Sure    (OKN)

Attendee: Can I just jump in with a very quick question for the Oracle knowledgeable folks. Is RDF support in Version 9 or Version 10?    (OKO)

SusieStephens: The RDF data model is available in 10g Release 2, which came out on Linux about three weeks ago, and it’s going to be available on all other Oracle supported platforms in another three weeks.    (OKP)

Attendee: Thanks for the information Susie, thank you.    (OKQ)

Attendee: So maybe a question actually to the Oracle guys: are they going to support things like ontology [XXX1:00:17] querying, languages like SPARQL and others, as part of providing OWL support?    (OKR)

SusieStephens: To the best of my knowledge, all of our plans related to the next release of the database are still confidential, so we really can’t talk about those at this point. Also, our development plans haven’t really been completely finalized for 11g yet, so even if I were to say something was going to happen, there is not a 100% guarantee that it would actually be carried through at the moment. So on one hand, unfortunately, I can’t tell you very much; on the other hand, if you want to see certain functionality in the next release of the Oracle database, send me an email within the next week or two, and you never know.    (OKS)

Attendee: What is your email address?    (OKT)

SusieStephens: It’s Susie.Stephens@Oracle.com    (OKU)

Attendee: Thank you.    (OKV)
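For readers unfamiliar with the SPARQL querying mentioned in this exchange, here is a generic sketch run with rdflib rather than any particular database product; the data and URIs are hypothetical.

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/agency#> .
        ex:doc1 ex:topic ex:Leukemia ;  ex:publishedBy ex:NCI .
        ex:doc2 ex:topic ex:Influenza ; ex:publishedBy ex:CDC .
    """, format="turtle")

    results = g.query("""
        PREFIX ex: <http://example.org/agency#>
        SELECT ?doc ?who WHERE {
            ?doc ex:topic ex:Leukemia ;
                 ex:publishedBy ?who .
        }
    """)
    for row in results:
        print(row.doc, row.who)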

PeterYim: Thank you Susie. So let’s bring the discussion back to the Semantic Web Q&A. The next person in line is Marc Wine. Marc from GSA.    (OKW)

MarcWine: Good afternoon, and thank you for the cogent discussions. Thousands of physicians across the country will soon be required to adopt interoperable health information systems and electronic health records. Given what you said, paraphrasing Tim Berners-Lee, about building small and making it viral, what would be your suggestion for incentivizing the adoption of ontology tools and Semantic Web approaches in the healthcare IT sector?    (OKX)

JimHendler: Oh, that’s a heck of a question. Let me start by saying I’m not an expert in that at all. I can quote a couple of friends of mine whose names I am about to forget, but several people were working in the early XML days and then sort of moved over to RDF in the healthcare record area. You know, there is a combination of issues there, right? One is that a lot of what we’re talking about is aimed at wide shareability on the web, and of course that’s both the greatest promise and the greatest threat of this stuff when you talk about health-related information, because of the protection issues and things like that. Let me put those aside for now. But obviously, until there are decent ways of making sure that the information… you know, that the policies can move with the data, it’s really hard to say much about deployment.    (OKY)

I think from the point of view of some of the stuff I’ve noticed, talking to a few people, not so much the doctors but the people who help support them, their IT people: one of the things that everyone complains about is, you know, when you try to work with them, each insurance company gives you an ASP, they give you a web way of working with them, and the problem is there is nothing you can do up front to make them all look the same, or to help make them fit your way of doing business; so you end up sort of outsourcing your office staff. So some of my thinking in this space is that probably the place where the incentive needs to lie is in getting the stuff so that the front ends for this become more natural and more adaptable. Having said that, I don’t really have good ideas beyond that without getting into a lot of detail about little things I’ve seen and stuff like that, which I would be happy to do with you sometime, since we live in the same town. I think the answer is that very much we have to figure out how to deal with this network effects stuff. It’s very clear that the ability to have the sharing is important, but it is also counter to the culture of how it’s done now, primarily because of the privacy, so, you know, I don’t know how you balance the two. I’m not sure I’m actually answering your question… am I?    (OKZ)

MarcWine: Yes that was a good broad-based answer.    (OL0)

JimHendler: I would say, you know, I actually think some of the services stuff has a lot of potential also, because again, in the conversation we had before about rules and things, we were talking about messages vs. format, etc., but part of it is that messages have a nice point-to-point feel to them. I can send you something – we can exchange – you can call my information system – there is a single point of entry at which policy can be explored. So one of the things we have been talking about, not so much in healthcare but in a related area, is very much this notion of using the ontology to manage the space on top of an infrastructure that’s really much more of a point-to-point information exchange infrastructure. I can’t say I have deep things to say about it, but my guess is that it’s going to have to be some kind of hybrid like that at the moment.    (OL1)

MarcWine: Yes, and on those comments: there is certainly going to be a broad array of these types of businesses and end users interfacing with health information systems as we go forward under the strategic framework for implementing a nationwide health information network, and they are going to be relying critically, vitally, on definitions, on understanding, and on the incorporation of standards for interoperability, which I strongly believe these tools can help with, to the point where lives depend upon the use of these functions and tools.    (OL2)

JimHendler: Marc, let me give you a much more specific answer in a sense, okay? [inaudible] is less in your business area and more general. You know, one of the things that I learned was from looking at the National Cancer Institute playing with this stuff. They already had folks doing thesaurus work and things, much like the NLM does more generally.    (OL3)

MarcWine: … presentation of their work at the Consolidated Health Informatics meeting last week, in fact…    (OL4)

JimHendler: Okay, … but one other thing that’s interesting about it is that now, since they’ve moved to OWL and more people have been able to see and share and use that stuff, they’re getting much more feedback than they ever got before. They’ve got people who are actually trying to look at their stuff, both from a formal level and from a use level; partly from mandate, because people have to use it, but mainly because they were the only game in town, right? You either could build it from scratch or you could use theirs. As more and more vocabularies get out there, as more and more people can find and use things, you know, then the issue becomes where do you look for stuff, things like that. So I think part of what people who want to get a jump on this stuff, who want to motivate people to use this stuff, should think about is that it’s very much like the web. The people who put up the first definitive websites, and what I really mean by that is credible websites, the first people to have stuff in the space that said, you’re looking for healthcare information, look, we’re a hospital, they started to get a lot of people looking at their pages, which of course motivated other people to start playing with putting up more pages, and forced other hospitals to put up their specialties. I have a feeling that some of that same stuff has to happen in this information space, perhaps by becoming the equivalent of a link site that says, look, there are a lot of medical ontologies out there, some of which are good, some of which are bad; let’s create a mechanism by which we curate a collection rather than curate the ontologies, and let’s just become the place people start coming to when they’re looking for healthcare information ontologies.    (OL5)

MarcWine: Excellent example… excellent example… the Cancer BioInformatics Grid is one excellent practical example in the space that you referred to.    (OL6)

JimHendler: That’s right, but again I think that getting stuff out there is crucial; or, if you don’t have the resources to develop the stuff, then for a much lower cost somewhere like Veterans Affairs, or places like that, can start creating registries that are trustworthy… trustworthy is not quite the right word, but again, it’s like the early web, right? You wouldn’t link to something unless you knew the people, and you knew them if you were a hospital, whereas everyone else was just out there creating pages saying, look at a million things you can see about health, and people pretty rapidly started realizing it was better to go to the ones which were curated by people who knew what they were doing, and those sites started to grow. I have a feeling that the best way to motivate people to use this stuff is to make it easy, and that means having some place they can go where there’s a registry or a help desk or things like that. In fact the DOD did that with its XML registry. Originally they were going to have something where, you know, there were only going to be a couple of legitimate schemas, and it was all going to be mandated and so on, and the users rose up and said no. Instead they ended up going with the registry-based approach. Now if you’re in the DOD and are looking to do XML you always start by going to the registry, because you get in trouble if you build your own when there’s one already there. So it’s motivating interoperability by social rather than technical means, and I think the healthcare stuff really needs some of that same kind of thing. I wish the Library of Medicine would step up to that plate; I’ve talked to people there about it. You know, one of the problems they have is that so much of their stuff they can’t give away. They have to have licenses and things, and they’re very strict about it and very careful, and I’m not sure I understand why putting something on the web, where you have to sign up to get it, is that much different from mailing somebody CDs with the stuff on it and letting them put it up, but I do understand legally there are differences; there are these issues here. Again, I think the more that gets out there, and you know now from the previous question that companies like Oracle will help you support this stuff, it means you don’t have to build the infrastructure from scratch; you can build on top of stuff that is there, and really for a fairly low cost you can become a community resource. I think most people will be motivated to get into this stuff simply by the value going up and the cost going down.    (OL7)

RexBrooks: Well, you gave me one idea there of perhaps a little database, a health IT sharing database that includes listings of places and projects where healthcare entities can go and find these particular tools…    (OL8)

JimHendler: Yep.    (OL9)

RexBrooks: … spreading that around would in itself be a good motivator for adopters of health IT and EHR systems to get onboard.    (OLA)

JimHendler: … and what’s more, I bet everybody who’s building anything will eventually be knocking on your doors saying, will you please let me in. So instead of you having to go out and ask ‘what vendors have something in this space?’ they’ll try to get into yours.    (OLB)

RexBrooks: Almost…    (OLC)

JimHendler: Thank you.    (OLD)

RexBrooks: … we’re doing that.    (OLE)

JimHendler: Who is this?    (OLF)

RexBrooks: This is Rex again.    (OLG)

JimHendler: Oh yes, okay Rex.    (OLH)

PeterYim: Peter Yim here. Marc, thank you for the question. I’m actually next in line; I have a couple of questions for Professor Hendler.    (OLI)

One, I love your point about open source tools being crucial in getting all this going. My question would be: do you see some systematic support for the development of these tools in the communities? And I guess that brings me to my second question, which is about the broader support and resources being pumped into getting the Semantic Web, or related semantic technologies, off into real growth, which I think is important; the timing is crucial at this point. But if we look around, I mean the DAML project funding just sort of stopped, and a few meetings ago at Ontolog some participants from Europe were lamenting the fact that the US seems to be sort of withdrawing its funding, while the Europeans are putting more and more emphasis into developing this. I mean, do you have comments on that?    (OLJ)

JimHendler: Lots. Um, you know, let’s take it a step at a time. Let me do the first one first.    (OLK)

Attendee: Open source.    (OLL)

JimHendler: Yeah, I wish there was somebody with the resources to do more of that; there is a lot of discussion about that right now. There are a lot of people pushing to get some of this stuff up. Mostly it’s happening in the normal source forges, and people… like my group has all these tools right now that we’ve matured as far as a university can mature something, which means they’re probably a million dollars away from where you could put them in front of the user community with some kind of support, right, and the question is, you know, how do we get things across that gap. In the US, the model of how you do that is that companies are supposed to come find these things, license these things, and that’s how you try to get them into the industrial sector. In the European model, companies and universities team up more in the middle, so, depending on how you define things, but certainly in IT, in Europe you have less basic research funding but more of what we might call transition funding. So I think what you’re seeing is that the Semantic Web rapidly went from research to practice, which is why you’re seeing a lot of funding of it in Europe and less funding in America. In America, once Oracle says we’ll support it, Congress goes to DARPA and says, you know, Oracle’s doing that, what are you doing here? So making the case for why… in the earlier questions in this forum, you notice I kept saying we have to keep the research alive, it’s got to stay ahead of this, so that’s a trick. Some of it is the National Science Foundation; some of it, again, look at how many things are now starting to form in the ontology space, there are the Ontology Centers and things like that, and what’s nice about that is, again, with the standard they’re forced to at least have some version of their stuff coming out that way, so we are seeing more sharing through these informal ways that things tend to get shared in the US.    (OLM)

You know, there’s a lot out on the web. I mean, right now if you Google for a term and add ‘filetype:owl’ or ‘filetype:rdf’ you actually tend to be starting to find stuff more and more often; I’m actually surprised sometimes… you know, I sit here and say, let’s look for a workshop, and now I’m using Google and I’m not using any kind of toy that some research community is playing with. But that said, you know, how do we get next-generation stuff in here, how do we keep that going? I’m very frustrated; I mean, you know, I helped create the funding in this field, and now that I’m back in academia I’m in the one country that doesn’t seem to be funding it very much. You know, the intelligence community is funding it, the DOD is funding it, I mean everybody’s doing small amounts of it, but mostly in what you might call the 6.2/6.3 space, so the basic research to keep this stuff going is sometimes a little frustrating at the moment. All that said, that is in some ways a good thing, you know: as the stuff gets out, as people start using it, it motivates the next generation of it. People start coming back and saying, you know, how do we get this stuff to the next level, how do we… Hey, now that we’ve got an ontology in our organization and we’ve got tools to maintain it and support it starting to happen, now we’d like to use it for data cleansing. You know, we’ve got these databases and we just realized we can use our OWL ontology for a purchase order to figure out which are good purchase orders, but somehow that doesn’t work unless somebody builds a tool that turns the OWL into a transaction rule, or something like that. So again, I think we’ll see that cycle happening.    (OLN)

That happened with the web, but in fact look at how many academics are doing research on advanced web tools at the moment… the answer is close to zero, if you take out the Semantic Web. We may well see that in America; our funding model just says let Microsoft take this over, let other companies take it over, and, you know, who needs the research. But that said, again, I think there are enough of us that know how to do things. I think there are enough hard problems in this for the future that, you know, when you send in your proposal for the fourth time on ontology mapping type stuff and how it can play, and you can say, look at the four different repositories being run by these different large government organizations and how hard it is to map things between them, you start to have more credibility. Then the use cases aren’t, hey, here’s some academic saying we know AI, trust us. So I think there are a lot of factors at play here. You know, I have my pessimistic days, usually days when I’m sitting here looking at my funding, and I have my optimistic days, which are usually days when I’m sitting there thinking about it in the larger picture and looking at things like the Oracle announcement; but I sort of wish I was in Europe right now… uNESsential just got another nine million dollar contract.    (OLO)
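The purchase-order idea mentioned above, using an OWL ontology to decide which purchase orders are good, can likewise be sketched very simply once the data is in RDF. The fragment below is a minimal, hypothetical illustration using the Python rdflib library; the ex: namespace, the ex:PurchaseOrder class, the ex:approvedBy property, and the sample data are all invented for this sketch, and the SPARQL check is only a crude stand-in for the kind of transaction rule a real tool would derive from an ontology.

    # Minimal sketch (illustration only): flagging "bad" purchase orders by
    # checking them against a tiny RDF vocabulary. All names here
    # (ex:PurchaseOrder, ex:approvedBy, the sample data) are hypothetical.
    from rdflib import Graph

    data = """
    @prefix ex: <http://example.org/po#> .

    ex:po1 a ex:PurchaseOrder ; ex:approvedBy ex:alice .
    ex:po2 a ex:PurchaseOrder .
    """

    g = Graph()
    g.parse(data=data, format="turtle")

    # Find purchase orders that are missing an approver, a crude stand-in for
    # a transaction rule derived from an ontology.
    query = """
    PREFIX ex: <http://example.org/po#>
    SELECT ?po
    WHERE {
        ?po a ex:PurchaseOrder .
        FILTER NOT EXISTS { ?po ex:approvedBy ?who }
    }
    """

    for (po,) in g.query(query):
        print("Missing approver:", po)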

PeterYim: Who else is in line for questions?    (OLP)

NicolasRouquette: I’d like to ask a question. I cannot help but kind of put the problem back to you, since you have perhaps the most authoritative standing on this and have been there from the beginning. To me, when somebody says we need proposals on ontology mapping, another on ontology services, another on ontology matching, another on, I don’t know, exchange or versioning, and whatnot, in the end it seems like there is a bigger picture missing. It’s kind of like saying we’re manipulating ontologies as if they were programs, but we haven’t even figured out exactly what the science of that is. What would it mean to have some kind of calculus, or mathematics, or a basis for saying what the different kinds of ways are in which we can manipulate ontologies, to explain what mapping means, what matching means, what versioning means, and so on? Not that that would necessarily be the only way to say it, but if we could pick out, like you mentioned, the fewest number of concepts, or the simplest way of describing how these operations might be expressed, it seems to me that might be the place where, if we could get it right, it would make it easier for somebody to say: do I need a proposal to build an ontology matching system, or is there enough knowledge now about exactly what the problem is that in fact we can jump right ahead and build a product to do it, kind of like Oracle building storage for OWL. Maybe I didn’t quite explain myself, but…    (OLQ)

JimHendler: No, Nick. I understand what you said, I’m not quite sure how to turn it into a question. I mean, other than saying, I agree with you.    (OLR)

NicolasRouquette: Maybe what I was trying to ask is… recently you were asking about what would be the kinds of things to think about for, say, OWL v2, and it seems to me the issue is that we haven’t really actually thought about it. If we wanted to collapse all the different things that people do with ontologies into say ten or twelve concepts, what would those be?    (OLS)

JimHendler: So one of the things I think is incumbent on the research community, and here I include academic research, industrial research, and the thought leaders of companies that need the research, is to recognize that the research agenda for the Semantic Web and for semantic technologies in general is getting buried under the initial success of the technology. You know, again, the web analogy: as browsers hit the commercial world a lot happened, but somehow computer science departments almost never teach web courses anymore, except as a service course in HTML. Where is the work to push forward the architecture of the web for the future? Where those things are, both in the semantic space and outside it, is a real issue. I’m involved in a number of activities personally that work on that, but you know, our funding in all the different countries is very stovepiped, so people who are involved in the medical community really are the ones who have to help the medical community understand the importance of this stuff to them. You know, somebody came to me from the physics community and said, how do we get physics to do what the life sciences did, and really start to put some stuff on the table about our needs, to get some attention? I said they didn’t do it by magic; they did it by starting to get stuff happening and commanding the attention of their funders, saying, you know, we need this stuff, help us make it happen. And by funder I don’t mean just government; I mean if you are in a company it’s figuring out where it can happen. I think one of the things that’s tricky is most people think that the Berners-Lee layer cake is equal to the Semantic Web roadmap. Right, it sort of says we do this, then we do this, then we do this, then we’re done. But of course that’s not true. I mean, at every one of those levels there’s: what do we do, how do we get it out, what do the tools look like, what are the next steps, how do we do this ontology, all the things you said in the ontology space. I think we need to get some groups to write white papers in that space, and get some people to understand that picking the low-hanging fruit is a critical first step but not sufficient in the long term.    (OLT)

I often start my talks on the Semantic Web with this, and maybe I’ll use it today to end one. I often say, you know, I give about half my talks to business groups trying to convince them this isn’t research, and half my talks to research groups trying to convince them that this isn’t just some business thing.    (OLU)

This really is an exciting area because it has stuff that’s ready today, stuff that can be appropriately applied at the level we know how to apply it right now; you know, we can build the data stores at the level that most people’s portals are at. But at the same time, for the future, there is a lot of stuff we don’t know how to do and have to work out how to do, and we need people working in those areas from all the different groups that are involved in this stuff. You know, the Scientific American paper was sort of a vision, sort of a five or ten year starting-place vision, but you know we’re five years into it now. You know, there are still people working on next visions, but they tend to be in sub-areas and things like that. I’m spending more of my time these days, in that part of my life, looking at policy and stuff like that, because again I think that’s a crucial part of getting this stuff into the future. But you know, people in the ontology community really now have a starting place to start saying okay: not, how do we take the agenda we’ve had for thirty years and keep pushing it, but how do we take that agenda, how do we look at what’s happening in the real world, and how do we find the points of contact so that we can focus the research on those. DARPA is desperate for people to help them understand. The reason the DAML program went away wasn’t because DARPA doesn’t like this stuff anymore; it’s because it was a five-year program, the program reached the end, it got two one-year extensions, and that’s as long as anything gets funded at DARPA unless a new program manager comes in and says here’s why it’s important. DARPA will fund this stuff in a minute as soon as somebody comes in there with a good case, so it’s really incumbent on a lot of us who want to see this technology flourish, who want to eventually see the day when common logic is really something that’s widely understood and used, to keep this thing moving. But we’ve got to make the case; we can’t just come in and say it’s all broken because you’re using a non-expressive language, or, you know, let’s redo the whole thing in F-logic, right? We really need to find a way to pull these things together and say here’s the vision, here’s how we keep moving toward that vision, and here are some of the next steps; somebody pick it up.    (OLV)

DavidWhitten: Isn’t part of the problem what you just expressed, where you said that the community is fragmenting off into very specialized things, but the funding is not able to fund the larger general stuff; they will only fund the very small specific things?    (OLW)

Attendee: I’m sorry, who was this again?    (OLX)

DavidWhitten: I’m sorry, this is Dave Whitten.    (OLY)

JimHendler: David, I don’t think that’s what I said. Um, it’s more that you have to make the cases for the different things, and different people fund the different cases. Business funds to make a profit. Business needs to go after low-hanging fruit, if it’s short-term, or after something where it perceives a larger market in the longer term, and we have obviously convinced a few businesses that this is the right stuff to be doing; but that’s how, you know, you get corporate interest moving. At the research level, okay, DARPA’s job is to find the crucial long-term problems of the US Department of Defense and solve them, period. NIH’s job is to look at the needs of healthcare and health sciences in America and figure out what research investment to make to solve them. If somebody comes to NIH who can talk credibly about the role of this stuff in medicine, like Marc does, it gets them interested. Now you can either talk about the million dollar case that you have, or the hundred million dollar case that needs one of those agencies to go after this in a large way. Obviously that latter one is a harder thing to do; it’s why someone goes to DARPA as a program manager, it’s why at NIH you spend time there as an IPA, but all those things require people in the community either to do that, which is my case, which is the rarer case, or, more typically, to help somebody at one of these places understand the importance of this stuff in their own use cases, against their needs, and really help make it happen. And I think we’re actually in pretty good shape; I mean, having said all that, almost every funding agency in America is funding some of this stuff, just not usually in a program office that’s called the Semantic Web Office. I actually think that’s a good thing. I actually counseled NSF against setting up a Semantic Web program, because I knew what would happen was they would put X dollars into it, and that would be the X for all Semantic Web programs. Now if you look at all the things where the Semantic Web is a part of it, I mean some big bio thing that has a small Semantic Web part, and you total up all those dollars, they’re not terrible, but it’s not a focused investment in the next generation; somebody has to help them understand what those next generations are and why they should be funding them.    (OLZ)

PeterYim: So I guess it’s a good time to have Professor Hendler wrap this up so that he can go to his next appointment, Jim…    (OM0)

JimHendler: Yep, so thank you all for the very thought-provoking questions. As I said, for people who have more specific questions, or who want to follow up on these, there are two different routes. I guess Peter will tell me how to get involved in the forum, and those of you who want to send me specific things, please just mention that you participated in this. I get a lot of email from people asking me questions about the Semantic Web, and I sort of prioritize them: those who put the time and effort into listening deserve the first answer.    (OM1)

PeterYim: Thank you very much, Jim. On the Key Session Page we’ve already got Jim’s email address, but obviously it would be great if you can post to the forum so that the rest of the community can gain from this shared knowledge. Jim, with your permission, we can add you to the list; you can set it for mail into your inbox or a digest if you want to.    (OM2)

JimHendler: No that’s fine, that’s fine.    (OM3)

PeterYim: … and    (OM4)

JimHendler: If you’ll send me the details after, I’ll make sure I’m where I can read that stuff.    (OM5)

PeterYim: … and I had a conversation with Jim earlier this morning that we would look forward to having him with us again for a coming panel discussion, where we would look at standardization all the way from the more traditional standards like ISO 11179, through data modeling, to the semantic modeling standards; we are planning a session like that in short order. So on that note, let me thank Professor Hendler on behalf of the community for spending time with us. This is the Ontolog Forum, Thursday, August 11, 2005, and we have had Professor Jim Hendler at the Ontolog Monthly Invited Speaker Session.    (OM6)

Thank you Jim, and thank you everyone for… bye bye    (OM7)

                   - End -    (OM8)