
Re: [ontolog-forum] Topic maps and the "wheel" of "logical semantics": w

To: Pat Hayes <phayes@xxxxxxx>
Cc: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Patrick Durusau <patrick@xxxxxxxxxxx>
Date: Wed, 02 May 2007 10:39:15 -0400
Message-id: <4638A293.4040007@xxxxxxxxxxx>
Pat,    (01)

Pat Hayes wrote:    (02)

>> Pat,
>>
>> We may actually be making progress! See comments below.
>
>
> Indeed. I sense a convergence. (For those watching from the sidelines, 
> this is remarkable!)
>
Truly!    (03)

>>>>> Logic does not *require* reasoning; but in any case, deriving the 
>>>>> conclusion that <this>=<that> is a very simple kind of reasoning.
>>>>>
>>>> Oh, part of your logic-fixation again. Yes, you can characterize a 
>>>> mapping of <this>=<that> as simple logic if that makes you feel any 
>>>> better.
>>>
>>>
>>>
>>> It's not a case of feeling better, but of whether or not this is a 
>>> logical inference; and it is. Quite a lot of OWL applications are 
>>> used almost exclusively to come to conclusions of this form, using 
>>> inverse-functional properties. Quite a lot is known about techniques 
>>> of (reasonably) efficient equality reasoning, which I imagine would 
>>> be directly applicable to your area of concern. I think it's you who 
>>> has logic-phobia. :-)
>>>
>> Sorry, not guilty!
>>
>> The Topic Maps Reference Model does *not* specify or require any 
>> particular method for determining that two (or more) identifications 
>> are of the same subject. (If we failed in that regard I would like to 
>> know so we can fix it.)
>>
>> Let's take another specific example: the gene for the chemokine 
>> lymphotactin was discovered by three different groups, who promptly 
>> named it SCM1, ATAC and LTN. It is also known as LPTN and has an 
>> "official" name of XCL1.
>
>
> Quite. This sort of thing happens all the time. I am Patrick John 
> Hayes and Pat Hayes and P.J.Hayes and Hayes, P.J. and many others. It's 
> only even noted as an issue in technical areas where there might be an 
> expectation of a single 'official' name for a thing.
>
>> There are at least two ways that we could devise a mapping that would 
>> result in a "merging" of those different identifications.
>>
>> 1) A researcher reading the literature notices that ATAC identifies 
>> the same subject as XCL1 and enters that mapping in a topic map. (I 
>> am grossly oversimplifying; the string "ATAC" is obviously inadequate 
>> for identifying this gene. A Google search today returns some 
>> 4,900,000,000 "hits" and not all of them are about this gene.)
>>
>> 2) Assuming the existence of an ontology upon which to base 
>> inferencing (yes, I know gene ontologies exist) and assuming that all 
>> these terms have been mapped to that (note singular) ontology, then 
>> you could use an inferencing engine to produce the same result.
>
>
> Well, you don't need the singular. There can be several ontologies 
> involved: one of them might be nothing more than a term-to-term mapping.
>
>> As I said above, topic maps are not constrained to use only one or 
>> the other.
>
>
> Quite. I didn't intend to imply otherwise.
>
>> So, why would I ever want to use #1 and not #2?
>
>
> Why not allow the possibility of using both? But to do that requires 
> that your formalism is provided with a clear logical semantics. Which 
> is all that I have been urging on you: that your notation be provided 
> with a semantic theory, so it can clearly be related to the logic used 
> by inference engines. To give it such a semantics is not to 'logify' 
> it, or rule out its use by humans (in fact, despite the dark 
> reputation of logic, we found that giving RDF a precise formal 
> semantics actually helped many real live people, including developers, 
> not least by providing quick ways of resolving otherwise interminable 
> debates.)
>
Oh, sure, I never meant to make it an either/or choice.    (04)

Augmenting the activities of authors/users by whatever means possible is 
always a plus.    (05)
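To make route #1 concrete, here is a minimal Python sketch of what I have 
in mind: human-asserted equivalences (ATAC = XCL1 and so on, from the 
lymphotactin example above) are recorded as pairs and collapsed with a 
union-find structure, so a later lookup under any of the names reaches the 
same merged subject. The class and method names are my own invention for 
illustration; nothing here is prescribed by the Topic Maps Reference Model 
or by any particular engine.

# A sketch only: human-asserted identity of identifications, merged with
# union-find. The gene names are from the lymphotactin example; the class
# and method names are illustrative, not from any topic map implementation.

class SubjectIndex:
    def __init__(self):
        self._parent = {}                     # identifier -> representative

    def _find(self, name):
        self._parent.setdefault(name, name)
        while self._parent[name] != name:     # walk (with path halving) to the root
            self._parent[name] = self._parent[self._parent[name]]
            name = self._parent[name]
        return name

    def assert_same(self, a, b):
        """Record a judgment that identifiers a and b pick out one subject."""
        root_a, root_b = self._find(a), self._find(b)
        if root_a != root_b:
            self._parent[root_b] = root_a

    def co_identifiers(self, name):
        """Every identifier currently merged with `name`."""
        root = self._find(name)
        return {n for n in self._parent if self._find(n) == root}


index = SubjectIndex()
index.assert_same("SCM1", "XCL1")             # three groups, three names...
index.assert_same("ATAC", "XCL1")
index.assert_same("LTN", "LPTN")
index.assert_same("LPTN", "XCL1")
print(sorted(index.co_identifiers("ATAC")))
# ['ATAC', 'LPTN', 'LTN', 'SCM1', 'XCL1']

Route #2 would simply be another source of the same assert_same calls, 
which is part of why I never meant it as an either/or choice.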

>> Well, let's look at: "The success (or not) of HUGO nomenclature," 
>> Genome Biology 2006, 7:402 (http://genomebiology.com/2006/7/5/402), 
>> which is the source of the foregoing example. The authors found that 
>> there is no clear tendency for authors to adopt official symbols, and 
>> that 14% of genes are never referenced using the official symbols.
>>
>> That doesn't bode well for any system, such as an inferencing engine, 
>> that depends upon a particular nomenclature mapping for success.
>
>
> I'm not sure what you mean here. Logic uses names, of course: but 
> inference engines don't depend on particular nomenclature mappings. 
> The whole point of having equality in a logical language is to allow 
> for the possibility of there being multiple such mappings, and being 
> able to state relations between them. Rather simplistically, perhaps, 
> in classical FOL= ; but more recent logics such as the IKL we 
> developed for IKRIS have much more elaborate name-mapping facilities. 
> In fact, the entire purpose of IKL was to provide for interoperation 
> between systems which use different logics and different ontological 
> and naming conventions.
>
>> That is *not* a dig at logic or inferencing engines, just a 
>> recognition that the initial condition for success, a common ontology 
>> to which genes are mapped, is highly unlikely.
>
>
> That isn't an initial condition for success. There might be inference 
> paths through multiple ontologies (which may include things like 
> wordnet) which allow the conclusions to be drawn: and that is much 
> more plausible. For example (I know zilch about gene technology) if 
> there is some condition on a gene which is such that only one gene can 
> satisfy this condition, and one ontology entails that the gene 
> satisfying this condition is called 'ATAC' and another ontology does 
> the same thing, perhaps using a different inference path, but uses the 
> string 'XCL1', then the engine, and we, can infer that ATAC=XCL1 
> without being explicitly told it. In a much simpler setting, the FOAF 
> project uses this kind of inferencing to decide that the Pat Hayes on 
> one web page is the P.J.Hayes on another, based on the fact that they 
> have the same phone number or home page.
>
OK, perhaps I have been misled. It would not be the first time. ;-)    (06)

I attended a presentation on the use of ontologies at NASA and the 
speaker took great pains to point out that a single ontology (well, 
multiple ontologies but with mappings to a single master ontology) was a 
prerequisite to success. When I asked if it wasn't possible to have 
mappings between multiple ontologies that did not share a common basis, 
he said that was possible, but that it was a difficult problem.    (07)
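That said, the kind of inference you describe with FOAF is easy enough to 
sketch. Below is a toy Python version, purely for illustration: a property 
declared inverse-functional (no two distinct individuals share a value for 
it) licenses the conclusion that records agreeing on that value describe 
the same individual. The records, property names and values are made up by 
me; they merely stand in for the phone number / home page case you mention, 
and this is not FOAF, OWL or any real engine.

# Toy sketch of identity inference from inverse-functional properties.
# All records, property names and values below are invented for
# illustration only.

from collections import defaultdict

INVERSE_FUNCTIONAL = ("phone", "homepage")    # declared up front, not discovered

records = [
    {"name": "Pat Hayes",        "phone": "+1-850-555-0100"},
    {"name": "P.J. Hayes",       "phone": "+1-850-555-0100",
     "homepage": "http://example.org/phayes"},
    {"name": "Patrick Durusau",  "homepage": "http://example.org/durusau"},
]

def infer_coreference(records):
    """Group record indices that agree on any inverse-functional property."""
    by_value = defaultdict(set)               # (property, value) -> record indices
    for i, record in enumerate(records):
        for prop in INVERSE_FUNCTIONAL:
            if prop in record:
                by_value[(prop, record[prop])].add(i)
    # any group larger than one is an inferred identity, never stated explicitly
    return [sorted(group) for group in by_value.values() if len(group) > 1]

print(infer_coreference(records))             # [[0, 1]]: the two Hayes records merge

The output of something like this could of course be recorded in a topic 
map right alongside the human-asserted mappings.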

>> So, it isn't really logic-phobia but facing what I think you would 
>> call an "objective fact": people are going to identify the same 
>> subject differently, and that includes the ontologies that they will 
>> choose (or not) to use. If you have the information necessary to 
>> support an inferencing engine, by all means, use one in your topic map.
>>
>> Or to put it another way, if you can support an "intelligent agent," 
>> then do so. But don't overlook "intelligent users" while waiting for 
>> "intelligent agents" to arrive on the scene.
>
>
> See below for comment on this.
>
> <snip>
>
>>>> But more to the point, and what seems to be completely ignored in 
>>>> this discussion, is that most people don't use first-order semantic 
>>>> frameworks to make those mappings.
>>>
>>>
>>>
>>> Ah, this is the nub of the matter. I reject this claim. True, people 
>>> don't formalize their conclusions to themselves in a classical 
>>> textbook first-order notation; but are they in fact using 
>>> first-order valid inferences? I think in fact they often are. Bear 
>>> in mind that machine inference is not done by using a textbook-style 
>>> display of a linear proof sequence, A following from B by a logical 
>>> inference rule: it is done by heuristic techniques generating a 
>>> semantic tableau, or by a Davis-Putnam process, or the like. But it 
>>> is still first-order inference. And humans reason in ways that seem 
>>> to be almost directly first-order, including when deciding 
>>> identities. Bill was wearing a yellow jacket; very few people wear 
>>> yellow jackets; the only person I can see wearing a yellow jacket is 
>>> that guy over there; that guy is probably Bill. This is a 
>>> first-order logical inference. Or: Joan should be here by now; Joan 
>>> hasn't phoned; if Joan had known she was going to be late she would 
>>> have phoned; so, something unexpected must have delayed her. That is 
>>> a first-order logical inference. And so on, and on. I don't want to 
>>> claim that *all* human reasoning is first-order: but a surprisingly 
>>> large amount of it seems to be.
>>>
>> Well, I was using your "first-order semantic frameworks" in the sense 
>> of formal application, without reference to their thinking processes 
>> being first-order. It may well be that a large amount of it is 
>> first-order, as you say. But that elides the issue that, so far as I 
>> know (subject to your gentle correction), there is no generalized 
>> inferencing engine that comes close to those first-order processes 
>> when performed by a human agent.
>
>
> True. But so what? There are engines which can do useful reasoning 
> which humans can't do because their attention span is too limited 
> (which is why companies like Fair Isaac have spent millions of dollars 
> developing the world's fastest inference engines.)  The point is that 
> machines have talents which are useful to people, and what we ought to 
> be trying to do is find ways for them to help us.
>
No disagreement on finding ways to help users.    (08)

> <snip>
>
>>> Of course not. I agree this is an important issue; but topic maps 
>>> simply record this, as far as I can see. To record an equation is 
>>> easy: it can be done in almost any formalism.
>>>
>> Sorry, topic maps simply record .... what?
>
>
> That two names (used in their appropriate context) mean the same 
> thing. That is, an equation. (But I see now that this was a 
> misunderstanding.)
>
>>>> Moreover, with a topic map I can record my mapping between 
>>>> different identifications for the same subject, which would be a 
>>>> benefit to the next person who searches for that subject under any 
>>>> of its identifications.
>>>
>>>
>>>
>>> If I follow what you mean here, we also invented a notation for this 
>>> in IKL. It's basically the use of typed literals, where the 
>>> 'datatype' is the identification mapping. But I agree, having a very 
>>> general notation for this is useful.
>>>
>>> So let me see if I understand this, as you explain it. A topic map 
>>> is basically a complex name for a thing, one which records a variety 
>>> of 'superficial' names, each used in a different context of 
>>> identification to refer to the same thing. The TM records the fact 
>>> that these names are coreferents, and it records, and links to, the 
>>> various contexts of identification where the superficial names are 
>>> used.  So one might express it as a collection of <namestring, 
>>> identification-context> pairs. Is that right?
>>>
>> Yes, modulo that a topic map is a collection of such "complex names" 
>> for things, where the "complex name" (in the reference model we call 
>> them proxies) can also include any other properties that are 
>> associated with a thing.
>
>
> Oh but wait: that little extra provision is very important. It means 
> that (as I had previously thought) TMs are not *just* a way of 
> organizing names to indicate coreference. They are also a way of 
> making assertions about things. They are an assertional language. Once 
> you get into that territory, you are definitely in need of a semantic 
> theory, and you really ought to be taking advantage of all the 
> scholarship and analysis that has been done by logicians (and 
> philosophers of language). To try to do this from scratch, inventing 
> your own terminology as you go, seems to be just silly, when there is 
> a mature science available, with a fully worked-out mathematical 
> basis, a precise technical vocabulary and a mature deployed 
> technology. It wouldn't diminish or deprecate Topic Maps to admit that 
> they are a way of packaging a part of modern logic for a particular 
> purpose.
>
Oh, quite so, yes, reinventing the wheel is silly. But it is also quite 
popular. ;-)    (09)

I have traced the roots of topic map "like" activities back to the 17th 
century, and mechanized versions of what I would call topic maps to the 
late 19th century.    (010)

Any pointers you would care to share (on list, as our discussion seems to 
have attracted some attention) would be greatly appreciated.    (011)

Bertrand Russell, who gets his share of knocks on this list, reportedly 
told his geometry tutor, when asked why he had changed his position on 
some proposition: "I care more that my ideas are correct than that they 
are my own." (Of course that was self-reported in his autobiography, but 
it is a nice sentiment.)    (012)

Anything that I can read or use to make topic maps a more effective 
"packaging of a part of modern logic for a particular purpose" is fine 
by me.    (013)
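In that spirit, and only as a strawman of my own, here is roughly how I 
picture your <namestring, identification-context> characterization, with 
the extra provision that a proxy can carry other properties as well. The 
field names, the context labels and the merge rule below are mine, invented 
for illustration; they are not the Reference Model's.

# Strawman only: a proxy as a set of (namestring, identification-context)
# pairs plus arbitrary other properties, with a naive merge rule. Field
# names, context labels and merge behavior are illustrative, not ISO 13250's.

from dataclasses import dataclass, field

@dataclass
class Proxy:
    names: set = field(default_factory=set)         # {(namestring, context)}
    properties: dict = field(default_factory=dict)  # anything else said about the subject

    def merge(self, other):
        """Combine two proxies judged (by human or engine) to identify one subject."""
        merged = dict(self.properties)
        for key, value in other.properties.items():
            if key in merged and merged[key] != value:
                merged[key] = (merged[key], value)   # keep both rather than overwrite
            else:
                merged[key] = value
        return Proxy(names=self.names | other.names, properties=merged)


p1 = Proxy(names={("ATAC", "original discovery paper")},
           properties={"type": "gene"})
p2 = Proxy(names={("XCL1", "HUGO official symbol")},
           properties={"type": "gene", "official": True})
merged = p1.merge(p2)
print(merged.names)        # both (namestring, context) pairs survive the merge
print(merged.properties)   # {'type': 'gene', 'official': True}

Where the semantic theory you are urging would bite, I take it, is in 
saying precisely what such a merge is licensed to conclude.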

> <snip>
>
>>>> *(The SW proposal that every subject have a single unique identifier
>>>
>>>
>>>
>>> Whoa, that isn't the SW proposal, and it's not the TAG Web 
>>> architecture position either. Of course there isn't a single unique 
>>> identifier, in general: if there were, owl:sameAs would be vacuous. 
>>> I agree, the idea of a single unique 'true name' is ridiculous. I call 
>>> it the Earthsea theory of reference, after the idea in the Ursula 
>>> Le Guin novels.
>>>
>> Ok, so we agree that a "true name" is ridiculous. Great!
>>
>> I really think our difference is one of emphasis, if that. You want to 
>> say that users who create mappings between different identifications 
>> of the same subject are using "first-order" processes.
>
>
> I wouldn't even want to insist on that. People can of course use any 
> processes at all to come to a conclusion. What interests me more is 
> that when that conclusion, however it was arrived at, is *recorded* in 
> a way that is intended to be used by machinery - if only in a very 
> simple and straightforward way, such as substituting one name for 
> another - that the notation or encoding method that is used to record 
> it be provided with a precise, mathematically described, 
> 'logic-style' semantic theory. Not in order to impose a tyrannical 
> mainstream logical cultural hegemony, and not to subtly trick users 
> into using alien notations, and not to require that all users have a 
> graduate degree in logic. Rather, the point of having such a semantics 
> for the formalism is purely pragmatic: it provides the only secure, 
> non-procedural basis for interoperability between all the different 
> formalisms. No one formal notation is going to be the single final 
> form that everyone uses. But as long as all the formalisms have a 
> common semantic base, there is at least the possibility of making 
> translators between them.     (014)

Here we may differ. When you say the "formalisms have a common semantic 
base," do you mean that they are grounded in logic? I assume so.    (015)

I don't disagree that sharing a "common semantic base" enables the 
mapping you describe.    (016)

But, as you seem to imply, there is an exclusionary aspect: formalisms 
that don't share that "common semantic base" would be excluded. Yes?    (017)

I am not arguing with your pragmatic basis for making that choice; I just 
want to be clear about what requirements are being imposed on formalisms.    (018)

But isn't that simply moving the notion of a cultural "hegemony" just 
one step further? In other words, any notation that shares the "common 
semantic base" can represent (if sufficiently expressive) any culture 
without any bias, etc. (An assumption I am granting for purposes of this 
argument.)    (019)

But, notations that don't share that "common semantic base" are 
necessarily out of bounds?    (020)

At the risk of being repetitive: I don't disagree with the reason for 
making that choice, but at some level the choice of a "common semantic 
base" excludes some notations and not others. Yes?    (021)

> That is what I have devoted the last several years to achieving: 
> giving a variety of formalisms a common semantic base. So far we have 
> done it for Common Logic, IKL, RDF, RDFS and OWL. (It looks as though 
> OWL 1.1 will deviate from this in some subtle ways that may not be 
> very important; more seriously, the long-awaited Rule Language (RIF) 
> may be even less semantically aligned with the RDF/CL vision. Oh well, 
> I did my best.) But if one steadfastly refuses to even give a 
> formalism a semantics at all, it is simply marks on a surface, or a 
> bucket of bits. I'm not meaning to imply that TMs mean nothing: they 
> clearly do mean something. But until that something is described in 
> ways that we can analyze with enough mathematical precision to be the 
> foundation of writing correct code, interoperation with TMs must 
> always be a matter of guesswork. Which is a poor basis on which to try 
> to build a planet-wide system of communication.
>
Is interoperation a matter of correct code? Or is it understanding the 
semantics of what is to be communicated?    (022)

I think I have a better understanding of why you have placed such 
emphasis on a "common semantic base." And it would have (I think) the 
advantages that you ascribe to it, but at the cost (unknown) of 
excluding notations that don't share that "common semantic base."    (023)

I think where your issue is really going to come to the fore is when we 
attempt to build larger topic maps that aren't really susceptible to 
human inspection. That is, it is one thing for me to write a smallish 
topic map on, say, overlapping markup (one of my passions in markup), 
versus a topic map, largely auto-generated, for The Art of Computer 
Programming and all the ensuing research.    (024)

>> If you want to describe their activity that way, I have no objection 
>> as it is your description. Where I would object is telling users that 
>> they have to use explicit "first-order" processes to make those 
>> mappings.
>
>
> I'm not wanting to tell users they *have to* use anything. What I do 
> ask, however, is that when they go home and leave some stuff in the 
> machine, I know how to interpret it without having to call them 
> up and interrupt their dinner.
>
> Pat
>
>> (Noting that I certainly agree you can use an inferencing engine to 
>> make the same mappings, well, assuming you can find one as robust as 
>> a human user. Your mileage may vary.)
>
>
> PS. One final remark. I get the sense - and if this is wrong then I 
> apologize in advance - that you often see these debates in a kind of 
> human vs. machine way, with TMs being on the side of the humans, and 
> Logic, in all its mechanical clunkiness, as being somehow 
> representative of the machine world. This seems to come through in 
> your disparaging comments about the state of AI, your tailpiece 
> slogan, and your contrast between "intelligent agents" (a term which I 
> abhor, by the way) and "intelligent users".
>
Well, the "tailpiece slogan" was written when I worked for the Society 
of Biblical Literature, some of whose members dislike standards like 
Unicode because it "restricts" their choices. They are generally 
suspicious of technology (I, on the other hand, am not), so it was more 
of a marketing slogan than anything else.    (025)

I am old enough to have seen many promises from the AI community that 
were, shall we say, overly optimistic. ;-)    (026)

> If this is even slightly close to a reasonable analysis, let me urge 
> you to not think of AI technology this way. Most practicing AI 
> technologists don't. For several years now, my colleague Ken Ford and 
> myself have been arguing a completely different agenda for AI, in 
> which the acronym could be re-understood as 'Amplified Intelligence'. 
> Ken calls this 'human-centered computing': the idea is to create machine 
> systems which can act as "cognitive prostheses" or amplifiers of human 
> abilities, so that the entire system of (person + AI machine) is 
> capable of more than either can achieve alone. I can go on about this 
> idea at length: too much length for this message. But the point I want 
> to get across is that it is helpful to think of AI methods, including 
> mechanical inference, as aids to people rather than competitors to 
> human dominance. Forget that damn silly Turing Test 
> (http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL%201/pdf/125.pdf), and stop 
> worrying about the inhumanity of the machines. Backhoes and eyeglasses 
> aren't human either, but they are very useful muscle- and 
> vision-enhancers. What we need now are mind-enhancers :-)
>
Now that is an agenda to which I am very sympathetic! I do wear 
eyeglasses but have not been fitted for an ear trumpet, yet. I keep 
telling my wife that being rude and not listening isn't the same thing 
as being hard of hearing. ;-)    (027)

Actually, I had a graduate student at Georgia Tech describe his 
department as being "human-centered computing" to distinguish it from 
AI, which had suffered from bad PR. It turns out he meant in the early 
1990s; I was thinking more of the late 1960s. ;-) But yes, understood as 
you explain it, "human-centered computing" has much to offer and I 
really should do serious reading on it.    (028)

I will run a bibliography search on your publications (using, gasp, 
Google) and add them to my ever-growing reading list.    (029)

I think this has been a very productive exchange!    (030)

Hope you are having a great day!    (031)

Patrick    (032)

> Pat
>
>> Hope you are having a great day!
>>
>> Patrick
>>
>> -- 
>> Patrick Durusau
>> Patrick@xxxxxxxxxxx
>> Chair, V1 - Text Processing: Office and Publishing Systems Interface
>> Co-Editor, ISO 13250, Topic Maps -- Reference Model
>> Member, Text Encoding Initiative Board of Directors, 2003-2005
>>
>> Topic Maps: Human, not artificial, intelligence at work!
>
>
>    (033)

-- 
Patrick Durusau
Patrick@xxxxxxxxxxx
Chair, V1 - Text Processing: Office and Publishing Systems Interface
Co-Editor, ISO 13250, Topic Maps -- Reference Model
Member, Text Encoding Initiative Board of Directors, 2003-2005    (034)

Topic Maps: Human, not artificial, intelligence at work!     (035)


