
Re: [ontolog-forum] Semantic Systems

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "John F. Sowa" <sowa@xxxxxxxxxxx>
Date: Sun, 28 Jun 2009 18:21:04 -0400
Message-id: <4A47ECD0.5020700@xxxxxxxxxxx>
Rich,    (01)

RCG> In http://www.jfsowa.com/pubs/cg4cs.pdf
RCG> you state that:
JFS> For empirical subjects, however, conjunction and the existential
JFS> quantifier are the only operators that can be observed directly,
JFS> and the others must be inferred from indirect evidence...
RCG> Let's choose a name for this algorithm you've postulated.    (02)

First of all, it's not an algorithm.  It's an observation.  And I'm
not the first person who said it.  Many philosophers and logicians
over the past few millennia have made the same observations, but
they may have used slightly different terminology.  Anyone can
verify that claim just by checking the options:    (03)

  1. If you detect something by any of your senses -- seeing,
     hearing, touch, taste, or smell -- you have evidence that
     something exists that created that stimulation.  You might
     make a mistake in identifying it, but you know that you have
     detected some X for which you can assert "There exists an X."    (04)

  2. If you detect something else, you can safely say, "There exists
     an X and there exists a Y."  Different observers may disagree
     about how to group all the sensations into objects or events
     or situations or features or properties.  And they may disagree
     about how many of them are separate or grouped in a single
     individual.  But they can all agree that there are many cases
     when they see or hear "This and that and something else..."    (05)

Those two points imply that any language or logic that is suitable
for recording observation sentences must have at least words for
explicitly or implicitly saying "something exists" and for adding
more observations by using a word such as 'and'.    (06)

I call the subset of logic that has just those two operators
'existential-conjunctive logic'.  Other people in AI have used
the term 'vivid logic' for exactly the same subset, and my only
objection to that term is that it's not self-explanatory.  But
'vivid logic' is useful because it's five syllables shorter.    (07)
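To make the idea concrete, here is a minimal sketch (my illustration,
not anything from the paper): a ground existential-conjunctive sentence
can be represented as a plain set of atoms, conjunction becomes set
union, and entailment reduces to a subset test.  The predicate names
are invented for the example.

```python
# Sketch: an existential-conjunctive (EC) sentence as a set of atoms.
# Each atom asserts "there exists something satisfying this predicate";
# conjoining two EC sentences is just set union.  Names are illustrative.

def conjoin(s1, s2):
    """'s1 and s2': the union of the observed atoms."""
    return s1 | s2

def entails(s1, s2):
    """For ground atoms, s1 entails s2 iff every atom of s2 is in s1.
    (This ignores co-reference between variables; full EC entailment
    is a graph-homomorphism test, i.e. conceptual-graph projection.)"""
    return s2 <= s1

obs1 = {("exists", "Hippo", "x")}   # "There exists a hippo x"
obs2 = {("exists", "River", "y")}   # "There exists a river y"
both = conjoin(obs1, obs2)          # "There exists a hippo and a river"

assert entails(both, obs1)      # a conjunction entails each conjunct
assert not entails(obs1, obs2)  # but never atoms that were not observed
```

Note that negation, disjunction, and universal quantification simply
have no representation in this data structure, which is the point of
calling it a distinct subset of logic.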

First-order logic and all natural languages include those two
operators, but they also have means for expressing more operators:
'not', 'or', 'every', 'if-then', and combinations such as 'nor'
or 'if and only if'.    (08)

The next observation is that none of those other operators can be
observed directly.  They can only be inferred indirectly:    (09)

  3. Nobody can observe a negation.  If you don't see a hippopotamus,
     you can't be sure that one isn't hiding in the bushes.  But
     given the size of typical hippos, you might infer that the
     space you're in is too small to contain one.  Similarly for
     every other kind of negation:  it's always inferred, never
     observed.    (010)

  4. A conjunction ('and') can be observed directly.  But you can
     never see a disjunction ('or').  You might see 'red' or you
     might see 'blue'.  You might see something that's 'red, white,
     and blue'.  But you can never see 'red or blue' directly.    (011)

  5. An implication 'if X, then Y' can never be observed. (Hume
     made a big deal about this point.)  You might claim that
     whenever you saw X, you also saw Y.  But you can never be
     certain that the next X you see will have a Y with it.    (012)

  6. You can see 'some hippopotamus', but you can never observe
     'every hippopotamus' or every instance of any other type.
     On the basis of some other assumption, you might infer that
     you saw every instance of something, but you can never make
     such a claim by direct observation.    (013)

So don't call it Sowa's algorithm or claim or assumption.
If you don't believe me, just try to imagine any possible case
in which you could directly observe any operator of FOL other
than existence and conjunction.    (014)

RCG> Here is a sample parse from the LGP...    (015)

What you showed is a typical parse tree, which is derived by a
phrase structure grammar.  That is a very common kind of grammar
for both NLs and formal languages.  However, there is another
type of grammar, which is even older:  dependency grammar, which
Lucien Tesnière developed in the 1930s and which is widely used
for languages other than English.    (016)

Chomsky's phrase structure grammars are useful for English, which
has a rather fixed word order.  But for languages that have more
flexible word order, such as German, Russian, Sanskrit, and
the languages of modern India, dependency grammars are much more
convenient.  A link grammar is another variation, which generates
trees that are equivalent to the trees generated by dependency
grammars.    (017)

You can generate conceptual graphs with a phrase-structure grammar,
but the trees generated by link grammars and dependency grammars
are almost in a one-to-one correspondence with the nodes and links
of a conceptual graph.  Those are the grammars we use at VivoMind.    (018)

John    (019)

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (020)
