John,
Yes, I agree that spreading activation can perform useful functions.
That's why, twenty years ago, I had the old 1911 (public domain) Roget's
thesaurus typed up into electronic form and put it on the internet, to serve
as one potential resource for NLP. I also spent some time reorganizing that
old Roget to make explicit the semantic relations that were only implicit in
the original organization. The resulting Factotum semantic network can be
found at:
http://www.micra.com/factotum/hithes11.doc (01)
. . . and a conference paper discussing that exercise is at:
http://www.micra.com/factotum/cicpapr-mod.doc (02)
But after working through that wonderful resource, I concluded that deeper
text understanding could not be supported by anything less than a
logic-based ontology, and I have spent much of my time since then trying to
understand just what kind of ontology structure would support the level of
language understanding that I would like to see the machines capable of.
Using the - let us call it "Lexical Expert (LE)" - approach, it will be
possible to include within each LE methods such as spreading activation,
among others, all coordinated through a common logic-based foundation
ontology that serves as the standard of meaning within which accurate
representations of concepts can be communicated. But to achieve this
coordination (and I agree that it is similar in spirit to Minsky's 'Society
of Mind') we do need a common standard of meaning. I am open to debate about
exactly how large that standard must be, but thus far I see little reason to
suppose that it can be less than a few thousand concepts. That is a question
to be resolved experimentally, by trying the tactic and seeing how it works.
Quite a bit of work, though. (03)
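
To make the spreading-activation component concrete, here is a rough sketch
in Python of activation spreading over a toy semantic network to select a
word sense. The network, the node names, and the decay value are invented
for illustration; they are not taken from Factotum, WordNet, or Quillian's
program, and any real LE would of course use a far richer graph.

from collections import defaultdict

# Toy semantic network given as undirected links between concept nodes.
# All nodes and links are invented for this example.
EDGES = [
    ("bank#finance", "money"), ("bank#finance", "loan"),
    ("bank#finance", "account"), ("money", "loan"),
    ("bank#river", "water"), ("bank#river", "shore"),
    ("water", "shore"), ("shore", "erosion"),
]

ADJ = defaultdict(list)
for a, b in EDGES:
    ADJ[a].append(b)
    ADJ[b].append(a)

def spread(seeds, decay=0.5, steps=3):
    """Propagate activation outward from the seed nodes, attenuating
    it by 'decay' at each hop; return the total activation per node."""
    activation = defaultdict(float)
    frontier = {node: 1.0 for node in seeds}
    for _ in range(steps):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            activation[node] += energy
            for neighbor in ADJ[node]:
                next_frontier[neighbor] += energy * decay
        frontier = dict(next_frontier)
    return activation

# Choose the sense of "bank" that gathers more activation from the context.
scores = spread(["water", "erosion"])
senses = ["bank#finance", "bank#river"]
print(max(senses, key=lambda s: scores.get(s, 0.0)))   # prints: bank#river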
Does the foundation ontology have to be highly axiomatized to serve this
function? Perhaps not. The concept specifications do not have to be
necessary and sufficient. But it's hard to imagine such a function being
served by a set of ontologies that are not known to be logically consistent
(modulo lexical and syntactic translation) - or at least not provably
inconsistent. If some group of NLU research teams ever decides to
coordinate by using a common foundation ontology (more expressive than
WordNet), that question might be resolved by a careful and systematic
study. I hope to live long enough to see that. (04)
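
As a toy illustration of the kind of consistency check I have in mind, the
sketch below translates two invented ontology fragments into a shared
vocabulary and flags any class that one fragment asserts to be a subclass
of, and another declares disjoint with, the same class - a clash that would
make the class unsatisfiable. The term names and mappings are made up for
the example; a serious check would hand the merged axioms to a reasoner
rather than hunt for one pattern.

# Two invented ontology fragments, each with a lexical mapping into a
# common vocabulary plus a few simple class axioms.
FRAGMENT_A = {
    "map": {"Auto": "Automobile", "Fahrzeug": "MotorVehicle"},
    "subclass": [("Auto", "Fahrzeug")],
    "disjoint": [],
}
FRAGMENT_B = {
    "map": {"Car": "Automobile", "PoweredVehicle": "MotorVehicle"},
    "subclass": [],
    "disjoint": [("Car", "PoweredVehicle")],
}

def translate(fragment):
    """Rewrite a fragment's axioms into the common vocabulary."""
    m = fragment["map"]
    return (
        {(m[a], m[b]) for a, b in fragment["subclass"]},
        {frozenset((m[a], m[b])) for a, b in fragment["disjoint"]},
    )

def find_clashes(*fragments):
    """Return subclass assertions that some fragment also declares disjoint."""
    subclass, disjoint = set(), set()
    for sub, dis in map(translate, fragments):
        subclass |= sub
        disjoint |= dis
    return [pair for pair in subclass if frozenset(pair) in disjoint]

print(find_clashes(FRAGMENT_A, FRAGMENT_B))
# prints: [('Automobile', 'MotorVehicle')]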
Pat (05)
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx (06)
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of John F. Sowa
> Sent: Tuesday, April 01, 2008 2:39 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] What is "understanding"
>
> Pat,
>
> There are many useful methods that should be considered, and age
> is not necessarily an indication of quality. Some of the oldest
> methods, when suitably refurbished, can be surprisingly good.
>
> PC> I said I would like to 'revisit' WEP with an ontology as
> > a component. I did not expect people to think of this as being
> > close in method to the original.
>
> For example, the following article from IJCAI 2007 uses the
> method of spreading activation that Quillian implemented for
> his PhD dissertation 41 years earlier (1966):
>
> http://dli.iiit.ac.in/ijcai/IJCAI-2007/PDF/IJCAI07-279.pdf
> Word Sense Disambiguation (WSD) with Spreading Activation
> Networks Generated from Thesauri
>
> Quillian applied spreading activation to WSD by tracing paths
> through a network of definitions in the Merriam-Webster 7th
> Collegiate dictionary. The IJCAI 2007 authors applied a
> variation of that method to WordNet. In the paper, they say
>
> We show experimentally that our method achieves the best
> reported accuracy taking into account all parts of speech
> on a standard benchmark WSD data set, Senseval 2.
>
> At VivoMind, we're using conceptual graphs as our basic
> knowledge representation. Although our methods are not the
> same as the above, they are closer in spirit to Quillian's
> than to Word Expert Parsing. And the amount of human effort
> to implement them is a tiny fraction of anything that would
> be required for WEP. There is absolutely no reason to ask
> human volunteers to encode word experts.
>
> John
>
>
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (08)