John and Sergei,
I share that belief:
[JS] > > There is an enormous amount of information to be decoded from
> > any NL text, and no single method, by itself, is sufficient.
And
[SN] >
> I think that we need to use any source of
> heuristics that we can get for the tasks of
> NL analysis and generation.
What I have apparently failed miserably to communicate is that the pieces
of code that I will now refer to as 'Lexical Experts' can include and access
any of those tactics that could be useful, and can do so in such a highly
modular way that from one LE (for one word, phrase, sentence, generic
concept, or syntactic construction) to another there may be little in
common in the methods they use and no person in common among the
individuals who contribute to their construction. What they do need, as I
imagine them, is (1) to be able to take information from and pass
information to other modules in a common meaning-representation paradigm,
using some foundation ontology that includes the 'Conceptual Defining
Vocabulary'; and (2) to each be focused on the nuances of meaning and
peculiarities of usage of small fragments of the language, mostly individual
words or stock phrases. As a start I would focus on the verbs, but other
linguistic elements such as adjectives are also likely to need special
routines to produce a proper interpretation of a text.
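
For concreteness, here is a minimal sketch of that interface in Python.
Every name in it (MeaningGraph, LexicalExpert, RunExpert, and the concept
labels) is a made-up stand-in for whatever shared ontology vocabulary the
modules would actually agree on; it shows only the shape of the idea, not
a real implementation:

    from dataclasses import dataclass, field
    from typing import Protocol

    @dataclass
    class MeaningGraph:
        """Shared meaning representation: concept nodes and relations,
        all drawn from one foundation-ontology vocabulary."""
        nodes: dict[str, str] = field(default_factory=dict)   # position -> concept
        relations: list[tuple[str, str, str]] = field(default_factory=list)

    class LexicalExpert(Protocol):
        """One expert per word or phrase; internally it may use any
        method at all (rules, statistics, analogy, theorem proving...)."""
        trigger: str
        def interpret(self, tokens: list[str], i: int,
                      graph: MeaningGraph) -> None: ...

    class RunExpert:
        """Expert for the verb 'run': chooses a sense from local context.
        The concept labels are placeholders for real ontology terms."""
        trigger = "run"

        def interpret(self, tokens, i, graph):
            nxt = tokens[i + 1] if i + 1 < len(tokens) else ""
            if nxt in {"a", "the"}:                 # 'run a bakery'
                graph.nodes[f"{i}:run"] = "OperateOrganization"
            else:                                   # 'run fast'
                graph.nodes[f"{i}:run"] = "SelfPropelledFastMotion"

    def analyze(tokens: list[str],
                experts: dict[str, LexicalExpert]) -> MeaningGraph:
        """Dispatch each token to its expert, if one exists; experts
        communicate only through the shared MeaningGraph."""
        graph = MeaningGraph()
        for i, tok in enumerate(tokens):
            expert = experts.get(tok.lower())
            if expert is not None:
                expert.interpret(tokens, i, graph)
        return graph

    if __name__ == "__main__":
        experts: dict[str, LexicalExpert] = {"run": RunExpert()}
        print(analyze("They run a bakery".split(), experts).nodes)

The point of the sketch is the decoupling: RunExpert could be replaced by a
statistical model or a theorem prover tomorrow, written by a different
group, and nothing else would change so long as it still reads and writes
the shared representation.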
I don't think this is inconsistent with the notion of the Flexible Modular
Framework in John's paper. Perhaps the 'Glue Language' John refers to in
his architecture paper will need to be more complex than the ontology and
its associated logic. But I think that the language will have to include
the basic concept meanings of an ontology that covers at least the word
inventory of the Longman defining vocabulary. Any NLU architecture along
these lines that contains such an ontology and is adopted for research
purposes by more than a few research groups is one that I will very much
want to experiment with.
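
As a toy illustration of that constraint (again, every name here is
hypothetical), a glue-language message between modules could be checked
against the conceptual defining vocabulary before it is accepted:

    # Hypothetical sketch: a module-to-module message in the glue
    # language is accepted only if every concept it uses resolves to
    # the foundation ontology, whose inventory covers at least the
    # concepts behind the Longman defining vocabulary (on the order
    # of 2,000 words).

    DEFINING_CONCEPTS = {"Person", "Operate", "Bakery", "Move", "Fast"}
    # stand-in for the full conceptual defining vocabulary

    def well_formed(message: dict) -> bool:
        """A message is well formed when all of its content terms are
        drawn from the shared conceptual defining vocabulary."""
        return all(c in DEFINING_CONCEPTS
                   for c in message.get("concepts", []))

    msg = {
        "sender": "RunExpert",
        "concepts": ["Person", "Operate", "Bakery"],
        "assertion": "(Operate agent: Person object: Bakery)",
    }
    assert well_formed(msg)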
Pat
Patrick Cassidy
MICRA, Inc.
908-561-3416
cell: 908-565-4053
cassidy@xxxxxxxxx
> -----Original Message-----
> From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of Sergei Nirenburg
> Sent: Tuesday, April 01, 2008 6:39 PM
> To: [ontolog-forum]
> Subject: Re: [ontolog-forum] What is "understanding"
>
> On Apr 1, 2008, at 5:40 PM, John F. Sowa wrote:
>
> > Pat,
> >
> > There is an enormous amount of information to be decoded from
> > any NL text, and no single method, by itself, is sufficient.
> >
> >> But after working through that wonderful resource [Roget's],
> >> I concluded that deeper text understanding could not be supported
> >> by anything less than a logic-based ontology...
> >
> > There is no single module that can do everything, not even a
> > theorem prover combined with a deeply axiomatized ontology.
>
> Let me second John's opinions.
>
> I think that we need to use any source of
> heuristics that we can get for the tasks of
> NL analysis and generation.
>
> We have been trying this for a long time. The
> latest direction has been reference (not simply
> co-reference) resolution work.
>
> > Please read Minsky's _Society of Mind_
>
> and The Emotion Machine
>
> > and my paper about the
> > Flexible Modular Framework:
> >
> > http://www.jfsowa.com/pubs/arch.htm
> > Architectures for Intelligent Systems
> >
> > These approaches can accommodate and integrate a multiplicity
> > of methods, including logic, analogy, case-base reasoning,
> > statistics, spreading activations, and even WEP.
>
> I completely agree with John about omnivorousness of processing
> methods.
>
> But don't let's forget knowledge acquisition (ontologies,
> fact repositories, lexicons, etc.) and empirical derivation of the best
> ways of combining evidence from heuristics obtained
> from diverse sources (morphology, syntax, semantics
> of text, discourse situation, prior knowledge and
> experience).
>
> This work is still the core of the problem. It's expensive
> but seems the only way for those of us who are interested in
> fruit that's not hanging low but rather that holds a
> promise of a solution to our problem.
>
> Incidentally, I am not saying that all this acquisition should
> be done completely manually. Automating acquisition of
> knowledge of the kind we are talking about (that is, not only
> knowledge about co-occurrences of textual strings, however
> sophisticated) is a very interesting problem. A number of
> people are working on this issue. If I may be forgiven for
> plugging something I am helping to organize, here
> is one relevant workshop announcement:
> http://www.coral-lab.org/ICSC08-workshop.htm
>
> As to WEP, my bad: I haven't read the postings closely and
> actually thought that Pat's references to word experts referred
> to the old Small and Rieger work...
>
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx