Hi John,
- will look at the slides, thanks
- what does NLP stand for (if not in the slides)?
- In highly structured documents and wording, such as a set of
specifications, don't you think identifying ambiguities people do not
notice is a different task from looking for and highlighting
ambiguities that require interpretation? In other words, if a computer
notices tiny ambiguities that everyone working with the information
has never had a problem with, or did not notice because the exchange
is serving their purpose as written and read - why spend time on
machine-made discrepancies?
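To make the question concrete, here is a minimal sketch (plain Python, no NLP library; the sentence and the bracketing scheme are illustrative assumptions, not any real tool's output) of the kind of tiny ambiguity a machine flags but a human reader resolves without noticing: a prepositional phrase that can attach to either the noun or the verb.

```python
# Toy enumeration of PP-attachment readings for a sentence of the
# form "VERB OBJ PP", e.g. "inspect the valve with the gauge".
# A human picks one reading instantly; a parser reports both.

def pp_attachments(verb, obj, pp):
    """Enumerate the structural readings of 'VERB OBJ PP'."""
    return [
        f"({verb} ({obj} {pp}))",   # PP modifies the object noun
        f"(({verb} {obj}) {pp})",   # PP modifies the verb phrase
    ]

for reading in pp_attachments("inspect", "the valve", "with the gauge"):
    print(reading)
```

Real sentences multiply such attachment choices combinatorially, which is why a checker can surface "discrepancies" no reader ever stumbled over.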
Is the purpose of auto-nitpicking (what is supposed to be) structured
text to pull out seemingly minor inconsistencies so they can be either
dismissed or decided? If so, how will the computer know in the future
that this particular ambiguity is allowed?
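One hypothetical answer to that last question is to record the human's decision: once a flagged ambiguity is dismissed, fingerprint it so the checker stays quiet about that exact ambiguity in that exact passage on later runs. All names and the fingerprint scheme below are assumptions for illustration, not a real tool's API.

```python
# Sketch of an "allowed ambiguity" registry: a stable fingerprint per
# (passage, pair-of-readings), plus a set of dismissed fingerprints.
import hashlib
import json

def fingerprint(passage, reading_a, reading_b):
    """Stable key for one specific ambiguity in one specific passage.
    Sorting the readings makes the key order-independent."""
    blob = json.dumps([passage, sorted([reading_a, reading_b])])
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

class AmbiguityLog:
    """Remembers which flagged ambiguities a human has already allowed."""
    def __init__(self):
        self.allowed = set()

    def dismiss(self, passage, a, b):
        self.allowed.add(fingerprint(passage, a, b))

    def should_flag(self, passage, a, b):
        return fingerprint(passage, a, b) not in self.allowed

log = AmbiguityLog()
passage = "inspect the valve with the gauge"
a, b = "PP modifies 'valve'", "PP modifies 'inspect'"
print(log.should_flag(passage, a, b))   # True: first run flags it
log.dismiss(passage, a, b)              # human decides: allowed as written
print(log.should_flag(passage, a, b))   # False: later runs stay quiet
```

The open problem, of course, is generalizing beyond the exact passage - deciding when a *similar* ambiguity elsewhere should inherit the dismissal.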
On 2/21/07, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
> I agree that many kinds of writing must be very precise:
> > Patent claims, architectural specifications, legal documents
> > are only three examples.
> Patent claims and legal documents are two areas where computer
> processing has been attempted for many years. The commercial
> value of good NL understanding systems for either of those two
> areas would be immense. Unfortunately, they remain important
> but unsolved problems for NLP.
> > Every word is necessary or it is removed. Ambiguities do not
> > stay around very long because sooner or later there is a conflict
> > and the need for a determination and explicit interpretation.
> > Interpretations are also documented and cite what they are based
> > on, creating even richer records over time.
> As I said in my previous note, computers are much better at detecting
> ambiguities than any human (even a trained linguist or patent lawyer).
> But computers don't have the background knowledge that could enable
> them to resolve the ambiguity as well as humans do.
> A level of precision that humans are capable of understanding is
> not adequate for computer understanding. Computers are far more
> demanding because they don't have the background knowledge necessary
> for "common sense". In most cases, humans resolve the ambiguities so
> quickly that they are not even aware that a different interpretation
> is possible.
> > In my opinion, carefully prepared language in documents that
> > already adhere to disciplined standards makes sense to "ontologize"
> > first - because order is already imposed by the information itself
> > being exchanged and documented - or written and read.
> I agree that such well-written documents are a good basis, but for
> the foreseeable future, human assistance is necessary to translate
> them to a machine-interpretable form. An important application
> of ontology is to give computers sufficient background knowledge
> so that they could do a better job at resolving the ambiguities
> that humans don't even notice.
> Bottom line: I believe that we can develop NLP tools that are much
> better than what we currently have, and good ontologies can help us
> design and build them.
> I also believe that we can develop much better tools for developing
> ontologies, but that is another topic. For more on these issues,
> see the slides of two talks I gave in 2006:
> Concept Mapping
> Extending semantic interoperability
> Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
> Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
> Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
> Shared Files: http://ontolog.cim3.net/file/
> Community Wiki: http://ontolog.cim3.net/wiki/
> To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx