
Re: [ontolog-forum] Can Syntax become Semantic ?

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Rob Freeman <lists@xxxxxxxxxxxxxxxxxxx>
Date: Sun, 24 Jan 2010 16:46:08 +1300
Message-id: <7616afbc1001231946j34776e1bp4a28ab15b6c2b09a@xxxxxxxxxxxxxx>
On Sun, Jan 24, 2010 at 11:01 AM, Christopher Menzel <cmenzel@xxxxxxxx> wrote:
> On Jan 22, 2010, at 7:43 PM, Rob Freeman wrote:
>> So what is important is this idea that there are "predictive limitations
>> of theories"...
>
> A fact as well known (in its own manifestations) in physics as it is in
> computer science. A fact obvious to anyone who has taken a basic physics
> lab, let alone studied a bit of nonlinear dynamics or quantum mechanics.
> I'm not sure why you seem to be suggesting you're on to something
> particularly deep here. In computer science there are inherent limits to
> computation. In physics there are inherent limitations to observation,
> calculation, and measurement. These rather mundane facts immediately
> entail that our theories have predictive limitations.    (01)

Your affection for the mundane is too strong, Chris. Quantum mechanics
is not explained by "inherent limitations to observation, calculation,
and measurement"; the explanatory force runs entirely in the other
direction.    (02)

The observations are there, I agree. And they go way beyond not having
a big enough microscope. Over time, more and more people in a diverse
range of fields (physics, maths, computer science, weather
forecasting..., and indeed linguistics) have faced up to the fact that
their observations appear to be random to some extent.    (03)

But I don't think this has yet caused a general crisis in the
theoretical orthodoxies (other than in physics).    (04)

In computational linguistics, in particular, nobody yet questions the
role of grammar. They certainly don't imagine that productions in
natural language might be examples of "computationally irreducible"
processes: perfectly tractable, but simply admitting of no
(grammatical) "shortcuts".    (05)

John Sowa's advocacy of Wittgenstein's language games hints at this
idea of a process which does not admit of abstraction, but he doesn't
see a theoretical link.    (06)

Faced with randomness, theoreticians commonly conclude that their
theory is right and that the problem is elsewhere. So in computational
linguistics you get people saying things like: syntax is solved; the
problem is with semantics. Our theories of syntax don't solve the
problem, but that's because all the hard decisions are based on
something else called semantics. A case of "redefine and conquer". It
was this kind of idea I was reminded of when Ali said:    (07)

Ali, Jan 22: "While it's (usually) not hard for people to understand
the semantics in natural language, computers often lack the necessary
"background" to do so."    (08)

>> My comment to Ali was that the "predictive limitations of theories"
>> might explain why our "theories" of language (grammars) have failed to
>> usefully disambiguate natural language, and that we might be able to
>> do better by treating syntactic predictions purely as a process,
>> distinct from theories about that process.
>
> I have no idea what that means.    (09)

I know.    (010)

It is actually very interesting for me to see the issue through your
eyes, when you stop simply hrmphing and disputing words. For instance,
you characterized undecidability as "...about the limits of
computation." This reminds me of a quasi-religious interpretation
along the lines of Gödel himself or, more recently and famously, Roger
Penrose and his "Emperor's New Mind":
http://ase.tufts.edu/cogstud/papers/penrose.htm    (011)

Why anyone would characterize undecidability as a "limit of
computation", I don't know. I guess it means your faith in an
absolute, decidable world is so great that when computation fails to
deliver, it is computation which has failed, not the perfect,
decidable world. The world must be decidable, so if computation is
not, then computation must not be adequate to model the world...    (012)
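For readers who want the undecidability result itself rather than its interpretation, the classic halting-problem diagonalization can be sketched in a few lines (standard textbook material, not from the post): assume a total decider `halts(program, input)` exists, and construct the program that refutes it.

```python
# The halting-problem diagonal argument, as executable sketch. If a
# total function halts(program, input) existed, the program built below
# would halt exactly when halts says it loops, and loop exactly when
# halts says it halts -- a contradiction. The limit is intrinsic to
# computation; it says nothing about computation "failing" the world.

def paradox_against(halts):
    """Given a claimed halting decider, build its diagonal counterexample."""
    def paradox():
        if halts(paradox, None):
            while True:   # decider said "halts", so loop forever
                pass
        return            # decider said "loops", so halt immediately

    return paradox
```

Any candidate decider you pass in is refuted by its own counterexample: a decider that answers "loops" makes `paradox` halt at once, and one that answers "halts" makes it loop forever.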

Similarly John has clearly pigeon-holed Wolfram's computational
irreducibility as computationally "intractable". That is how he
ignores it. How do you get from "irreducible" to "intractable"? It is
as if conformity with a theory is the goal, and if a computation
cannot be reduced to a theory then it is "intractable", no good,
useless.    (013)

These are perspectives which trap us.    (014)

When you are stuck in such a perspective, of course, it is almost
impossible to jump out.    (015)

-Rob    (016)

Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx    (017)
