On 1/12/2011 12:24 PM, Christopher Menzel wrote:
> I think that part of our apparent disagreement here is due to the
> notorious word "model", which has very different but, unfortunately,
> very entrenched uses in different communities. On the one hand, in e.g.,
> the database community, a model is often itself some sort of syntactic
> object, like a Bachman ER model with its boxes, diamonds, and arrows.
> On the other hand, in mathematical logic, a model is a certain kind of
> mathematical object, an abstract characterization of the meaning of a
> set of sentences (or a diagram) in a given representation language.
There are so many different subthreads in this thread that I'd just
like to focus on this point, which may help to clarify some of the
others. Carl Adam Petri (of Petri-net fame) discussed this issue
many years ago, and I summarized his arguments (and others) in
my CS book. (Excerpts below)
Note the following quotation from Section 1.1: "According to
Abrial (1974), a database is a 'model of an evolving physical
world. The state of this model, at a given instant, represents
the knowledge it has acquired from this world.' "
Abrial's statement can be interpreted in a way that is true
of the common usage in *both* the logic community and the
database community. A relational or object-oriented DB is
a collection of ground-level facts, and that collection is
isomorphic to a Tarski-style model in logic. The statement is
also compatible with the way other engineers talk about their models.
However, I agree that it is important to distinguish the
axioms or constraints on a family of models from the
ground-level data that is isomorphic to a Tarski-style model.
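To make that distinction concrete, here is a rough sketch in Python.
The domain, relation names, and constraint are invented for illustration;
the point is only that the ground facts form a Tarski-style interpretation,
while an axiom is a condition that a whole family of such interpretations
may or may not satisfy.

    # A Tarski-style interpretation: a domain of individuals plus the
    # extensions of some predicates.  The ground facts are the DB state.
    # All names here are invented for illustration.
    domain = {"alice", "bob", "dept1"}
    works_in = {("alice", "dept1"), ("bob", "dept1")}   # ground facts
    manages = {("alice", "dept1")}

    # An axiom (integrity constraint) on the whole family of acceptable
    # models: every manager of a department also works in that department.
    def satisfies_constraint(works_in, manages):
        return all(pair in works_in for pair in manages)

    print(satisfies_constraint(works_in, manages))   # True: this state is a model

    # Another candidate state can violate the same axiom, which is what
    # separates the constraint from the ground-level data.
    print(satisfies_constraint(works_in, {("carol", "dept1")}))   # False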
In AI, Marvin Minsky was fond of summarizing a book by
Kenneth Craik in a pithy slogan: "The brain is a machine
for making models." See the excerpts below from sections
1.2 and 1.6.
At the end of the excerpt from 1.6, I summarized Petri's
arguments that the three different uses of the word 'model'
have enough in common that a common word is justified. But
I agree with Chris that it's important to state which sense
is intended in each case. I would distinguish them by adding
an appropriate adjective in front of the word 'model'.
Finally, note the excerpt from Section 4.5, which discusses
Hintikka's *surface models*. Hintikka certainly knows the
difference between different kinds of models, but he used
the word 'model' in a way that is compatible with Petri's
three senses.
John
____________________________________________________________________
Excerpts from J. F. Sowa (1984) _Conceptual Structures: Information
Processing in Mind and Machine_, Addison-Wesley.
From Section 1.1:
Models of reality are just as important for database systems as they are
for artificial intelligence. According to Abrial (1974), a database is a
"model of an evolving physical world. The state of this model, at a
given instant, represents the knowledge it has acquired from this
world." Yet models are abstractions from reality. The systems analyst or
database administrator must play the role of philosopher-king in
determining what knowledge to represent, how to organize and express it,
and what constraints to impose to keep it a consistent, faithful model
of the outside world. To do a good job in analyzing reality, a systems
analyst must be sensitive to semantic issues and have a working
knowledge of conceptual structures.
The hypothesis that people understand the world by building mental
models raises fundamental issues for all the fields of cognitive science:
* Psychology. How are models represented in the brain, how do they
interact with the mechanisms of perception, memory, and learning, and
how do they affect or control behavior?
* Linguistics. What is the relationship between a word, the object it
names, and a mental model? What are the rules of syntax and semantics
that relate models to sentences?
* Philosophy. What is the relationship between knowledge, meaning, and
mental models? How are the models used in reasoning, and how is such
reasoning related to formal logic?
* Computer science. How can a person's model of the world be reflected
in a computer system? What languages and tools are needed to describe
such models and relate them to outside systems? Can the models support a
computer interface that people would find easy to use?
Since the subject is growing rapidly, no final answers are possible.
This book develops the theory of conceptual graphs as a method of
representing mental models, shows how it explains results in several
different fields, and applies it to the design of more intelligent, more
usable computer systems.
From Section 1.2:
As an illustration of the competing theories, suppose that a person
named Connie happens to be hungry when she sees a street vendor selling
ice cream. She may then walk up to the vendor, take out some money, buy
some ice cream, and eat it. Somehow, the possibility of eating ice cream
in the future "causes" her to carry out a sequence of actions in the
present. Yet the laws of physics say that future events cannot affect
the present. How can a merely possible event have a causal effect?
A behaviorist would say that the stimulus of seeing the vendor, enhanced
by Connie's hunger, triggers a conditioned response that leads to eating
ice cream. For habitual reactions, the behaviorist may be right. But
people can override habits and deal with novel situations for which they
have no ready-made responses. If Connie intended to have dinner at a
fine restaurant, she would be less likely to buy the ice cream. The more
remote event has a greater effect than the present stimulus. Conversely,
if she did not expect to eat for several hours, she might still buy some
ice cream. Intentions and expectations have at least as great an
influence on behavior as immediate stimuli.
A cognitive psychologist would say that when Connie sees the vendor, she
forms a model of the situation. But she also forms models of future
states where she may be eating ice cream, dining at a restaurant, or
going hungry. Which course of action she chooses depends on her options
for transforming a model of the current state into each of the possible
models. Her actions, therefore, are not caused by future events, but by
operations on models that exist in her brain at the present. As Craik
(1943) suggested, reasoning is a system of artificial causation that
transforms models in the head.
To explain the reasoning process, Otto Selz (1913, 1922) developed his
theory of schematic anticipation: the solution to a problem is not found
by undirected association, but by finding the concepts to fill in the
gaps of a partially completed schema. In psychological terms, Selz
described mechanisms that were later developed for AI: backtracking,
pattern-directed invocation, and networks of concepts and relations. His
work was not appreciated during his lifetime, partly because of its
novelty and partly because it ran counter to the dominant trend of
behaviorism. Yet Selz had an indirect influence on AI through de Groot's
analyses of chess playing (1965) and Newell and Simon's work on problem
solving (1972).
One of Newell and Simon's students, Ross Quillian (1966), implemented
networks similar to Selz's in a computer program. Given two types of
concepts, such as CRY and COMFORT, Quillian's program would search a
network of concepts to find the shortest path of associations linking
them. For this example, paths starting at CRY and COMFORT intersected at
SAD. The program then converted the two paths into the sentences, "Cry
is among other things to make a sad sound. To comfort can be to make
something less sad." Although Quillian's program found associations that
are similar to human associations, he needed further evidence that the
results were more than a lucky coincidence. To gather evidence for the
networks of concepts, Collins and Quillian (1969, 1972) measured human
reaction times for various kinds of associations. The responses that
were fastest for the computer were also fastest for human memory.
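As a rough illustration of the kind of search involved, the following
Python sketch expands outward from two concepts in a toy network of
associations and reports where the shortest paths meet. The network and
its links are invented for illustration, not Quillian's actual data.

    from collections import deque

    # A toy network of associations (undirected); invented for illustration.
    network = {
        "CRY": {"SOUND", "SAD"},
        "COMFORT": {"SAD", "EASE"},
        "SAD": {"CRY", "COMFORT", "SORROW"},
        "SOUND": {"CRY"},
        "EASE": {"COMFORT"},
        "SORROW": {"SAD"},
    }

    def paths_from(start):
        """Breadth-first search: shortest path from start to every concept."""
        paths = {start: [start]}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in network.get(node, set()):
                if neighbor not in paths:
                    paths[neighbor] = paths[node] + [neighbor]
                    queue.append(neighbor)
        return paths

    def intersection_search(a, b):
        """Find the concept where the association paths from a and b meet."""
        from_a, from_b = paths_from(a), paths_from(b)
        # Exclude the start concepts themselves; keep only shared concepts.
        common = (set(from_a) & set(from_b)) - {a, b}
        if not common:
            return None
        best = min(common, key=lambda n: len(from_a[n]) + len(from_b[n]))
        return from_a[best], from_b[best]

    print(intersection_search("CRY", "COMFORT"))
    # (['CRY', 'SAD'], ['COMFORT', 'SAD']) -- the two paths intersect at SAD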
Since behaviorism is now on the wane, mental phenomena have again become
the object of scientific study. But one phenomenon, imagery, has
remained controversial. Pylyshyn (1973), for example, ridiculed the
notion of a mind's eye that observes images in the brain: presumably the
mind's eye would transmit stimulation to a mind's brain, which would
have its own mental images observed by another mind's eye and so on in
an infinite regress. Despite such ridicule, psychologists developed
experiments that show the importance of both image-based reasoning and
conceptual reasoning (Kosslyn 1980):
* Mental images are projected on a visual buffer. They can be scanned,
rotated, enlarged, or reduced.
* Novel images can be constructed from a verbal suggestion: Imagine
George Washington slapping Mr. Peanut on the back.
* Reasoning about sizes, shapes, and actions is faster and more
accurate in terms of images.
* Abstract thought and logical deduction are faster and more accurate
in terms of concepts.
* A complete theory of human thinking must show how images are
interpreted in concepts and how concepts can give rise to images.
From Section 1.6:
Instead of proving theorems, people assimilate separate facts into a
coherent image. In psychological tests, Bransford and Franks (1971) gave
subjects a list of separate sentences like the following:
The rock rolled down the mountain.
The rock crushed the hut.
The hut was tiny.
The hut was at the edge of the woods.
After hearing these sentences, the subjects could not remember whether
they heard the facts in a series of simple sentences or in a single
sentence, "The rock that rolled down the mountain crushed the tiny hut
at the edge of the woods." When a new sentence, "The hut was at the top
of the mountain", is added to the list, people immediately "see" the
contradiction: the hut had to be where the rock was rolling, at or near
the bottom of the mountain. Although people differ widely in how
graphically they imagine the situation, they normally detect the
contradiction as if they were looking at a model or picture.
To explain common sense reasoning, Craik (1943) viewed the brain as a
system for making models: "If the organism carries a small-scale model
of external reality and of its own possible actions within its head, it
is able to carry out various alternatives, conclude which is the best of
them, react to future situations before they arise, utilize the
knowledge of past events in dealing with the present and the future, and
in every way react in a fuller, safer, and more competent manner to the
emergencies which face it" (p. 61). To simulate such a system, Minsky
(1975) proposed the notion of frames, which are prefabricated patterns,
assembled to form mental models. For the story about the rock crushing
the hut, people have frames for mountains, for things that roll, for
huts made of flimsy materials, and for massive rocks. In response to a
story, a person assembles frames to form a model. If the frames do not
fit together, the story is self-contradictory; if no frames are
available, the story is incomprehensible; if more than one frame can be
applied, the story is ambiguous.
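The following sketch, with invented frames and slot names for the
rock-and-hut story, illustrates the flavor of that assembly step: each
frame contributes slot values, and a contradiction shows up as two frames
assigning incompatible values to the same slot.

    def assemble(frames):
        """Join frames into one model; report a conflict if two frames
        assign incompatible values to the same slot."""
        model = {}
        for frame in frames:
            for slot, value in frame.items():
                if slot in model and model[slot] != value:
                    return None, (slot, model[slot], value)   # contradiction
                model[slot] = value
        return model, None

    # Invented frames for the rock-and-hut story.
    rolling_rock = {"rock.motion": "rolling downhill",
                    "rock.final_position": "foot of mountain"}
    crushed_hut = {"hut.position": "foot of mountain",
                   "hut.material": "flimsy"}
    hut_on_top = {"hut.position": "top of mountain"}

    model, conflict = assemble([rolling_rock, crushed_hut])
    print(conflict)   # None: the frames fit together

    model, conflict = assemble([rolling_rock, crushed_hut, hut_on_top])
    print(conflict)   # ('hut.position', 'foot of mountain', 'top of mountain')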
The word model has multiple meanings in engineering, logic, and common
speech. Petri (1977) noted three different meanings in the phrases model
of an airplane, model of an axiom system, and model farm:
* Simulation. A model airplane is a simplified system that simulates
some significant characteristics of some other system in the real world
or a possible world.
* Realization. A model for a set of axioms is a data structure for
which those axioms are true. Consistent axioms may have many different
models, but inconsistent axioms have no model.
* Prototype. A model farm is an ideal or standard for evaluating other
less perfect farms or for designing new ones.
Petri maintained that a common basis should be found for these three
different ways of modeling. Conceptual graphs, indeed, form models in
all three senses of the term: the graphs simulate significant structures
and events in a possible world; a set of axioms, called laws of the
world, must at all times be true of the graphs; and certain graphs,
called schemata and prototypes, serve as patterns or frames that are
joined to form the models.
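For the "realization" sense in particular, the idea can be pictured with
a small sketch: a finite structure either satisfies or violates a set of
axioms. The domain, relation, and axioms below are invented examples,
not conceptual graphs.

    from itertools import product

    # A finite structure: a domain plus the extension of one binary relation.
    domain = {1, 2, 3}
    less_than = {(1, 2), (1, 3), (2, 3)}

    # Axioms, written as predicates quantified over the domain.
    axioms = [
        # irreflexivity: no x is less than itself
        lambda: all((x, x) not in less_than for x in domain),
        # transitivity
        lambda: all((x, z) in less_than
                    for x, y, z in product(domain, repeat=3)
                    if (x, y) in less_than and (y, z) in less_than),
    ]

    # The structure is a model (a "realization") iff every axiom is true of it.
    print(all(axiom() for axiom in axioms))   # True for this structure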
From Section 4.5:
A semantic theory based on infinite worlds is acceptable for mathematics
and logic, but psychologically, it is unrealistic to assume that people
construct infinite models in their heads. In the search for a more
realistic semantics, Hintikka (1973) criticized the infinite,
closed-world models of standard logic:
Usually, models are thought of as being given through a
specification of a number of properties and relations defined
on the domain. If the domain is infinite, this specification
(as well as many operations with such entities) may require
nontrivial set-theoretical assumptions. The process is thus
often nonfinitistic.
It is doubtful whether we can realistically expect such structures
to be somehow actually involved in our understanding of a sentence
or in our contemplation of its meaning, notwithstanding the fact
that this meaning is too often thought of as being determined by
the class of possible worlds in which the sentence in question is
true. It seems to me much likelier that what is involved in one's
actual understanding of a sentence S is a mental anticipation of
what can happen in one's step-by-step investigation of a world in
which S is true.
Instead of infinite models, Hintikka proposed open-ended, finite,
surface models. Understanding a story would consist of building a
surface model containing only those entities that were explicitly
mentioned. The model would then be extended in a “step-by-step
investigation” of all the implicit entities that must exist to support
or interact with the ones that were mentioned. Closed models are
limiting cases of surface models that have been extended infinitely far,
but at any point in time, only a finite surface model is ever constructed.
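As a very rough illustration of that step-by-step extension, a surface
model can be pictured as a growing set of entities and facts. The story
fragment and the extension rule below are invented for illustration, not
Hintikka's formalism.

    # A surface model: start with only the entities explicitly mentioned,
    # then extend it step by step with entities that must exist implicitly.
    surface_model = {
        "entities": {"rock", "mountain", "hut"},        # explicitly mentioned
        "facts": {("rolled_down", "rock", "mountain"),
                  ("crushed", "rock", "hut")},
    }

    def extend(model, rule):
        """One step of the 'step-by-step investigation': apply a rule that
        introduces an implicit entity demanded by the facts so far."""
        new_entities, new_facts = rule(model)
        model["entities"] |= new_entities
        model["facts"] |= new_facts
        return model

    # Example rule: anything that rolled down a mountain ends up at its foot.
    def foot_of_mountain_rule(model):
        if any(fact[0] == "rolled_down" for fact in model["facts"]):
            return ({"foot of mountain"},
                    {("located_at", "rock", "foot of mountain")})
        return set(), set()

    extend(surface_model, foot_of_mountain_rule)
    print(sorted(surface_model["entities"]))
    # ['foot of mountain', 'hut', 'mountain', 'rock']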