Previously in this forum, I have commented on the meaning of a sign being
grounded in the behaviour of the agent interpreting it - and here I mean
semantics, not pragmatics. That is, I am interested in the phenomenon of
meaning, rather than the mechanism. A consequence of this is that I do not
have to try to imagine how a computer imagines the world. (01)
The earlier thread on Peirce, and the fact that the interpretants in the
triad <sign, object, interpretant> are themselves signs, suggested an
approach to the infinite regress of signs. Every interpretant must have an
interpreter (a pseudo-mind), and in practice we do not need to follow an
infinite regression of interpreters.
Rather, we can stop when we find an interpreter that acts directly on the
sign. For example, if we have a system of signs in logic, the appropriate
pseudo-mind is a reasoner, which will answer whether a particular
proposition is true or false (or not determinable, or not yet
determined...). I do not need to trace the regression of signs through the
mechanisms that the reasoner uses, whether it be a computer or a human, as
long as it produces an answer. Obviously there are people who will wish to
follow this regression as a means of improving the performance of the
reasoner or of validating that it works correctly, but my concern is at the
level of doing business, not the detailed mechanisms used to do it. (02)
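The idea of stopping the regress at an interpreter that acts directly on the sign can be sketched in code. This is a toy illustration only - all the names (Verdict, Reasoner, the example propositions) are invented for the sketch, not drawn from any real reasoner:

```python
from enum import Enum

class Verdict(Enum):
    """The three answers the pseudo-mind can give."""
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "not determinable"

class Reasoner:
    """A terminal interpreter: it acts directly on signs (propositions),
    so the regress of interpretants stops here. We do not inspect its
    internal mechanism - we only require that it produce an answer."""

    def __init__(self, facts, negated):
        self.facts = set(facts)      # propositions asserted true
        self.negated = set(negated)  # propositions asserted false

    def interpret(self, proposition):
        if proposition in self.facts:
            return Verdict.TRUE
        if proposition in self.negated:
            return Verdict.FALSE
        return Verdict.UNKNOWN

r = Reasoner(facts={"socrates_is_mortal"},
             negated={"socrates_is_immortal"})
print(r.interpret("socrates_is_mortal"))  # Verdict.TRUE
print(r.interpret("socrates_is_a_fish"))  # Verdict.UNKNOWN
```

The point of the sketch is that the caller treats the reasoner as a black box: whether `interpret` is backed by a lookup table, a theorem prover, or a human is irrelevant at the level of use.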
This leads to two sorts of questions. Firstly, there is the question of
semantics: what does the agent need to know to behave in the correct
manner? The main developments in this area are currently being made in the
long-term archiving/retention/data-sustainment communities. One study in
the libraries community reported that to understand a single digital
artefact, over a thousand supporting artefacts were needed. (03)
The second question is that of semiotics: what does the agent need to know
to identify a sign as conveying particular semantics? That is not just how
the agent disambiguates signs, but also how it knows that, when it
recognises a sign, it has indeed recognised it correctly. I suspect the
answer in this case is related to context - that is, we assume a context
and attempt to disambiguate/situate something within that assumed context.
However, context is itself an infinite sequence of ever-expanding
contexts. More generally, there is no such thing as context; it is merely
a language game for introducing additional data into the situation.
Therefore "a context" is a strategic decision to limit the problem space
on the basis of the resources that can be committed to the problem at
hand. (04)
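The strategic move of assuming a context to bound disambiguation can also be sketched. Again this is purely illustrative - the contexts, the sign "bank", and the glosses are invented for the sketch:

```python
# Each assumed context deliberately limits the problem space: a sign is
# only looked up within that context, not against everything it might mean.
CONTEXTS = {
    "finance": {"bank": "institution holding deposits"},
    "geography": {"bank": "land alongside a river"},
}

def situate(sign, assumed_context):
    """Return the meaning of a sign within the assumed context,
    or None if the sign cannot be situated there."""
    return CONTEXTS.get(assumed_context, {}).get(sign)

print(situate("bank", "finance"))    # institution holding deposits
print(situate("bank", "geography"))  # land alongside a river
print(situate("bank", "cooking"))    # None - cannot be situated
```

The `None` case mirrors the resource argument above: rather than expanding the context indefinitely until the sign is resolved, the strategy is to fail within the bounds chosen.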
The practical engineering of ontologies is therefore about choosing a
system of signs that supports the range of behaviours needed in the system
of agents to be deployed, and ensuring that the agents have, firstly, the
right knowledge to respond correctly to the individual terms in the
ontology, and, secondly, the right knowledge to differentiate the signs
and use them correctly. The complications occur when a computational
system is trying to compensate for gaps in the knowledge of human agents
or in the specification of the system of signs. (05)
Feel free to disagree violently. (06)
Sean Barker, Bristol, UK (07)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (08)