Ed, Pat, and everyone else, (01)
Many thanks to Ed and Pat for taking my kite-flying (of Jan 23) as
seriously as you did. In view of the major overlap in your
concerns, I am responding to both of you in this one post. I also
explicitly address John Sowa and Jim Schoening, so they are cc-ed
too. But I am appealing to all other list members as well. (02)
(Since I have taken so long to reply, here are the archived versions
of the posts I am responding to:
Ed:
http://ontolog.cim3.net/forum/ontolog-forum/2008-01/msg00383.html
Pat:
http://ontolog.cim3.net/forum/ontolog-forum/2008-01/msg00385.html
My original post is at
http://ontolog.cim3.net/forum/ontolog-forum/2008-01/msg00382.html) (03)
My core statements were these two middle paragraphs from my original
post (where the "this" of the first quoted line referred to Ed's
discussion of time with its various granularities and other
manifestations or characteristics): (04)
>> Is this not a prime example of where contexts - and
>> conversions or translations or mappings (often lossy) between
>> them - are used by everybody with no problem? It's as in
>> colloquially-familiar and meaningful statements such as "The
>> precise meaning of the time statements is dependent on their
>> context."
>>
>> Other classic aspects of context are communities' comfortable
>> commitments to them, the dependency - as in all the time
>> examples in this thread - of statements upon the proper but
>> natural awareness by the various parties concerned of the
>> respective contexts, and their easy accommodation to the
>> frequent need for contexts to shift as conversations or tasks
>> or even transactions unfold. (05)
As Ed put it, that was all about context "in the large". It had Pat insistently drawing me back into "ontological engineering", with its objectives and constraints, prime among which is the need for precise definition. Both of you were, quite rightly, concerned about the needs of automated reasoners. (06)
But I had carefully written those paragraphs to try to present
'context' as an everyday phenomenon and colloquial concept that
most people generally have little trouble with. (True, Ed, there
are often misapprehensions of contextual differences, but
once the parties concerned are made aware of them they can
generally correct themselves quite easily.) (07)
I had put forward, in effect, a partial statement of end-user
requirements in respect of 'context'. (08)
At the same time, though, I was flying a kite, for behind every programmer's statement of requirements lurks at least the seed of a proposed supply to address the demand. So I wanted to see whether my description of colloquial context struck any chords. It did, in a useful way, though apparently a negative one. Even so, I find the outcome very positive, since both of you bothered to respond so seriously. Where there are such problems as you raise, there must be an opportunity! That, then, is the background to a rather bold though doubtless somewhat premature announcement: (09)
The fact is that I have advanced significantly with a formalization
and implementation of context as I had portrayed it. (010)
That is - of course - a direct and assertive answer to my original
post's concluding paragraph: (011)
>> Can we really do without some formal recognition and
>> representation of 'context', all in a context-dependent way, of
>> course? (012)
(That is, in effect, "No, we cannot do without a formalization
addressing the requirements often associated with 'context', such as
both of you were demanding!" even if Pat would drop the word
itself.) (013)
But the prime context of my proposal is not automated reasoning by
non-human agents. It is the intersection of application
interoperability and human collaboration, that having been where I
have spent most of my professional life so far. It is DP/IS/DBMS
and internet-leveraging (hence also highly scalable!) application
development, rather than AI/NLP. (014)
I believe that the architecture, and a first implementation of it, can be launched within a year or so, after a quite small project, still to be set up, that builds on my programming to date. (015)
At that stage, thanks to the features introduced below, it will seed, powerfully foster and host much further development by the open market. In due course it will also strongly support both AI and NLP. (016)
Now those of you who have noticed some of my earlier posts on this subject, to this list and to the SUO list (particularly
http://suo.ieee.org/email/msg00634.html), might well comment that I have been making similar claims for many years already without having delivered! That is true, but there have been some important changes since then (more than counterbalancing my increasing age, I believe). To cut a long story short, I am now willing to divulge the hitherto "trade-secret" aspects, and I have considerably further extended and tied together the philosophical and the technical arguments in favour of the whole design and implementation project. In other words, there is now much less of a grey area between the top-down philosophizing, or demand side, and the bottom-up system design and programming, or supply side. More importantly, perhaps, I have been noting (while ever testing and rejecting the wishful-thinking hypothesis...) a growing convergence of the industry with my longstanding proposals, while the basic principles and existing detail of my own picture remain entirely unthreatened by any of the new insights or developments we all continually encounter. (017)
So, if there is enough interest from the list, I would like to respond to my own challenge and start setting out systematically, in a series of posts each seeking focussed and relevant feedback from you, just why the proposed architecture is what we are all seeking, and why its notion of context is key to it. It would soon emerge how it renders both 'context' and the fuller ontology-based product powerfully relevant, practical, useful and supportive of their own strong further evolution by the open market - a market which the architecture's implementations will in their turn, by intent and by detailed design, enormously strengthen and promote. (018)
In the hope of tempting you, I shall now list a few features of the eventual end-result, rather confidently anticipated despite the many classical philosophical conundrums and practical quagmires the whole story courts. (At this stage, do insist on any indispensable clarification of any of these points, but please let us not debate their correctness, aptness or feasibility just yet! Such discussion will profit enormously from, and should therefore await, the detail I am suggesting I follow up with.) (019)
1. Especially for John Sowa: There will in effect be a lattice of theories. But that is meant in a tighter way than you seem to have in mind, John: it has less to do with different logical styles and more with modules structured canonically according to a canon that will certainly evolve and remain migratable, and that implements careful notions such as abstraction, coherence, context, identity, orthogonality, and semantics. So far at least, my perhaps idiosyncratic use of such familiar terms seems not to have resulted in any Humpty-Dumptyish great falls, and the resulting data and message architecture does provide for more extensive, robust, growable, tunable and yet individuality-respecting and ultimately mind-expanding interoperation, "helping people simplify complexity together". (020)
2. Those "theory" notions (abstraction etc.) are tied together by another graphic allegory. Much as in Lewis Carroll's neat story just invoked, though far more extensively, it offers an image for Ontology of the philosophical kind. (Prompted by the latest thread on the SUO list, I hastily add that it is most explicitly not perfect!) John, in one of the posts I am proposing here, I would also rephrase the question I put to you in an earlier post on this list,
http://ontolog.cim3.net/forum/ontolog-forum/2007-12/msg00206.html,
in the thread 'CL, CG, IKL and the relationship between symbols in the logical "universeofdiscourse" and individuals in the "real world"'. But I would situate my question in this new context, so that I do not once again unintentionally send you off after a red herring. (021)
3. The design, and even the programming already in place, offer a good basis for an eventual displacement of all existing DBMS architectures. (They already dispense with them.) As in RDF, the logical atom is the fact-triple. But the design, long pre-dating RDF, has escaped the baggage of RDF's roots and context. One major difference - and "complexity-simplifying" feature - is the molecule, or rather the fact that there _is_ a molecule, with its own synergetic semantics and pragmatics. Separation of Concerns, or 'program/data independence' in the DBMS context, is traditionally achieved via information or "complexity" hiding within functions or other algorithmic modules. Here it is provided by procedure-free yet precise recognition of orthogonalities between the molecules, in the form of ontology-based specification modules. Querying and updating become more SPARQL-like than SQL-like, more WYSIWYG than command-driven, often automatically relevance-driven yet still optionally heuristic or discursive, and above all more natural and integrated. There are promising growth-points for Category Theory to play a role somewhat analogous to that of relational algebra in the RDBMS (though in future, after an initial implemented proof of concept...). Transformations, as in CT's morphisms, are fundamental to the architecture, and further key concepts seem to have at least close CT equivalents and would doubtless - for automated inferencing with scalability - profit from CT-inspired refinement and manipulation. (A minimal illustrative sketch of the triple-and-molecule idea follows at the end of this point.) (022)
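To make the triple-and-molecule picture a little more concrete, here is a minimal and purely illustrative sketch in plain C (the language Metaset is coded in). The names and representations used here - fact_triple, molecule, and the crude string-based orthogonality test - are assumptions made only for the purposes of this post, and are not the actual Metaset design, whose details I have yet to publish.

#include <stdio.h>
#include <string.h>

/* A fact-triple as the logical atom, much as in RDF:
   subject - predicate - object, all as plain terms here. */
typedef struct {
    const char *subject;
    const char *predicate;
    const char *object;
} fact_triple;

#define MAX_TRIPLES 16

/* A molecule: a small, named group of triples that carries its own
   semantics and pragmatics as a unit (an ontology-based module). */
typedef struct {
    const char *name;                  /* the module this molecule belongs to */
    fact_triple triples[MAX_TRIPLES];
    int count;
} molecule;

/* Add a triple to a molecule (silently ignored if the molecule is full). */
static void molecule_add(molecule *m, const char *s, const char *p, const char *o)
{
    if (m->count < MAX_TRIPLES) {
        m->triples[m->count].subject = s;
        m->triples[m->count].predicate = p;
        m->triples[m->count].object = o;
        m->count++;
    }
}

/* A crude stand-in for "orthogonality": two molecules are treated as
   orthogonal here if they share no predicate. The real criterion would
   of course be ontology-based, not string-based. */
static int molecules_orthogonal(const molecule *a, const molecule *b)
{
    for (int i = 0; i < a->count; i++)
        for (int j = 0; j < b->count; j++)
            if (strcmp(a->triples[i].predicate, b->triples[j].predicate) == 0)
                return 0;
    return 1;
}

int main(void)
{
    molecule delivery = { "delivery-schedule", {{0}}, 0 };
    molecule invoicing = { "invoicing", {{0}}, 0 };

    molecule_add(&delivery, "order-42", "ships-on", "2008-02-15");
    molecule_add(&invoicing, "order-42", "billed-to", "account-7");

    printf("orthogonal? %s\n",
           molecules_orthogonal(&delivery, &invoicing) ? "yes" : "no");
    return 0;
}

The point of the sketch is only the shape of the data: the triples remain the atoms, but the molecule is the unit that carries meaning and against which orthogonality - and hence Separation of Concerns - is judged.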
4. Similarly, all Internet-leveraging applications will in due course gradually be migrated to the new architecture, all in an architecturally canonical and implementation-supported way. Think "design patterns" (and anti-patterns) implemented via canonical reflection on contexts! Hence such migration will tend to be reasonably painless, thanks to the superior new development environment and platforms and to migration via canonical transformations. The attraction of user-oriented and user-driven simplicity will prevail. (023)
5. The last two points are not mere hype, as there is a fundamental reason for them: the architecture involves a radical reshaping of OO, namely from the presently-conventional Classical Object Model to a new kind of Generalized Object Model. Yet it still preserves and even refines the three OO pillars of polymorphism, inheritance and encapsulation (and all in plainer language, so there is no need to confuse or otherwise deter anyone with such terms...). So componentization and reuse - in terms of ontology-based modules - will be dramatically enhanced. (Digressing somewhat, I do want to state that I prefer "Form", short for "conceptual form" or even "formal structure", to "ontology". (Also to "typology", as my oldest web pages had it.) The concept is where the familiar and the formal intersect, so it should have a plain name, though still one which insistently reminds us of its formal aspect.) (024)
6. Interestingly, there is a perhaps even more fundamental reason
for the obviously-impending success of that component architecture,
and I guess it relates closely to this statement from Pat's post: (025)
> The ideal ontology framework is one in which all
> contextual sensitivity has been _eliminated_ as far as possible. (026)
A major outcome here is not any decontextualizing such as Pat might seem to have in mind, but an explicit, enriching yet dynamic, situation-dependent recontextualizing of the atomic binary relationship-based fact, not only by the addition of at least one time component but also by any other combination of the indefinitely-varied dimensions of context, including modalities. (A small sketch of such a contextualized fact follows at the end of this point.) (027)
But also, and crucially, it leverages the orthogonalities mentioned in points 2 & 3 above, and so, at the most abstract relevant level, it does imply Pat's elimination of contextual sensitivity. That not only provides for the better component architecture already mentioned, but is even the logical platform for further features such as a kind of automatic predicate-locking for concurrency management, a requirement which, as you know, is coming strongly to the fore with the new multicore chips and larger-scale virtualization. (028)
Indeed, though rather hidden in the contextless verbosity of this point 6, this is where it will become apparent why, in a time-honoured epistemological way, the architecture will so ubiquitously "help people simplify complexity together" - that being one of my favourite phrases for describing the overall objective in a maximally-objective way. (Other important reasons will emerge too, from cognitive, sociological and suchlike fields.) (029)
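Here, similarly, is a small and equally hypothetical sketch in plain C of how such a recontextualized fact might be pictured: a plain binary relationship plus an open-ended set of context bindings (time, modality, provenance, and so on). Again, the struct names and fields are illustrative assumptions of mine only, not the Metaset design itself.

#include <stdio.h>

/* The atomic binary relationship-based fact: subject - relation - object. */
typedef struct {
    const char *subject;
    const char *relation;
    const char *object;
} binary_fact;

/* One dimension of context attached to a fact, e.g. time or modality. */
typedef struct {
    const char *dimension;   /* "valid-time", "modality", "source", ... */
    const char *value;
} context_binding;

#define MAX_CONTEXT 8

/* A fact recontextualized by any combination of context dimensions. */
typedef struct {
    binary_fact fact;
    context_binding context[MAX_CONTEXT];
    int count;
} contextualized_fact;

static void add_context(contextualized_fact *cf, const char *dim, const char *val)
{
    if (cf->count < MAX_CONTEXT) {
        cf->context[cf->count].dimension = dim;
        cf->context[cf->count].value = val;
        cf->count++;
    }
}

int main(void)
{
    contextualized_fact cf = { { "flight-BA123", "departs-at", "09:40" }, {{0}}, 0 };

    /* At least one time component, plus whatever other dimensions apply. */
    add_context(&cf, "valid-time", "2008-02-04");
    add_context(&cf, "time-zone", "UTC+02:00");
    add_context(&cf, "modality", "scheduled");   /* as opposed to "actual" */

    printf("%s %s %s, in context:\n", cf.fact.subject, cf.fact.relation, cf.fact.object);
    for (int i = 0; i < cf.count; i++)
        printf("  %s = %s\n", cf.context[i].dimension, cf.context[i].value);
    return 0;
}

On such a picture, eliminating contextual sensitivity in Pat's sense would, at the most abstract level, amount to factoring the recurring context bindings out of the individual facts and into the orthogonal modules that carry them - though that, again, is only an illustrative gloss.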
7. But is there not some fatal oversimplification in all those fabulous notions? Well, er, no, because the whole picture is itself modelled around a very respectable and much-quoted classical fable! I am referring to the events in Homer's Odyssey leading up to and including the Scylla and Charybdis episode. As interpreted in a novel way by me before this present project was conceived, the allegory - thanks to the centuries if not millennia of its evolution - astonishingly consists of an elegantly-conceived, carefully-structured and coherent set of patterns and anti-patterns for both good and effective cognitive behaviour. So, with the help of the demythologization mentioned, it is in effect a broad and reliable though high-level statement of end-user or human requirements. The entire evolution of the detailed architecture introduced here has been informed and stabilized by that age-old wisdom. And that is partly (and only partly) why I am suggesting as a name for it "The Mainstream Architecture for Common Knowledge". For some more argument, albeit still high-level, expanding on why oversimplification is not an undue concern, see
http://jeffsutherland.com/oopsla97/SpottiswoodeByndBO.html#table.
You might also search for "Scylla" in that paper and follow the links to other pages in the same domain. (030)
Sorry! Those few points have expanded more than I had intended. So I rephrase my immediate question to this list: after all those rather heavily-stated points, would you like to see a more complete exposition of the proposed architecture, in instalments on this list, including all its main technical details? Since I am now prepared to publish all the details as far as they go, the story will be much shorter and more to the point than my previous webpages. In return I would expect you to ask all the difficult questions that I need to have confront me. (And yes, that includes Pat's inimitable putdowns! (... if I can manage to deserve his attention...)) I would reserve the right to answer the list's questions briefly, later or not at all, though those in the last category I would at least try to acknowledge by means of some quick pigeonholing. (031)
Alternatively - and this is where Jim might come to the fore in my story - the SUO list is conceivably the better home for all this proposed activity. My honest opinion, in the light of everything I have been telling you about here, is that the kinds of SUO proposed by SUMO, CYC, DOLCE, etc. badly miss the interoperability-support target, and are therefore an unwitting burden to interoperability. Is it interoperability or some notion of classic Ontology we are aiming for? That question might also help explain why John wants to take a very different tack. The architecture proposed here has many features which those proposals entirely lack, yet which are essential for the interoperability it so clearly provides for. That will become more clearly apparent to you from the detail I am proposing, but my position now is that this project will produce an interoperability-SUO of its own highly effective kind. So, which list: ontolog or SUO? (032)
Meanwhile, what would I hope to achieve by this particular list-gambit? Beyond the obvious corrections, refinements and extensions to the architecture and to my descriptions or arguments, I would also hope that in some way one or more of you might advise or help in setting up an appropriate project to complete and launch a first implementation of the architecture. Such a project might - or might not - build on the running "Metaset" program that I have already coded (in plain C), far though it is from being a launchable product or even a proof of concept (except in certain detailed though important technical areas). As a design, however, it provides a far more complete basis. I would therefore put myself forward as lead architect and programmer on such a project, at least initially. (033)
Another option might even be to release all the present technical bits and pieces to the public as the seed for an Open Source project. Usually that works only with already-running products, at least at proof-of-concept level. But I do believe that the overall simplicity of concept and design, together with the technical design I plan to release (on condition that you correct and complete it where necessary), would allow a very few (2 or 3, or even 1?) of the large numbers of good programmers out there on the Internet - such as I have had the marvelling pleasure of working with in the past - to complete the programming project comparatively easily, if only some such few can be persuaded of the message and run with it. (This author and programmer, at 66 years of age, has finally become too impatient to persist, through day job and other distractions, in trying to complete the programming of Metaset to a launch state all by himself, without the stimulus of intensive teamwork.) (034)
Where might it all lead? Consistently with my 7 points listed above, Metaset has long roots as an "application operating system". That is, the high-level ontology aspects at its own internal kernel, DB and UI levels - its "operational-SUO-like" features (also canonically designed and "easily" implementable) - very precisely circumscribe all the remaining procedural-language programming. That partly explains why it is basic to the whole concept for all such internals coding to be Open Source. But the main next point is that the architecture does actually seem to be an appropriate (though non-prescriptive) architectural foundation and implementation basis for an environment for everything at the application level. I have long been pointing out that it requires only the most basic host operating system and telecommunication functionality. Therefore, over the medium to long term, further canonical construction by the open market has the potential to replace at least most of the bloat of present operating systems and _all_ their attendant application-level "layers", "frameworks" and other such supposedly shared and usable facilities. An interesting line of evolution looms! (035)
Another eventual consequence would be a far more secure environment
for all activity building on the Internet. (036)
That is enough from me for now (if not already far too much...). And my main question to you right now remains, and is fortunately very simple: would you like to see this ontolog list (or the SUO list) as the home, now, for the kind of discussion and workshopping I seem to foresee? (And if not, what alternative would you have in mind?) (037)
(Well, if you have made it to here, thank you already!) (038)
Christopher (039)