
Subject: Re: [ontology-summit] [Applications] Launching the conversation about La

To: Ontology Summit 2012 discussion <ontology-summit@xxxxxxxxxxxxxxxx>
From: joseph simpson <jjs0sbw@xxxxxxxxx>
Date: Mon, 23 Jan 2012 23:01:58 -0800
Message-id: <CAPnyebxk+w9uErRidg0xsX9pk1NaqD3tmaRPb0PtRiBbJ3gK_Q@xxxxxxxxxxxxxx>
John:

These are excellent points, and it is important for me to recap the main ideas I addressed.

The general theme that I originally addressed was "formal methods for integration of diverse professional views associated with a specific concept."

In this domain of application, there are differing human groups with specialized symbols, syntax, and semantics.  The primary reason for this specialization is the limited capacity of humans to acquire, analyze, and understand information.

Computers do some tasks much better than humans, and humans do some tasks much better than computers.  Each "agent type" has structural and architectural limits.

Therefore, it is my understanding that one of the goals of the "Ontology Summit" is to explore the nature, structure and form of one or more ontology types that effectively mediate between groups of people and groups of computers as well as support the realization of large-scale systems.

The specific domain of application is "Large-scale Systems Engineering", which I view as based on General Systems Theory.  I use Klir's definition of systems engineering, which is:
   "A special branch of general systems theory, motivated solely by engineering aspects.... Its objective is an elaboration of general methodology of engineering systems design in the broadest sense of the word "design" including, e.g., needs research, problem specification, environmental research, decision-making with respect to optimization criteria included in the problem specification, synthesis, analysis, development of components, realization of the system, study of psychological factors that influence the design, etc."

Further, I view the practice of systems engineering as firmly anchored in System Science.  Specifically, I view the Science of Generic Design developed by John Warfield as the basis for all Systems Engineering work.

John Warfield's work addressed the limits of human comprehension as well as some of the formal language constructs that are necessary to increase the capability of humans to be successful in addressing large-scale system problems.

John Warfield designed a formal language set that has three symbol types: prose (words), graphics, and mathematics.  Using this fundamental symbol set, composite languages (combinations of symbols) can be developed and applied.  Using a combination of the "Theory of Relations" (Boole, De Morgan, Peirce) and Hilbert's concept of a dyad of object language and metalanguage, Warfield created a family of system languages that use mathematics as the object language and natural language as the metalanguage.  Very interesting stuff; most of Warfield's references have been listed in the reference thread.
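
To make the object-language/metalanguage dyad concrete, here is a very small Python sketch of how I picture it.  The element names, the relation, and the English reading are my own illustration, not Warfield's notation:

    # Toy illustration of the object-language / metalanguage split, as I
    # read it: the object language is a binary relation over system
    # elements (pure mathematics); the metalanguage is ordinary English
    # stating what the mathematics is taken to mean.

    # Object language: R as a set of ordered pairs, "x precedes y".
    R = {("requirement", "design"),
         ("design", "implementation"),
         ("implementation", "test")}

    def transitive_closure(pairs):
        """Close the relation under transitivity (simple fixpoint loop)."""
        closure = set(pairs)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    # Metalanguage: render each mathematical fact as an English sentence.
    for (a, b) in sorted(transitive_closure(R)):
        print("The %s must be settled before the %s." % (a, b))

The only point of the sketch is that the mathematical layer (the relation and its transitive closure) can be manipulated without any reference to what the prose layer says it means.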

If the goal is to design an ontology for systems engineering, then the aspects of system science and design articulated by Warfield must be considered.  However, many of the constructs that are produced using the Warfield approach are static and asynchronous in nature.

Classic ontology has been viewed as having a static nature, with specific nodes and relation types being constant.  While there appears to be a need for a common, standard conceptual framework for any application domain, there also appears to be a need to review, analyze, and evaluate the dynamic components of the domain space.  There is a tension between the persistent ontology form and the need for dynamic change and balance.

Just as Warfield designed system languages based on careful analysis of the domain space and the crafted application of an object language and metalanguage, I believe it is possible to design an "ontology type" that has two basic features.  The first feature is called a ConceptCube(sm) (CC) and is constructed from carefully selected concepts (that are common and constant) found in the domain space.  The second feature is an adaptable set of relations among the CC, its sub-components, and other cubes.

The concepts associated with the CC have a well-defined set of methods to either expand or collapse the level of abstraction associated with each node in the set.  An "Asset Protection Model" was developed using the CC approach and was published in the paper "A Systematic Approach to Information Systems Security Education."
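
For what it is worth, here is a rough Python sketch of how I picture these two features working together.  Every class name, method name, and label below is hypothetical shorthand of my own, not the published CC interface; the example labels loosely echo the asset-protection model mentioned above:

    class ConceptNode:
        def __init__(self, name, levels):
            # 'levels' orders descriptions from most abstract to most concrete.
            self.name = name
            self.levels = levels
            self.current = 0            # index of the level currently in view

        def expand(self):
            """Step toward a more concrete description of this concept."""
            if self.current < len(self.levels) - 1:
                self.current += 1

        def collapse(self):
            """Step toward a more abstract description of this concept."""
            if self.current > 0:
                self.current -= 1

        def describe(self):
            return self.levels[self.current]

    class ConceptCube:
        def __init__(self, nodes):
            # Feature 1: a fixed, carefully selected set of domain concepts.
            self.nodes = {n.name: n for n in nodes}
            # Feature 2: an adaptable set of relations that can be revised
            # as the dynamic parts of the domain are re-evaluated.
            self.relations = set()      # (source, label, target) triples

        def relate(self, source, label, target):
            self.relations.add((source, label, target))

        def unrelate(self, source, label, target):
            self.relations.discard((source, label, target))

    # Illustration only (labels are mine).
    asset = ConceptNode("asset", ["thing of value", "information asset", "customer database"])
    threat = ConceptNode("threat", ["source of harm", "deliberate attack", "SQL injection"])
    cube = ConceptCube([asset, threat])
    cube.relate("threat", "endangers", "asset")
    asset.expand()
    print(asset.describe(), "| relations:", cube.relations)

The concept set stays constant while the relation set and the abstraction level in view are free to change, which is the balance between persistence and dynamic change I have in mind.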

It appears to me that there are many features of ontology development that could be customized to address the need for dynamic relation evaluation, and also to support the application of natural language as a metalanguage for discussing specific features of the application domain.

Have fun,

Joe
 

On Sun, Jan 22, 2012 at 11:52 PM, John F. Sowa <sowa@xxxxxxxxxxx> wrote:
On 1/23/2012 1:20 AM, joseph simpson wrote:
> Machine semantics and human semantics are usually treated in different
> manners, especially with reference to symbols and syntax.
>
> While machine processing of semantics is very dependent on common
> symbols and syntax, human semantics is very dependent on common context.

I agree with the distinction.  But the fundamental principle is that
*every* artificial language of any kind -- either a version of logic
designed for communicating with humans or a programming language
for computer processing -- is ultimately defined informally in a
natural language that is written by humans for the purpose of being
interpreted by other humans who program the machines.

Just look at any textbook of mathematics or computer science.
Every formal language is defined in a tightly controlled or
stylized natural language.

That has been true of *every* artificial language from Aristotle's
syllogisms to the most abstract formal languages used today.

FCA is based on a formal algorithm for deriving a lattice of
concepts from a table of instances.  But the instances that were
put in the table and the labels of the data items in the table
were chosen by human beings based on their informal intuitions.
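
To make that derivation step concrete, here is a minimal sketch in Python; the small table of instances is invented purely for illustration, and the code is a naive enumeration rather than a production FCA algorithm:

    # Given a table of instances (objects x attributes), find the formal
    # concepts that the lattice is built from.

    from itertools import combinations

    objects = ["sparrow", "penguin", "bat"]
    attributes = ["has_wings", "flies", "is_bird"]
    table = {
        "sparrow": {"has_wings", "flies", "is_bird"},
        "penguin": {"has_wings", "is_bird"},
        "bat":     {"has_wings", "flies"},
    }

    def common_attributes(objs):
        """A' : the attributes shared by every object in objs."""
        if not objs:
            return set(attributes)
        return set.intersection(*(table[o] for o in objs))

    def common_objects(attrs):
        """B' : the objects that have every attribute in attrs."""
        return {o for o in objects if attrs <= table[o]}

    # A formal concept is a pair (A, B) with A' = B and B' = A.
    concepts = []
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            A = set(objs)
            B = common_attributes(A)
            if common_objects(B) == A:
                concepts.append((sorted(A), sorted(B)))

    for A, B in concepts:
        print(A, "<->", B)

Each pair printed is a formal concept: a set of instances together with exactly the attributes they share.  Which instances go into the table, and how the attributes are labeled, remains a human decision.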

The fundamental principle of computer processing since the 1950s
is GIGO:  Garbage in -- garbage out.  It doesn't matter how precise
your algorithms may be if your data happens to be based on somebody's
faulty assumptions or careless mistakes.

But there is some hope:  techniques such as FCA are very good at
detecting inconsistencies in the data.  They can draw the attention
of some human expert to the source of the inconsistency and request
some guidance about how to correct it.  That is very useful, but
it doesn't eliminate the need for human opinion.

Alan Perlis made a related observation:  "You can't translate
informal language to formal language by any formal algorithm."

John



--
Joe Simpson

Sent From My DROID!!
