
Re: [ontolog-forum] Architecture of Intelligent Systems - Flexible Modul

To: "'[ontolog-forum] '" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: "Rich Cooper" <metasemantics@xxxxxxxxxxxxxxxxxxxxxx>
Date: Tue, 26 May 2015 14:23:42 -0700
Message-id: <0aaa01d097fa$3f198680$bd4c9380$@com>

Ed, thanks for your inputs!  My comments are interleaved below,

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

EB: Your argument (1) is largely irrelevant.  Yes, the cooperation of multiple persons in accomplishing a task creates overheads:  When two competent people do a job, you get 1.7 staff years of production and 0.3 staff years of coordination, maybe, but you still get more than one staff year.  

 

RC: But the cost and schedule of software development climb from the first staff member onward.  Remember that the 0.3 staff years gets added to the schedule, and your rough model makes no allowance for the cost of the delayed schedule.  It is not only the direct scaling of overhead, but the scaling of overheads on top of those overheads. 
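To make the scaling concrete, here is a toy Python model (my own illustration only; the per-channel cost is a made-up constant chosen to match your 1.7/0.3 two-person example, not a figure from any real staffing model).  Assume every pair of staff needs its own coordination channel, so channels grow as n(n-1)/2:

```python
# Toy model of staffing overhead (illustrative, not a real cost model):
# every pair of staff needs a coordination channel, so channels grow
# quadratically as n*(n-1)/2 while raw production grows only linearly.
def effective_output(n, per_person=1.0, cost_per_channel=0.3):
    """Staff-years of real production after coordination losses."""
    channels = n * (n - 1) / 2
    return max(0.0, n * per_person - cost_per_channel * channels)

for n in (1, 2, 5, 10):
    print(n, round(effective_output(n), 2))
```

With these made-up constants, two people yield the 1.7 productive staff-years of Ed's example, but five yield only 2.0 and ten yield nothing at all; the particular constant is arguable, but the quadratic term is the point.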

 

Yet another factor is incoherence of vision.  You have probably seen the photo of a bridge that didn't meet correctly in the middle.  It's reminiscent of Escher drawings, like the Escher dice photo below:

 

Spending staff time in coordination meetings only gets the coordination started.  Far too much of what is communicated gets misconstrued or misinterpreted.  The loudest, most aggressive participants drive the use of time and concepts, not the participants who actually understand the problem and may be debating a solution. 

 

On most large software projects, only a few of the staff perform most of the effective programming work, while the others just cost money and schedule. 

 

EB: It is easy to identify projects which cannot be done by one person at all, and other projects that can be done by one person, but not in a reasonable amount of time. 

 

RC: Wild disagreement here; it is not so easy to distinguish software projects that will take enormous amounts of time and money from those that won't.  Many companies bidding on software contracts lose their shirts because they give a fixed price bid that is way too low.   

 

EB: Consider, for example,  the erection of a skyscraper, or a bridge. 

 

RC: Even skyscrapers go over budget and schedule, but at least in that technology there are hundreds of years of shared construction experience among architects and engineers to set up the construction process in a much more orderly way than for software projects. 

 

EB: Humans initially formed communities for mutual support and protection.  As Jared Diamond put it, 10 ill-nourished farmers can still beat one sturdy hunter/gatherer.

 

RC: Yes, but one sturdy hunter/gatherer has spears and arrows, and can run faster than the 10 ill-nourished farmers.  Is there a point here I am missing?

 

EB: Argument (2) has merit, but “conceptual breakthroughs” don’t advance civilization or the creation of health, wealth or happiness, unless they are communicated to others.  It is the dispersal of knowledge, so that it can be reused, that creates the major advances. 

 

RC: I would rephrase that to "it is the dispersal of knowledge, so that other people can use it, and other people can build yet more knowledge upon it, that creates economic advances." 

 

EB: It is now commonly believed (although evidence is still lacking) that the “great leap forward” in human technology around 40,000 years ago was contemporaneous with some major advance in the ability of the human species to communicate.

 

RC: Ed, do you have a reference or two to material about that leap and the communication advance related to it?  That sounds like a subject worth looking into. 

 

EB: In sum, communication before the fact is very important to some developments; communication after the fact is very important to the success of others.

 

-Ed

 

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Monday, May 25, 2015 6:26 PM
To: '[ontolog-forum] '
Subject: [ontolog-forum] Architecture of Intelligent Systems - Flexible Modular Framework

 

Dear John,

 

In your "Architecture of Intelligent Systems" paper at:

http://www.jfsowa.com/pubs/arch.pdf

on the first page, you write:

 

JFS:> People communicate with each other in sentences that incorporate two kinds of information: propositions about some subject, and metalevel speech acts that specify how the propositional information is used—as an assertion, a command, a question, or a promise. By means of speech acts, a group of people who have different areas of expertise can cooperate and dynamically reconfigure their social interactions to perform tasks and solve problems that would be difficult or impossible for any single individual.

 

The goal is laudable, but I have two questions:

 

Point 1.  First, your phrase "a group of people ... can ... solve problems that would be difficult or impossible for any single individual" strikes me as true only in a very, very limited, quantitative sense - "twice the work takes four times as many people" - due to the losses in efficiency and coherency when any two or more people discuss the issues. 

 

In the qualitative sense, remember Brooks's law from the seventies: "Adding manpower to a late software project makes it later." 

 

Point 2. Conceptual breakthroughs historically have come from a single mind, which integrates prior knowledge related to the issues needing decisions and posits a different answer that, unlike so many others, turns out to actually work. 

 

That is why we attribute breakthroughs to Newton, Einstein, Turing, Kim Il Sung{:-|}, and other individuals instead of the local groups that bore them, fed them, raised them, educated and counseled them, and generally helped them reach the pinnacles of self-esteem, so they could stand out from prior history with their newfangled concepts.

 

In my dissertation (published back in the seventies), I showed how those newfangled microprocessor chips could be put into a crowd of hundreds or thousands of other cpus, with tiny packet buffers between each successive pair in a line, all working in a pipeline of packet-buffered computation.  There was a two-page summary published in IEEE Trans Computers back in 1977 (plus or minus a year) as "The Distributed Pipeline".  I also have an old pdf of the scanned pages I can send if anyone is interested.  I kinda anticipated the use of internet protocols to get lots of things done in parallel.  Back then, a lot of people were working on the problem of faster computing.
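For anyone curious what the packet-buffered pipeline scheme looks like in modern terms, here is a minimal Python sketch (an illustration of the general idea only, not the 1977 design; the stage functions, buffer sizes, and threading details are my assumptions).  Each stage is its own worker with a small bounded buffer to the next stage, so all stages compute concurrently on successive packets:

```python
import queue
import threading

# Minimal sketch of a packet-buffered pipeline: one worker per stage,
# with a small bounded queue (the "packet buffer") between each
# successive pair of stages.
SENTINEL = None  # marks end of the packet stream

def stage(fn, inbox, outbox):
    while True:
        packet = inbox.get()
        if packet is SENTINEL:
            outbox.put(SENTINEL)   # propagate shutdown downstream
            return
        outbox.put(fn(packet))

def run_pipeline(fns, packets, buffer_size=2):
    # One queue per inter-stage link, plus the input and output ends.
    qs = [queue.Queue(maxsize=buffer_size) for _ in range(len(fns) + 1)]
    workers = [threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(fns)]
    for w in workers:
        w.start()

    def feed():  # separate feeder so bounded buffers can't deadlock us
        for p in packets:
            qs[0].put(p)
        qs[0].put(SENTINEL)

    feeder = threading.Thread(target=feed)
    feeder.start()
    out = []
    while (r := qs[-1].get()) is not SENTINEL:
        out.append(r)
    feeder.join()
    for w in workers:
        w.join()
    return out

# e.g. three stages applied in order to each packet:
print(run_pipeline([lambda x: x + 1, lambda x: x * 2, lambda x: x - 3],
                   range(5)))
```

Because each queue is FIFO and each stage has a single worker, packet order is preserved while all three stages overlap in time, which is the essence of the pipeline idea.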

 

However, the loss of efficiency in any multicomputer architecture at the time was atrocious.  I remember that Hughes Aircraft made a crossbar system for about four cpus and eight memory banks, one rack drawer each.  The crossbar cost 2.5 times as much as a cpu! 

 

But a pipeline-oriented sequential path method worked much better.  I got efficiencies around 70% using then-current TI 9900 chips (simulated) in benchmarks like the FFT, with 300 processors working on the same problem! 
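In standard terms (efficiency = speedup / number of processors; the 70% and 300 are the figures quoted above, everything else here is just the arithmetic):

```python
# Standard parallel-performance definitions applied to the quoted
# figures: 70% efficiency on 300 cpus is an effective 210x speedup.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cpus):
    return speedup(t_serial, t_parallel) / n_cpus

n, eff = 300, 0.70
print(eff * n)   # effective speedup over one cpu
```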

 

At 70% efficiency with 300 cpus on challenging benchmarks like that, it would even be a nice architecture today, with internet connections, usb, parallel port-to-port, or backplane connections among the boards. 

 

Of course, no one uses general purpose cpus if they are in a hurry.  Today's technology suggests, at the highest performance end, that each cpu be physically close to the others it connects with, with extremely low power and heat, and with lots of flash memory and ram.  But there are functions that are still best done with interfaced special purpose chips such as graphics processors, database inverted-file processors, even tiny interpreters for java or lisp. 

 

But the software problem has not been solved yet, for any architecture.  The Macromodules project at WUSTL in the seventies is no more.  My own Micromodules project, which used cpus along with interfaced special purpose function circuits, never got the funding required to make it work, so maybe I was the only one who thought it could work.  Later, my Reusable Software R&D project showed ways to package software for functional reuse (that was before object-oriented software became the next paradigm).

 

I like your Flexible Modular Framework article for that reason especially, but do you expect it to really work in practice?  Has it worked in any realistic cases?  It would be good to have a post from you on how that has worked out in the years since you posted that web page. 

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
