Dear John,
In your "Architecture of Intelligent Systems"
paper at:
http://www.jfsowa.com/pubs/arch.pdf
on the first page, you write:
JFS:> People communicate with
each other in sentences that incorporate two kinds of information: propositions
about some subject, and metalevel speech acts that specify how the
propositional information is used—as an assertion, a command, a question,
or a promise. By means of speech acts, a group of people who have different
areas of expertise can cooperate and dynamically reconfigure their social
interactions to perform tasks and solve problems that would be difficult or
impossible for any single individual.
The goal is laudable, but I have two questions:
Point 1. First, your phrase "a group of
people ... can ... solve problems that would be difficult or impossible for any
single individual" strikes me as true only in a very limited,
quantitative sense - "twice the work takes four times as many
people" - because of the losses in efficiency and coherence whenever
two or more people must discuss the issues.
In the qualitative sense, remember Brooks's law from the seventies:
"Adding manpower to a late software project makes it later."
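(To make that quantitative intuition concrete, here is a minimal
sketch - not from your paper - of the standard communication-channel
count behind Brooks's argument: pairwise coordination paths grow
quadratically with group size.)

```python
# Pairwise communication channels in a group of n people: n*(n-1)/2.
# This quadratic growth is the usual quantitative argument for the
# coordination losses mentioned above.
def channels(n: int) -> int:
    """Distinct pairwise communication paths among n people."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n:>2} people -> {channels(n):>3} channels")
```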
Point 2. Conceptual breakthroughs have historically come
from a single mind, which integrates the prior knowledge bearing on
the relevant issues and posits a different answer that (unlike most
such answers) turns out to actually work.
That is why we attribute breakthroughs to Newton,
Einstein, Turing, Kim Il Sung{:-|}, and other individuals instead of the
local groups that bore them, fed them, raised them, educated and counseled
them, and generally helped them reach the pinnacles of self-esteem from
which they could stand out from prior history with their newfangled concepts.
In my dissertation (published back in the seventies),
I showed how those newfangled microprocessor chips could be combined into a
crowd of hundreds or thousands of other CPUs, with tiny packet buffers between
each successive pair in a line, all working as a pipeline of packet-buffered
computation. A two-page summary was published in IEEE Trans
Computers back in 1977 (plus or minus a year) as "The Distributed
Pipeline". I also have an old PDF of the scanned pages I can
send if anyone is interested, for some reason. I kinda anticipated the
use of internet protocols to get lots of things done in parallel. Back
then, a lot of people were working on the problem of faster computing.
However, the loss of efficiency in any multicomputer
architecture of the time was atrocious. I remember that Hughes Aircraft
built a crossbar system for about four CPUs and eight memory banks, one rack
drawer each. The crossbar alone cost 2.5 times as much as a CPU!
But a pipeline-oriented sequential-path method worked
much better. I got efficiencies around 70% using then-timely TI 9900 chips
(simulated) on benchmarks like the FFT, with 300 processors working on the same
problem!
At 70% efficiency with 300 CPUs on challenging benchmarks
like that, it would still be a nice architecture today, with internet
connections, USB, parallel port-to-port links, or backplane connections among
the boards.
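(For concreteness, here is a back-of-the-envelope sketch of what those
numbers imply, assuming the usual definition of parallel efficiency as
speedup divided by processor count; the figures are the ones quoted
above.)

```python
# Parallel speedup implied by an efficiency figure, using the usual
# definition: efficiency = speedup / num_cpus.
def speedup(num_cpus: int, efficiency: float) -> float:
    return efficiency * num_cpus

# 300 simulated TI 9900s at 70% efficiency on the FFT benchmark
# deliver roughly 210 single-CPU equivalents of throughput.
print(speedup(300, 0.70))
```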
Of course, no one uses general-purpose CPUs if they are
in a hurry. Today's technology suggests, at the highest-performance end,
that each CPU be physically close to the others it connects with, with
extremely low power and heat, and with lots of flash memory and RAM. But
there are functions that are still best done with interfaced special-purpose
chips such as graphics processors, database inverted-file processors, even
tiny interpreters for Java or Lisp.
But the software problem has not been solved yet, for any
architecture. The Macromodules project at WUSTL in the
seventies is no more. My own Micromodules project, which combined
CPUs with interfaced special-purpose function circuits, never got the
funding required to make it work, so maybe I was the only one who thought it
could. Later, my Reusable Software R&D project
showed ways to package software for functional reuse (that was before
object-oriented software became the next paradigm).
I especially like your Flexible Modular Framework article
for that reason, but do you expect it to really work in
practice? Has it worked in any realistic cases? It would be good to
have a post from you on how that has worked out in the years since you posted
that web page.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com