Re: Individual Cortical Column Distributions
This paper is written for the three
people who have vast experience in EEG and neuroanatomy related to electrical
foci in the brain. You can struggle through it if you have that kind of
background, but I found it so dense that the summary contains the only sentence
I thought was quotable, and valuable:
Our
current working hypothesis can be thus summarized: individuals maintain an idiosyncratic
distribution of electrically active neocortical association areas,
regardless of particular conscious activities performed,...
Here is the URL if you are
motivated enough:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0128343
IMHO, this work is related to
Jeff Hawkins' rendition of neurology and neuroscience as a topographical
two-dimensional surface on the neocortex.
If there are MDs on the list who
are lurking, it would be nice to get a more civilian-language explanation of
this paper. But it seems to indicate that signal
averaging over neocortex maps of many individuals actually loses the
signals as individual expressions on neocortical mappings. That
implies to me that locating lesions using EEGs may be biased by the averaging
used in conventional EEG analysis for epilepsy, for example.
Are any MDs available to comment?
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT
EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Edward
Barkmeyer
Sent: Tuesday, May 26, 2015 3:08 PM
To: [ontolog-forum]
Subject: Re: [ontolog-forum] Architecture of Intelligent Systems -
Flexible Modular Framework
Rich,
Of course, if a project can be
done by one highly competent software engineer in 6 months, it is not cost
effective, or even productive, to use a team. But if the project will
require 20 man years to address all 71 required features, you can’t
wait 20 years for one engineer to do it all. It is necessary to build a
team to accomplish it, and even more necessary to retain members of that team
to maintain it.
And yes, if you are to deliver
the product in a year, you will need 25 staff to produce the 20 staff years of
productive software development that realizes the product. But this is
also why major construction jobs have foremen and site engineers in addition to
the productive workers who actually fabricate the edifice. Teams consume
some labor as overhead, but that overhead is necessary to the effective
cooperation of the team.
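The 25-staff-for-20-staff-years figure above can be written as a one-line staffing model. This is only a back-of-envelope sketch; the function name and the 20% overhead fraction are my assumptions, chosen to reproduce the numbers in the paragraph.

```python
# Back-of-envelope staffing model: to deliver P productive staff-years
# in D calendar years, when a fraction f of each person's time goes to
# coordination overhead, you need P / (D * (1 - f)) people.
# f = 0.2 below is an illustrative assumption matching the
# 20-staff-year / 25-person example in the text.

def staff_needed(productive_years: float, duration_years: float,
                 overhead_fraction: float) -> float:
    """People required so that their non-overhead time covers the work."""
    return productive_years / (duration_years * (1.0 - overhead_fraction))

if __name__ == "__main__":
    print(staff_needed(20, 1, 0.2))  # 25.0
```

The model also shows why overhead bites harder on short schedules: the same 20 staff-years spread over two years needs only 12.5 people at the same overhead fraction.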
Schedules and delays are about
realistic estimation of team performance and the complexity of the problem. The
coordination and management overhead must be figured into the staffing for the
project. And in construction and manufacturing tasks, one has to allow
for statistical distribution of other failures in the system – supplies,
equipment, lost manpower, etc. Getting this right is hard, and producing
a competitive bid while maintaining a realistic estimate is even harder.
Most of the scheduling problems we see are a consequence of unrealistic
estimates or truly incompetent staff (and the perennial “low bid”
selection that risks that), but some are consequences of statistically unlikely
events or necessary but unsafe assumptions about on-time availability of
certain resources. But this is all MBA stuff. In any case, I don’t
see how schedules are related to communication.
Other points:
EB:
Humans initially formed communities for mutual support and protection. As
Jared Diamond put it, 10 ill-nourished farmers can still beat one sturdy
hunter/gatherer.
RC: Yes, but one sturdy
hunter/gatherer has spears and arrows, and can run faster than the 10
ill-nourished farmers. Is there a point here I am missing?
The point was that cooperation
and communication enable the farmers to win a battle with individually superior
opponents. That is why the community was formed.
EB:
Argument (2) has merit, but “conceptual breakthroughs” don’t
advance civilization or the creation of health, wealth or happiness, unless
they are communicated to others. It is the dispersal of knowledge, so that
it can be reused, that creates the major advances.
RC: I would rephrase that to
"it is the dispersal of knowledge, so that other people can use it, and
other people can build yet more knowledge upon it, that creates economic
advances."
I suppose that depends on your
thinking that building more knowledge on it is a different kind of “major
advance” from “economic advances”. I disagree that
“advancing civilization or the creation of health, wealth or happiness”
is necessarily an “economic” advance, but I certainly don’t
want to have that debate on this exploder.
-Ed
From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Tuesday, May 26, 2015 5:24 PM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Architecture of Intelligent Systems -
Flexible Modular Framework
Ed, thanks for your
inputs! My comments are interleaved below,
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT
EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com
EB:
Your argument (1) is largely irrelevant. Yes, the cooperation of multiple
persons in accomplishing a task creates overheads: When two competent
people do a job, you get 1.7 staff years of production and 0.3 staff years of
coordination, maybe, but you still get more than one staff year.
RC: But the cost and schedule in
software development climb from the first staff member forward. Remember that
the 0.3 staff years gets added to the schedule, and there is no allocation in
your rough model for the cost of delayed schedules. It is not only the
direct scaling of overhead, but the scaling of overheads on top of those
overheads.
Yet another factor is
incoherence of vision. You have probably seen the photo of a bridge that
didn't meet correctly in the middle. It's reminiscent of Escher drawings,
like the Escher dice photo below:
[image: Escher dice photo not included]
Spending staff time in
coordination meetings only gets the coordination started. Far too much of
the coordination ends up misconstrued or misinterpreted. The
loudest, most aggressive participants drive the use of time and concepts, not
the participants who understand the problem and may actually be debating a
solution.
On most large software projects,
only a few of the staff perform most of the effective programming work, while
the others just cost money and schedule.
EB: It
is easy to identify projects which cannot be done by one person at all, and
other projects that can be done by one person, but not in a reasonable amount
of time.
RC: Wild disagreement here; it
is not so easy to distinguish software projects that will take enormous amounts
of time and money from those that won't. Many companies bidding on
software contracts lose their shirts because they give a fixed price bid that
is way too low.
EB:
Consider, for example, the erection of a skyscraper, or a bridge.
RC: Even skyscrapers go over
budget and schedule, but at least in that technology there are hundreds of
years of shared construction experience among architects and engineers to set
up the construction process in a much more orderly way than for software
projects.
EB:
Humans initially formed communities for mutual support and protection. As
Jared Diamond put it, 10 ill-nourished farmers can still beat one sturdy
hunter/gatherer.
RC: Yes, but one sturdy
hunter/gatherer has spears and arrows, and can run faster than the 10
ill-nourished farmers. Is there a point here I am missing?
EB:
Argument (2) has merit, but “conceptual breakthroughs” don’t
advance civilization or the creation of health, wealth or happiness, unless
they are communicated to others. It is the dispersal of knowledge, so
that it can be reused, that creates the major advances.
RC: I would rephrase that to
"it is the dispersal of knowledge, so that other people can use it, and
other people can build yet more knowledge upon it, that creates economic
advances."
EB: It
is now commonly believed (although evidence is still lacking) that the
“great leap forward” in human technology around 40,000 years ago
was contemporaneous with some major advance in the ability of the human species
to communicate.
RC: Ed, do you have any
references to material about that leap and the communication advance related to it?
That sounds like a subject worth looking at.
EB: In
sum, communication before the fact is very important to some developments;
communication after the fact is very important to the success of others.
-Ed
Dear John,
In your "Architecture of Intelligent Systems"
paper at:
http://www.jfsowa.com/pubs/arch.pdf
on the first page, you write:
JFS:> People communicate with
each other in sentences that incorporate two kinds of information: propositions
about some subject, and metalevel speech acts that specify how the propositional
information is used—as an assertion, a command, a question, or a promise.
By means of speech acts, a group of people who have different areas of
expertise can cooperate and dynamically reconfigure their social interactions
to perform tasks and solve problems that would be difficult or impossible for
any single individual.
The goal is laudable, but I have two questions:
Point 1. First, your phrase "a group of
people ... can ... solve problems that would be difficult or impossible for any
single individual" strikes me as true only in a very, very
limited, quantitative sense - "twice the work takes four times as
many people" - due to the losses in efficiency and coherency when
any two or more people discuss the issues.
In the qualitative sense, remember Brooks's Law from the seventies:
"Adding manpower to a late software project makes it
later".
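The qualitative point has a well-known quantitative face: with n people on a project there are n(n-1)/2 pairwise communication channels, so coordination load grows roughly with the square of the team size, not linearly. A minimal sketch of that arithmetic (the function name is mine):

```python
# Pairwise communication channels on a team of n people: n*(n-1)/2.
# This is the standard combinatorial observation behind Brooks's Law;
# the sample team sizes below are illustrative.

def channels(n: int) -> int:
    """Number of distinct person-to-person communication paths."""
    return n * (n - 1) // 2

if __name__ == "__main__":
    for n in (2, 5, 10, 25):
        print(n, channels(n))
    # doubling the team from 5 to 10 jumps the channels from 10 to 45
```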
Point 2. Conceptual breakthroughs historically have come
from a single mind, which integrates prior knowledge related to
the relevant issues needing decisions, and posits a different answer that
(unlike most such answers) turns out to actually work.
That is why we attribute breakthroughs to Newton,
Einstein, Turing, Kim Il Sung{:-|}, and other individuals instead of their
local groups, which bore them, fed them, raised them, educated and counseled
them, and generally helped them get to the pinnacles of self esteem, so they
could stand out from the prior history, with their newfangled concepts.
In my dissertation (published back in the
seventies), I showed how those newfangled microprocessor chips could be put
into a crowd of hundreds or thousands of other cpus, with tiny packet buffers
between each successive pair in a line, all working in a pipeline of
packet-buffered computation. There was a two-page summary published in IEEE
Trans. Computers back in 1977 (plus or minus a year) as "The
Distributed Pipeline". I also have an old PDF of the scanned
pages I can send if anyone is interested for some reason. I kinda
anticipated the use of internet protocols to get lots of things done in
parallel. Back then, a lot of people were working on the issue of faster
computing.
However, the loss of efficiency in any multicomputer
architecture at the time was atrocious. I remember that Hughes Aircraft
made a crossbar system for about four cpus and eight memory banks, one rack
drawer each. The crossbar cost 2.5 times as much as a cpu!
But a pipeline-oriented sequential-path method worked
much better. I got efficiencies around 70% using then-current TI 9900 chips
(simulated) in benchmarks like the FFT with 300 processors working on the same
problem!
At 70% efficiency on 300 cpus with challenging benchmarks
like that, it would even be a nice architecture today, with internet
connections, USB, parallel port-to-port, or backplane connections among
the boards.
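The arithmetic behind those figures is just the definition of parallel efficiency: speedup equals efficiency times processor count, so 70% on 300 cpus delivers roughly 210 cpus' worth of throughput. A minimal sketch, with the 0.70 and 300 taken from the benchmark claim above and everything else mine:

```python
# Parallel-efficiency arithmetic: S = E * p, where E is efficiency
# (fraction of ideal linear speedup achieved) and p is processor count.
# The 0.70 / 300 values are the pipeline benchmark figures quoted above.

def speedup(efficiency: float, processors: int) -> float:
    """Effective speedup over one processor at the given efficiency."""
    return efficiency * processors

if __name__ == "__main__":
    print(speedup(0.70, 300))  # roughly 210 cpus' worth of throughput
```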
Of course, no one uses general-purpose cpus if they are
in a hurry. Today's technology suggests, at the highest-performance end,
that each cpu be physically close to the others it connects with, with extremely
low power and heat and with lots of flash memory and RAM. But there are
functions that can still best be done with interfaced special-purpose chips
such as graphics processors, database inverted-file processors, even tiny
interpreters for Java or Lisp.
But the software problem has not been solved yet, for any
architecture. The Macromodules project at WUSTL in the
seventies is no more. My own Micromodules project, using
cpus along with interfaced special-purpose function circuits, never got the funding
required to make it work, so maybe I was the only one who thought it could
work. Later, my Reusable Software R&D project showed
ways to package software for functional reuse (that was before object-oriented
software became the next paradigm).
I like your Flexible Modular Framework
article for that reason especially, but do you expect it to really work in
practice? Has it worked in any realistic cases? It would be good to
have a post from you on how that has worked out in the years since you posted
that web page.
Sincerely,
Rich Cooper,
Chief Technology Officer,
MetaSemantics Corporation
MetaSemantics AT EnglishLogicKernel DOT com
( 9 4 9 ) 5 2 5-5 7 1 2
http://www.EnglishLogicKernel.com