It seems to me that we DO maintain our
machines, mostly without the help of heavily constructed, detailed documentation.
Back in the seventies and eighties, the
government insisted on full documentation for software development, and the DoD defined the
2167 family of standards (DOD-STD-2167). Productivity on full-documentation projects
was running at around 60 LOC per person-month at that time. With the better
word processors of today, the rate might be somewhat higher, perhaps 75 LOC.
But small R&D groups developing in
teams of 2 to 5 people can regularly turn out 2,000 to 10,000 LOC with minimal
documentation to demonstrate the technical viability of a concept. So the question
was raised of how valuable it is to fully document, test, qualify, and
validate a software project compared to the cost of doing so.
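To make the trade-off concrete, here is a back-of-the-envelope comparison using the rates quoted above. This is a hypothetical sketch: it assumes the small-team figure of 2,000 LOC is per person-month, which the text does not state explicitly.

```python
# Rough effort comparison using the productivity rates quoted above.
# All figures are illustrative assumptions, not measured data.

project_size = 10_000        # lines of code

full_doc_rate = 75           # LOC per person-month, full-documentation process
small_team_rate = 2_000      # LOC per person-month, minimal-doc R&D team (assumed)

full_doc_effort = project_size / full_doc_rate      # person-months
small_team_effort = project_size / small_team_rate  # person-months

print(f"Full-doc effort:   {full_doc_effort:.1f} person-months")
print(f"Small-team effort: {small_team_effort:.1f} person-months")
print(f"Ratio:             {full_doc_effort / small_team_effort:.0f}x")
```

Even if the assumed rates are off by a factor of two, the gap between the two processes remains more than an order of magnitude, which is what makes the question worth raising.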
Best current practice is to use deep
documentation only when you have a mission-critical application that really requires that level of rigor.
Then there is the problem that the maintenance
process often renders documents of questionable use, because new material is
being added: new concepts, new situations that cause undesirable behavior. Even
behaviors that were perfectly specified and implemented change faster than the
docs can be updated. That is why so many major software projects “wear out”
and have to be completely replaced with new developments. Maintenance is a
very expensive process, justified only in certain circumstances, and even then
it requires a large-company development perspective, not the small-company
technology development perspective that produced the original concept.
Randy Jensen studied actual
development cost and schedule data and was able to formulate the kind of
mathematical model for estimating software development that made his estimates better
than those of the more famous, better-published authors of such studies. His
SEER concept was implemented in a program at the time, though I haven’t kept up with
the details since about 1990.
Jensen’s one-line summary of good software
development management was to treat the process as similar to “crowd
control”: divide the design into small components, separate the development
teams for every detail not required to interface with other components, and
try to recreate the high productivity of small teams by keeping each
component’s interface spec small relative to its implementation.
But management was only able to achieve some
small degree of improvement, mainly because the concepts in the component
interfaces were very difficult to convey, unambiguously, to both component
developers and component users.
He used to say that the main cost of
software development was communication: getting every programmer concerned
with each interface symbol to understand the meaning of that symbol as presented,
designed, implemented, and tested in the component, and later as presented,
designed, implemented, and tested in the system assembly that incorporated each
component. It was that limit (communications), according to Jensen, that
caused large groups of programmers to be so vastly less productive than small
groups. Ultimately, his conclusion was that there could be NO basic
improvement, because large projects simply run into the human limits of
conceptual organization and sharing about the precise meaning of the component
interfaces. Toolware helps a lot, but it is ultimately the programmer’s
limited brain (seven plus or minus two chunks) that establishes the cost and schedule limits.
Rich AT EnglishLogicKernel DOT com
9 4 9 \ 5 2 5 - 5 7 1 2
[mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of David Eddy
Sent: Saturday, September 25, 2010
Subject: Re: [ontolog-forum] semantic analysis was do not trust quantifiers
On Sep 25, 2010, at 12:55 PM, Rich Cooper wrote:
The real issue, from my
viewpoint, is just how UNNATURAL the language can be before users turn
back to hiring computer operators instead of doing their work firsthand on the
available computers and software. My experience with UNNATURAL languages
is that they don’t function as normally advertised.
What I'm beating on here is that the language used inside software
applications—the systems that ensure milk gets delivered to my supermarket
& that my checking account is properly balanced by the bank—the language
used is U-G-L-Y.
Natural language is the stuff you read in the NYTimes... it's been
explicitly written to be read by another human, PLUS edited by a professional
editor for readability. The software that runs invisibly in the
background of our lives is written to make a computer do something. It is
mostly not written for readability.
When another human comes along & needs to read that software, because
there are effectively no rules for such language (other than the technical
restrictions on length & separators between terms), it can be a slow &
error-prone process to understand what cryptic abbreviations mean.
IF there were a mechanism available to help the analyst/programmer to
quickly understand that at line 1503 in program LCCIIL02 "MIT" means
male impotence test (rather than Massachusetts Institute of Technology) this
would be socially useful.
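One minimal form such a mechanism could take is a context-sensitive glossary, keyed by program and abbreviation. This is a hypothetical sketch: the program name LCCIIL02 and the two "MIT" readings come from the text above, while the second program name (CAMPUSDB) and the lookup function are invented for illustration.

```python
# Hypothetical sketch of a context-sensitive abbreviation glossary.
# The same abbreviation can mean different things in different programs.

glossary = {
    # (program, abbreviation) -> meaning in that program's context
    ("LCCIIL02", "MIT"): "male impotence test",
    ("CAMPUSDB", "MIT"): "Massachusetts Institute of Technology",  # invented example
}

def resolve(program: str, abbrev: str) -> str:
    """Look up what an abbreviation means inside a given program."""
    return glossary.get((program, abbrev), f"unknown: {abbrev}")

print(resolve("LCCIIL02", "MIT"))  # the medical sense, not the university
```

A real tool would of course need line-level scoping and a way to populate the glossary in the first place; the point of the sketch is only that the lookup itself is trivial once the context-to-meaning mapping exists.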
To repeat: E. M. Forster's "The Machine Stops"... if we cannot
maintain the "machines" (e.g. software applications) that support our
society, this is not good.