I think you miss my point and make my case. :-) I would certainly not direct an engineer into one of these methods - but rather eliminate their concern for the issue. If you call out any particular method for parallel decomposition or "message passing" you will preoccupy the engineer with performance semantics.
Better to eliminate the ideas entirely from high level languages designed to solve real problems since they are not necessary and contribute nothing algorithmically. Ideally, programmers need not be concerned about mapping their solution to a machine.
But to solve this problem well requires a combined and coordinated effort between software and hardware layers. The solution requires that the two be designed as one with the long term in mind.
On Feb 24, 2013, at 5:01 PM, "Rich Cooper" <rich@xxxxxxxxxxxxxxxxxxxxxx> wrote:
It is not a question of limiting thought but more to do with directing the engineer toward one solution over another, often complicating their thoughts and behaviors with unnecessary concerns, and distracting the engineer from the task at hand.
For example, in the case of parallel programming the engineering preoccupation becomes data distribution. Speak to a "parallel programmer" and this is what they will tell you about before everything else, whereas it would be more desirable and more productive for all concerned if their preoccupation were, in fact, the problem at hand.
I disagree. The methods for parallelizing computation are diverse, and directing the engineer into one of those methods is counterproductive. Each engineer has a conception of how to incorporate parallelism, and data distribution is just one such way.
In my dissertation, I showed a method for organizing the computation sequence into chunks so that each chunk could be performed on any one of N computers, and the calculated output of that chunk and function becomes another chunk to be input to any one of those N computers. The system self-balances: each computer in a string has as its first obligation to pass the current chunk on to the next computer if and only if that next computer is not busy. That is, each processor keeps the next processor busy before continuing its own calculation. There was a paper published in the IEEE Transactions on Electronic Computers (September, 1977 I think, or thereabouts) based on the same dissertation.
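The pass-on-if-idle rule described above can be illustrated with a small simulation. This is only a hypothetical sketch of the scheme as summarized in this thread, not code from the dissertation or the IEEE paper; the names (Worker, submit, tick) and the cost model are illustrative assumptions.

```python
# Hypothetical sketch of the self-balancing chain: each worker's first
# obligation is to hand an incoming chunk to the next worker when that
# worker is idle, and to keep the chunk for itself only otherwise.

class Worker:
    def __init__(self, name):
        self.name = name
        self.successor = None   # next worker in the chain, if any
        self.backlog = []       # chunks accepted but not yet started
        self.remaining = 0      # simulated work units left on current chunk
        self.current = None
        self.done = []          # ids of chunks completed by this worker

    def idle(self):
        return self.remaining == 0 and not self.backlog

    def submit(self, chunk):
        # First obligation: keep the next processor busy.
        if self.successor is not None and self.successor.idle():
            self.successor.submit(chunk)
        else:
            self.backlog.append(chunk)

    def tick(self):
        # One unit of simulated computation.
        if self.remaining == 0 and self.backlog:
            self.current = self.backlog.pop(0)
            self.remaining = self.current["cost"]
        if self.remaining > 0:
            self.remaining -= 1
            if self.remaining == 0:
                self.done.append(self.current["id"])

# Build a chain of three workers.
chain = [Worker(f"w{i}") for i in range(3)]
for a, b in zip(chain, chain[1:]):
    a.successor = b

# Feed six chunks into the head of the chain, each costing two ticks;
# busy workers push work down the chain rather than queueing it.
for i in range(6):
    chain[0].submit({"id": i, "cost": 2})
    for w in chain:
        w.tick()

# Drain any unfinished work.
while any(w.remaining or w.backlog for w in chain):
    for w in chain:
        w.tick()
```

Under this feed pattern the load balances itself: each of the three workers ends up completing two of the six chunks, with no central scheduler involved.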
In other words, the preoccupation distracts engineers from their primary task: algorithmic design. And in fact parallelism itself contributes nothing at all to algorithmic design - both issues, parallel decomposition and data distribution, provide only performance semantics (a pragmatic).
The details are in:
In this book I tried to address this issue for parallel machines. There were not so many of these at the time but now, of course, they are pervasive. However, I should say that I am happier with my more recent ("Keen") proposals in this area.
I considered only general purpose programming languages, not languages for distinct domains. However, these issues are general and will still exist even if one or the other language were generally considered more suitable for a given application domain.
Again, I disagree. The issues are certainly widespread, but not truly general. The history of computer architecture shows many ways to solve concurrent computation problems that depend not on the language itself but on the processor architecture and interconnection method. Language is secondary in that it conforms to the computing architecture in which the system must run.
However, any language that addresses parallelism must also map efficiently onto some architecture. That is not a simple mapping, since multicomputer architectures are so diverse. With LANs now common, it has become standard practice to treat the parallel architecture of the LAN (or WAN) as the default architecture, but that assumption does not always hold.
On Feb 24, 2013, at 7:08 AM, Phil Murray <pcmurray2000@xxxxxxxxx> wrote:
> I doubt that a
> specific language limits what a good programmer thinks should be