I was incredibly lucky to have been funded to write StarLisp code for the original CM-1 machine. The CM-1 was a SIMD architecture; the later models were MIMD. Think of the physical layout as a 2D grid of processors with one edge used for I/O. That was a long time ago, so I may have the details wrong.
> As an ironic footnote, a friend who worked for Steve Jobs at NeXT told me the CM-1 was the inspiration for the form of his NeXT machine.
<https://tamikothiel.com/cm/tshirt/index.html>
Replying to myself here - I decided to just actually go read Wikipedia about this. Here's the answer:
> By default, when a processor is executing an instruction, its LED is on. In a SIMD program, the goal is to have as many processors as possible working the program at the same time – indicated by having all LEDs being steady on. Those unfamiliar with the use of the LEDs wanted to see the LEDs blink – or even spell out messages to visitors. The result is that finished programs often have superfluous operations to blink the LEDs.
As a developer you had explicit access to them, so you could use them for debugging. A lot of times, they were just running an RNG to look cool though.
There is no documentation of what the LEDs were _actually_ doing. There are descriptions, like 'Random and Pleasing is an LFSR', but no information that maps any of it to actual pixel coordinates over time. Nearly zero code.
I'm saying this because I need this information, and the fastest way to get information is to state that it's impossible or doesn't exist.
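In the meantime, if 'Random and Pleasing' really was an LFSR, the core of it would look something like the sketch below. This is just a generic maximal-length 16-bit Fibonacci LFSR, not the CM's actual tap positions or LED mapping (none of that appears to be documented anywhere I can find):

```c
#include <stdint.h>
#include <stdio.h>

/* Generic 16-bit Fibonacci LFSR (taps 16, 14, 13, 11).
 * A stand-in for whatever "random and pleasing" generator the CM panels
 * actually used; the real taps and the LED coordinate mapping are unknown. */
static uint16_t lfsr_step(uint16_t state)
{
    uint16_t bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1u;
    return (uint16_t)((state >> 1) | (bit << 15));
}

int main(void)
{
    uint16_t state = 0xACE1u;                 /* any nonzero seed works */
    for (int frame = 0; frame < 8; frame++) {
        /* pretend the 16 state bits drive one row of 16 LEDs */
        for (int i = 15; i >= 0; i--)
            putchar(((state >> i) & 1u) ? '*' : '.');
        putchar('\n');
        state = lfsr_step(state);
    }
    return 0;
}
```

With those taps the sequence runs for 65535 steps before repeating, which is plenty to look "random" to a visitor walking past the cabinet.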
Seems like the CM-1 and CM-2 showed CPU activity, so each light blinked when a CPU did something. Those were the ones designed by Tamiko Thiel.
Then the CM-5 had the option of displaying "artistic" or "random patterns", apparently designed or co-designed by Maya Lin. IIRC, the CM-5 is the one appearing in Jurassic Park.
I don't know if there is any firmware code or hardware design available to check how that function worked. Maybe the people at the Computer History Museum know something. They have the first CM-1 and at least one CM-5.
Check their library to see if maybe some of the technical docs say something:
Worked on the CM-1 and CM-2. I felt they were awfully buggy. At one point they asked if they could use my code to run as a diagnostic, because it would break the log() function on occasion.
Around the same time (1984), there was also another very cool piece of technology that often gets overlooked: the CMU WARP. It wasn't as flashy as the Crays and the Connection Machine, but it was the first systolic array accelerator (what we'd now call a TPU). It packed as many MFLOPS as a Cray-1.
It's also the computer that powered the Chevrolet Navlab self-driving car in 1986.
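For anyone who hasn't run into the term: a systolic array is a grid (the Warp was a linear array of cells, ten in the production machine IIRC) of simple multiply-accumulate units that pass data to their neighbors in lockstep. Here's a toy serial C simulation of the idea for y = A*x, with the input vector marching down a 1-D array of cells. It's only meant to illustrate the data movement, not the Warp's actual cell design or programming model:

```c
#include <stdio.h>

#define M 4   /* rows of A = number of cells */
#define N 4   /* columns of A = length of x  */

/* Toy 1-D systolic array computing y = A*x.
 * Each cell i keeps an accumulator for y[i]; the vector x shifts one cell
 * to the right per clock tick, so cell i sees x[j] at tick i + j and
 * multiplies it by A[i][j]. All cells fire in lockstep. */
int main(void)
{
    int A[M][N] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
    int x[N]    = {1, 1, 2, 3};
    int y[M]    = {0};            /* one accumulator per cell            */
    int pipe[M] = {0};            /* x value currently held by each cell */

    /* N ticks to feed x in, plus M-1 ticks for it to drain through. */
    for (int t = 0; t < N + M - 1; t++) {
        /* shift x one cell to the right */
        for (int i = M - 1; i > 0; i--) pipe[i] = pipe[i - 1];
        pipe[0] = (t < N) ? x[t] : 0;

        /* every cell does one multiply-accumulate */
        for (int i = 0; i < M; i++) {
            int j = t - i;        /* which element of x this cell is seeing */
            if (j >= 0 && j < N) y[i] += A[i][j] * pipe[i];
        }
    }

    for (int i = 0; i < M; i++) printf("y[%d] = %d\n", i, y[i]);
    return 0;
}
```

The point of the arrangement is that each element of x is fetched from memory once and then reused by every cell as it flows past, which is why these designs got so many FLOPS out of limited memory bandwidth.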
I'd be interested to hear what you thought of the programming architecture.
Excluding the bug side of things: if they did everything they were supposed to, how hard was it to get them to perform a task that distributed the work through the machine?
I read some stuff on, I forget, maybe *Lisp? I found it rather impenetrable.
On top of this, have there been any advances in software development in the subsequent years that would have been a good fit for the architecture?
I always thought it was an under-explored idea, having had to compete with architectures supported by a software environment that had much longer to develop.
I used them at the (US) Naval Research Laboratory, programming in a dialect of C called C*. This automatically distributed arrays among the many processors, similar to how modern Fortran can work with coarrays.
If the problem was very data-parallel, one could get nearly perfect linear speedups.
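To make that concrete, here is a rough sketch of the model C* gave you, written as plain serial C: the arrays are split across processors, every processor runs the same elementwise statement on its own slice, and reductions combine the per-processor results. (This is just the idea spelled out by hand; it is not actual C* syntax and not how the CM runtime was implemented.)

```c
#include <stdio.h>

#define N      16    /* total array elements                       */
#define NPROCS 4     /* pretend processors (the CM had thousands)  */

/* Hand-written imitation of the data-parallel model: in C* the split,
 * the per-processor loop, and the reduction were all done for you. */
int main(void)
{
    double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    double partial[NPROCS] = {0};

    /* "each processor" owns N/NPROCS consecutive elements */
    for (int p = 0; p < NPROCS; p++) {
        int lo = p * (N / NPROCS), hi = lo + (N / NPROCS);
        for (int i = lo; i < hi; i++) {
            c[i] = a[i] + b[i];      /* same statement on every element */
            partial[p] += c[i];      /* local contribution to a global sum */
        }
    }

    /* combine the partial sums; in C* a reduction like this was one line */
    double total = 0.0;
    for (int p = 0; p < NPROCS; p++) total += partial[p];
    printf("sum(c) = %g\n", total);
    return 0;
}
```

When the whole computation looks like that, the same statement over every element with only occasional reductions or neighbor communication, adding processors helps almost linearly, which is where those nearly perfect speedups came from.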
For fans of computing history and/or Feynman, this article about his time with, and contributions to, Thinking Machines and the Connection Machine is a great read!
https://longnow.org/ideas/richard-feynman-and-the-connection...