I'm not sure Occam is needed. The point of the GA144 (assuming it actually had an intended point) is that CPU resources (instruction throughput) are no longer precious at 700 * 144 MIPS. So you can dedicate a CPU to handling an SPI interface, or a UART, etc. They use three CPUs for the raw I/O on the external memory interface and may use more to implement the "server" that shares it with any CPU that wants access.
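To make the "one CPU per peripheral" idea concrete, here's a rough sketch of what a node dedicated to bit-banging an SPI master spends its life doing. This is Python standing in for node code, and the `set_clk`/`set_mosi` pin callbacks are hypothetical names of mine; on a real GA144 node these would be writes to an I/O register.

```python
def spi_send_byte(byte, set_clk, set_mosi):
    """Bit-bang one byte out an SPI master, MSB first (mode 0).

    set_clk/set_mosi are hypothetical pin-drive callbacks standing in
    for I/O register writes on a dedicated node.
    """
    for i in range(7, -1, -1):
        set_mosi((byte >> i) & 1)  # present the data bit while clock is low
        set_clk(1)                 # rising edge: slave samples MOSI
        set_clk(0)                 # falling edge: ready for the next bit

# Usage: capture the driven data bits instead of toggling real pins.
bits = []
spi_send_byte(0xA5, set_clk=lambda v: None, set_mosi=bits.append)
# 0xA5 = 0b10100101, shifted out MSB first
```

The point is that there is no peripheral hardware at all: the "SPI controller" is just a tight software loop, affordable because that node's cycles cost you almost nothing.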
Each processor has *very* limited memory, so a given function may have to be split across several CPUs just to get enough memory resources.
I think of the chip as a CPU analog to the FPGA, a Field Programmable Processor Array (FPPA) if you will. Build this chip in a 20 nm process and you could have something like 11,664 CPUs running at who knows what speed!
There are many limitations, and so far no fixed methodology has emerged for designing with a chip like this; everything has been seat-of-the-pants. I think they did a 10 Mbps Ethernet interface and a few other functions (I've yet to see even a 12 Mbps USB interface).
Programming in assembly is actually not hard if you are familiar with Forth at all. The instructions are few (5-bit opcodes) and there are only two or three addressing modes. In Forth, subroutines are "words" that have no formal parameters; everything is passed explicitly on the stack, managed by the programmer. Not really hard, just different.
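That stack discipline is easy to mimic in any language, which is the quickest way to get a feel for it. A sketch (Python standing in for Forth, word names mine; the `( before -- after )` comments are the usual Forth stack-effect notation):

```python
# Forth-style words: no formal parameters, operands live on an
# explicit data stack that the programmer manages by hand.
stack = []

def dup():      # ( n -- n n )
    stack.append(stack[-1])

def plus():     # ( a b -- a+b )
    b = stack.pop()
    a = stack.pop()
    stack.append(a + b)

def double():   # ( n -- 2n )  built by composing other words,
    dup()       # like a Forth "colon definition"
    plus()

stack.append(21)
double()
# top of stack is now 42
```

Nothing is declared or passed; each word just assumes its inputs are sitting on the stack and leaves its outputs there. Getting the stack bookkeeping right is the programmer's job, which is exactly the "different, not hard" part.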
The GA144 came about from Charles Moore applying his minimalist philosophy to CPU design. The CPU is designed to keep the implementation as simple as possible, yielding low power, high speed, and a small footprint while forgoing most of the complex features of typical CPUs. It could work if they had a large enough group of people working the software side. That is one of Chuck's limitations: he is a lone wolf who doesn't really care much whether anyone else uses his ideas.
Just in case I didn't get this across: the CPUs are essentially asynchronous, coordinated only by data-passing handshakes. I think I mentioned that, but it is an important feature.
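A rough software analogy for those handshakes: two threads talking through a one-slot blocking queue. This is only an approximation of the GA144's rendezvous ports (a real node suspends and draws almost no power while waiting for its partner), but it shows the key property, that the two sides advance only in lockstep with the data.

```python
import threading
import queue

# A one-slot queue stands in for the port between two adjacent nodes:
# the writer blocks when the slot is full, the reader blocks when it's
# empty, so neither "node" can run ahead of the data exchange.
port = queue.Queue(maxsize=1)
results = []

def producer():                      # node A: computes and sends values
    for n in range(5):
        port.put(n * n)              # blocks until the slot is free

def consumer():                      # node B: receives and stores values
    for _ in range(5):
        results.append(port.get())   # blocks until node A has sent

a = threading.Thread(target=producer)
b = threading.Thread(target=consumer)
a.start(); b.start()
a.join(); b.join()
# results == [0, 1, 4, 9, 16]
```

No clocks, no polling loops in the application logic: the blocking data transfer itself is the only synchronization, which is exactly how the GA144 nodes coordinate.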