Best bang-for-buck uC

I see the smiley. Is that a sign that you aren't making sense?

--

Rick C 

Viewed the eclipse at Wintercrest Farms, 
on the centerline of totality since 1998
Reply to
rickman

Tim Williams wrote on 8/25/2017 1:37 PM:

Guess I was thinking in the MCU rut. The GA144 is a powerful part: 144 CPUs at around 700 MIPS each. That's a lot of horsepower -- so much, in aggregate, that they intend for the user to dedicate an entire CPU to an I/O function like an SPI port.

Reply to
rickman

For embedded, stuff like that is very interesting. But AFAIK, it's still very niche, and therefore loses on the MIPS/$ front. (Well, maybe. ~100 GIPS sure is a lot. What's price and availability like?)

And it comes with all the troubles of multithreading: resource allocation and management (buses, pins, memory, cache), Amdahl's law (if you need to parallelize some algorithms to try and make better use of all the cores you aren't needing for IO), atomic operations / mutexes / locking, and impossible-to-track bugs.

One of those things that, done right, can offer amazing redundancy, reliability and scalability; but which is so painfully easy to do just a little bit wrong.

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Contract Design 
Website: http://seventransistorlabs.com
Reply to
Tim Williams

I am using a C compiler on that core now; nothing to complain about yet.

Reply to
Klaus Kragelund

What's wrong with it?

The EFM8BB is nice: it has comparators, so you can use it for a peak-current-mode SMPS with no external parts except the gate drive and the power components.

Cheers

Klaus

Reply to
Klaus Kragelund

I used to do serious stuff in 1K bytes, assembly on a 6803, but that was a hassle. Unless the million-piece price matters, something more reasonable, like an LPC1758 maybe, can do some serious signal crunching.

--

John Larkin   Highland Technology, Inc   trk 

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

We use a lot of LPC3250's, an ARM uP that has an external memory bus. We often glue it to an FPGA (or several), and a serial flash to boot the whole system. It has Ethernet and USB and all the usual stuff, including ADC and DAC, and a DRAM interface if the on-chip memory isn't enough.

It has hardware floating point, which is nice. We pay $6.17.

Reply to
John Larkin

I know zilch about Forth implementations from the last 30 years, but BITD they were limited to integer/fixed-point math. Whatzizname, who (back then) wrote "Starting FORTH" and "Thinking FORTH" (both of which I read carefully), tried to turn this into a virtue, IME completely unsuccessfully.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

Tim Williams wrote on 8/25/2017 5:33 PM:

I seem to recall $10 in quantity. A few thousand are in stock. But longevity is the question. This is a one-product company running on a shoestring; I have no idea how many of these they sell or how long the company will be around.

You have already blown the concept. You will not program 144 processors *anything* like you program a single fast processor. Think of them as 144 logic elements that can compute very fast. If you try to program it the same way you program a PC, you will lose all the capability, which you appear to understand.

Ok.

Reply to
rickman

You seem to spend a lot of time living in the past. Actually, most programmers don't understand floating point anyway and make exactly the errors they are trying to avoid by using it. Floating point is not a panacea; it also requires analysis to prevent math errors.

As to floating point and Forth, you *are* living in the past. There is nothing in the language that limits the use of floating point.

Reply to
rickman

Well, Forth is definitely in my past, anyway.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

The chip has two non-volatile memories: the instruction flash is 8 KB, which is 4K words (essentially 4096 instructions), and the data EEPROM is the same size as the RAM: 512 bytes.

The instruction memory can also store data in a byte-addressed mode, but access is harder, the pages are larger, and it withstands fewer write cycles.

--
This email has not been checked by half-arsed antivirus software
Reply to
Jasen Betts

I know how micros work. I was just asking how it can have 512 B of flash and 8K of program memory. The answer is that the 512 B wasn't flash; it was EEPROM.

Reply to
lonmkusch

It's probably better that way for everyone, since you can't seem to let go of things you learned 30 years ago. Forth definitely requires you to think a bit differently and be willing to learn new ideas.

Reply to
rickman

GPU cores are usually designed to be programmed in a high-level language like HLSL or GLSL. These are very restrictive subsets of C with some additional types and functions for vectors and matrices, and extremely fast hardware-accelerated floating-point math functions.

"Vertex shaders" and "pixel shaders" operate in parallel on each vertex and pixel of a video frame. There's no inter-core communication as far as I'm aware, so there's no way for the code operating on one pixel to "know" what the code operating on some other pixel is doing, at least not without going through the CPU or main memory, which is slow and sort of defeats the purpose.

It's more like writing VHDL than C, but you can make some lovely effects like this one (if you can view it it's being generated in real time by your computer's GPU by the code on the right):

Reply to
bitrex

You snipped the part that told you what I was talking about. The GA144 is not a GPU. It's an embedded CPU array: maximum power consumption is about a watt, but each processor uses essentially no power when it has nothing to do, so the actual draw is typically much less.

Reply to
rickman

bitrex wrote on 8/26/2017 12:04 AM:

Pretty cool video.

Reply to
rickman

Winding you up about Forth is just fun, is all.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Awfully good price. Well, I hope they pick up and become a thing -- it's clear that multicore approaches are the future; we just need the tools to automate them, compiling around the traditional issues where possible.

Well, the broad bullet points I hit are intrinsic to any real-time, resource-sharing system, whether it's PC multithreading or traffic in the street. I would find it quite surprising if they've solved all that.

By "logic elements", do you mean to imply that each core is self-contained, with a limited number of inputs and outputs, and some sort of programmable (not necessarily at run time) crossbar or interconnect bus between them? In other words: a CPLD, but instead of logic blocks, compute cores.

That would seem altogether far more difficult to organize and program than a shared bus system, though.

But that's an interesting insight, come to think of it: FPGA routing being analogous to resource management in multithreaded systems. Downside: the data inputs and outputs of a thread have to be known at compile time, which means the compiler needs to know a hell of a lot more about the function of that thread. Whereas a logic circuit is static, simply a set of inputs and outputs, very easily solved (relatively speaking..).

It is perhaps a solvable problem, to be able to determine, algorithmically, the optimal scheduling of threads, at compile time. But it'd most likely be either O(N!), or Omega(BusyBeaver(N))... But, somewhere between those extremes, and compromising for heuristics rather than algorithms, there may be a suitably limited, simplified subset of threaded computation, that is easily analyzed yet still powerful enough to be useful (and not so arcane that no one wants to program in it).

Tim

Reply to
Tim Williams

Tilera made processors with a concept like that:

They were bought out by EZchip, which was itself recently bought out by the Israeli networking hardware company Mellanox:

Anant Agarwal taught the "EE 101"-type courses in MIT's online MITx program.

Reply to
bitrex
