Do you see any future for 8-bit MCUs?

*What* "real world data"?

Do you need a 32b core in a mouse? Keyboard? Microwave oven? Controlling the power windows in your car? Running (i.e., *in*) your furnace or AC? As your "intelligent thermostat"? Scanning credit cards for sales authorizations?

We see all these "big" applications (iPhones, etc.) with *huge* volumes... and forget that their actual volumes are *tiny* when you think of all the other "non-glorious" things that are out there "just doing their jobs"...

Reply to
Don Y

I don't know about specific companies' programmers -- so far I've only used TI's. But I do know that the ARM JTAG programming interface is pretty generic. Your best bet for a programmer may be to get a 3rd-party one that's optimized for speed, or just get one of the cheapie Olimex ones that's optimized for price.

--
www.wescottdesign.com
Reply to
Tim Wescott

Huh? Where did I suggest 32-bit cores for anything?

As to the question "what real-world data", the answer is any unsigned count or quantity over 255, or any signed count or quantity outside -128..+127. Including addresses. I doubt you'll find much 8-bit code that doesn't have any such data. It's an open question how much of such data manipulation goes on across all 8-bit applications. All I know for sure is that I've seen plenty of operations on such data ever since the 8080 days.
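
Purely as illustration (the names here are mine, not from any particular codebase): at the C level the widening is invisible, and on an 8-bit core the compiler typically synthesizes the 16-bit arithmetic from an 8-bit add / add-with-carry pair.

    /* Totalizing a quantity that won't fit in 8 bits. */
    #include <stdint.h>

    uint16_t total = 0;          /* 0..65535: two bytes on an 8-bitter */

    void tally(uint8_t sample)
    {
        total += sample;         /* one C statement, ~2 ALU ops on 8 bits */
    }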

Reply to
KK6GM

LOL, remember that there are two 8049s shipped with every computer ever sold!!!

One is in the keyboard. The other is in the keyboard controller on the motherboard.

So for every one million computers shipped, there are two million 8049s shipped.

hamilton

Reply to
hamilton

The OP's post concerned 32b being "cheap enough" to make 8b cores obsolescent.

What sort of number crunching do you think goes on inside a microwave oven? Or, when you push the "open" button for your car window? Or when your furnace decides "Yes, the igniter *has* lit the gas so I can keep it flowing and don't have to retry the ignition sequence"? Or, when your *toaster* decides the toast is ready??

Do you think all n-bit processors have n-bit ALUs? Extending the precision of a computation is such a rudimentary operation that folks don't even *think* of it when deciding how complex an application is ("OK, I need to keep track of time... that will be 6 *digits* for HH:MM:SS, 2 more for day, 2 more for month and some number for year -- plus whatever I need for a prescaler"). With HLLs, you don't even *see* this.
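
A sketch of that digit-at-a-time timekeeping, assuming a hypothetical 100 Hz timer interrupt (all names are made up): no counter ever exceeds a byte, so the 8-bit ALU never has to extend precision.

    #include <stdint.h>

    static uint8_t prescale, sec, min, hour;   /* assumed 100 ticks/s */

    void tick_isr(void)                        /* hypothetical timer hook */
    {
        if (++prescale < 100) return;          /* prescaler */
        prescale = 0;
        if (++sec  < 60) return;
        sec  = 0;
        if (++min  < 60) return;
        min  = 0;
        if (++hour < 24) return;
        hour = 0;                              /* roll the date here */
    }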

I.e., there are very few applications that deal with SINGLE *CHARACTERS*, yet we have no problem running those "*string* applications" on 8, 16 or 32b processors. We just inherently know that the cost of operating on a string exceeds the cost of operating on a character (just like the cost of operating on a long exceeds the cost of operating on a short).

(I know of products that do everything in BCD. So, [0..9] is effectively their concept of "real world data")

It's surprising how few (percentage-wise) applications really *need* 32b power. Especially when you consider how fast instruction cycles have become (want to buy some 600ns 2KB EPROMs?). Work with something truly RISC-y (e.g., 8x300, SPARC, etc.) to get a feel for how easily you can trade cycles for complexity.
Reply to
Don Y

It's worse than that. There's typically an 8b MCU in your CD/DVD drive. Another in your mouse. Some systems have one that acts as the "system monitor". What's in your WiFi adapter? etc.

It's like *ants*. They seem so small and inconsequential that we ignore them. Oh, sure, we know that there are "a lot" of them. But, do we really know just how many??

IIRC, the cumulative **mass** of the "ants" exceeds that of "humans" on the planet by a HUGE margin!

Reply to
Don Y

Yes, I have the TI/LMI (based on FTDI). It would have been a cheaper solution for me. But eventually, Freescale won with the 16-bit A2D we needed.

Reply to
linnix

Optical and/or wireless mouse/keyboard.

Unicode support for a multi-language user interface.

J1939 protocol stack.

Multi-language user interface, connectivity, and compatibility with different hardware.

Certified WAN-class TCP/IP stack and HTTPS.

What you mentioned are all big applications for 32-bitters.

Toys, smartcards, timers, RKE, sensors, battery maintainers, miscellaneous small controllers and other large volume standalone stuff is the area of 8- and 4-bit MCUs.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant

Reply to
Vladimir Vassilevsky

I don't think you're following my question. Let me try again. Remember, I was responding to a post about "extreme low power designs."

1) 8-bit cores presumably are lower-power than corresponding 16-bit cores.
2) 8-bit code will require more instructions than 16-bit code when dealing with >8-bit values, including addresses.
3) Might it be the case that the disadvantage of (2) offsets the advantage of (1) in many cases, thus making 16 bits a sweet spot in extreme low power designs?

It may be true or it may be false, but it is not _obviously_ false to me.
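
To make (3) concrete with made-up numbers (purely illustrative, not measurements): take energy per task as proportional to (instruction count) x (energy per instruction). If the 8-bit core spends half the energy per instruction but needs 1.5x the instructions, then 1.5 x 0.5 = 0.75 and the 8-bitter still wins; at 2.5x the instructions, 2.5 x 0.5 = 1.25 and it loses. Which side of that break-even real firmware lands on is exactly the open question.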

Reply to
KK6GM

In a lot of cases, the *need* for 32 bit is not the determining factor. In some cases, a capable 32-bit chip is cheaper than a similarly capable 8-bit device, and even when the 32-bit device is slightly more expensive, it may have other advantages, like better high-level language support allowing shorter development times.
Reply to
Arlet Ottens

I think the gap between 8 bit and 32 bit is too small to leave anything but the tiniest niche in between.

Even if there was some profit in that, it is likely that both 8-bit and 32-bit manufacturers would make improvements to keep competing with each other.
Reply to
Arlet Ottens

For the most part, power follows devices x frequency. So, if you have half the active silicon running at twice the frequency, you are (roughly speaking) using equivalent power (neglecting other factors).
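
(Back-of-envelope, assuming dynamic power dominates: P ~ N x f, so half the gates at twice the clock gives (N/2) x (2f) = N x f -- the same power.)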

Sure. But, it need not require twice as much "work" to do the same thing!

I.e., if you are processing single *bits* in a 128b wide address space (i.e., *lots* of "bits") then the cost of manipulating the address goes up but the cost of processing the *bit* goes down (smaller ALU). How often you have to manipulate one or the other becomes a factor, then.

E.g., if you can walk through 256 consecutive addresses before you have to "bear extra cost" to handle the ">8 bit" address space, then your address processing costs are rather small.

If you have to process addresses algorithmically (e.g., walking through the address space as if it was an exponential curve), then your cost for address processing goes way up! OTOH, if you are loading those addresses from a table, then the cost of the "programmatic load" isn't that much more than the "work" a wider processor would have to do to load that address in "a single reference", etc.
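
A hedged sketch of both cases in C (all names are hypothetical; the point is where the >8-bit address arithmetic actually lands, or doesn't):

    #include <stdint.h>

    /* Case 1: a 256-aligned buffer -- only the low address byte changes,
       so an 8-bit core increments a single byte per step. */
    uint8_t sum_page(const uint8_t *page)   /* assumed 256-byte aligned */
    {
        uint8_t sum = 0;
        uint8_t i = 0;
        do {
            sum += page[i];
        } while (++i != 0);                 /* wraps 255 -> 0 after 256 steps */
        return sum;
    }

    /* Case 2: table-driven addressing -- the "wide" address is fetched,
       not computed; the 8-bitter just copies two bytes from the table. */
    extern const uint8_t *const table[16];  /* hypothetical */

    uint8_t fetch(uint8_t slot)
    {
        return *table[slot & 0x0F];
    }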

IMO, no. The "data" (avoiding your choice of "real world data") that many devices have to process is often very simple (e.g., "Is the current temperature greater than the setpoint temperature?") and lacking in severe time constraints (i.e., you can afford *hundreds* of clock cycles to process it, not just *one*!).
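
That thermostat decision, in its entirety (hypothetical names; note how little arithmetic there is):

    #include <stdint.h>
    #include <stdbool.h>

    extern uint8_t read_temp_F(void);        /* hypothetical sensor read */
    static uint8_t setpoint_F = 78;

    bool call_for_cooling(void)
    {
        return read_temp_F() > setpoint_F;   /* the entire "computation" */
    }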

I look at it this way: can I code the same application on a 16b processor in *half* the code of an 8b? (i.e., does each instruction do twice as much work in a 16b). IME, that hasn't been the case. There's too much "little stuff" where the 16b yields no savings.

For *extreme* low power applications, you also have to look at the startup costs of the processor (e.g., coming out of deep sleep). I suspect it is more efficient to wake up, do "some stuff" and then go back to sleep than it is to wake up, do *half* as much stuff and then go back to sleep (assuming "some stuff" is a small work effort)
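
A generic sketch of that duty-cycling pattern (the sleep/work hooks are placeholders; every vendor spells them differently):

    /* Wake, drain a small batch of work, sleep again.  The fixed
       wake-up overhead is paid once per batch, which is why doing
       "some stuff" per wake can beat waking twice for half as much. */
    #include <stdint.h>

    extern void enter_deep_sleep(void);   /* placeholder: vendor-specific */
    extern uint8_t work_pending(void);    /* placeholder */
    extern void do_one_item(void);        /* placeholder */

    void main_loop(void)
    {
        for (;;) {
            while (work_pending())        /* drain the batch... */
                do_one_item();
            enter_deep_sleep();           /* ...then pay the wake cost
                                             only on the next event */
        }
    }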

Reply to
Don Y

Exactly. You can shrink the die and increase the clock to make up the difference.

*Or*, put extra hardware on the die to eliminate the "expensive" application aspects that might otherwise suggest the need for a more "capable" processor (e.g., communication subsystems).

It's a shame some of the older CPU designs have fallen by the wayside, as they would easily lend themselves to multicore designs with infinitesimal cost penalties (besides the real estate for the second core) -- you can do a *lot* with a tightly coupled second CPU!

I find many applications just need lots of address space but not much computational horsepower. And, since you aren't going to "halve" that address space by moving to a wider core, you don't usually get as "efficient" a solution as you move up in CPU complexity (though market pressures can distort *pricing* sweet spots)

Exactly.

People have been predicting the demise of "tape" technology for decades. Disks keep getting faster, smaller, cheaper, etc.

But, gee, the same sorts of things keep happening to *tape* :-/

(no, I don't believe this will continue...)

Reply to
Don Y

No, it doesn't. The problem is entirely and exclusively with posts originating at Google Groups, and the problem is that their posts lack the References: header. Yes, some news clients can be told to ignore broken References and construct threads based on Subject similarity. But that's just wrong.

Reply to
Hans-Bernhard Bröker

+42

Google is taking the MS attitude, here. I.e., we're going to ignore what has been done before and come up with Our Own Way Of Doing Things -- even when doing what was done previously would incur little or no cost.

Or, they are just plain incompetent.

Reply to
Don Y

However, in this case the original message header contains the line Message-ID:

and the header in the message from Dave N that prompted the "starting a new thread" comment contains the line: In-Reply-To:

It looks like RFC 5322 permits either or both, with the in-reply-to field containing the single message id and references optionally containing all of the message ids of the thread.
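
For illustration, with made-up message IDs (the real ones are elided above), the relevant fields in a reply might look like:

    Message-ID:  <msg3@example.invalid>
    In-Reply-To: <msg2@example.invalid>
    References:  <msg1@example.invalid> <msg2@example.invalid>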

Agent seems to handle threading with only the in-reply-to field just fine. How much, or whether, Google's implementation is broken is above my pay grade; I'd punt to the IETF.

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

Which is quite irrelevant here, since that's the RFC for E-Mail. This is not E-Mail, but News.

The chapter and verse Google Groups is clearly in violation of is RFC 5537, Section 3.4.3, paragraph 5:
  1. The followup MUST have a References header field referring to its precursor, constructed in accordance with Section 3.4.4.

And I'm pretty sure GG broke that on purpose. Mere negligence just doesn't explain the level of blatant incompetence they've repeatedly reached whenever they make any change to their service.

Reply to
Hans-Bernhard Bröker

Assuming Google does nothing without a *reason*, this begs the question: what do they gain by doing this? (since gain they must!)

Reply to
Don Y

AAaaaaaaaa!!!111... *RAISES* the question not *begs* the question!

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

I'd agree that using Subject headers for threading is wrong; modifying the subject breaks threads, while unrelated messages with "generic" subjects get bundled into the same thread.

However, some readers (including Pan) can maintain threading based upon the In-Reply-To header, which doesn't have these problems. Relying upon In-Reply-To alone means a missing message will break the thread, which is why the References header exists (though that has its own problem: it can become unreasonably long). Still, there's no harm in a news reader using In-Reply-To as a fallback in the event of a missing References header.
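
A minimal sketch of that fallback order (hypothetical types and helper, not any particular reader's code):

    struct article {
        const char *references;    /* NULL if the header is absent */
        const char *in_reply_to;   /* NULL if the header is absent */
    };

    extern const char *last_id(const char *refs);  /* hypothetical:
                                                      final ID in the list */

    /* Pick the parent article for threading: prefer References,
       fall back to In-Reply-To, else root a new thread. */
    const char *parent_id(const struct article *a)
    {
        if (a->references)
            return last_id(a->references);
        if (a->in_reply_to)
            return a->in_reply_to;
        return 0;
    }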

Reply to
Nobody
