AVR32 availability?

Ah, what a nice cosy secret society. Well, Chris, guess what, there _are_ people out there who _can_ do a better job than those who get it all by legacy, and they will keep on pressing and trying to get a share of the work. Your points against open source software may be valid to some extent; it will take time until society learns to deal with 100% open information (which it inevitably will, if mankind lasts long enough). But keeping the hardware closed, so that the world has to wait for those who have been granted the right to program it (typically by legacy) to learn how to do it, and eventually to figure out whether there is anything they cannot do - so they leave it for someone with enough brains who can - well, this is 100% wrong. There is nothing in this that brings any benefit to society or to anyone on a wider scale, so don't expect much support for this point of view from outside the "secret society" circles who have inherited a share of the pie.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Reply to
Didi

In message , Didi writes

Not at all; it is normal business practice. (Not just in the electronics industry either, but everywhere.)

Possibly. It is the same in ANY industry. Some companies, due to track record and history, have a favoured relationship with other companies. Over time new people come in and others drop out. That is normal for ANY business.

That will never happen. In fact the Open Source movement is moving away from that. Business is not altruistic.

1. What do you mean by "legacy"?
2. Are you suggesting that those not in the loop are better than those in the loop? If so you are living in a fantasy.
3. What is 100% wrong? That good, highly intelligent engineers who have proved themselves (both technically and ethically) are permitted to work on new hardware pre-release?

It brings new MCUs and new debug tools - what is wrong with that?

It makes money for the chip maker and their partners whilst delivering good tools to the users.

You are starting to get religious.....

ALL companies who produce things have suppliers and partners. No matter what they make. They ALL have preferred suppliers and partners. No matter if they make cars, microwave cookers, silicon chips, desks, chairs etc.

What you seem to be complaining about is that you don't want to play by the rules and are cross that you are not invited to join a group, when you can show them no benefit in your joining.

Grow up.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

Not all of what is normal business practice today was so yesterday, nor will it be tomorrow. The times they are a-changin'... :-).

Business is not altruistic, certainly. Evolution is even less so, however. Our world has been evolving for millions of years towards more information exchange; now you want to change that? Good luck... (In the short term I agree with you that "open source" seems to be backing off its openness - just try to get access to some wifi adapter firmware/host interface, for example.)

That some people are in a position to take advantage by blocking other people's efforts rather than simply by doing better than the others. What harm would be done to anyone, except to the beneficiaries of the secret data, by opening the debug interface of chips? Freescale do that for most of their families (well, not for the PPC - rumour is IBM are the reason for that). I know of course that this is how our world works, but that does not mean I have to like it, or that I should not do my part towards changing it for the better.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Reply to
Didi

Yes, done that myself :)

-jg

Reply to
Jim Granville

Open source has its place.

Lewin was talking of "opening documentation on the debugging protocols in use", but I see no evidence that this is not happening, certainly not to the level that it impacts chip usage. If you want the debug info, you can ask for it.

Any NDAs I've seen are driven by lawyers, and patent nerves, not commercial or engineering 'secret society' motives.

Putting my user hat on, it's more important to me that a debug system WORKS, and if the vendor supplies debug tools for free (see AVR Studio, SiLabs, Zilog, Freescale?) then that can also be called quite open. From that angle, these days with on-chip debug silicon, I'd prefer debug support that is at least tested and integrated by the chip vendor - it's of little concern how much open-source content is in that effort, if it comes for free.

I think some vendors offer a DLL as a form of 'open documentation': it means any designer can get into their core (if they really have the time), but any patent lawyer is given no information.

- and they save in support bandwidth as well.

-jg

Reply to
Jim Granville

Jim,

"Free" is not synonymous with "open". They thus can (and do) tie you to some other stuff you do not control etc. - in the case MS windows or whatever _they_ put on your menu. Open information is either available or not, not much to define about that.

It does not matter how many times they call the secret data "open". It is secret as long as it is not available. Open DLL, great. Yet another control in the hands of MS - no, thanks. I can live without MS and would like to be able to do so in the future, thank you very much. I wonder how many have already bought into that "open DLL" nonsense.

Dimiter

P.S. To set the record straight, I am not accusing Atmel of anything, nor do I expect them to lead the way to openness. It just happened that the thread became more general than the subject line suggests.

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Reply to
Didi

You've lost me. If a company compiles to a DLL, what stops them offering equivalents on multiple operating systems?

As for an MS-free desktop - well, nice idea, but we have work to do...

I do minimise my MS usage, but not to the point of 'cutting off the nose to spite the face'.

-jg

Reply to
Jim Granville

There is nothing new about all this, except Chris's peculiar antagonism towards open-source programs. As an example, consider the DDTZ that I created and made public in the heyday of CP/M, i.e. 1980..1988, almost 30 years ago. This was, and is, the outstanding CP/M debugger. No, it did not capture and display symbol names etc.; however, it did handle Z80 and 64180 (Z180) code. It was designed to place the minimum load on the system. At the time there was no objection to it from the corporate world.

You can see the whole thing (and more) at:

--
 Chuck F (cbfalconer at maineline dot net)
   
   Try the download section.
Reply to
CBFalconer

Their willingness to do so? Call it open, close it within MS, and let the public enjoy the confusion over how something can be open yet secret. And if it is not one but 2 or 3 platforms under which the data are encrypted, that by no means makes the data "open". It is open if - and only if - I can read it.

I manage to do all the design and programming work I do without using MS/Intel, and I have proven more than once to be far more efficient than anyone I have had a chance to compare with who uses MS or whatever. I know this is untypical, but those untypical guys also want to make a living, and keeping the information secret is a calculated hostile policy aimed exactly against the likes of me... hence my (over?)reaction.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Reply to
Didi

I compare an AVR32 more to an ARM9 than to an ARM7. Take an ARM966, for instance: doesn't that claimed 5-10x advantage just melt away, since the 966 has all those nice saturating adds and other gizmos?

Why compare with the ARM7 when, architecturally at least, the AVR32 is patently better compared with an ARM9?

Just my take. Oh, and as for power consumption, I guess we're all looking at the new STM32 and anticipating the Handshake stuff (no idea how that will actually turn out, though).

-- Paul.

Reply to
Paul Curtis

"Paul Curtis" skrev i meddelandet news: snipped-for-privacy@corp.supernews.com...

The AVR32 uC3 core has a three-stage pipeline, making it natural to compare it with the ARM7, which also has a three-stage pipeline.

Power consumption has a lot to do with what standard cell library you use. You can optimize for performance, but then you take a hit in leakage current, or you can optimize for good standby performance, but then you will have limited top frequency.

If you look at full-speed operation, the ST datasheet says 22 mA at 36 MHz. The AVR32 uC3B is 13.8 mA at 36 MHz.

The uC3B is somewhat optimized for power, so it tops out at 60 MHz, instead of the 66 MHz provided by the uC3A.

The STM32 in this comparison tops out at 72 MHz, which I believe is the fastest grade.

Still, no obvious reason to be ashamed of the AVR32.
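
Purely as a back-of-the-envelope check on those figures, here is a small C snippet computing current per MHz from the numbers quoted above. It assumes the datasheet conditions (supply voltage, peripherals enabled, where the code runs from) are comparable between the two parts, which may well not be the case:

#include <stdio.h>

int main(void)
{
    /* Figures as quoted in this thread, both at 36 MHz full-speed operation.
     * Whether the two datasheet test conditions really match is an assumption. */
    const double stm32_ma = 22.0;   /* STM32 run current, mA */
    const double uc3b_ma  = 13.8;   /* AVR32 uC3B run current, mA */
    const double freq_mhz = 36.0;

    printf("STM32: %.2f mA/MHz\n", stm32_ma / freq_mhz);  /* ~0.61 mA/MHz */
    printf("uC3B : %.2f mA/MHz\n", uc3b_ma  / freq_mhz);  /* ~0.38 mA/MHz */
    return 0;
}

On those assumptions the STM32 comes out at roughly 0.61 mA/MHz and the uC3B at roughly 0.38 mA/MHz.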

Some time ago, I talked to a professor at the University of Michigan focusing on CPU architecture, and his take on asynchronous CPUs was that they are good, except that they have 2x the logic size, 2x the power consumption and 1/2 the performance. It will be interesting to see if they succeed in proving him wrong.
--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

Well, the new (sort of) thing is the fact that more and more hardware which contains "chambers of secrets" is becoming popular. I think none of the processors you mention had undocumented debug interfaces (?), and I know those I used during the '80s had none (I used the 6800, the 6809 - I grew up as a programmer on it - and the 68020). Later the CPU32 appeared and its BDM interface used to be completely documented; those I have bothered to check now (ColdFire, HCS08) have it documented as well. Only the PPC has its so-called "COP" port secret (not a watchdog monitor, but a JTAG-accessible debug interface in this case), and I do quite well without it. I did write, back then, a debug monitor and an FDD ROM for the 6809 which would run with MDOS (like in an EXORciser...). Actually I do have that system running emulated in a DPS window - together with a very complex graphics terminal I had made for it back then - only it runs tens of times faster than the original :-). While Chris's opposition to anything open is perhaps too vehement, I am not quite sure I would like to open all my sources yet (well above 1M lines). Too much at stake for me, I suppose. What I am quite sure of is that I would avoid a part with "chambers of secrets" on it as long as I can afford to, even if that brings extra cost - I have done it and will do it again.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Reply to
Didi

I have amazingly little trouble with silicon vendors [as long as their products don't suck - NEC, I'm looking at you here], and with careful selection of tool vendors I have no trouble at all. I'm sorry I don't belong to the One True Faith of Chris Hills, but I can strongly recommend you visit the Church of the Flying Spaghetti Monster because it makes just as much sense as any other faith.

Ulf believes in his company's product line. Which is good; faith is a wonderful thing, but I don't believe his attitudes are 100% aligned with reality. I happen to like Atmel's products, a lot. I lobby to get them in as replacements for a couple of Godforsaken 8-bit cores we're moving away from. But we, like many other companies, are trying to consolidate our IP excellence into a few cores. MSP430 fills the low-power niche. AVR fills the low-cost 8-bit niche in applications where an 8051 part isn't right (for whatever reason). ARM7 (I guess soon CM3) fills the ubiquitous 32-bit core niche. AVR32 is an answer waiting for a question. And it's an answer being spoken by a nerdy kid in the front row jumping up and saying "Me! Me! Me!" whenever the teacher asks about ANYTHING.

Because it costs a lot more to retool an ASIC around a new core than it would to move between two general-purpose microcontrollers (say ARM to MIPS or vice versa).

Reply to
larwe

If you want to reduce your vendors, then the Picopower AVRs could give you a reason to get rid of the MSP430 :-)

I think the CM3 and the AVR32 are trying to answer the same question. If there is no question to be answered, why then the CM3?

If you are looking at performance, you see:
ARM7 -> ARM9 -> ARM11    chips today
uC3xxx -> AP7xxx         chips today
CM-3                     ????

That is your interpretation. The AVR32 uC3k is a good solution for a lot of people. That does not mean that it is superior in all cases.

I see as many ARM9 users as ARM7 users today, and neither the CM3 nor the uC3k will meet the performance of the ARM9, and neither will run Linux/WinCE. uClinux? ...sigh!

The AP7000 will run Linux, but not WinCE, so if you want WinCE then clearly ARM is the architecture for you.

Where is the CM-3 here... no solution to be found for Linux or WinCE. CM-A8? I don't see general-purpose controllers based on that core arriving real soon...

If you are using the AVR, most likely you already have a JTAGICE mkII. The gcc compiler and AVR32 Studio are free. The cost will be a new development board, which is in the range of $100-150.

The cost of learning is probably the biggest cost. On top of that you may have to learn the peripherals. If you are already a user of the AT91SAM7, then you will find that the AVR32 uses exactly the same peripherals, so there is very little learning. And you can move to the ARM9 (and soon the ARM11) with the same peripherals.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

The goal posts keep moving on this (of course). Async parts are naturally low-energy, and self-compensate for temperature, voltage and process. But you still have to meet a MIN corner, and if the part DOES vary with T, V and P, how do you actually know you have a corner device?

- vendors will still have to spec some data point, and build in their margin to ensure yields. Then we have on-chip regulators, now becoming common, which allow quite focused (and low-system-impact) choices of core Vcc. That also allows control of power profiles: all it takes is some automatic Vcc control on idle modes, varying that core Vcc, and you get very close to the async joule limits, but on a standard-process part. The AMD/Intel devices do this now, on a multi-chip basis, but it's not hard to do in a single-chip microcontroller.
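
To make that concrete, here is a minimal C sketch of auto-Vcc control around an idle mode, assuming a part with an adjustable on-chip core regulator. The register names (CORE_VREG_CTRL, CLK_PRESCALE), addresses and voltage codes are hypothetical placeholders, not any real device's interface:

/* Hypothetical registers for a part with an adjustable on-chip core
 * regulator and a clock prescaler - placeholders for illustration only. */
#define CORE_VREG_CTRL  (*(volatile unsigned int *)0x40001000u)
#define CLK_PRESCALE    (*(volatile unsigned int *)0x40001004u)

#define VREG_FULL  0x07u   /* highest core voltage: supports full clock speed */
#define VREG_IDLE  0x02u   /* reduced core voltage: enough for the slow clock */

static void enter_low_power_idle(void)
{
    CLK_PRESCALE   = 8u;         /* slow the core clock first, while Vcc is still high */
    CORE_VREG_CTRL = VREG_IDLE;  /* then drop core Vcc to cut dynamic and leakage power */
}

static void exit_low_power_idle(void)
{
    CORE_VREG_CTRL = VREG_FULL;              /* raise core Vcc first...             */
    for (volatile int i = 0; i < 1000; i++)  /* ...crude wait for regulator settle  */
        ;
    CLK_PRESCALE   = 1u;                     /* then restore the full clock speed   */
}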

-jg

Reply to
Jim Granville

Before a part (or debugging protocol, or whatever) is stabilised, the hardware company will obviously want to work with partners to see how the parts work in practice. At that stage you want to keep information secret, because it is very difficult to change things once the information has been released to a wider audience. But by the time the part is available (and preferably as soon as its specifications are finalised), there is no benefit in keeping secret any information regarding its use.

I have known commercial closed source vendors (as well as open source developers) who have had enormous trouble because hardware companies sometimes make it so extremely difficult to get details of their debugging protocols.

I am a believer in fair competition, and in choosing tools (or anything else) on the basis of value for money and fitness for use. Rewarding partners with early information in return for help during testing and development is perfectly reasonable - artificially blocking out alternatives by keeping information permanently secret is not.

Reply to
David Brown

You are welcome to ask - that does not mean you will necessarily get the information!

That's certainly true - an engineer's instinct is towards openness (at least, for systems that he's proud of!).

You can call it a banana split if you want, that does not make it one.

Free (as in price) is a totally different concept from "open".

Like most professionals, I don't have a problem with paying money for something, as long as I get decent value for it. The point of "open", with regard to debugging information, is that I - or anyone else - am free to make use of that information. For the most part, I am also happy simply to be able to get tools that work. But for that to happen, and for there to be a selection of tools available, the information has to be freely and openly available - even if I don't use it personally. When the information is there, tool vendors can use it, and users can select across a feature and price range from Olimex to Abitron.

As an example of using debugging information myself, Freescale fully document the BDM protocol for their ColdFire processors (for those not familiar with BDM, you can think of it as a bit like a JTAG debugging port). With this information, I've been able to build a USB programmer into boards with these microcontrollers. If the BDM protocol had been secret, or had NDA restrictions, I would not have been able to do that (I have to be able to give out the relevant software to others). It would have been absolutely no help to have free (as in cost) software from Freescale, and no help to have some sort of DLL. Open information was what I needed, not zero-cost software.
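
For anyone curious what using such documented information looks like in practice, below is a rough C sketch of one bit-banged BDM-style serial transfer from the host side. The 17-bit packet length and the pin roles (DSCLK, DSI, DSO) follow my reading of Freescale's ColdFire BDM documentation and should be checked against the reference manual; the GPIO helpers and pin names are hypothetical placeholders:

#include <stdint.h>

/* Hypothetical GPIO helpers and pin identifiers - whatever the host
 * hardware actually provides would go here. */
extern void gpio_write(int pin, int level);
extern int  gpio_read(int pin);
enum { PIN_DSCLK, PIN_DSI, PIN_DSO };

/* Shift out one 17-bit BDM command word, MSB first, while capturing the
 * 17-bit response the target shifts back on DSO. Timing constraints from
 * the reference manual (minimum clock period, setup/hold) are omitted. */
static uint32_t bdm_transfer(uint32_t cmd)
{
    uint32_t response = 0;

    for (int bit = 16; bit >= 0; bit--) {
        gpio_write(PIN_DSI, (int)((cmd >> bit) & 1u));   /* present next command bit          */
        gpio_write(PIN_DSCLK, 1);                        /* target samples on the rising edge */
        response = (response << 1) | (uint32_t)(gpio_read(PIN_DSO) & 1);
        gpio_write(PIN_DSCLK, 0);                        /* complete the clock cycle          */
    }
    return response;
}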

Zero cost is always nice, of course, but it is not critical. Open availability of information is what's important. Then I can choose to buy the chip vendor's recommended and qualified solutions, or I can buy from a cheaper vendor (knowing that it might not be as well tested, or might not support the latest parts), or I can build something myself.

A DLL is not a "form of open documentation"!! A DLL with a documented API is arguably better than nothing, but it is very far from open documentation. It can help if you want to make automated test and programming software for a Windows machine, but it is of no use if you want to make debugging hardware, write other software, or use other operating systems. It is also of little help if you simply want to understand what is going on in the system.

Reply to
David Brown

Unfortunately not, for application-specific reasons I don't want to get into here. Believe me, we have tried to fit them into the same applications.

Our applications don't even exploit a fraction of the processing performance of an ARM7. The reason we need such parts is basically for large, flat flash and RAM space. And on-chip special peripherals like Ethernet.

All of our products (hundreds of SKUs) with perhaps two small exceptions have no OS in them at all. We are looking at RTOSes for future products, but nothing remotely as heavy as Linux. uCos-ii maybe. WinCE is a joke without a punchline.

Please don't get me started talking about that ICE, I have nothing good to say except that I believe someone, somewhere possibly had it working right once :)

The cost of redesigning the ASIC around a new core, then porting thousands of lines of code and getting it qualified is the biggest cost. It takes us in the neighborhood of two years to qualify a new ASIC like that.

Reply to
larwe


Your remark came immediately after my comment that the performance of real-world code matters rather than the performance of individual instructions. So you're suggesting to people that AVR32 is 5-10 times faster than ARM7 on DSP code, which is obviously totally ridiculous. Let's look at the facts, shall we?

Look at the instruction timings of the ARM9E and the AVR32 and you can't deny that many commonly used instructions take fewer cycles on the ARM9E, so the ARM9E is going to be a bit faster on real-world code (contrary to your Dhrystone claims). The ARM9E is twice as fast as the ARM7 on the BDTI DSP benchmarks, so the AVR32 would fall somewhere in between.

The most complicated MAC is macsathh.w, which takes 7 cycles on the ARM7. However, after you account for reading the data and the looping overhead, the speedup of a dot product using macsathh.w is 3.7 times compared with the ARM7 (for Cortex-M3 the factor would be 2.1). For non-saturating instructions the differences are far smaller. And if you have less than 100% DSP code, then the factors scale down accordingly.
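
For readers who haven't met these instructions, here is a plain C reference for the kind of saturating halfword multiply-accumulate dot product being discussed - the per-element work an instruction such as macsathh.w fuses into a single operation, and which a core without such an instruction spreads over several operations. This is only an illustrative sketch of the algorithm, not a claim about either core's exact instruction semantics or sequence:

#include <stdint.h>

/* Clamp a 64-bit intermediate into the signed 32-bit range. */
static int32_t sat32(int64_t x)
{
    if (x > INT32_MAX) return INT32_MAX;
    if (x < INT32_MIN) return INT32_MIN;
    return (int32_t)x;
}

/* Saturating dot product of two signed 16-bit (halfword) vectors.
 * Each element needs a 16x16->32 multiply plus a saturating accumulate -
 * one fused instruction on a core that has a saturating MAC, several
 * separate instructions on one that does not. */
static int32_t dot_sat(const int16_t *a, const int16_t *b, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        int32_t prod = (int32_t)a[i] * (int32_t)b[i];
        acc = sat32((int64_t)acc + (int64_t)prod);
    }
    return acc;
}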

Even if you meant individual instructions, it would be meaningless to talk about speedup factors. For example, divide on the AVR32 takes 35 cycles, while the Cortex-M3 can do a divide in 2 cycles. Should ARM be claiming a 17.5 times speedup?

Wilco

Reply to
Wilco Dijkstra

Cheers. Useful to know for future reference. I guess it was having both USB and Ethernet together, and the distributor availability, that clinched it.

Andrew

Reply to
Andrew Burnside
