Microchip & OnSemi want to buy Atmel?

Sorry, should be 8.5K. 0.5K more to go.

Having 64 identical lines hardly makes it readable.

But some people might think that's the right way to code. The rest of Atmel's examples are not so bad. They were probably written by Atmel interns, but they should have someone else review the examples.

Reply to
linnix

"lowcost" skrev i meddelandet news:gce0sr$d1r$ snipped-for-privacy@aioe.org...

No, it was a real application used in a product selling in numbers of tens of thousands.

It is pretty meaningless to shrink a 400 byte program in a 1 kB part, isn't it?

It is also pretty meaningless to write the program in assembler, if you can fit the program using C.

If you have time-critical parts, then you may write that small part in assembler, but you can also write it in C and check the resulting code in the list file.

It would not be bad if you could supply a pragma specifying timing information for code.

I maintain that you are better off programming in C, and if that means switching parts, you are often better off doing that.

--
Best Regards,
Ulf Samuelsson
Reply to
Ulf Samuelsson

I have seen worse examples.

When the AT90CAN family was in design, I showed how the receive message subroutine executing in 394 us on an 8051 could be reduced to ~10 us executing on an AVR, by changing the peripheral interface. Instead of having a number of 16 bit registers, each implementing a single function for 15 message buffers (1 bit per buffer), you are better off having a single 16 bit register per message buffer, where the function is defined as a bit location.

I.e., instead of:

struct {
    unsigned short start;
    unsigned short stop;
    unsigned short this;
    unsigned short that;
    ...
} messages;

void start_message(unsigned char msg) { messages.start |= (1 << msg); }
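For comparison, this is roughly what the register-per-buffer layout looks like from C. The names, bit assignments and buffer count are purely illustrative; this is not the actual AT90CAN register map:

#define MSG_START  (1u << 0)
#define MSG_STOP   (1u << 1)
#define MSG_THIS   (1u << 2)
#define MSG_THAT   (1u << 3)

volatile unsigned short msg_ctrl[15];   /* one 16-bit control register per message buffer */

void start_message(unsigned char msg)
{
    msg_ctrl[msg] |= MSG_START;         /* indexed access with a constant mask */
}

The per-buffer layout replaces a variable 16-bit shift (and a different shared register for every function) with a simple indexed access and a constant bit mask, which is the kind of saving described above.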

Reply to
Ulf Samuelsson

The best trick for reducing code size is to use a good compiler. Not all compilers are alike.

IAR allows you to optimize some global variables into registers. I had a customer running out of code space on his 8 kB NEC part; with the IAR compiler the code became 5 kB, and with the register optimization added it became less than 4 kB.
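gcc has a comparable, if cruder, mechanism in its global register variable extension; the snippet below only illustrates the idea (IAR's own mechanism is a compiler option rather than this syntax):

/* avr-gcc: pin a hot global into a CPU register for the whole program.
   Build with -ffixed-r3 so the register is reserved everywhere. */
register unsigned char tick_count asm("r3");

void on_timer_tick(void)   /* called from the timer interrupt handler */
{
    tick_count++;          /* no RAM load/store, just a register increment */
}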

--
Best Regards,
Ulf Samuelsson
Reply to
Ulf Samuelsson

It's typical to need to do low-level stuff on embedded systems. And then there are the libraries - it's not even possible to write Pascal's own libraries in standard Pascal.

If you're talking about an unprotected real-mode OS with totally unprotected system calls (15 years ago?), then yes, there is a tiny chance you might end up executing dangerous code. There are fairly trivial measures one can take to stop that from ever happening, and on a modern OS it's impossible to do that kind of damage in the first place.

What damage, apart from a bruised ego perhaps?

What makes you think writing C is any slower than Pascal? C is more expressive and less restrictive and so it is far easier to write C.

Sure. I'm not saying that it is wrong to check for them, but that it is wrong to teach the programmer that it is OK to be lazy and not add their own checks. Ideally none of the automatic checks should ever trigger, either because there is an explicit check or because there is a proof that the check is redundant.

Wilco

Reply to
Wilco Dijkstra

> constructs are hell for optimising

Absolutely. But I believe it is important to have the choice. I can choose to write C code quickly, though it might not be efficient, or I can choose to spend more time and write highly optimised code. I'm not forced down a particular route by the language designer. And that freedom is why many of us prefer C.

> typos possible. A well known classic

> value of OK is non-zero the condition is

Just about every C compiler gives a warning for this (which can often be configured to be reported as an error), no need to use lint.
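The classic being alluded to is presumably something along these lines (OK, get_status and do_it are invented names):

status = get_status();
if (status = OK)      /* typo: '=' instead of '==', assigns OK and tests the result */
    do_it();          /* runs whenever OK is non-zero, regardless of get_status() */

/* gcc -Wall (-Wparentheses) warns about the assignment used as a truth value */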

> indexing error is trapped immediately before

> reliability code. It may not be as exciting as a

Terminating a program is damaging as well if it happens for real.

> Windows is when some hacker takes control of

These are typically basic mistakes when reading input data without adding the correct checks. You always have to assume the data might be manipulated or corrupted.
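As a sketch of the kind of check meant here (all names invented), a parser should validate a length field from the wire before using it:

#include <string.h>

/* Copy a payload whose length is carried in the first two bytes.
   Returns the payload length, or -1 if the packet lies about its size. */
int copy_payload(unsigned char *dst, size_t dst_size,
                 const unsigned char *pkt, size_t pkt_len)
{
    size_t len;

    if (pkt_len < 2)
        return -1;                           /* not even a complete header */
    len = pkt[0] | ((size_t)pkt[1] << 8);
    if (len > pkt_len - 2 || len > dst_size)
        return -1;                           /* claimed length exceeds the data or the buffer */
    memcpy(dst, pkt + 2, len);
    return (int)len;
}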

> the effect of removing a lot of

> somewhat more verbose as a result, but it is

Having to think about the problem can be an advantage indeed. But it is also a disadvantage if you just want to do some prototyping or want to make design changes at a later date. For example you can easily change a structure into a union in C, while in Pascal it requires major changes.

> The sooner a program is faulted for an

> side effects that only show up in the

The goal is to add explicit checks that make all automatic checks completely redundant. One typically enables all checks for debug and testing but a subset in the release build. So you get faster development using the extra checks but don't pay the runtime cost of checks that are not strictly required in the release.
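In standard C that split falls out naturally from assert(): keep the explicit checks unconditionally and let NDEBUG strip the redundant automatic-style ones from the release build. A minimal sketch:

#include <assert.h>
#include <stddef.h>

int average(const int *samples, int count)
{
    assert(samples != NULL);   /* debug/test builds only; compiled out with -DNDEBUG */

    if (count <= 0)            /* explicit check: stays in the release build */
        return 0;

    long sum = 0;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    return (int)(sum / count);
}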

> system that will run for months on end

> security defects are caused by various buffer

You are quite right, Vista is indeed a very robust OS that runs for months without crashing. I've used it on a daily basis for the past 18 months and it is at least as reliable as XP (not a single crash despite uptimes of many months). Have you even used Vista yourself? I guess bashing Vista is still a popular pastime.

Wilco

Reply to
Wilco Dijkstra

larwe complained about the PIC architecture:

The AVR architecture doesn't have any better support for that than the PIC architecture does (not considering PIC32). Both are Harvard architectures, and in either case, if you really need a pointer that can point into either address space, you need to do special things to get the compiler to generate the appropriate code, which will then at run time inspect the pointer value and decide which address space to use.
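To make that concrete with avr-gcc and avr-libc (the tag-bit dispatch below is hand-rolled for illustration, not something either vendor's compiler does for you):

#include <avr/pgmspace.h>

const char in_flash[] PROGMEM = "flash";   /* program memory */
char in_ram[] = "ram";                     /* data memory */

#define FLASH_TAG 0x8000u                  /* invented convention: top bit marks a flash address */

char generic_read(unsigned int p)
{
    if (p & FLASH_TAG)                     /* run-time decision about which space to access */
        return pgm_read_byte((const char *)(p & ~FLASH_TAG));
    return *(const char *)p;
}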

If you're complaining about the problems of Harvard architectures in general, fine, but Microchip certainly doesn't have any monopoly on that. Historically the general reason for some of the low cost microcontrollers to use Harvard architectures is that doing so takes less die area and can result in higher performance for applications that don't need much use of program memory for data storage.

Perhaps someday the delta in cost will be small enough that Harvard architecture microcontrollers die out, but we're not there yet.

Reply to
Eric Smith

Apparently Digikey finally shipped my AT91SAM9G20-EK order yesterday, so perhaps another batch came back from fab just recently?

I hope that there is distributor stock of the chip by the time I'm ready to go to production.

Reply to
Eric Smith

That tiny chance happened once and that was bad enough.

Yes, like using Pascal instead.

.. unless that OS is written in C by some folks in Redmond. No wait ... you said modern so I take that back. You are right.

The overwritten MBR for example.

[....]

Pascal is much easier to write than C. I use both and have seen enough projects written by others in C to know that C is not easier.

I add my own checks. Adding your own checks that inputs are as you expect them is a good idea in any language. Those checks, however, may contain the same errors as the rest of the program, because the same person wrote them.

Ideally, the hard disk should not have been trashed. Ideally, I'd be rich(er) and good looking.

Reply to
MooseFET

We have the information that we will now get new samples in 6 weeks.

--
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
Reply to
Frank Buss

> constructs are hell for optimising

Though perhaps you should be prepared to trade some freedom for better intrinsic safety and robustness. Using a chain saw to carve wood is a lot of fun until you make a mistake with it.

BTW attempts by C programmers to micromanage code execution can have the opposite effect to that intended and prevent the optimiser from working to its full extent.

> from typos possible. A well known classic

> value of OK is non-zero the condition is

That was a very simple example; there are many more subtle ones, and there was a time when most C compilers would happily compile it without any warnings at all. In those days lint was essential. I have always been amazed how many errors static analysis tools can find in production C code. A few are phantoms caused by language ambiguities, but most are real vulnerabilities. NB a good static analysis tool can also find hidden defects in Pascal or M2 code, but not as many, because the compiler has more clues about the programmer's intentions.

> indexing error is trapped immediately before

> reliability code. It may not be as exciting as a

Most of the time it is preferable to continuing with defective data.

> Windows is when some hacker takes control of

Indeed very basic mistakes and ones made all the time by programmers in industry. Bounds checker and similar tools find lots of quirks in this vein. Humans make errors - even the best engineers can make fence post errors or typos. The sooner they are found and fixed the better.

> the effect of removing a lot of

> somewhat more verbose as a result, but it is

Modula2 can do it relatively well, as can some dialects of Pascal.

Having everything be a pointer to some hand-crafted, who-knows-what object, as in the C Windows API, is a recipe for chaos.

> The sooner a program is faulted for an

> side effects that only show up in the

The same can be done with the runtime checks of Pascal and M2 compilers. They are not compulsory, but they are available. And using a compiler with a built-in safety net does not preclude writing defensive code.

> system that will run for months on end

> security defects are caused by various buffer

Yes. I would not recommend it to my worst enemy. I have been using it seriously on a high end Toshiba 17" screen portable for about 6 months. So far it has shown every sign of instability including one total self destruct that nothing short of reinstall from master disks would fix. Its helpful self repair feature corrupted the main DLLs after a dodgy update on 1/10 (see the thread More Vista Woes which has since degenerated into a slanging match about RS232 cabling).

The remaining faults appear to be driver issues with the clever power save modes of the Toshiba hardware resulting in nasty keyboard lock ups (including the on/off switch). And there is no reset switch short of pulling the battery out and waiting 5 minutes. Plugging in an external USB keyboard lasts about another 5 minutes after the main keyboard driver dies. I only use the damn thing for regression testing to support Vista OS since for some reason customers insist on using it.

The only thing I would say is that Excel 2007 is a much worse crock of shit. That is back to levels of unreliability not seen since 1997.

Regards, Martin Brown

Reply to
Martin Brown

Thank goodness someone finally brought some reasonability into this. Anything resembling a decent OS won't let something like this happen. Obviously anything written by MS doesn't fall into this category, and it's a sad commentary on the state of things that most people find their poor OS implementations acceptable. I've been programming since the 70's and most OS implementations that I've seen were fairly solid; it's just the ones out of Redmond that really suck. But what should one expect from a bunch of kids that had no real world experience and absolutely no idea of what security is?

Reply to
Anthony Fremont

For this, 'gpart' was created, and it works well at guessing the partition table.

The last time I erased the MBR on my hard disk I was writing in Pascal. I learned that interrupt $33 (hex) is the mouse driver interrupt, while 33 (decimal, i.e. $21) is a DOS call.

g C800:0005

it does ask you first.

Back on the subject of the unix command line: today I was reminded that

find -name foo -delete

is different to

find -delete -name foo

(the second form deletes everything it visits, because find evaluates its expression left to right, so -delete fires before -name gets a chance to filter).

luckily nothing mission critical was lost.

Bye. Jasen

Reply to
Jasen Betts

Any decent compiler will complain about that.

It is not certain; I used a similar construct yesterday, and when I did, I wrote it like this:

if( (x=somefunc()) )

so that the next person to inspect the code, and the compiler, would know that I really meant it.

--
Bye.
   Jasen
Reply to
Jasen Betts

"you can fix that in software."

Bye. Jasen

Reply to
Jasen Betts

There's no reason to use an assignment in the condition of an if statement. Loop conditions are another matter, but in C and C++ you can use the comma operator, i.e.

while(x = somefunc(), x) {}

That'll bring the maintenance programmer up short if he doesn't know C well.

For simple ifs, the usual defensive idiom is to put the lvalue on the right side of the conditional, e.g.

if(3 == x) {}

That way the compiler catches it if you make a typing mistake. I use that one religiously, having learned it as a child of 30 or so. ;) It saves time and isn't hard to read once you get used to it. With inequalities I sometimes write them this way and sometimes straight, because the error isn't so hard to see in that case.

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

Eeuuuuuwww... Yeah, but it's UGLY :-p

My preference for the if-embedded assignment is to

if ((x = somefunc()) != 0)

which should compile to identical code to the simple assignment while making the intent clear to the maintainer.

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

GCC allows you to do this:

if ((x = somefunc()))

The extra set of parens tells gcc that you intentionally tested the result of an assignment as a value.

Reply to
DJ Delorie

Seriously, perhaps they've got a 16-bit timer design lying around, and they're too lazy, or time-to-market's too short, to design in a 32-bit timer?

Also, it could be the sort of thing Freescale (this isn't an ad, I've just been using their parts lately and saw all this on their website) is doing with its new line of "8-bit and 32-bit compatible microcontrollers." It appears what they've done is make certain versions of 32-bit-core microcontrollers that have the exact same on-chip peripherals as some models with 8-bit cores. Thus if you design in the 8-bit controller and then find out it doesn't have enough horsepower, you can "drop in" the 32-bit part (more expensive, but I think pin-compatible) and just recompile your C code for the 32-bit target. I'm not sure if or how useful all that would be, but as the 8-bit parts tend to have 16-bit timers, the equivalent 32-bit parts would also have those same 16-bit timers.
Reply to
Ben Bradley

Sadly true.

- but really, at 100MHz clocks, 16 bits is just too small, and trying to expand in SW is best avoided.....
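For reference, this is roughly what the software expansion ends up looking like (the register and ISR names are made up); besides the extra interrupt load, the 32-bit read has to guard against an overflow sneaking in between the two halves:

volatile unsigned short timer_high;        /* upper 16 bits maintained in software */

void timer_overflow_isr(void)              /* fires every 65536 counts - every ~0.65 ms at 100 MHz */
{
    timer_high++;
}

unsigned long timer_read32(void)
{
    unsigned short hi, lo;
    do {                                   /* retry if an overflow occurred mid-read */
        hi = timer_high;
        lo = TIMER_CNT;                    /* hypothetical 16-bit hardware count register */
    } while (hi != timer_high);
    return ((unsigned long)hi << 16) | lo;
}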

Yes, a double-edged sword.

Smarter might have been to design a 16 bit compatible mode, for easy SW porting, but allow 32 bit as well.

Not hard to do with timers - 8051s still have a 13-bit, 8048-compatible timer mode!

-jg

Reply to
Jim Granville
