AREF bypass capacitance on ATMega2560?

First, you're assuming an FPGA (hammer). Second, you're going to need a bigger FPGA (even bigger hammers aren't free).

Reply to
krw

That doesn't alter the fact that you're constantly moving the goal posts. You *are* defending the FPGA as the general solution when it is quite decidedly only applicable in the niches. You wanted to know what DSP had a CODEC. I told you but now you whine that it won't solve YOUR problem. It's not my job to do your work.

If you refuse to understand, why do you bother coming here?

Only because you are repeating your silly FPGA-über-alles nonsense.

WTF are you talking about? You *are* saying that FPGAs are the cat's ass for all applications when nothing could be further from the truth. They're a niche and the solution of last resort. Well, an ASIC is the solution of last resort, but...

Absolutely wrong. FPGAs are *only* useful when there is no other choice. For anything else, they will always be the most expensive solution. Your position is exactly that of the man with a hammer who sees every problem as a nail.

You clearly don't simulate. You've completely ignored the issue here. Like the elephant in the phone booth, FPGA zealots want to ignore it. Simulation is much more work than design, yet it is always forgotten in these discussions.

Oh, good grief! You never have to rework libraries? Roll your own replacement for what was free from the other vendor? All of the peripherals function the same across vendors? Come on, get real!

That's all in the two weeks. From working code to working code. We just did it from a TI to Freescale SoC.

...yet you claim that you can do that all between X and A without even looking at the code. Unbelievable!

I call bullshit when I see bullshit and that is *BULLSHIT*.

What bullshit. There are no PCIe libraries? USB? uC? GMAFB!

Tools? TOOLS?! What do they do all by themselves?

Simulation takes TIME and SKILL that most don't have. That's in addition to the skills and time needed to do a uC or DSP solution. If there is *any* way to do a project with either, the FPGA loses. That's the whole point here; FPGAs are a solution to niches - always will be.

They do a lot and some better than other DSPs. It was an example, to counter your point. BTW, I'm not trying to say that DSPs are the end-all solution, like you are trying to, with FPGAs. ...and like I said, I've done way more FPGA design than DSP (I only do hardware).

I can't help it if you don't like being called on your bullshit. If you don't like being called on it, don't bullshit.

Reply to
krw

I liked this article and it's one of the things that has me interested in FPGAs:

formatting link

The attractive thing there (compared to DSPs) is having multiple DSP blocks right there on the FPGA, even in fairly cheap ones. Even at low clock rates you can outcompute a pretty serious DSP by using those multipliers in parallel. And more and more powerful function blocks (hard macros) on the FPGA are on their way. But for computing directly in the LUTs, there is obviously a price to be paid.
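As a rough illustration of what those parallel hard multipliers buy you (this is a made-up sketch, not from a real design; the module name, widths and tap count are invented), a typical synthesizer will map each '*' below onto a hard DSP block, so all four multiplies happen every clock even at a modest rate:

module par_dot4 #(
    parameter W = 16
) (
    input  wire                  clk,
    input  wire signed [W-1:0]   x0, x1, x2, x3,   // four input samples
    input  wire signed [W-1:0]   c0, c1, c2, c3,   // four coefficients
    output reg  signed [2*W+1:0] y                 // dot-product result
);
    reg signed [2*W-1:0] p0, p1, p2, p3;

    always @(posedge clk) begin
        // One multiply per hard DSP block, all four in parallel
        p0 <= x0 * c0;
        p1 <= x1 * c1;
        p2 <= x2 * c2;
        p3 <= x3 * c3;
        // Small adder tree: four multiplies plus three adds every clock
        y  <= (p0 + p1) + (p2 + p3);
    end
endmodule

Scale the tap count up and even a cheap part is doing dozens of multiplies per clock, which is the point about outcomputing a serial DSP.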

Reply to
Paul Rubin

It's not always known how much you need; you discover things during development, rework algorithms, and sometimes trade RAM for speed. Customers want more features.

Hmm, now that really sounds like turning a constraint into a feature!

Yes, if you have a powerful FPGA you could usually use a less powerful CPU.

--

John Devereux
Reply to
John Devereux

The Proper Fixation article posted above is an interesting read.

It's been over 15 years since I last did any programmable logic, but I'm now in the process of learning Verilog, courtesy of the legal department of a (large) client. I need to be able to testify about a little 250-line bit of Verilog for a CPLD in an accused product.

Given the amount of C/C++ I've written over the years, Verilog is pretty easy to read, except for the ugly Pascalian begin/end blocks, but for testimony it'll be helpful to be able to say that I've designed working hardware in Verilog. Fun stuff.
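For anyone who hasn't seen it, a made-up toy module shows the point: where C would write  if (load) { q = d; cnt = 0; } else { cnt++; }  Verilog spells the braces as begin/end:

module load_counter (
    input  wire       clk,
    input  wire       load,
    input  wire [7:0] d,
    output reg  [7:0] q,
    output reg  [7:0] cnt
);
    always @(posedge clk) begin
        if (load) begin
            q   <= d;      // latch the new value
            cnt <= 8'd0;   // and restart the count
        end else begin
            cnt <= cnt + 8'd1;
        end
    end
endmodule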

The main thing the blogger leaves out of the equation is task switching. People have been talking about reconfigurable processors for yonks, but it never happens because you can't usefully run, say, 10 threads using the same resources. In embedded applications, sure. In general purpose computers? Not gonna happen.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

That depends on the software risk class. If the software imposes no risk (risk mitigated in hardware for example), I don't think they would care.

Indeed, you need to have procedures in place for changes and lifecycle management.

I guess that was a safety-critical (isolation?) transformer then. Depending on the available paperwork and test data, that could require re-testing some parts. But re-certing the whole device sounds a bit much; then again, I don't know the circumstances.

What can also happen is that an old medical device was certified under the 1st edition of the European 60601 standard. If you then change something, chances are that you need to re-cert for the 2nd edition.

At least that was until the release of the 3rd edition. That now requires that everything you sell is certified under the 3rd edition, not only changed devices. A huge re-certing effort for all older devices, changed or not.

Yes, that is where having (and using!) procedures and standards is a real benefit.

I understand that those markets have strict requirements. But it surprises me that their only answer to (minor) changes would be: recertify the entire device. Instead of: show us the changes and their effect, and we'll then see whether we need a full re-cert or just a partial one.

In my experience, in medical devices you can sometimes do changes without re-certification, but certainly not always. That's why I started with "That is not always true".

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail) 

I've already got a female to worry about.  Her name is the Enterprise. 
Reply to
Stef

There is also a price to pay for hard macros that go unused. That includes DSPs and even block RAM. There is a *lot* of waste silicon in FPGAs.

Reply to
krw

That is almost never the case. An FPGA, just like a DSP or uC, is too close to the game to be able to make that kind of safety claim in most cases.

We always do. But then you must also follow those procedures and document that you did. Meaning there will be a lot of effort after each design change no matter what. It really doesn't make much of a difference whether the FDA checks such compliance directly or you self-certify (honesty assumed here).

Yup. In medical they almost all are, even signal transformers.

Not for the whole unit but, for example, a power module. It has to go through the whole UL spiel again. Doesn't really matter if it's the whole machine or a smaller part of it, the effort, time and cost are quite similar.

In some markets such a change requires full re-cert, the whole enchilada.

A boondoggle for test labs.

For my office I have them even for non-med and non-aero designs.

All I can tell you is what my clients tell me, "If we add this one diode we would have to do this all over ..."

In the end the effort is mostly the same. In SW or firmware it's regression testing et cetera. For safety boundary changes it's module tests. And there it hardly makes a difference how much of it must be re-tested.

Similar in aerospace. You change something on a module that goes into aircraft and whoops, the whole RTCA/DO-160 testing has to be done all over again. There are very few changes that would not trigger this.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Agreed, it would have to be something like a change to the switch for the courtesy ADDITIONAL reading light, to assist any reading of paper documents, that MIGHT not need ANY retesting.

That's the important thing: it must be documented as tested, and how.

And in aerospace..

Then you get what I have seen for mil-spec ASICs: every new wafer batch has a small sample packaged and built up, and since these are engine sensor devices, after normal testing they have to do at least 100 hours of aircraft flight-time tests before the rest of the devices can be tested and assembled.

This happens on EVERY wafer batch.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
Reply to
Paul

And only if there is a stern warning "Hot! Caliente! Do not put in mouth! No introduzca en la boca! Children under the age of 65 shall not ...".

[...]

For mission-critical parts it has to be strict. Sometimes when you probe through conformal coating this has to be documented. Who probed, when, why, who re-sealed the puncture breach, signed and dated.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

As there is in *any* complex device. Even your car likely has lots of features most people don't use; they all cost something. But each feature adds value for some users and the cost is relatively small. In the end it is much cheaper than each user having a custom design, and that is the point.

--

Rick
Reply to
rickman

Oh, GMAFB. FPGAs are *designed* to have waste silicon. The whole concept of programmable logic wastes silicon.

Good grief. At least rent a clue.

Reply to
krw

Yes, that is another way of saying "we don't want to design the thing, let's just code it and see how it works." My point is that when I ask software folks how much RAM they need, they look off to the corner of the room for a moment and then give me a number. I'm sure there are those who know how to estimate realistically, but it's not common. So they just want more than they think they *might* need. I'm sure a *lot* of RAM goes unused in most MCU designs. Someone in this thread would refer to that as silicon inefficiency.

No, it is a statement that you have to size a buffer to design it into hardware, just like you do in software. But in software realistic estimates are often put off and the numbers are played with until the design is done. If you blow your RAM budget on an MCU you need to use the next bigger one. Fine, but then you have a new BOM cost. Same with FPGAs, but hardware designers are used to actually coming up with hard numbers before they start.

No, this is just a statement that when doing hardware design it is customary to actually spec out all the details. That is another reason why FPGA designs typically have fewer issues in test and integration.

--

Rick
Reply to
rickman

Why do you mention the FPGA? I think we are not talking about the same thing here.

What I meant was adding some 'real' hardware to limit things that could get dangerous and are under software control. It is very common to add such protection to reduce the software risk class.

Example: a device generates a train of current pulses that pass through a patient's body for some kind of measurement. The safety of these pulses depends on the duration, frequency and current. All of these parameters are under software control, so in theory the software can create a dangerous situation. This puts the software in a high risk class.

If you add hardware that monitors the output signal (timers, comparators) and that switches off the output when the signal goes out of bounds, the software can no longer create a dangerous situation. That reduces the risk class of that piece of software.

This practice of adding hardware to remove the safety risk from software is very common.
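A rough sketch of what such a monitor can look like in a small CPLD (illustration only, not a certified design; the names, widths and limit are all invented): a free-running counter times the high phase of the software-requested pulse and latches the output off if it exceeds a hard limit that only a hardware reset can clear.

module pulse_guard #(
    parameter MAX_HIGH_CYCLES = 16'd1000   // absolute pulse-width limit
) (
    input  wire clk,
    input  wire rst_n,        // hardware reset only, not software
    input  wire pulse_in,     // pulse requested by the processor
    output wire pulse_out     // pulse actually driven to the load
);
    reg [15:0] high_cnt;
    reg        tripped;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            high_cnt <= 16'd0;
            tripped  <= 1'b0;
        end else begin
            if (pulse_in)
                high_cnt <= high_cnt + 16'd1;   // time the high phase
            else
                high_cnt <= 16'd0;

            if (high_cnt >= MAX_HIGH_CYCLES)
                tripped <= 1'b1;                // latch the fault
        end
    end

    // Gate the output: software can never stretch a pulse past the limit
    assign pulse_out = pulse_in & ~tripped;
endmodule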

That's where I don't agree. If I change something in my software that is protected by hardware, like in the above example, I can do my internal tests and write a document for the notified body. This of course takes time and care, but it is much less work than a full re-certification effort. I don't need to repeat my safety tests, EMC tests, etc.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail) 

Think lucky. If you fall in a pond, check your pockets for fish. 
Reply to
Stef

The thread has moved there, Rickman advocates that a lot of things can be better handled by FPGA. In the end it doesn't matter, programmable is programmable and that gets scrutinized. Has to be.

I generally have that. But this does not always suffice. Take dosage, for example. Suppose a large patient needs a dose of 25 units while a kid should never get more than 5. How would the hardware limiter know whether the person sitting outside the machine is a heavy-set adult or a skinny kid?

You can take your chances but it carries risks. For example, I have seen a system blowing EMC just because the driver software for the barcode reader was changed. The reason turned out not to be the machine but the barcode reader itself. One never knows.

The other factor is the agency. If they mandate re-testing after certain changes you have to do it. In aerospace it can also be the customer demanding it, for example an airline.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

IBM's POWER8 can be configured to run eight SMT threads per core. Obviously not quite ten, but close. POWER7 can be configured to run in SMT1/2/4 modes, with that number of SMT threads. While much of the core is shared, the various modes also introduce different hard partitions in various resources. POWER8 adds SMT8 mode.

In both cases the core is really overbuilt for a single thread, but many workloads show significant throughput gains with multiple threads running.

Reply to
Robert Wessel

Sure, we're in violent agreement about that. (I've been writing multithreaded and clusterized programs ever since OS/2 2.0 came out, back in 1992.)

I was talking about trying to run multiple tasks by reconfiguring the hardware on the fly, which is not possible with any sort of reasonable task switching overhead.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

There are cases where a hardware limiter is an option, and there are cases where it's not. In your example above, however, the biggest risk is the nurse calculating and setting the dose, but that's another part of the risk analysis. ;-)

Yes, there are always chances and you have to weigh the risks. Making sure all units pass EMC testing can only be done by fully testing each unit under all circumstances, which is of course impossible.

Your barcode scanner example is unfortunate. But such a scanner could also change its behaviour when scanning different codes and under different lighting conditions. Did you perform EMC testing with all available barcodes and foreseeable lighting conditions?

Yes, if the agency or customer demands re-testing, there's nothing you can do but re-test.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail) 

She asked me, "What's your sign?" 
Reply to
Stef

[...]

Some companies EMC-test every machine that leaves production though.

That usually isn't necessary. I told the client to get lots of different new readers, and fast. They did that, and it turned out that many that were claimed to be "class B" failed badly. One didn't, and it had so much margin that there was no need to test it under lots of conditions. I took it apart to make sure that the designers had done a good job.

[...]
--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg
