Re: board - T562.jpg

John Larkin wrote:

>>>> This is the thing I'm working on this month. It's a delay generator,
>>>> with an MC68332 uP on the back side that manages things. One of my
>>>> guys quit, leaving behind about 14 klines of nasty, buggy code, so I
>>>> thought it over for about 18 seconds, tossed it, and started over
>>>> from scratch. I'm working with another guy who is re-doing the
>>>> nasty, buggy FPGA design, ditto. He says bad things about the
>>>> V8.2/SP3 Xilinx WebPack software.
>>>>
>>>> The application program is in flash, soldered down, and we're going
>>>> to include a flash boot-block thing that lets you reflash the app
>>>> code through the serial or ethernet ports, to upgrade the firmware.
>>>> That's sort of mind-boggling, since the flash which holds this boot
>>>> program disappears from the uP bus while it's being erased or
>>>> programmed.
>>>>
>>>> John
>>>
>>> That's an interesting non-ortho arrangement :)
>>>
>>> On WebPack 8.2 SP3, I'm afraid he's right. There have been some
>>> rumblings on comp.arch.fpga about it recently, and I 'upgraded' to it
>>> for my latest design, and then it would not process three previous
>>> designs, although those have no errors I can see. Xilinx claimed it
>>> had 'tightened up' certain things, but it caused me some grief,
>>> because those designs are the basis for a number of things where the
>>> FPGA code is designed to be in-system upgradeable should some new
>>> feature be requested. I eventually re-installed 7.1 (from an old full
>>> download) for those projects.
>>
>> Yeah, I wish people would stop breaking things and stay absolutely
>> backwards-compatible with existing designs. FPGAs were supposed to
>> make hardware design easier, and then they sent in an army of
>> programmers to replace hardware problems with software nightmares.
>>
>> The *service pack* is a 300 megabyte download.
>>
>>> I've done the reprogrammable flash thing myself and I definitely
>>> concur it _can_ get a little hairy.
>>>
>>> Wish you the best of luck getting the design fully operational. What
>>> are the specs?
>>
>> Here it is. Fortunately, the hardware is in good shape, so all we have
>> to do is pound the firmware and fpga into submission. Soon.
>>
>> formatting link
>>
>> John
>
> Nice specs indeed.
>
> If I had the money, I'd want one ;)
>
> Cheers
>
> PeteS

But hey, this is a revelation:

Hardware design has always been comforting because it is direct, simple, visible, wysiwyg, physical, and generally reliable. The tools, oscilloscopes and such, are approachable and dependable. I can use a 30-year-old tube-type TEK oscilloscope to debug the most modern analog or digital circuits, without downloading and installing service packs.

Software is abstract, indirect, bizarre, and unreliable. The tools are buggy, bloated, always changing, unpredictable, pig slow, and seldom backwards-compatible. I can't use current-gen tools to edit a 2-year old FPGA design, and I'm lucky if I can somehow still find and run the older tools.

So, FPGAs, VHDL, and the associated software tools are the trojan horse that's finally letting the software people get revenge, finally forcing us hardware designers to depend on (and endlessly pay for) their byzantine and unreliable methodologies, to trap us in the gotta-upgrade-but-every-generation-has-more-new-bugs loop.

And the new Windows-based scopes and logic analyzers, of course... same idea.

John

Reply to
John Larkin

Well, this could fill a book.

One of my real pet peeves is that I don't have the control over the synthesis and PAR process that I have with C (for example - I've written many hundreds of klines of C).

As an example, there are times I _deliberately_ want to delay a signal by a prop delay or two, but the damn tools optimise it out. If I had a 'volatile' keyword, or perhaps a 'pragma', to tell the tool to *leave this damn code alone and don't think of optimising it*, it might be better.

This snippet, for example:

wire SigA;   // input signal
wire SigB;   // intermediate signal
wire SigC;   // output signal

buf b1 (SigB, SigA);   // gate primitive: output first, then input
buf b2 (SigC, SigB);

the tools will optimise them out when I _specifically_ want them there to add a delay. Of course, you can always say

assign #10 SigC = SigA;

but I prefer to use buffers, because their timings are far better specified, and the tools are more likely to do what I want.
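The closest real analogue to that hypothetical 'volatile' keyword is a synthesis attribute on the net. A minimal sketch, assuming the Verilog-2001 (* KEEP *) attribute syntax as understood by Xilinx XST-era and later tools (Synplify spells it syn_keep; whether the buffers themselves survive optimization is still tool-dependent):

```verilog
module delay_buf (
    input  wire SigA,   // signal we want to stretch by a couple of prop delays
    output wire SigC
);
    // KEEP asks the synthesizer not to collapse this intermediate net,
    // so the buffers below have a chance of surviving optimization.
    (* KEEP = "TRUE" *) wire SigB;

    buf b1 (SigB, SigA);   // gate primitive: output first, then input
    buf b2 (SigC, SigB);
endmodule
```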

Windows-based scopes - I have always said that a smart power switch on a Windows machine is the dumbest thing I ever heard of.

No - I agree with you. Real hardware is properly parameterisable and not subject to the latest 'thoughts' of a software engineer who doesn't have a F***ing clue about hardware.

Cheers

PeteS

Reply to
PeteS

and further (yeah, I know I had typos in the previous rant)

When I do pure hardware I do not have to try and figure out what the hell was done to implement my statements.

This was a major issue on a design I did about 4 years ago, where I interfaced an upstream bus to the busses on 6 devices (with a lot of other stuff), and the synthesis / PAR etc. kept optimising away certain things that were there to maintain the timing. The response I got was 'well, use pure synchronous design', but in this case that was simply not possible (an issue I am sure you'll understand).

I once deliberately did a DeMorgan transform by hand because I did not trust the tool to do it right. (Code available on request ;) )

Cheers

PeteS

Reply to
PeteS

Yup, this *is* the real world. We recently had to do a clock-edge deglitcher, using delay elements. It couldn't be synchronous, because we were, well, deglitching the clock! Ditto stuff like charge-pump phase detectors, where you really need exactly what you need, delays and all.

One thing you can do is add a pulldown to a pin (or ground it) and call that signal ZERO or something. Then just OR it with things to create new, buffered, delayed things. If you need more, run it through a shift register and create ZERO1, ZERO2, etc. The compiler can't optimize them out!
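The ZERO trick can be sketched in Verilog as follows (names invented here; zero_pin is assumed to be a package pin grounded or pulled down on the board, so the tools can't prove it's constant):

```verilog
module zero_trick (
    input  wire clk,
    input  wire zero_pin,   // grounded on the PCB; synthesis can't fold it
    input  wire SigA,       // signal needing a buffered, delayed copy
    output wire SigA_dly
);
    // Need more irreducible zeros? Run the pin through a shift register
    // to create ZERO1, ZERO2, ... each one clock apart.
    reg ZERO1, ZERO2;
    always @(posedge clk) begin
        ZERO1 <= zero_pin;
        ZERO2 <= ZERO1;
    end

    // ORing with an "unknowable" zero creates a new, buffered, delayed
    // version of SigA that the optimizer cannot collapse back into SigA.
    assign SigA_dly = SigA | zero_pin;
endmodule
```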

So the FPGA software people ought to provide us an irreducible ZERO without wasting a pin, or a buffer that stays a buffer always.

So, is there a block of logic so complex that the compiler can't figure out that it indeed will always output a zero? Maybe the MSB of a thousand-year counter, but that wastes flops. Maybe some small but clever state machine that always makes zero but is too tricky to be optimized?

John

Reply to
John Larkin

As noted, I once did a DeMorgan transform by hand (simple one too) and it materially changed the compiled output. (For those who understand the transform, don't get upset at the comments; they were there for people who _didn't_).

Here it is:

******************************

else begin cs0
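As a generic illustration of the kind of hand transform PeteS describes (the signal names here are invented, not his), De Morgan's law rewrites a negated AND as an OR of negations:

```verilog
module demorgan_cs (
    input  wire addr_hit,
    input  wire cycle_valid,
    output wire cs0_n       // active-low chip select
);
    // Direct form:  assign cs0_n = ~(addr_hit & cycle_valid);
    // De Morgan:    ~(a & b) == ~a | ~b
    // Hand-transformed equivalent, written out explicitly to steer
    // the synthesizer toward a particular gate structure:
    assign cs0_n = ~addr_hit | ~cycle_valid;
endmodule
```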

Reply to
PeteS

They should back off this religious devotion to fully synchronous logic and give us a couple of dozen programmable true delay elements, scattered about the chip. But they won't because it's not politically correct, and because they figure that we're so dumb that we'd get into trouble using them.

John

Reply to
John Larkin

What exactly did the tool do with this design? I don't see anything that will tell the tool not to optimize the logic and eliminate your gates entirely; logically, your equation is equivalent to the original input signal. There are attributes you can give a signal to prevent the tool from optimizing it away - did you use one on cs0?

I also don't see what advantage your DeMorgan transform has. If you lock the cs0 signal so that it is not optimized away, the tool will use a LUT to produce cs0 from cs_I. It will also use a LUT to do the OR.

There are delay elements built into the IOBs of Xilinx parts. I don't recall just what you can and can't do with them, but if you run a signal through the delay and also run it without the delay, you can then OR the two in just the way you intended above. This may require two input pins however and it may not be usable unless you are running the signal into the input FF, I'm not sure.

Reply to
rickman

They can't back off on the religion - as you note, they think we are dumb. It is very hard to get ASIC companies to provide a guaranteed delay element, let alone FPGA companies. There are many applications that need fixed delay elements. I even remember when a delay element was the standard way for DRAM timing to be generated.

One way to add an unoptimized buffer is to allocate 2 input pins of the FPGA. For goodness' sake, we have 8 billion pins these days. Externally pull one high and one low. Then just AND or OR your logic as needed with these input pins. Since they are static, their routing delays are minimal (and desired). In the end, if they aren't used, just call them "Spare FPGA Inputs" and everyone will think you thought ahead!

Trevor

Reply to
Trevor Coolidge

"John Larkin" schreef in bericht news: snipped-for-privacy@4ax.com...

Well, synchronous design is easier and more understandable than asynchronous. So the "specialists" tell the world synchronous design is the only way, rather than confess they are unable to make reliable asynchronous designs. And let's be honest: a lot of old asynchronous designs are real nightmares of critical race conditions, ad hoc inserted RC delays, glitches and tricks. Good asynchronous engineering is rare, and was rare even in the days when synchronous design was not yet invented. (Practised may be the better word.)

I was once asked to "debug" a time-critical circuit. Looking at the asynchronous circuit, which failed only once a week or so, and reading the full page of explanations about its inner workings, I discarded the whole design, doubled the clock frequency, and made a rock-solid synchronous circuit. They were almost disappointed it was that "easy". I could have made disciples for "synchronous only", but I'm not a true believer myself.

Times they are a-changing. I heard rumours that asynchronous design is having a revival, as it can be used to build faster circuits. There is supposed to be special software for it, meant to design processors and the like. It's that software that makes me shiver.

When the first micros hit the market, an engineer complained that those programmers used lots of extra space. He would have been fired for using even one percent of that number in extra gates. (Nothing to do with old uncle Billy :) Things have not grown better since, but the really bad development is the habit those programmers have of thinking and making decisions for me. I really hate that.

Almost all of these programmers have a theoretical background in "software engineering". I once visited such a college: lots of computers, tens of masters and hundreds of students, and no one who could handle a soldering iron. There was none in the whole place. An assistant brought his own from home when he needed to repair a cable.

None of those "engineers" ever wrote bug-free software, yet they expect hardware designs to be exactly that. Nevertheless, if something goes wrong, it's inevitably a hardware failure. I once had technicians exchange hardware seven times before they stopped denying it was a bug in the software. Another time, one of my colleagues disassembled a floppy driver to prove it did not meet the drive specifications. (It could not format a floppy, and they insisted we used the wrong brand of floppy, although we tried every brand we could lay our hands on.) They complained about the disassembly rather than admit their own failure.

Well, maybe we're doomed. Maybe I'm going to write software. What's worse? :)

petrus bitbyter

Reply to
petrus bitbyter

What John is asking and what you're asking are two different things. What you're asking is not doable at all, at least not at a cost you'd be willing to pay. There are no "guaranteed delay elements" in ASICs; the delay variation across the whole PVT range is around 4 to 6 times. You may have programmable delay elements, which need to be calibrated (and recalibrated as temperature and voltage change). I don't think the reason you don't get this off the shelf is political correctness but self-preservation: if you change processes as often as X & A are changing, and you want the designs to run in the next chip, you have to use fully synchronous implementations. Absolute delays, with or without calibration, don't port to different processes too well.

Reply to
mk

[snip]
[snip]

I'm thinking about it... can't you do all analog functions with a uP now anyway ?:-)

...Jim Thompson

--
|  James E.Thompson, P.E.                           |    mens     |
|  Analog Innovations, Inc.                         |     et      |
|  Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    |
|  Phoenix, Arizona            Voice:(480)460-2350  |             |
|  E-mail Address at Website     Fax:(480)460-2142  |  Brass Rat  |
|       http://www.analog-innovations.com           |    1962     |
             
I love to cook with wine.      Sometimes I even put it in the food.
Reply to
Jim Thompson

"Jim Thompson" wrote in message news: snipped-for-privacy@4ax.com...

You're some sort of smug bastard on the quiet. Obviously they can, it's just that your crappy analog electronics isn't fast enough to make it viable. Instead of sitting on the sidelines and sniping you should get your finger out.

DNA

Reply to
Genome

They do have delay elements, but they tend to be encapsulated. There are pin delay elements, and the multi-phase DLLs use calibrated delays (which is why they are granular).

As part of their operation, the DLLs have to lock and maintain tracking over temperature/process, and I've thought that one thing the vendors could do is allow user access to that calibration/tap pointer register - but that is a niche market.

-jg

Reply to
Jim Granville

There are some people doing serious (cpu-scale) async logic, but that's pretty thin-air stuff. What I meant was that, even in a properly synchronous design, there are times where a real, analog delay is a useful design element, and it sometimes can save a full clock worth of time... not all data paths need a full clock to settle. Plus, it would be nice to be able to dynamically tune data:clock relationships and such. I have one architecture that we use a lot that simply must have a real, unclocked one-shot (among other things, it stops and resets the clock oscillator!) so we have to go off-chip to a discrete r-c.

I toured the Cornell EE department recently. 95% of the screens are on PC's, maybe 5% are oscilloscopes. No soldering irons in sight, just those drecky white plastic breadboards. Tulane, my alma mater, used Katrina as an excuse for eliminating the EE department entirely. I guess those labs are too expensive, compared to one TA teaching 400 kids art history in a lecture hall, all at once.

Yeah, there is a lot of ad-hoc async stuff around, full of 555's and one-shots and race conditions and such. Disciplined sync design should be the starting point from which to cheat.

I'm stuck, all this month at least, rewriting a disastrous 14,000-line mess (68K assembly) that took over a man-year of work to make this bad. Programming is OK, but after a couple of weeks of it I get depressed... too much like bookkeeping. I can design hardware forever.

Gotta do the ghastly serial DAC driver next. The Xilinx bit-bang serial config thing worked like a charm, even blinking an LED during the loop.

John

Reply to
John Larkin

There's rarely "the only" way. However, the alternatives may be hard ;)

If you have access to the source, those issues are easier to work around..

In many places students get by on pure theory and the occasional lab. Access to labs is scarce outside the formalised sphere, which will deter many people from trying on their own. Theoretical tests are rewarded substantially, and that is what you get..

I heard an interesting comment once: students are not to be trusted with untyped languages because they can't handle them. Still, the same people are expected to handle advanced math with pen & paper.. Maybe they expect correct math and a spaghetti implementation? :-)

Maybe because failure to pay attention in hardware has such dire consequences, directly? :) You can't undo/recompile the magic smoke..

Reply to
pbdelete

Can't you rewrite it in C or so?

Reply to
pbdelete

How about programs such as SolidWorks that are only one-way upward compatible? I.e., you can open a part or assembly file created in SW 2005 with SW 2006, but once you save it in 2006 it cannot ever be opened again in 2005 (or 2004, or 2003, etc.). Thus forcing everyone to upgrade...

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

I'm C-phobic. It looks like monkeys pounding on typewriters to me. The 68K is a beautiful machine to program in absolute assembly, bare-metal, no libraries or linkers or anything. A typical embedded program will run 4k lines, 4-8 kbytes, and just work with very little, under 10%, debugging time.

John

Reply to
John Larkin

Virtex-4 has them: the IDELAY elements, which give you 64 steps of delay with 75 ps granularity.
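A fixed-mode instantiation might look roughly like this. The parameter and port names follow my recollection of the Virtex-4 primitive and should be checked against the Xilinx libraries guide; an IDELAYCTRL instance with its reference clock is also required elsewhere in the design:

```verilog
// Sketch only -- verify names against the Virtex-4 libraries guide.
module idelay_fixed_example (
    input  wire sig_raw,
    output wire sig_delayed
);
    IDELAY #(
        .IOBDELAY_TYPE ("FIXED"),   // fixed, non-adjustable delay
        .IOBDELAY_VALUE(32)         // 32 taps of ~75 ps, roughly 2.4 ns
    ) sig_delay (
        .O  (sig_delayed),
        .I  (sig_raw),
        .C  (1'b0),                 // dynamic-adjust inputs unused in FIXED mode
        .CE (1'b0),
        .INC(1'b0),
        .RST(1'b0)
    );
endmodule
```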

Reply to
Ray Andraka

Almost 5 ns! Most cool.

John

Reply to
John Larkin

ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.