Graphics rendering revisited

Since I was responsible for de-railing the original thread, let me be the one to beat the horse back to life, since it has interesting potential.

The original question was: What algorithms can you use to generate live video that contains only line art (lines, rectangles, curves, circles, etc.), if you can't use a frame buffer?

The benefit of using a frame buffer is flexibility. Namely, you get random access to any pixel on the screen. This opens up a wide range of algorithms you can use to play the performance-area-complexity tradeoff game.

Without a frame buffer, you only have sequential access to your pixels. No going back, no skipping ahead. Quite Zen, I suppose. Anyway, you lose access to a lot of frame-buffer algorithms, but some can still be used.
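Here's a rough throwaway-Python sketch of what that constraint means (the resolution, the rectangle, and all the names are arbitrary illustrations, nothing more): every pixel has to be decided combinationally from the live scan counters, in strict raster order, with no read-back of anything already emitted.

```python
# Sketch of "racing the beam": pixels are produced strictly in scan
# order, each computed on the fly from shape predicates, so no frame
# buffer is ever touched.

WIDTH, HEIGHT = 640, 480

def rect_hit(x, y, x0, y0, x1, y1):
    """True when (x, y) lies on the outline of an axis-aligned rectangle."""
    on_vertical = (x == x0 or x == x1) and y0 <= y <= y1
    on_horizontal = (y == y0 or y == y1) and x0 <= x <= x1
    return on_vertical or on_horizontal

def scanout():
    """Yield one pixel per 'clock', in raster order."""
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # purely a comparison against the live (x, y) counters;
            # earlier pixels are never read back
            yield 1 if rect_hit(x, y, 100, 100, 300, 200) else 0

frame = list(scanout())  # collected here only so we can inspect it
```

In hardware the generator body is just comparators on the horizontal and vertical counters, which is why this style scales so poorly once the primitives get interesting.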

The conceptually easy ones to understand are math-based algorithms, but they're often expensive hardware-wise. In the first section, I'll go over implementation issues of the ideas that other people gave. Nothing super complex.

The second section contains a more novel (maybe) approach based on pixel spacing. It's conceptually harder to get a handle on, but has the potential to require fewer resources. Unfortunately there are problems with the idea that I haven't fleshed out. Perhaps someone will have some ideas, or maybe it'll inspire something better.

Oh yeah, I was too lazy to double check what I wrote, so there might be problems. I also left things unfinished towards the end, I've got other things to think about. Hopefully it gets the ball rolling though.

Regards, Vinh

MATH ALGORITHMS
================

Lines
-----

There was a math-based algorithm mentioned by Peter Wallace, where you use y - (mx + c) = 0 and minx
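(The quote is cut off above, but the sign-test idea it starts from can be sketched like this. This is my own throwaway Python; the half-pixel threshold and all names are assumptions, not from Peter's post.)

```python
# Evaluate f(x, y) = y - (m*x + c) at each pixel of the scan; the pixel
# belongs to the line when f is within half a pixel of zero. In hardware
# f can be tracked incrementally (add m once per pixel clock, subtract 1
# once per scanline), so no multiplier is needed in the inner loop.

def on_line(x, y, m, c, half_width=0.5):
    return abs(y - (m * x + c)) <= half_width

def draw_line(width, height, m, c):
    """Return the (x, y) pixels hit, walking the raster in scan order."""
    return [(x, y) for y in range(height) for x in range(width)
            if on_line(x, y, m, c)]
```

Note this only behaves for shallow slopes (|m| <= 1); steep lines want the test done against x instead, same as any DDA-style scheme.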

Reply to
Vinh Pham

Lots of good stuff. I'll have to read it later tonight. I just wanted to modify one assumption you made: resolution. I'll be working at 4K x 2.5K, and maybe as high as 4K x 4K at 60 frames per second, soon. My current work is at 2K x 1.5K, 60 fps though.

Here's a product I finished recently that's working at 1920 x 1200 and 60fps.

formatting link

The design is 100% mine, electrical, board layout, mechanical, FPGA, firmware, GUI, etc.

Some of the highlights: Two 1.485GHz inputs, two 1.485GHz outputs, 165MHz DVI output, USB, lots of interesting real-time processing going on.

Yes, it has a frame buffer (four frames actually). No, it shouldn't be used to render graphics primitives.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0 snipped-for-privacy@pacbell.net
where "0_0_0_0_" = "martineu"

Reply to
Martin Euredjian

Just about the circle: another approach to circle drawing is using trigonometry. The inputs to the circle-drawing macro are of course the circle center (X0, Y0), the radius R, and the real-time scanning index (x, y); assume the active pixel matrix is 512x512.

First we need to check whether the vertical index is within the drawn circle by comparing |y - Y0| <= R. If yes, scale |y - Y0| to the radius R. Let's say yy = |y - Y0| / R. For this we may use a LUT for 1/R, L1(R) = 1/R:

yy = |y - Y0| * L1(R)    (1)

Note that yy = sin(theta), where theta is an angle in the first quadrant, 0 <= yy <= 1. Knowing sin(theta), one can find cos(theta) with another LUT, say L2(yy) = xx, where xx = cos(theta):

L2(|y - Y0| * L1(R)) = xx    (2)

(Note: sin^2(theta) + cos^2(theta) = 1.) Perform a multiply to find |x - X0| = R * xx, or:

|x - X0| = R * L2(|y - Y0| * L1(R))    (3)

Solving (3) for x:

x = X0 +- R * L2(|y - Y0| * L1(R))    (4)

From equation (4), one can see that x is a function of X0, Y0, R, and y. Note that (4) can be computed during horizontal blank time (it takes several clock cycles); register the results and perform another calculation. That means a single pair of LUTs L1, L2 can draw more than one circle.
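A quick software model of the scheme (my sketch only; the LUT depth of 1024 is an assumption, and in hardware L1 and L2 would of course be block RAMs, not Python lists):

```python
import math

N = 1024  # LUT depth, an arbitrary choice for this model
L1 = [0.0] + [1.0 / r for r in range(1, N)]            # L1[R] ~ 1/R
L2 = [math.sqrt(max(0.0, 1.0 - (i / (N - 1)) ** 2))    # L2[s] ~ cos(theta)
      for i in range(N)]                               #   for s = sin(theta)

def circle_edges(y, X0, Y0, R):
    """Per scanline y, return the two x crossings of the circle, or None."""
    dy = abs(y - Y0)
    if dy > R:
        return None                           # scanline misses the circle
    yy = dy * L1[R]                           # yy = sin(theta), eq. (1)
    xx = L2[min(N - 1, int(yy * (N - 1)))]    # xx = cos(theta), eq. (2)
    dx = R * xx                               # |x - X0| = R*xx,  eq. (3)
    return (round(X0 - dx), round(X0 + dx))   # x = X0 +- R*xx,   eq. (4)
```

Each scanline needs only the two crossings, which is what makes this fit the no-frame-buffer constraint: compute them during horizontal blanking, then fire two comparators during the active line.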
Reply to
marlboro


I guess you are talking about raster-scan displays without a pixel to pixel frame buffer behind it, and not about vector-drawing displays (like an oscilloscope in X-Y mode).

Interesting theoretical enterprise, but I really don't see the point. I remember quite some years ago talking to a guy who had invested millions of $ in developing software for Evans & Sutherland color vector displays for the drug design industry. I just casually threw out the comment that in 5 years the E&S gear would be in the dumpster, and everybody would have switched to pixel/raster scan systems. They were doing stuff with up to 100,000 simulated spheres on the screen, and he essentially told me I was so nuts that he couldn't even begin to explain how impossible it would be for a frame buffer to ever handle such a task. Well, of course, all that is history now, and his company had to invest a BUNDLE in converting all their software to adapt to the frame buffer mode of doing things.

Jon

Reply to
Jon Elson


Don't worry about it, there's nothing profound in it, I just got carried away when I started writing :_)


Jeeze that's quite a bit of bandwidth there.

formatting link

Cool way to expand the use of a Cinema Display. I bet HD-sized CRTs are awfully heavy and delicate. So with your product someone could view live HD footage from inside a small helicopter, for instance? Looks like it'll change the way people think of and use HD displays. Pretty cool to make a product that can affect the way people do their work.

Must be fun having a hand in every aspect of a product, it's your baby. Like the olden days of hand crafted cars, before Ford turned it into an assembly line.


Is PCB layout particularly challenging? Heh, everything probably is when you're processing that much data. Doesn't seem like it needs much ventilation, so heat's not much of a problem?

:> Yes, it has a frame buffer (four frames actually). No, it shouldn't be used

But...but...;_)

Reply to
Vinh Pham

Someone just had a rare situation where they couldn't use a frame buffer. You can think of it as an intellectual exercise :_)


Hahaha no wonder he refused to believe you. Sort of like when you buy a crappy product, but you make yourself believe it's great, because of all the money you spent on it.

Did E&S's vector display draw only outlines of spheres, or shaded? Shading with x-y vectors doesn't sound too fun.

What do you think was the main reason why people switched to pixel/raster? Simplicity? Scales better?

Thanks for the interesting anecdote Jon

Reply to
Vinh Pham


And you wouldn't outside of a contextual reference frame that allowed you to understand where/why this might be important. It's a very narrow field of application. Not mainstream at all.

Reply to
Martin Euredjian

Martin,

Looked at the specs of the EDP100. Looking very nice indeed. So to convert the HD-SDI into DVI you would need a deinterlacer and a frame rate converter. Guess that's where your 4 framestores come from. If you don't mind, I'd like to know how many fieldstores are actually used in the deinterlacer. Normally, you'd need two stores for doing the frame rate conversion (double-buffered). So that would leave you with 2 stores left to do deinterlacing, which allows for some nice 3-field algorithms.

Sorry to go off-topic with this; I'm just curious since I'm roughly in the same business.

regards, Jan

"Martin Euredjian" wrote in message news:d_3fb.8955$ snipped-for-privacy@newssvr27.news.prodigy.com...

Reply to
Jan


I can't get into the internals at that level as some things must remain proprietary. I'm sure you understand.

The de-interlacer uses some conventional algorithms and a couple of not-so-standard techniques. Keep in mind, this is a monitoring device and, as such, it tries very hard not to modify the incoming HD stream too much. Some de-interlacing techniques produce great-looking pictures that are highly synthetic. That's OK if you are building a deinterlacer for a system that will then have to process the image further, or for something like home TV. Probably not OK for professional use. At least that's my approach. Seems to work.

Can you elaborate? Privately would be OK, of course.

Reply to
Martin Euredjian

Oh, no, it painted them in very nicely. I don't remember whether it had a variable-width electron beam. They used this in the Rediffusion flight simulators and some other gear that I think had E&S image generators at the end of the processing chain. It looked much like Gouraud shading. Yes, that's why the thing cost several hundred K$.

Plain cost. Imagine how insanely difficult it would be to have a color CRT with a variable beam width, able to deflect from one side of the screen to the other in a couple of µs, and maintain focus and purity while doing all that! Then, you need a geometry engine and have to solve all the occlusion and clipping problems while flying through the graphics database one time only. With raster, you can push a lot of that work into the logic such that it all gets sorted out when the most-foreground pixel is rewritten. With vector, you'd better not write an occluded background mark, because the CRT can't erase what it has already drawn. Larger, faster, cheaper memory made raster POSSIBLE! When E&S designed this stuff, you just couldn't do read-modify-write cycles fast enough to make a usable raster system without making something like a 1024-bit-wide memory word, and doing all the read-modify-write work at 1024-bit word width. There actually were some late-1970s imaging systems that did this; they cost about $3 million per viewport and filled five 6-foot rack cabinets. Obviously, only for the absolute highest-end flight simulator systems and such.

Jon

Reply to
Jon Elson

I guess you are talking about raster-scan displays without a pixel to pixel frame buffer behind it, and not about vector-drawing displays (like an oscilloscope in X-Y mode).

Interesting theoretical enterprise, but I really don't see the point. And you wouldn't outside of a contextual reference frame that allowed you to understand where/why this might be important. It's a very narrow field of application. Not mainstream at all.

Well, I'm still not sure I understand it, after reading all the above. The reason for this is to convert from one video format (HD broadcast?) to another (high-end computer LCD monitor - DVI) without introducing a one (or more) frame delay? But, apparently, you ARE forced to delay the 2nd field, to make it show on a non-interlaced display.

Or, do the different scan rates come into play, as the output frame rate has no relationship to the input frame rate?

Jon

Reply to
Jon Elson

Because of the nature of the work I can't get into the sort of detail that would paint the whole picture for you. I apologize for that.

One way to look at it might be from the point of view of resources, data rates, etc. As you hike up in resolution/frame rate (say, 4K x 4K at 60 frames per second, which is what I'm working on), you need some pretty massive frame store widths to be able to slow things down to where the processing is manageable. I was looking into the idea of not having to add yet another frame buffer for something as "simple" as drawing very basic graphic primitives (let's just call them cursors used to mark things). If this could be done in real time, as the actual display data is being output, it would/could make an important difference in the design.

I also have a requirement for a 1-to-1 correspondence between the input image and the corresponding sampled data which will appear on screen as these graphic primitives. No big deal. The display data actually goes to another processor at the same time it goes to a display system. If you are rendering your graphics to a separate frame buffer, you will have to add one more frame of delay to the output image in order to guarantee coincidence. The memory required is not as much of an issue as the added frame delay. I truly can't get into it much farther than this.

Again, just a look-see for a better way to do it in real time. I'm already doing it in real time. So, I know it is possible. Just looking for a better way, if it existed and was publicly available.

Reply to
Martin Euredjian

Eeesh, shading with vectors. If there's a will, and a wallet, there's a way I suppose.

I guess it's like software defined radios where more and more of the analog processing gets pushed into the digital world, for the flexibility.

So back then vector graphics was quite viable, but they underestimated how quickly memory technology would advance. One of those fabled "paradigm shifts?"

Thanks for the insights Jon. Looks like E&S is still chugging along, on the raster bandwagon

formatting link

--Vinh

Reply to
Vinh Pham

Oh, if you just want to superimpose cursors, selection boxes, and things like that onto a live video signal, I think that may be very easy to do without a frame buffer. There are many systems that do this sort of thing, and have been doing it since the 70's.
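Something like this, presumably (a throwaway sketch of the comparator approach; the names and the crosshair shape are illustrative, not taken from any particular 70's system):

```python
# Cursor overlay without a frame buffer: the live pixel stream passes
# straight through, and a crosshair value is substituted whenever the
# free-running (x, y) counters match the cursor row or column.

def overlay_cursor(stream, width, cur_x, cur_y, cursor_val=255):
    """stream: iterable of pixel values in raster order."""
    out = []
    for i, pixel in enumerate(stream):
        x, y = i % width, i // width
        # pure comparison; the incoming video is never stored, only
        # passed along or replaced for this one pixel
        out.append(cursor_val if x == cur_x or y == cur_y else pixel)
    return out
```

In hardware the `out` list doesn't exist, of course; the mux output goes straight to the display, which is exactly why this needs no storage at all.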

Jon

Reply to
Jon Elson
