Bezier lengths w/o lots of resources

Hi,

Not the *ideal* group for this post -- but the "right" group would undoubtedly throw *way* more resources at the solution... resources that I don't *have*! :<

I'm looking for a reasonably efficient (time + space) technique for approximating Bezier curve lengths from folks who may have already "been there"...

I was thinking of subdividing the curve per De Casteljau, then computing the lengths of the individual segments obtained therefrom and summing these as a lower limit on the curve length, with the length of the control polygon acting as an upper limit.
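(For reference, the split itself is cheap -- at t = 1/2 it's all adds and shifts. A rough, untested sketch, names of my own invention:)

/* One De Casteljau split of a cubic (p[0..3]) at t = 1/2.
   Each midpoint is one add and one shift per coordinate. */
typedef struct { long x, y; } pt;          /* fixed-point coordinates */

static pt mid(pt a, pt b)
{
    pt m = { (a.x + b.x) >> 1, (a.y + b.y) >> 1 };   /* truncates */
    return m;
}

static void split_half(const pt p[4], pt left[4], pt right[4])
{
    pt p01 = mid(p[0], p[1]), p12 = mid(p[1], p[2]), p23 = mid(p[2], p[3]);
    pt p012 = mid(p01, p12), p123 = mid(p12, p23);
    pt p0123 = mid(p012, p123);            /* lies ON the curve at t = 1/2 */

    left[0]  = p[0];   left[1]  = p01;   left[2]  = p012; left[3]  = p0123;
    right[0] = p0123;  right[1] = p123;  right[2] = p23;  right[3] = p[3];
}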

But, that's still a lot of number crunching! If I work in a quasi fixed point representation of the space, it still leaves me with lots of sqrt() operations (which are hard to reduce to table lookups in all but a trivial space! :< )

So, obviously, I need a "trick". Something to throw away that gives me a "good enough" answer -- the sort of thing the "ideal" group wouldn't consider settling for ;-)

Suggestions? Pointers?

Thx,

--don

Reply to
D Yuniskis

You might want to talk to some of the game groups like gamedev.net for algorithms.

formatting link

Regards,

Walter..

-- Walter Banks Byte Craft Limited

formatting link

Reply to
Walter Banks

I don't know if the code works, but here's a possible solution that doesn't look too involved:

formatting link

George

Reply to
George Neuner

That's a pretty straightforward "subdivide-and-sum-chords" approach. Note that it hides lots of expensive math!

flt l0 = p0.length();
flt l1 = p1.length();
flt l3 = p3.length();

(i.e., the "length" method computes euclidean distance... two mults and a sqrt!)

[and were they charging by the character when this guy wrote this code?? jeez, who the hell uses *ell* as a symbolic identifier?? esp lowercase! :-/ ]

I have to sort out the criteria he is using to determine when the fit is "good enough". Appears to be looking at the shape of the control polygon at each instance to see if it is "too crooked", etc. (I suspect the magic numbers are truly magic!)

The problem with this approach is it discards all of the computations "before the FIRST return" most of the time.

If you think about it, the control polygon at this point defines an upper limit on the length of the (soon to be subdivided) curve. But, you also have points which can be used to create chords that bound the *lower* limit of the curve. This lets you get a better approximation "quicker" (assuming you can keep the cost of the math down).
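In rough C (untested; seg_len()/bez_len() are my own names, and split_half() is the halving sketch from my first post), the idea looks something like this:

#include <math.h>

/* Length bounded below by the chord and above by the control polygon;
   recurse (via De Casteljau) only while the bounds are too far apart. */
static double seg_len(pt a, pt b)          /* the expensive part */
{
    double dx = (double)(a.x - b.x), dy = (double)(a.y - b.y);
    return sqrt(dx * dx + dy * dy);
}

double bez_len(const pt p[4], double tol)
{
    double chord = seg_len(p[0], p[3]);                     /* lower bound */
    double poly  = seg_len(p[0], p[1]) + seg_len(p[1], p[2])
                 + seg_len(p[2], p[3]);                     /* upper bound */

    if (poly - chord <= tol)               /* bounds close enough: done   */
        return (poly + chord) * 0.5;

    pt left[4], right[4];
    split_half(p, left, right);            /* De Casteljau at t = 1/2     */
    return bez_len(left, tol * 0.5) + bez_len(right, tol * 0.5);
}

The [chord, poly] pair is the run-time error interval; watching how it shrinks from level to level tells you when further subdivision stops paying off.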

Reply to
D Yuniskis

algorithms

Ah, good point! I have some friends in that industry. I'll see if they have any better pointers (note that they tend to have *lots* of resources, relatively speaking, to throw at the problem... :< )

Reply to
D Yuniskis

algorithms

Lots of resources but not much time to do it. :) They know a lot about what is good enough for the problem.

The link I posted is of a Bezier length discussion.

Regards,

Walter..

-- Walter Banks Byte Craft Limited

formatting link

Reply to
Walter Banks

Since Bezier curves are polynomial, can't you find an exact solution for each segment and solve it? Or is that what you find too expensive?

(This could be a naive question -- I haven't done the math to see how complicated the answer is).

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
Reply to
Tim Wescott

The math everyone is avoiding is, for each line segment:

Xl = Xs - Xe;   // X end points
Yl = Ys - Ye;   // Y end points
len = sqrt((Xl * Xl) + (Yl * Yl));

For lots of short segments.

w..

Reply to
Walter Banks

There isn't a closed-form solution for the length of a cubic (or higher-degree) Bezier curve.

The length of a curve is the integral of the length of the tangent vector, sqrt(x'(t)^2+y'(t)^2). For a cubic curve, x'(t) and y'(t) are quadratic, giving sqrt(f(t)) where f(t) is quartic.

For a quadratic Bezier curve, the tangent length is the square root of a quadratic expression, for which the integral has a closed-form solution.
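For what it's worth, a sketch of that quadratic case, obtained by integrating sqrt(A*t^2 + B*t + C) directly (my own untested code; the collinear case falls back to the chord, and an exactly antiparallel degenerate case isn't handled):

#include <math.h>

/* Closed-form arc length of a quadratic Bezier (x0,y0)-(x1,y1)-(x2,y2).
   B'(t) = 2*(a*t + b) with a = P0 - 2*P1 + P2, b = P1 - P0, so
   |B'(t)|^2 = A*t^2 + B*t + C with the coefficients below. */
double quad_bezier_length(double x0, double y0,
                          double x1, double y1,
                          double x2, double y2)
{
    double ax = x0 - 2.0 * x1 + x2, ay = y0 - 2.0 * y1 + y2;
    double bx = x1 - x0,            by = y1 - y0;

    double A = 4.0 * (ax * ax + ay * ay);
    double B = 8.0 * (ax * bx + ay * by);
    double C = 4.0 * (bx * bx + by * by);

    if (A < 1e-12)      /* control point at the midpoint: a straight line */
        return sqrt((x2 - x0) * (x2 - x0) + (y2 - y0) * (y2 - y0));

    double Sabc = sqrt(A + B + C);         /* |B'(1)| */
    double Sc   = sqrt(C);                 /* |B'(0)| */
    double Sa   = sqrt(A);

    /* standard antiderivative of sqrt(A*t^2 + B*t + C), evaluated 0..1;
       blows up if b is exactly antiparallel to a (log of 0) */
    return ((2.0 * A + B) * Sabc - B * Sc) / (4.0 * A)
         + (4.0 * A * C - B * B) / (8.0 * A * Sa)
           * log((2.0 * Sa * Sabc + 2.0 * A + B) / (2.0 * Sa * Sc + B));
}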

Reply to
Nobody

algorithms

Yes, there are lots of academic papers, forum discussions, etc. re: this topic. But, they tend to be looking for general solutions *and* use "general computations" in getting to them!

As with most things embedded, you (I) have a specific application domain to address. You don't need a general solution. There are often constraints that apply which can be exploited to give you an edge in solving the problem more "practically".

So, finding the right combination of "tricks" is the name of the game :>

Reply to
D Yuniskis

You can walk the curve (varying t over [0..1]) but that gives you more detail than you need -- and at a very high cost.

So, you approach it as a simpler problem -- find some convenient points (easy to compute) that you know lie on the curve (*hoping* they are far enough apart that you can quickly progress along the length of the curve) and hope that consecutive points fit the curve "close enough" that you can skip everything in the middle :>

Since you are then trying to do a piecewise linear approximation to the curve, you need to be able to compute the lengths of each of those segments (which is expensive "in general" and, since you need to use floats/doubles to get any accuracy, it's *really* expensive IN PARTICULAR!)

So, you want to make sure *every* calculation you perform is used wisely -- even if the approximation(s) aren't currently good enough. A lot of the published algorithms concentrate on expressing the algorithm clearly rather than expressing it efficiently.

Reply to
D Yuniskis

Yeah, the problem is that you can't (?) constrain the locations of the control points (shape of the curve) to ensure a particular maximum curvature, etc.

As a result, you can end up with *very* short segments in order to keep the area between segment and curve small (i.e., good fit). So, you end up needing lots of bits to the right of the binary point. OTOH, you can't (?) constrain the total length of the curve so you can also need lots of bits to the *left* of the binary point.

I.e., doing this on a machine with hardware floating point support is *much* easier than on a platform where that is implemented in a *library*! :<

Reply to
D Yuniskis

What kind of representation are you using (number of bits, binary point location) and how many bits should the length have ?

How expensive is the sqrt using the hardware (if it exists) or using the standard run time library (which must also handle special cases) ?

A huge table could be replaced with a shift, two table lookups, one multiply and one add.

If the sqrt argument is in floating point format, the exponent is divided by 2 and the normalized mantissa could go through a table lookup. A single-step normalization (increment exponent by one, shift right mantissa by one bit) may be required.

A suitable floating point representation would be 1 byte 2's complement exponent and 16 bit mantissa (without any hidden bit tricks).
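As a rough C sketch of that table-lookup sqrt (frexp()/ldexp() standing in for the hardware normalization, and the table size chosen arbitrarily):

#include <math.h>

#define TBL_SIZE 256
static double sqrt_tbl[TBL_SIZE];          /* sqrt(m) for m in [0.25, 1.0) */

void sqrt_tbl_init(void)
{
    int i;
    for (i = 0; i < TBL_SIZE; i++)
        sqrt_tbl[i] = sqrt(0.25 + 0.75 * (i + 0.5) / TBL_SIZE);
}

double tbl_sqrt(double x)
{
    int e;
    double m;

    if (x <= 0.0)
        return 0.0;                        /* ignore the x < 0 case here */
    m = frexp(x, &e);                      /* x = m * 2^e, m in [0.5, 1) */
    if (e & 1) {                           /* odd exponent: single-step  */
        m *= 0.5;                          /* shift mantissa right...    */
        e += 1;                            /* ...and bump the exponent   */
    }
    /* sqrt(m * 2^e) = sqrt(m) * 2^(e/2), with m now in [0.25, 1.0) */
    return ldexp(sqrt_tbl[(int)((m - 0.25) / 0.75 * TBL_SIZE)], e / 2);
}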

Reply to
Paul Keinanen

Yes, but that's like numerically integrating exp(x) step by step because you don't know how to do integration using algebra. Of course, it may be that it's much more like exp(-x^2)...

Even for numerical integration, can't you use a Romberg-style scheme:

You can compute the length exactly by summing infinitely small linear segments. Call this L0. If instead you use a finite step size h of the control parameter, then the approximation will be L[h] = L0 + O(h^n) for some n (n is typically an integer, but I haven't checked for Beziers).

Let's say L[h] = L0 + k*(h^n) + O(h^m) with m > n

then do the integration again with a different step size; for simplicity, say 2h, which has the advantage that it can share much of the calculation. Then we'd expect

L[2h] = L0 + k*((2h)^n) + O(h^m).

so now (2^n)*L[h] - L[2h] = ((2^n)-1)*L0 + O(h^m), i.e. the h^n term has been cancelled.

So a new estimate of L0 is L0 = ((2^n)*L[h] - L[2h])/((2^n)-1) + O(h^m) which converges more quickly with reducing h than does the original.

Note that this says nothing about how you calculate the length only about how to get a better estimate from two poor estimates. The point being that the size of h you can get away with can be much larger and so the overall amount of work can be (much) less.
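As a concrete (untested) sketch of the extrapolation step, assuming n = 2 for the chord-sum error (which is worth verifying for Beziers):

/* Richardson/Romberg-style improvement of two length estimates:
   L_h from step size h, L_2h from step size 2h (half the segments,
   and it can reuse every other point).  With error k*h^n + O(h^m)
   and n = 2:  L0 ~= (4*L_h - L_2h) / 3.  */
double richardson2(double L_h, double L_2h)
{
    return (4.0 * L_h - L_2h) / 3.0;
}

/* e.g. better = richardson2(chord_sum(curve, 16), chord_sum(curve, 8));
   where chord_sum() is whatever fixed-step length routine you have.  */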

Still, I'm with Tim on this one. Try some math(s) first.

Peter

Reply to
Peter Dickerson

Part of the reason for the question is to see just how far into the mud I can drive these issues.

Note that the precision of all the math plays a role in the magnitude of the error (and accumulated error) -- not just the "cost" of the operation (space + time).

If the curve is constrained to an N.0 x N.0 bit rectangular space (2D curves, not the 3D curves illustrated in the example George posted), then the length of the longest such curve would probably be ~4N.

Note, however, that the control points for such curves could conceivably lie beyond that NxN space -- without violating this constraint. I think 2Nx2N would be a safe assumption for them (i.e., this affects *some* of the math as the points must be representable, computationally).

Sorting out how far to the right of the binary point to carry computations is a lot harder without constraints on the curves themselves. What I have been trying to do is just code a generic "text book" algorithm and allow me to pass masks to it that essentially discard precision after each (floating point) operation just so I can see how this affects the results FOR SAMPLE CURVES. I.e., an empirical analysis instead of a theoretical one. Hopefully, it will give me a handle on what I can get away with computationally. And, let me experiment with different characteristics of curves to see what aggravates the math the most/least!
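The masking itself can be as crude as something like this (just a sketch of what I mean; frac_bits is the "pretend" precision under test):

#include <math.h>

static int frac_bits = 8;                  /* precision being simulated */

/* Quantize a double to frac_bits fractional bits, so the effect of a
   reduced-precision representation on the final length can be observed.
   Crude: truncates toward zero, ignores exponent-range issues. */
double chop(double x)
{
    double scale = ldexp(1.0, frac_bits);  /* 2^frac_bits */
    return trunc(x * scale) / scale;
}

/* every intermediate result gets wrapped, e.g.
   len = chop(sqrt(chop(chop(dx * dx) + chop(dy * dy))));  */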

Think in terms of "days of yore" :> I.e., I am trying to drive the algorithm to a point where I can get by with just shifts and adds (instead of multiplication, division, etc.).

Ideally, I can find a numeric representation and algorithm that I can push into silicon. I don't see any way to get around the sqrt operator but doing that via CORDIC wouldn't be too expensive if I *really* want to can the whole thing.

I think you can cheat even more *knowing* what subsequent operations *will* be. E.g., postpone the normalization until after the sums have been computed, etc. Small optimizations but they might save a bit of time and precision.

The problem is deciding what constitutes "suitable" from a computational aspect -- as you accumulate "length", you can't sacrifice "resolution". E.g., even if N is only 10-12 bits, you quickly eat up that 16 bits as you are often summing very small segment lengths where you want to *carry* that precision in your sum (yet are forced to discard it as the accumulated length increases and consumes bits in your representation).

I'm just trying to get a handle on how *best* to approach the algorithm itself so I can better evaluate the consequences of any representation scheme on the "goodness" of the results. Then, see what it all *costs* :-/

Reply to
D Yuniskis

Working the numbers from "t" leaves you burning through lots of iterations of "oops, too big!" before you can get to suitably small increments that give you close enough approximations (there has been a LOT of research on this problem as it is so commonly used to represent curves/surfaces :< ).

Unless you can constrain the shapes of the curves, there is nothing you can do a priori (in your algorithm) to guarantee that an approach, choice of t, etc. will work.

E.g., years ago, I wrote an algorithm to draw lines of constant time-difference for a LORAN-C pen plotter. Basically, families of "concentric" (?) hyperbolae. But, you can't know a priori where on the family of curves you will be operating so the algorithm had to handle all cases. Computing *a* point on the curve took a full second -- you can't compute many such points before the user will grow impatient with you :>

But, this computation may just end up telling you, "I can't get to that point with a linear approximation as there is too much curvature here". So, you just *wasted* a second. Watching the algorithm creep into regions of increasing curvature was *painful* (as each successive estimate got "less good")

So, you sort out how you can get *some* information from the datum that, hopefully, makes your next actions a *little* bit better -- or, maybe not as painfully *bad*!

This algorithm (bezier length) isn't *nearly* as expensive. But, the same applies: make every computation "count" (towards achieving your final goal).

Reply to
D Yuniskis

But probably fewer than the naive approach with the same degree of non-constraint on the problem.

It isn't an algorithm per se, it's an optimization to avoid finer step sizes.

Years ago I wrote a routine to plot mass spectra which involved solving cubics for each data point, on an early 68k machine. A couple of thousand data points plotted as fast as anyone would care about.

In this case the previous and current results are used to improve the estimate, in the hope of avoiding having to go to higher resolution.

peter

Reply to
Peter Dickerson

For the polynomials you can't. The catch is that they're polynomial in the curve parameter, not in one of the coordinates. The exact solution for a third-degree Bezier spline would involve integrating the square root of a 4th-degree polynomial. Closed-form exact solutions for that don't exist.

Reply to
Hans-Bernhard Bröker

What's the "naive approach"? The algorithm I outlined drills down to increasingly finer segment sizes until it achieves a desired level of "fit". If the curve is *linear*, the length is computed as the distance between the endpoints!

Then you weren't working in a resource starved environment. :>

(e.g., the plotter ran on a 3MHz 8085, 10KB of code and 512 bytes of RAM -- for the entire instrument!).

I've found that failing to meet a ~300ms response time WITH A CLEVER ALGORITHM is a good shirtcuff definition for resource starved. E.g., if I push a button and can perceive the delay in the attendant action, then the code is struggling to do more than it can...

Yes, as does the algorithm I proposed. If the points you end up with don't look to be a good fit, you subdivide and try again (recurse). Using the first control polygon and the subsequent subdivided points as upper and lower limits on the "actual" value lets you come up with a numerical factor that bounds your error: i.e., the length is in [a,b] - which you can examine at run time and use to decide if you need to further subdivide. Watching how a and b vary with each refinement lets you see when you are just spinning your wheels.

Reply to
D Yuniskis

Adding/subtracting is easy to do in integer/fixed-point arithmetic, but very nasty to do in floating-point format due to the denormalization and normalization stages. Also, integer/fixed to float conversion is nasty due to normalization.

Doing sqrt((Xs-Xe)^2+(Ys-Ye)^2) might be easiest to implement by doing the differences in integer, take abs of both (to reduce table size) and use a table for squaring and using integer arithmetic for the sum and then convert to float for sqrt, especially if the result is needed in floating point representation.

Single cycle normalisation/denormalisation in hardware requires a barrel shifter capable of shifting multiple bits left or right. For normalization (to [0.5..1.0] or to [1..2]) a priority encoder is required to detect the highest bit set, so that the correct shift can be applied in the barrel shifter.

In order to take the sqrt from a integer/fixed point argument and generate floating point sqrt, group two bits at a time starting from the binary point (left and right), take OR of each 2 bit group and feed the outputs to a priority encoder and use the barrel shifter to normalize the result to [0.25..1.0] or [1..4]. Use the normalized mantissa for table lookup and use the shift count (after shifting out the LSB) as the exponent.

The floating point representation is very bad if you try to accumulate small increments to a huge sum. Sooner or later adding a small value does not change the sum at all.

If you really have to use such incremental system, at least accumulate (say 100..1000) samples into a temporary sum, before adding to a final sum.
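I.e., something along these lines (sketch only; BLOCK is arbitrary):

#define BLOCK 256

static double total = 0.0, partial = 0.0;
static int count = 0;

/* Accumulate segment lengths in a small temporary sum and only fold it
   into the running total every BLOCK samples, so the tiny increments
   aren't swallowed by a large accumulator.  Flush 'partial' at the end. */
void add_segment(double len)
{
    partial += len;                        /* small + small: little loss */
    if (++count == BLOCK) {
        total += partial;                  /* large + medium: less often */
        partial = 0.0;
        count = 0;
    }
}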

Reply to
Paul Keinanen
