Searching for Vision Concavity Algorithm

I am looking for a morphology operator that will detect the concavity of binarized shapes. And, if anyone is in the know, I'd appreciate an explanation of the term "bulk hull" and where it came from. Very mysterious to me.

Thanks,

Brad b r a d @ a i v i s i o n . c o m

415-661-8068
Reply to
Brad Smallridge

Is "bulk hull" the same as "convex hull"? That's the form you get by (conceptually) pulling a piece of string tight around the shape, so that it hugs all the external convex parts and stretches across any concave regions of the shape's outline.

AIUI the concavity is 1-(A/H) where A is the shape's area and H is the area of its convex hull. But I could easily be wrong - it's ages since I did any of this stuff.
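In code it would be nothing more than this (a trivial sketch, plain software rather than anything FPGA-ish):

/* concavity measure as described: 0 for a convex shape, tending
   towards 1 as the shape becomes more "hollow" */
double concavity(double object_area, double hull_area)
{
    return 1.0 - (object_area / hull_area);
}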

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
Tel: +44 (0)1425 471223          mail:jonathan.bromley@doulos.com
Fax: +44 (0)1425 471573                Web: http://www.doulos.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.
Reply to
Jonathan Bromley

Yes, I think "bulk hull" is the same as the convex hull - I may have misspoken there. Nice analogy, the string.

Right. And the concavities are what is left if you subtract the original shape away from its convex hull.

But how do you calculate, or otherwise detect the concavities? I have done some initial work with small areas and bit patterns but one soon runs out of logic gates.

Reply to
Brad Smallridge

Most of the traditional binary image manipulation algorithms use various types of linked memory structures for flexibility. These don't work at all well in FPGA. When faced with the limited memory and non-existent memory allocation opportunities in an FPGA, you'll need special algorithms that are sure to be application specific. The key questions, it seems to me, are...

1) How is the image presented to you? Do you get it pixel-by-pixel from a camera of some kind, or are you given ready-processed data structures from a CPU that writes to your FPGA?

2) How big is the largest object that you need to process? For the most interesting image processing operations, you need to be able to store a complete bitmap of the whole object, which in practice means storing every pixel in a rectangle at least big enough to hold the object.

3) Do you need to process multiple objects simultaneously? I ask this because, the last time I did any binary vision stuff, we were capturing images of codfish on a conveyor belt. The belt went under a line-scan camera, and it was wide enough that there could be several fish in view at any one time. Our software needed to process all of them, else you got an awful lot of waste fish on the floor :-)

4) Are you interested in any included holes in the object, or only its outline? If you care only about the outline, then the extent of the object on each scan line can be represented by only two numbers - the left and right boundaries of the object on that line.
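To make (4) concrete, the storage per object can be very small indeed - something like this, in software terms (a sketch only; the names are illustrative):

/* outline-only representation: for each scan line of the object,
   keep just the left and right boundary columns */
#define MAX_OBJECT_LINES 512

struct object_outline {
    int first_line;              /* Y coordinate of the topmost line    */
    int num_lines;               /* number of scan lines in the object  */
    int left [MAX_OBJECT_LINES]; /* leftmost  object pixel on each line */
    int right[MAX_OBJECT_LINES]; /* rightmost object pixel on each line */
};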

Given the kind of data representation I outlined in (4), there's a relatively simple algorithm for extracting the convex hull. It requires only two additional bits of storage associated with each scan line, in addition to the left and right boundary information. However, it is not totally FPGA-friendly because at each scan line you have to look back over all previous scan lines on the object. You may find it's best to include a little CPU in the design, to help with the sequencing of this or similar algorithms.

I wonder... does your application REALLY need all the concavity information? It may be possible to get all the information you need from simple accumulated measurements such as the centre-of-gravity, area, and first and second moments of inertia of the object.
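Those accumulated measurements are also very FPGA-friendly, because each one is just a running sum updated as every object pixel arrives. In software terms, something like this sketch (names are illustrative; zero-initialise before use):

/* per-blob running sums; area, centroid and second moments all drop
   out of these once the blob is complete */
struct moments {
    long n;                        /* area: pixel count */
    long sum_x, sum_y;             /* first moments     */
    long sum_xx, sum_yy, sum_xy;   /* second moments    */
};

void add_pixel(struct moments *m, int x, int y)
{
    m->n      += 1;
    m->sum_x  += x;
    m->sum_y  += y;
    m->sum_xx += (long)x * x;
    m->sum_yy += (long)y * y;
    m->sum_xy += (long)x * y;
}

/* centroid is (sum_x/n, sum_y/n); the central second moments follow,
   e.g. sum_xx/n - (sum_x/n)^2 */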

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
Tel: +44 (0)1425 471223          mail:jonathan.bromley@doulos.com
Fax: +44 (0)1425 471573                Web: http://www.doulos.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.
Reply to
Jonathan Bromley

Hi Jonathan,

Thank you for your consideration.

I have lots of little objects being fed to me one pixel at a time. It's a line-scan sorting operation with multiple ejectors.

I have an algorithm now that does blob labeling. I am thinking that as the blob grows, the rate at which pixels are added to the left and right sides should first increase, then decrease. If I see increase, decrease, and then increase again, that might indicate a concavity, and it should be fairly simple to detect. At least this is what I am thinking today.
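Roughly what I have in mind, sketched as software rather than VHDL (one copy of this state per edge being watched; names are made up):

/* flag a possible concavity when the per-line growth of one edge
   increases, then decreases, then increases again */
enum phase { START, RISING, FALLING };

static enum phase state = START;
static int prev_delta = 0;

/* call once per scan line with the change in this edge's position;
   returns 1 when the increase-decrease-increase pattern completes */
int suspect_concavity(int delta)
{
    int flag = 0;
    if (delta > prev_delta) {
        if (state == FALLING)
            flag = 1;          /* an increase after a decrease */
        state = RISING;
    } else if (delta < prev_delta && state != START) {
        state = FALLING;
    }
    prev_delta = delta;
    return flag;
}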

I do need the concavity information because it has proven so far to be the best way to determine where to segment the objects that are touching. The standard erosion/dilation techniques only separate some of the objects.

I can remove holes with a filter if that becomes a problem.

So are you doing vision now?

Brad

Reply to
Brad Smallridge

OK. Since you get pixel-by-pixel, there are (I think) some iterative tricks that are worth trying. The idea I sketch out below is really line-by-line rather than pixel-by-pixel, and it assumes you've run-length encoded each line (i.e. you know the X coordinates of all the edges on the line).

(When I say "X" I mean the direction of camera scan; "Y" is the direction of travel of the belt. YMMV!)

I don't think it's quite that easy. You can get a concavity on an edge whose gradient is always positive...

***            '*' = object, '.' = concavity
****..
*****....
******.....
*************
**************

OK. Here's what I believe you do... Let's talk about the right-hand side of the object - left-hand side is processed exactly the same way, separately.

Keep track of the end of EACH AND EVERY line. This "end" is effectively a vertex of the object. What's more, add a single-bit flag to each of these line ends saying "this vertex is on the convex hull". By the time you're done, any vertex WITHOUT the marker is part of a concavity. Start with the right edge of the topmost line; this is obviously on the convex hull, so it gets a marker.

As you get each new vertex, mark it as being on the convex hull (it obviously is so at this stage, because it's at the bottom). Calculate its gradient (in fact I think you want the inverse gradient, dX/dY) back to the nearest vertex that HAS the convex-hull marker. First time around, this is sure to be the previous line. Then calculate (or look at a stored copy of) the gradient from THAT point back to the PREVIOUS point on the convex hull. If the new gradient is larger, then the edge has "turned outwards" and the previous point is no longer on the convex hull. Delete its marker and re-try the gradient comparison, to the *next* point back. So you go on until either you've reached a convexity or you hit the top.

When the object is closed-off, you now have a list of all its vertices and each vertex is labelled to say whether it's on or off the convex hull. Vertices off the hull mark one boundary of a concavity; the outer (hull-side) limit of the concavity can be determined by interpolating the hull between pairs of on-hull vertices.
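In software terms, for the right-hand edge only, it might look something like this (a sketch under the assumptions above: one right-edge X per line, line numbers counted from 0 at the top of the object; the integer cross-multiplication is the same test as comparing the dX/dY gradients, just without the division):

#define MAX_LINES 1024

static int right_x[MAX_LINES];  /* right edge of the object on each line */
static int on_hull[MAX_LINES];  /* 1 = this vertex is on the convex hull */
static int hull[MAX_LINES];     /* lines currently marked as hull points */
static int sp = 0;              /* number of entries in hull[]           */

/* 1 when the gradient b->c exceeds the gradient a->b, i.e. the edge
   has "turned outwards" and b must lose its hull marker */
static int turns_outwards(int a, int b, int c)
{
    return (right_x[c] - right_x[b]) * (b - a) >
           (right_x[b] - right_x[a]) * (c - b);
}

/* call once per scan line, topmost line of the object first */
void add_line(int y, int x)
{
    right_x[y] = x;
    on_hull[y] = 1;                  /* provisionally on the hull         */
    while (sp >= 2) {
        int b = hull[sp - 1];        /* nearest marked vertex above       */
        int a = hull[sp - 2];        /* the marked vertex before that     */
        if (!turns_outwards(a, b, y))
            break;                   /* reached a convex turn: stop       */
        on_hull[b] = 0;              /* b is inside a concavity           */
        sp--;                        /* re-try against the next one back  */
    }
    hull[sp++] = y;
}

Run over the little example picture earlier in the thread, this clears the markers on the three lines that have dots, leaving the top line and the bottom two on the hull.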

It isn't; I just wondered whether you *needed* the holes. When I did the fish-finder ten years ago, I was given some software that used the Lumia/Shapiro algorithm to label blobs. It was dog-slow. I re-wrote it and came up with a fast labelling algorithm that builds a tree structure for each object - top of the tree is the object itself, next level of hierarchy is one record per hole, then each hole may have another level for objects wholly in the hole (!), etc. Given that sort of scheme, it's trivial to ignore holes - you just throw away anything in the hierarchy below the top-level object.
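The shape of the structure was roughly this (a from-memory sketch; the names are illustrative, not the original code):

/* object -> holes -> objects wholly inside a hole, and so on;
   ignoring holes just means ignoring everything below the top level */
struct region {
    int            label;      /* blob label                               */
    int            is_hole;    /* 0 = object, 1 = hole                     */
    long           area;       /* accumulated pixel count                  */
    struct region *children;   /* holes of an object, or objects in a hole */
    struct region *next;       /* next sibling at the same level           */
};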

However, the algorithm I developed wouldn't make much sense in FPGA because it maintains dynamically allocated and linked data structures. I'd be very interested to know how you did it in hardware.

No longer. But I still think it's about the most fun you can have in electronics without exciting too much attention from the authorities :-)

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
Tel: +44 (0)1425 471223          mail:jonathan.bromley@doulos.com
Fax: +44 (0)1425 471573                Web: http://www.doulos.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.
Reply to
Jonathan Bromley

I think the PITA part of vision is the hardware simulation. It is usually dog slow, since you are typically simulating multiple video frames while the hardware might be working at tens to hundreds of MHz (e.g., one vision system I designed used a 108 MHz process clock with NTSC video input; at roughly 16.7 ms per field that is about 1.8 million clocks, or the better part of a day per field to simulate the full FPGA design).

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com  
http://www.andraka.com  

 "They that give up essential liberty to obtain a little 
  temporary safety deserve neither liberty nor safety."
                                          -Benjamin Franklin, 1759
Reply to
Ray Andraka

In this example of yours, the numbers of pixels added on successive lines are 1, 1, 1, 7, 1, and the 7 can easily be picked out of the crowd. I did the coding for this last night; however, I ran into a problem with steeper edges like:

**
**
***
***
****

And here the numbers of pixels added are 0, 1, 0, 1, which would indicate two concavities where there should be none.

The algorithm I use comes out of the 60's and is very clever in pushing labels and label markers onto, and popping them off, a FIFO stack. It can generate a memory list of linked labels, if required, for a second pass at the image to combine more complicated features; however, I don't need this, since I am only trying to separate blobs.

Agreed.

Reply to
Brad Smallridge

Hi, Ray. Yes, slow indeed. I am using the free Xilinx version of ModelSim. I generally run a few test BMP files during the day and then run a real image overnight. What I could use is a compiled VHDL simulator that runs fast; when I run the larger images there is really too much information to track down anyway, so I just look at the resulting output BMP file.

Reply to
Brad Smallridge
