How do you turn a camera into a 'mouse'?

Basically a mouse chip, an encoder for the scroll wheel, three microswitches (for the buttons), an LED (or a laser), a plastic lens and light-pipe, and some hardware.

1 W of electricity buys quite a lot of LED illumination; how "giant" does the mouse need to be?

Some mouse drivers do dynamic scaling, where faster speeds move the pointer further; if you don't want it behaving as a mouse you may need to interface with it at the USB HID level.

--
umop apisdn 


Reply to
Jasen Betts

The 'features' that made the mouse more usable had to be disabled to turn it into a 'linear' position sensor.

Sounds like I need to research that 'mouse chip'. Does anybody know who makes them, or where to find a spec sheet?

I was looking at a way to do 'self-contained' position monitoring over something like a 100 ft by 200 ft range. A mouse covers about 2 inches by 2 inches at 1 mil per 'click'; keeping the same click count but scaling to 100 mil clicks covers only about 200 inches by 200 inches. Oh oh. I guess I could make that a 1 inch click to get the 160+ ft, but it's better to scale back the 'spread': go back to 100 mil clicks and live with smaller sections of 10 by 20 ft. Now we're getting into the empirical stuff, so it's not possible to 'decide' ahead of time; I'll have to wait and see what the data sets look like to determine the requirement.
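A quick back-of-envelope in Octave, assuming the total click count stays fixed while the click size scales (the 2 inch / 1 mil figures are the ones above):

clicks = 2 / 0.001                  % 2 inches at 1 mil per click = 2000 clicks
span_100mil = clicks * 0.100 / 12   % ~16.7 ft of travel at 100 mil clicks
span_1inch  = clicks * 1.000 / 12   % ~167 ft of travel at 1 inch clicks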

Reply to
RobertMacy

A mouse doesn't need the same sort of linearity as a real encoder, and offsets don't mean anything. What you do is to figure out where the object you're tracking is (if it's brighter than the background, for instance, you can just apply a clip level), and compute the weighted centroid of the patch:

Xbar = sum x*I(x,y)/sum I(x,y)

Ybar = sum y*I(x,y)/sum I(x,y)

The granularity of those centroids can easily be 100x finer than the pixel pitch. Things like shadows can be a problem in some instances, but that's how it is in machine vision problems.
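In Octave, a minimal sketch of that clip-and-centroid computation might look like this; the file name and the 50% clip level are just illustrative placeholders:

I = double(imread('frame.bmp'));      % one frame, assumed already grayscale
I(I < 0.5 * max(I(:))) = 0;           % clip level: keep only the bright patch
[ny, nx] = size(I);
[X, Y] = meshgrid(1:nx, 1:ny);        % pixel coordinate grids
Xbar = sum(X(:) .* I(:)) / sum(I(:))  % weighted centroid, x
Ybar = sum(Y(:) .* I(:)) / sum(I(:))  % weighted centroid, y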

Tracking a modulated LED works well too, as long as it's defocused enough that you get a spot diameter of 10-20 pixels.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

On a sunny day (Tue, 15 Jul 2014 08:59:48 -0400) it happened Phil Hobbs wrote:

Yes, that works.

But there are real-life conditions; it is very possible that the average brightness of your object matches the average brightness of the background, I'd think. In that case you can move all you want and nothing will ever change.

In any case, the way I understand it, the OP wants 1/10 inch position accuracy over a 200 x 200 ft area. Just _movement_ detection is not that hard: mount a mouse on the object looking at the floor, add a laptop and WiFi, and transmit. It is the absolute accuracy that is challenging.

I'd go for acoustic: 2 transmitters and a repeater on the object, or the same with microwaves. 0.1 inch should be doable in phase at a few cm wavelength.

Ping from the East: measured time is time of flight plus the retransmit delay, which gives distance, and the carrier phase is the vernier. Ping from the South: time of flight and phase, the same.

This can be done with 2 small stations at the east and south edges of the 200 x 200 ft area. With such a retransmit system many problems no longer exist.
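A rough numerical sketch in Octave of the coarse/fine idea, acoustic case, with every measured value made up for illustration:

c = 343;                          % speed of sound in air, m/s
f = 40e3;  lambda = c / f;        % 40 kHz carrier, ~8.6 mm wavelength
t_meas = 0.0600;                  % measured ping-to-reply time, s (made up)
t_rpt  = 0.0020;                  % known repeater retransmit delay, s (made up)
d_coarse = (t_meas - t_rpt) * c / 2;    % out-and-back path -> one-way distance
phi  = 1.23;                      % measured carrier phase, radians (made up)
frac = phi / (2*pi) * lambda;     % fine position within one wavelength
n = round((d_coarse - frac) / lambda);  % coarse reading picks the cycle...
d_fine = n * lambda + frac        % ...phase refines it (needs coarse error < lambda/2)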

Reply to
Jan Panteltje

No posts or 'extra' stuff allowed; it must be self-contained.

I've gotten a cheap ball mouse to do an excellent job over a 1 inch square region WITHOUT slippage! The $1 mouse replaced a $200+ x-y positioning system that had less resolution. I'm trying to do something similar over a larger area.

Reply to
RobertMacy

Way too vulnerable to wind. The velocity of sound in air is only 300 m/s, so a 5 mph breeze will put you off by 2% of full scale. Even convection currents in still air will be way too large.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Correction: 1.5% of full scale.
Reply to
Phil Hobbs

On a sunny day (Tue, 15 Jul 2014 09:25:18 -0400) it happened Phil Hobbs wrote:

Good point, so microwaves it is then ...

Reply to
Jan Panteltje

On a sunny day (Tue, 15 Jul 2014 06:22:21 -0700) it happened RobertMacy wrote:

I am glad I no longer have a ball mouse; those picked up dust, mainly cookie residue here, and then had a LARGE error :-) (if the ball still rolled at all). Cleaning was a regular thing with those mice.

Reply to
Jan Panteltje

Google lists several; SparkFun has one type.

My mouse seems to have a single-chip solution (or perhaps there's a surface-mount microcontroller on the other side of the PCB).

One of the first hits I got was this:

formatting link

Quadrature X and Y outputs, like an old-school rolling-ball mouse; the counts should even be proportional to distance moved.
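For what it's worth, a hypothetical Octave sketch of turning sampled A/B quadrature levels into a signed count, assuming the sampling is fast enough that no states are skipped:

function count = quad_count(a, b)
  % a, b: vectors of 0/1 samples from the two quadrature channels
  seq = [0 1 3 2];                        % state order when moving forward
  s   = 2*a + b;                          % encode each sample as 0..3
  count = 0;
  for k = 2:numel(s)
    step = mod(find(seq == s(k)) - find(seq == s(k-1)), 4);
    if step == 1, count = count + 1;      % forward transition
    elseif step == 3, count = count - 1;  % reverse transition
    end                                   % step 0: no move; step 2: missed sample
  endfor
endfunction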

I'd be interested to hear how you plan to avoid sensor skew.

Reply to
Jasen Betts

Thanks for that URL! Now to figure out how to 'adapt' it!

That is the first datasheet I've ever seen where I couldn't find the manufacturer's name anywhere on it. Did I miss it somewhere? 2004, and a through-hole chip!

I didn't understand your question regarding "...avoid sensor skew." Is that non-orthogonal x-y? Or x and y not having the same scale? Or is that 'slippage'? What?

Reply to
RobertMacy

Just to bring ALL up to date.

I took the very cheap Walmart Vivitar camera, set it to VGA format [640 by 480 pixels], held the camera about 1 foot above my floor, and started snapping pictures as I slowly moved along. The resulting test image set consisted of only ten pictures, in color, which indicated, yes, moving along. [I don't know how to set this camera to take videos, but I know that's possible too.]

I then used IrfanView and converted the ten .jpg pictures to .bmp; the .bmp files I can read into Octave using a special program I wrote a few years ago for obtaining the r, g, b matrices from a color bmp image. Alas, the images were just too large, and that process merely crashed Octave. So...

I then opened each .bmp image with Win98 PAINT to convert it to black and white, that is, each pixel is either black or white. Again using IrfanView, I could immediately see that the 'motion' had been preserved in these high-contrast images, so...

As many know, .bmp formats differ depending on whether the image is color or BW, so I wrote a new function for Octave to read in the black-and-white bmp format. But that still produced a 640 by 480 matrix for each image, although the data of each pixel was only 1 or 0. Still fairly large sets of data.

I then reduced the 640 by 480 matrices in several steps of halving, using functions I wrote for this purpose that linearly interpolate between points, called reducerows.m and reducecolumns.m, applied so as to halve the matrix 3 times, making a handy-sized matrix of around 80 by 60.

Now I've got ten matrices of 80 by 60 with 'gray' scale from 0 to 1; in each matrix the adjacent pixels have been averaged. Plotting the matrices in sequence makes a great little video clearly showing motion. From inspection there is some z-axis motion AND tilt. With some extreme refinement to the processing it appears that both aspects can be calculated/derived from the images, especially if the images were more frequent. Amazingly, even though black and white and reduced, there is still a wealth of information preserved in the matrices.
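In outline, each halving step does something like this (a simplified Octave sketch of the idea, not the actual reducerows.m/reducecolumns.m code):

function B = halve(A)
  [m, n] = size(A);
  m = 2*floor(m/2);  n = 2*floor(n/2);   % trim to even dimensions
  A = A(1:m, 1:n);
  B = (A(1:2:m, :) + A(2:2:m, :)) / 2;   % average adjacent row pairs
  B = (B(:, 1:2:n) + B(:, 2:2:n)) / 2;   % average adjacent column pairs
endfunction

Applied three times: 640x480 -> 320x240 -> 160x120 -> 80x60.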

Now back to resolution: the original camera's field of view was around 10 inches on the floor, so the original resolution is approx 16 mils per pixel. Reducing the matrices to 80 by 60 means the resolution is also reduced, to around 125 mils, close to the original goal; but since these matrices have preserved the gray scale, it should be possible to get 'partial'-pixel resolution. Fortuitously, from inspection, the images with less reduction show much more 'motion', so I'll probably back off the term '3' [a reduction of 3 is dividing in half three times, or 1/8 original size] to '2', which will then give 60 mil resolution, better than the original goal WITHOUT using partial pixels. And each matrix will be a relatively small 160 by 120.
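Checking that arithmetic in Octave:

res0 = 10 * 1000 / 640    % 10 inch FOV over 640 pixels ~= 15.6 mils/pixel
res3 = res0 * 2^3         % after three halvings (80x60)  ~= 125 mils
res2 = res0 * 2^2         % after two halvings (160x120)  ~= 62.5 mils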

All a bit of handwaving here, but it has been VERY educational. I am very encouraged that it may be possible to accomplish a cheap, portable, self-contained x-y position determining system with a lot of resolution and fairly decent accuracy using a SINGLE camera and some smarts in the processing.

Reply to
RobertMacy

RobertMacy moved along:

Didn't you say no special markings on the floor? Now you are cheating. Try it on an even gray floor.

Reply to
Jan Panteltje

I don't have gray flooring. 'Tis true, I have travertine with chiseled edges in a Versailles pattern, so the grout lines do look a bit like tape all over the floor. ;) Also, the individual 'mottled' stones create some very interesting patterns on their own.

There is carpeting in the bedrooms [Gack! spit! spit! curse begone!], but those images clearly show all the little carpeting tufts, again clearly indicating motion.

The final application will be an industrial situation, or outside. Outside will be interesting with its enormous variety of 'rough' images. It may be possible to run into a uniformly gray painted industrial concrete floor, but I have yet to see any surface that does not convert to some level of patterns with HIGH contrast. Even a sheet of white paper has surface variety to it.

It may just be a 'feature' of the system to not operate well over a uniform gray floor area, TOUGH! Now bring out the additional tools.

The original goal was to create an x-y position determining system that is self-contained and portable. That way the Operator can simply be meandering around, not that noticeable. And if the Operator has to abruptly leave an area, he doesn't have to remember to go back and grab all his stakes or other paraphernalia. Don't want to leave anything behind.

Reply to
RobertMacy

On a sunny day (Wed, 16 Jul 2014 10:12:50 -0700) it happened RobertMacy wrote:

Fukushima? Was just reading an article on that

formatting link
it's in German but the pic-tjures say a lot.

Reply to
Jan Panteltje

Like this?

formatting link

(Although it's not clear whether the conversion is mouse-->camera or camera-->mouse).

Grins, James

Reply to
dagmargoodboat

Do yourself a favour and install the netpbm command-line tools. There's an amazing range of easily-scriptable features in there.
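For instance, one hypothetical pipeline replacing the IrfanView/PAINT/reduce steps described above (standard netpbm tool names; worth checking the man pages for exact flags):

jpegtopnm frame.jpg | ppmtopgm | pnmscale 0.125 > frame_small.pgm

That converts a JPEG to grayscale and scales it to 1/8 size in one line, and it scripts trivially over a whole directory of frames.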

You're going to an awful lot of work to rediscover some of the most basic elements of computer vision, which is fun but hardly productive. If you want to actually do something worthwhile, I suggest you start reading about feature extraction; maybe look at the excellent OpenCV library as a start, since most CV apps are being built on it these days.

Clifford Heath

Reply to
Clifford Heath


Yawn. Reinventions of VOR and TACAN.

?-)

Reply to
josephkk

Thanks for the 'heads up' on IrfanView's batch file processing; I forgot about that feature. In my defense, when concentrating on a problem's solution I tend to become focused on one task at a time, then after I understand ALL the sequences I go back and streamline. That's my story and I'm sticking to it. ;)

Glad to hear Linux is a viable OS for doing all this; I'm really getting tired of Windows' intrusions into the software and timing sequences.

Reply to
RobertMacy

On Wed, 16 Jul 2014 17:12:35 -0700, Clifford Heath wrote:

What is netpbm? URL? What OS will it work on? Is it free? Yes?

Thanks, I'd forgotten about OpenCV. A friend is successfully using that for his h-robot! And to add insult to injury, *I* recommended he look at OpenCV! Sigh.

It's interesting how all these terms suddenly get bandied about, like 'feature extraction', which, unless you're actually involved in vision systems, leaves one only imagining exactly what it entails/means.

Doing all this 'reinventing the wheel' has been VERY productive. For example, I now have tools for some image processing that apply to another project. Additionally, I learned how to make 'autocorrelation' better.

For example, in a BW image a 1 and a 1 match and a 0 and a 0 match, so the best way to correlate two images is to XOR them and count the zeros. But what do you do when the image has gray scale? In an attempt to be more forgiving of slight camera rotation I tried reducing the image with linear interpolation only twice, from 640 by 480 down to 160 by 120, but you end up with a lot of levels of gray scale. That caught me way off guard: I thought I'd only get 8 levels of gray, but no, you get a LOT more.

So now the question becomes "how do you correlate two gray-scale images?" where equal is a great match and not equal is not so good a match. Is there a standard 'formula' for doing this? Most of my textbook autocorrelation functions do things like reverse time scales and convolution, but those tend to fail when the signal spends a great deal of time around zero. In imaging, zero and zero means the signals correlate, and THAT information should not be thrown away. So I had to write a whole new program to correlate the images. The first approach only 'pigeonholed' the correlation into four bins: close match = 1, not so close but in a nearby range = .75, don't know for sure = .5, and way off = .25, which included completely off. Hmmm, I see a flaw there: four bins should be 1, .67, .33, and 0.
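In Octave the binning idea was roughly as follows (a sketch of the approach, not the actual program; the bin edges are illustrative):

function s = pigeonhole(A, B)
  % A, B: same-size gray-scale matrices, values in [0, 1]
  d = abs(A - B);                  % 0 = perfect pixel match, 1 = opposite
  s = zeros(size(d));
  s(d < 0.25)             = 1;     % close match
  s(d >= 0.25 & d < 0.5)  = 0.75;  % not so close, nearby range
  s(d >= 0.5  & d < 0.75) = 0.5;   % don't know for sure
  s(d >= 0.75)            = 0.25;  % way off, including completely off
  s = mean(s(:));                  % overall correlation score
endfunction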

Comparing single pixels, where camera rotation can affect the outcome, against the image reduced by a factor of four, where adjacent pixels get a bit 'smushed', the correlation at the peak did gain S/N. For example, correlation using single pixels was .84 peak with .61 background, while correlation using the reduced image was .88 peak with .55 background, so the reduction obviously did lower sensitivity to distortion over the field of view AND to any camera rotation. Either way, the 'shift' calculated to the same x-y position change, except that with image reduction I needed to use 'partial' pixels.

Reply to
RobertMacy
