How do you turn a camera into a 'mouse'?

Interesting! So what's inside an infrared mouse? How much more power will this take? I need to confirm the 'relative' linearity of the infrared mouse sensor, but this is possibly doable.

And it has the advantage of requiring almost no change to the present software!

Reply to
RobertMacy

Yes, I know. Maybe I should have said 'mouse-like'.

Historically, I used a $1 mouse with the 'proportional' turned off to obtain 1 mil resolution as a very cheap x-y position indicator. Worked great over a 2 inch by 2 inch area.
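A minimal sketch of that trick in Python (the counts-per-inch figure matches the 1 mil resolution described above, but the `integrate` helper and the way deltas arrive are illustrative assumptions, not the original setup):

```python
# Sketch: dead-reckoning an absolute x-y position from relative
# mouse-sensor deltas, assuming 0.001 inch (1 mil) per count as in
# the setup described above. The list of (dx, dy) count pairs stands
# in for however the hardware actually reports motion.

COUNTS_PER_INCH = 1000  # 1 mil per count (assumption from the post)

def integrate(deltas, start=(0.0, 0.0)):
    """Accumulate (dx, dy) count pairs into an absolute position in inches."""
    x, y = start
    for dx, dy in deltas:
        x += dx / COUNTS_PER_INCH
        y += dy / COUNTS_PER_INCH
    return x, y

# Example: 500 counts right, 250 counts up -> (0.5", 0.25")
print(integrate([(200, 100), (300, 150)]))
```

The obvious weakness, raised later in the thread, is that any missed count becomes a permanent offset: relative sensors drift.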

Now I need a larger area, more like a 100-200 ft square. So I took my $10 camera from Walmart, held it above my floor, set it for VGA resolution [600-800 pixels], walked very slowly while pointing the camera at the floor, and snapped pictures every inch or so.

I could see that, even after removing all color and reducing the images to high-contrast black and white, it was possible to 'see' where the camera was in an x-y grid; thus my question.
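One way to make that concrete: phase correlation between successive snapshots recovers the shift between them directly, which is essentially what an optical mouse chip does at small scale. A numpy-only sketch with a synthetic high-contrast "floor" texture (image size and texture are made up for illustration):

```python
# Sketch: estimating camera shift between two floor snapshots by
# phase correlation. Works well on random, high-contrast texture
# like a thresholded black-and-white floor image.

import numpy as np

def estimate_shift(a, b):
    """Return (dy, dx) such that b is roughly a circularly shifted by (dy, dx)."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12              # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

# Synthetic high-contrast "floor": random texture, thresholded B/W
rng = np.random.default_rng(0)
floor = (rng.random((64, 64)) > 0.5).astype(float)
shifted = np.roll(floor, (3, -5), axis=(0, 1))
print(estimate_shift(floor, shifted))   # -> (3, -5)
```

Real frames overlap only partially rather than wrapping around, so windowing and limiting the search to the overlap region would be needed in practice.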

This is not part of a closed-loop input system, but more of an x-y position indicator. [I said that somewhere else in the thread.]

Reply to
RobertMacy

Glad you approached it as a navigation problem. [Earlier description of the $1 mouse and $10 Walmart camera experiment snipped.]

As people know, my ability to research and find previous efforts is non-existent! Someone could have been building this type of thing for years, and unless I stumbled over it I would never know it existed. At least by building up from zero you gain total understanding, better control of the design, and, most important, trust in performance. But this is all an aside.

Thanks for the farm URL. There is another project requiring 6 degrees of freedom in a small room, but it has to fit inside something the size of a "Defiant" portable flashlight.

Basically an x-y position indicator for an area of around 100-200 ft on a side, self-contained and portable, with no preconditions on the area and no modifications to it [no tapes on the floor, no posts put in the ground, etc.]. Portable to the extent of walk in carrying it, turn it on, and from then on know where you are until turn-off. Accuracy? Distortion is allowed, but monotonic errors are more acceptable than sudden x or y shift errors. Resolution should be comparable to the smaller system; I would like to get better than 0.1 inch. Note that's relative resolution: 'gentle' distortion is allowed, especially if repeatable, with position error like 1/f. The absolute position could be off, repeatably, by maybe up to several inches, but relative accuracy should be pretty tight. And position must update when asked, at around 10 or 20 times per second. [Probably could relax this to linear interpolation over a non-synchronous stream, but it would be nice to be able to ask "Where am I?", be told, and keep going.]

Reply to
RobertMacy

On a sunny day (Mon, 14 Jul 2014 07:42:45 -0700) it happened RobertMacy wrote in :

You are aware that 200 ft is about 61 meters, and .1 inch is 2.54 mm?

61,000 / 2.54 = roughly 24,000 scanlines MINIMUM if you want to use a camera for positioning. You're dreaming.
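The estimate is easy to check directly in inches (note 200 ft is about 61 m, not 66 m, so the minimum comes out near 24,000 lines for a single image spanning the whole area):

```python
# Checking the scanline estimate: to resolve 0.1 inch across a
# 200 ft field in one image, you need span/resolution pixels.

span_ft = 200
span_in = span_ft * 12          # 2400 inches
resolution_in = 0.1
pixels_needed = span_in / resolution_in
print(pixels_needed)            # -> 24000.0 lines, far beyond VGA
```

Which is exactly why the thread moves on to stitching many small views instead of one big one.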

You should know ;-)

Reply to
Jan Panteltje

And there are no optics that will do that either.

Reply to
Jan Panteltje

RobertMacy wrote in news:op.xizq6iri2cx0wh@ajm:

An integrated optical sensor (a low-res CCD camera) with a motion-processing chip, an IR LED, and a plastic lens assembly that also provides a prism reflecting the LED light to illuminate the sensor's field of view. I'm ignoring the buttons etc. and the main microcontroller that provides the PC interface. The sensor normally has some flavour of SPI interface.

As a rough estimate it will scale with (r^2)/ratio_of_lens_areas, where r is the lens-to-surface distance and the ratio of lens areas is with respect to the original mouse lens. This assumes effective optics on the illuminating LED(s) as well, to concentrate the illumination in the field of view.
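Plugging illustrative numbers into that scaling rule (the stock working distance and the lens-area ratio below are assumptions for the sake of the example, not measurements):

```python
# Rough power-scaling estimate from the post: required illumination
# scales as r^2 divided by the ratio of collecting-lens areas,
# relative to the stock mouse geometry. All numbers are illustrative.

def power_scale(r_mm, r0_mm=2.5, lens_area_ratio=1.0):
    """Multiple of the original LED power needed at distance r_mm,
    relative to a stock mouse at an assumed r0_mm with the stock lens."""
    return (r_mm / r0_mm) ** 2 / lens_area_ratio

# e.g. 1 m working distance with a lens of 100x the collecting area
print(power_scale(1000.0, lens_area_ratio=100.0))  # -> 1600.0
```

So even with a much larger lens, moving the sensor from millimetres to a metre costs orders of magnitude in LED power, which is why ambient-light imaging starts to look attractive.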

If possible, you want to back it up with a camera and markers at strategic locations so the system can compensate for any cumulative errors. Because you get to choose the marker appearance, and aren't trying to continuously derive the position from the image, the processing requirements are much reduced. Other types of accurate limited range absolute position sensors could also be used.
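A toy sketch of that scheme: relative updates (from the camera or mouse sensor) accumulate drift, and a marker sighting resets it to a surveyed absolute position. The marker IDs and coordinates are invented for illustration:

```python
# Sketch of the marker idea: dead-reckon between sightings and
# re-zero the accumulated error whenever a marker with a known,
# surveyed position is recognised. Marker data is made up.

MARKERS = {"A": (0.0, 0.0), "B": (100.0, 0.0)}  # surveyed positions, ft

class Tracker:
    def __init__(self, start=(0.0, 0.0)):
        self.x, self.y = start

    def move(self, dx, dy):      # relative update (image flow / mouse counts)
        self.x += dx
        self.y += dy

    def fix(self, marker_id):    # absolute update (marker sighting)
        self.x, self.y = MARKERS[marker_id]

t = Tracker()
t.move(50.2, 0.1)
t.move(49.5, -0.2)   # drifted to (99.7, -0.1)
t.fix("B")           # marker sighting cancels the accumulated error
print((t.x, t.y))    # -> (100.0, 0.0)
```

A fuller version would blend the two estimates (e.g. a complementary or Kalman filter) rather than snapping, but the structure is the same.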

--
Ian Malcolm.   London, ENGLAND.  (NEWSGROUP REPLY PREFERRED)  
ianm[at]the[dash]malcolms[dot]freeserve[dot]co[dot]uk  
[at]=@, [dash]=- & [dot]=. *Warning* HTML & >32K emails --> NUL
Reply to
Ian Malcolm

Here is my dream: if the camera has a 4 inch view, that needs only 400 lines. And 200 feet is roughly 600 'pages' of 4 inch views.
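The arithmetic behind that dream, spelled out (the 400 usable lines per view is the figure from the post):

```python
# The "600 pages" arithmetic: 200 ft of travel covered in 4 inch
# camera views, and the per-pixel resolution inside one view.

view_in = 4
travel_in = 200 * 12                # 2400 inches of travel
tiles = travel_in / view_in
per_pixel_in = view_in / 400        # ~400 usable lines per view
print(tiles, per_pixel_in)          # -> 600.0 views, 0.01 inch/pixel
```

The catch, as Jan points out below, is that the stitching error across those 600 views accumulates unless each hand-off is much better than one pixel.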

But good point; I should orient the camera so that the motion is most likely 'along' a line rather than 'across' a line, I think.

Reply to
RobertMacy

No markers. Although I may think of them as an enhancement, or 'optional', but only to increase accuracy and repeatability.

What other position sensors were you thinking of? G-force sensor chips? They drift way too much, way too fast.

Reply to
RobertMacy

On a sunny day (Mon, 14 Jul 2014 08:26:55 -0700) it happened RobertMacy wrote in :

Optical distortion (as to linearity) of camera optics is HUGE; it would not even be close to linear over that 4 inch view.

I did think of putting some reflectors on your 'object', shooting light at it from 2 angles, and doing interferometry combined with time of flight. Phil Hobbs is who you should ask about that sort of thing. Putting a laser scanner on the object and processing its output would give you a GROSS idea of the space, and where it is in that space, but I think the resolution is not that great; anyway, last time I looked those were several thousand dollars.

Reply to
Jan Panteltje

Sounds perfect. Based upon some experiments here, it doesn't take a lot of camera resolution to detect the motion.

If you find that software, let me know!

Reply to
RobertMacy

Again, smooth distortion is not as bad as non-monotonic distortion.

Yeah, LIDAR is used on robotic vehicles and military stuff, but at $50k to $150k it's a bit out of range for me.

Velodyne of Morgan Hill, CA: Lidar Products

Principles of Operation: The HDL-64E operates on a rather simple premise: instead of a single laser firing through a rotating mirror, 64 lasers are mounted on upper and lower blocks of 32 lasers each and the entire unit spins. This design allows the 64 separate lasers to each fire thousands of times per second, providing far more data points per second and a much richer point cloud than conventional designs. Each laser/detector pair is precisely aligned at a predetermined vertical angle, resulting in a 26.8 degree vertical FOV. By spinning the entire unit at speeds up to 900 RPM (15 Hz), a 360 degree FOV is inherently delivered. Regardless of the spin rate, 1.5 million data points (i.e. pixels) are generated each second, providing an exponentially richer point cloud than ever before possible.

The HDL-64E supplies returns out to 120 meters. It features ~1 inch distance accuracy and excellent repeatability. Radial resolution is dictated by spin rate, with radial accuracy as precise as .05 degrees. Additionally, state-of-the-art signal processing and waveform analysis are employed to provide high accuracy, extended distance sensing and intensity data.

HDL-64E Output Data Details The HDL-64E outputs UDP Ethernet packets at 100 Mbps. Interpretation of the data is fully explained in the user's manual, and each unit comes with a customized calibration file that guarantees ultra-high precision on both range and angular resolution.

Reply to
RobertMacy

64 lasers would be overkill. But you probably need 4 to 8 cameras to get a full view of the area. Cameras are cheap; you can get them all for less than $100.
Reply to
edward.ming.lee

Interesting. It would be like using multiple ADCs to compensate for nonlinearities, and 'jostling' would act like noise dithering to improve resolution. Hmmm.
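The dithering analogy is easy to demonstrate: a coarse quantiser plus random jostling, averaged over many readings, resolves well below one quantisation step, exactly as dither does for an ADC (the values below are arbitrary):

```python
# Sketch of the dithering analogy: a 1-unit quantiser cannot see a
# value of 3.3 on its own, but with uniform random "jostling" of
# +/- half a step, the average of many readings converges to 3.3.

import random

random.seed(1)
true_value = 3.3
LSB = 1.0

def quantise(v):
    return round(v / LSB) * LSB

# Without dither every reading quantises to the same wrong answer:
print(quantise(true_value))                       # -> 3.0

# With dither, the average converges toward the true value:
n = 100_000
readings = (quantise(true_value + random.uniform(-0.5, 0.5)) for _ in range(n))
print(sum(readings) / n)                          # close to 3.3
```

The same effect is why a slightly vibrating camera can, in principle, deliver sub-pixel position estimates when many frames are averaged.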

Reply to
RobertMacy

RobertMacy wrote in news:op.xizun0002cx0wh@ajm:

Proximity sensors to enable accurate location with respect to nearby known obstacles. Optical, ultrasonic, or electromagnetic, with a range output, not just simple object detection, though even that can help 'home' to a known position.

Back to the mouse sensor: this article from NASA may be helpful:

Reply to
Ian Malcolm

On a sunny day (Mon, 14 Jul 2014 09:22:18 -0700) it happened RobertMacy wrote in :

When I first read your posting I was thinking:

2 pieces of nylon fishing line, connected to your object A.

A rotation sensor disc for angle?

    wall
    roll ------[A]
                |
                |
              roll
              wall

The 'roll' could be a stepper-driven reel, or maybe a DC motor with a rotary encoder. The motors keep the string strung (so to speak). Count revolutions or part thereof.
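For two reels anchored at known points on the same wall, the two measured string lengths locate the object by intersecting two circles. A small sketch (anchor spacing and the example coordinates are illustrative):

```python
# Sketch of the two-string idea: reels anchored at A1=(0,0) and
# A2=(d,0) on one wall; measured string lengths r1, r2 give the
# object position by circle intersection, taking the solution in
# front of the wall (y > 0).

import math

def locate(r1, r2, d):
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = math.sqrt(max(r1**2 - x**2, 0.0))
    return x, y

# Object at (3, 4) with anchors 10 units apart:
print(locate(5.0, math.hypot(3 - 10, 4), 10.0))   # -> (3.0, 4.0)
```

Note the geometry degrades near the wall (the circles intersect at a shallow angle), which matches Robert's later complaint that such schemes don't get into the 'corners'.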

You can do this with microwaves too, and maybe even acoustically (echo from 2 walls at 90 degrees, at 2 different frequencies). Does not have to cost a zillion.

Acoustics... phase, time of flight. Depends a bit on whether there are walls and how those are oriented.

Or just no walls, and let A retransmit (the rolls are now acoustic transmitters). Bats... you know. :-)

Reply to
Jan Panteltje

If you're looking at a big enough object, you can compute the position of its centroid to a fraction of the pixel pitch.
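A tiny numpy illustration of that sub-pixel centroiding (the blob values are contrived to put the centre a quarter pixel off a pixel centre):

```python
# Sketch of Phil's point: the intensity-weighted centroid of a blob
# spanning several pixels lands between pixel centres, giving a
# position to a fraction of the pixel pitch.

import numpy as np

def centroid(img):
    """Intensity-weighted (x, y) centroid of an image, in pixels."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# A blob straddling columns 4 and 5, weighted 3:1
img = np.zeros((3, 10))
img[1, 4] = 3.0
img[1, 5] = 1.0
print(centroid(img))    # -> (4.25, 1.0): a quarter-pixel x position
```

In practice noise, lens distortion, and pixel-response nonuniformity limit how far below one pixel this goes, which is the objection Jan raises below.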

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

Thanks for that article. Perfect! Although either the article was written badly, or my brain has gone defunct. But the premise in these 'vision systems' is to quit trying to 'view' the field and instead 'interpret' the field. VERY interesting.

Since chip makers are addressing this issue for drones, it should be cheap and power friendly!

...Now, use this concept with a different sensor array, not just optical. That makes for a lot of interesting potential here. Toss in an FPGA that can do tons of processing and it may be easy.

Reply to
RobertMacy

No walls. My first thought was two orthogonal wheels, but I can't count on the surface being contiguous, and it might be way too rough.

Indoors it would work, but it would not get into the 'corners', so it has a major disadvantage too.

Reply to
RobertMacy

I was kind of counting on that, too.

Reply to
RobertMacy

On a sunny day (Mon, 14 Jul 2014 19:09:01 -0400) it happened Phil Hobbs wrote in :

Not really; enlighten us. Are you talking about the brightness interpolation between 2 lines?

The geometric distortion of the optics in even high-end cameras is in the percentage range; you would need a model of the lens too. And it would be so vibration sensitive... useless.

Many years ago I read about some prof using piezo actuators to move a camera a fraction of a line to get better (sub-pixel) resolution, but that was only in one axis, AFAIR.

The other issue is the edges: a black object, or one without any relief or detail, poses another problem. Your optical mouse does not work on all surfaces either.

But I like to learn; show how you do it!

Reply to
Jan Panteltje
