How do you turn a camera into a 'mouse'?

How do you turn a camera into a mouse? Like those infrared mice that seem to look at the patterns going by: how do you point a camera at the floor, move it in x and y, and make it act like a 'giant' mouse?

And what free software?

Reply to
RobertMacy

Motion estimation -- given successive images, identify a portion of the old image that appears in the new image, and estimate the motion required to move it there. Keep doing that, and you're tracking motion.
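That loop is easy to prototype. A minimal block-matching sketch in Python/NumPy (the function name, block size, and search radius here are my own choices, not from any tool mentioned in this thread):

```python
import numpy as np

def estimate_motion(prev, curr, block=16, search=4):
    """Find where the central block of `prev` moved to in `curr`
    by exhaustive search, minimizing sum of absolute differences."""
    h, w = prev.shape
    y0, x0 = (h - block) // 2, (w - block) // 2
    ref = prev[y0:y0 + block, x0:x0 + block].astype(np.int32)
    best, best_dyx = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls off the frame
            cand = curr[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_dyx = sad, (dy, dx)
    return best_dyx

# Synthetic test: shift a random texture by (2, -3) and recover it.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)
moved = np.roll(frame, shift=(2, -3), axis=(0, 1))
print(estimate_motion(frame, moved))  # -> (2, -3)
```

Real encoders do exactly this per macroblock, just with much smarter search orders than brute force.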

Reply to
artie

Probably by doing some sort of 2-D autocorrelation...
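For what it's worth, the correlation idea can be sketched with an FFT; this toy version recovers an integer pixel shift between two frames (all names here are mine):

```python
import numpy as np

def correlate_shift(a, b):
    """Estimate the integer (dy, dx) shift taking frame `a` to frame `b`
    via cross-correlation computed in the frequency domain."""
    fa = np.fft.fft2(a)
    fb = np.fft.fft2(b)
    corr = np.fft.ifft2(fa.conj() * fb).real  # cross-correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(1)
floor = rng.random((128, 128))                        # stand-in floor texture
shifted = np.roll(floor, shift=(5, -7), axis=(0, 1))  # camera moved (5, -7) px
print(correlate_shift(floor, shifted))  # -> (5, -7)
```

Normalizing by the spectrum magnitude before the inverse FFT turns this into phase correlation, which is more robust to lighting changes.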

Reply to
Johann Klammer

Sample a few lines, skip a few frames, sample a few lines & try for some sort of quadrature decoding?

Reply to
DTJ

On a sunny day (Sat, 12 Jul 2014 20:15:14 -0700) it happened RobertMacy wrote in :

mpegenc, motion vectors.

Reply to
Jan Panteltje

I can't resist...

Have you tried cross-breeding? How about cloning?

:)

Reply to
John S

As usual, Google stuffs my search results with TMPGEnc from Pegasys.

And the few mpegenc results show no URL; mainly blogs, etc.

Do you have an exact URL for information? for download?

Reply to
RobertMacy

libjpeg

I am working on something like this to detect motion. I use guvcview to capture frames. Decompress the image in memory. Subtract the frames to detect the center of the moving object. Extract and shift the object in the new frame. Repeat if the new delta is smaller. This can detect motion pretty well, if the frame rate is high enough or the motion is slow enough.

Unfortunately, I can currently get only one frame every two to three seconds, due to a possible bug in guvcview.
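A rough sketch of the subtract-and-centroid step described above, in Python/NumPy rather than the C it would eventually be (the names and the threshold value are my own assumptions):

```python
import numpy as np

def motion_centroid(prev, curr, thresh=30):
    """Return the (y, x) centroid of pixels that changed between two
    8-bit grayscale frames, or None if nothing moved."""
    # Signed difference, so 0 -> 255 changes are not lost to wraparound.
    delta = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    moving = delta > thresh
    if not moving.any():
        return None
    ys, xs = np.nonzero(moving)
    return float(ys.mean()), float(xs.mean())

# A bright 10x10 square moves from (20, 20) to (25, 30) between frames.
f1 = np.zeros((100, 100), dtype=np.uint8)
f2 = np.zeros((100, 100), dtype=np.uint8)
f1[20:30, 20:30] = 255
f2[25:35, 30:40] = 255
print(motion_centroid(f1, f2))  # -> (27.0, 29.5)
```

The centroid of the changed pixels lands between the object's old and new positions, which is fine for detecting that motion happened but needs the shift-and-compare refinement to get the actual displacement.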

Reply to
edward.ming.lee

On a sunny day (Sun, 13 Jul 2014 06:30:12 -0700) it happened RobertMacy wrote in :

I no longer see mpegenc on my current systems. You should then look at the source of mpeg2enc; it is part of mjpegtools:

formatting link
formatting link

From man mpeg2enc: DESCRIPTION mpeg2enc is heavily enhanced derivative of the MPEG Software Simulation Group's MPEG-2 reference encoder. It accepts streams in a simple planar YUV format "YUV4MPEG" produced by the lav2yuv and related filters (e.g. yuvscaler(1)) from the mjpegtools(1) package. An output plug-in to the mpeg2dec(1) MPEG decoder is available to permit its use in transcoding applications. The encoder currently fully supports the generation of elementary MPEG-1, progressive and interlaced frame MPEG-2 streams. Field encoded MPEG-2 is also possible but is not currently maintained or supported.

The C code shows how the motion vectors are generated. So it needs some programming to extract those and use those for whatever it is you want to do.

A good tutorial on mpeg encoding would also be a good idea.

IIRC I did this once last century? But had no practical application for it, as I only needed 'change detection', and did that by comparing frames.

So I dunno if you can read - or program in - C; else keep looking.

If you just want the basics:

formatting link
look at the left for functions like motion.c

click link on top

formatting link

click motion.c

formatting link

Well...

Reply to
Jan Panteltje

Not exactly what you're looking for.... Search for "OpenCV mouse camera"

They use background differencing and contour mapping.

Cheers

Reply to
Martin Riddle

Commonly used for handicapped access to computahs: Also search for "gesture control" which usually includes moving a pointer around. Plenty to choose from.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Reply to
Jeff Liebermann

I used the 2011 version for about 4 days straight, while sitting on my hands and no mouse in sight. It was rough, but usable. I don't use it much today, but have it handy for when I forget or misplace my mouse. I hate trackpads.

I set up the 2011 version to help a Parkinson's disease patient operate his computer. His hands and arms vibrated wildly, but his head and neck were rock steady. Lots of complaints and suggestions, but nothing that I would consider a show stopper.

Two other handicapped people needed an on-screen keyboard, so I tried Staggered Speech and Midas Touch, which they both declared to be "clunky". I went back to Camera Mouse and set up the Windoze OSK (On Screen Keyboard)[1] and one of the text-to-speech programs. That was less "clunky" but still not good enough.

So, I added a 2nd monitor and set it up as one big desktop. The left screen usually had the program being used, while the right had the OSK, some icons, a rear-view camera window, and some big button macros for common activities, such as screen grab, print, and enable/disable voice control. With Camera Mouse, it was easy to switch from one screen to the other by just looking at the monitor. Having a 2nd monitor keeps the working program and the controls separated.

First, try the Camera Mouse Suite 1.1 (Windoze only) which adds eyebrow mouse click and blink mouse click:

For open source, there's:

I just threw together an Acer C720 Chromebook, now running Mint 17 with Xfce. Of course, the trackpad driver is misplaced in the kernel and I don't feel like building my own. I guess this would be a good excuse to try one of these.

If you want to dig deeper, see: At least one of the programs mentioned will run on Linux (finger counter).

There are other programs that will do camera mouse emulation, some of which will play on Linux: (Old site) However, it looks like the Linux version disappeared. Searching.... found it:

I forgot to mumble that cheap cameras are problematic. It really should use an HD camera:

I'm about 3 years behind on the technology, so much of the above is new to me. If you want, I can dig out some of my notes and see what I can excavate.

More links:

[1] Run "OSK" from the Windoze "run" command line.
Reply to
Jeff Liebermann

More:

Any red in the webcam picture can act as a mouse:

Scraping bottom (note the date):

Reply to
Jeff Liebermann

Wow! thanks for all the URLs...

What I need is something that allows me to mount a camera on a movable platform pointing at the floor, and upon moving the platform, know where the platform has moved to, like the infrared mouse. Gesture identification sounds like the camera is rigid and I move by... wait, that is the same, never mind.

I need enough feature recognition to make the motion detection a fairly linear relationship, not just some 'rubbery' movement thingy. Like knowing the platform has moved nothing in the y direction, but in the x direction has moved 1 inch, then another 7/8 inch, and then 1 inch; not 'the platform has moved something, something, something.'

Like a realtor's distance-measuring wheel, except x-y, not just one vector.

Reply to
RobertMacy

It's not the same. Moving something in a fixed reference frame is easier. Moving the reference frame makes the problem a lot more difficult.

Can you place markers on the floor? Or does it have to be a random floor?

Again, i think libjpeg is your friend. Doesn't sound like any existing software will solve your problem.

Reply to
edward.ming.lee

Absolutely no preconditions on the floor. It's like answering the question on each frame, "How much do I need to shift x and y to get this new picture?" Assuming we're not moving too fast, so that each frame still overlaps.

what URL?

Reply to
RobertMacy

Libjpeg should be in every Linux distribution. There should also be a Windows port somewhere. Libjpeg gives you the tools to compress and decompress JPEG images. Once you load the image in memory, it's a simple matter to shift it in any X or Y direction, then compare the delta of both frames. When you get to the minimum delta, you have your X & Y. By the way, this is also applicable to video stabilization.
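The repeat-while-the-delta-shrinks loop might look like this, sketched in Python/NumPy on synthetic frames (a toy version; the greedy step only converges when the image is smooth enough that the delta shrinks monotonically toward the true shift, which is why the test uses a blob rather than noise):

```python
import numpy as np

def track_shift(prev, curr, max_iter=50):
    """Greedy refinement: starting from (0, 0), step one pixel in
    whichever direction shrinks the summed absolute frame delta,
    and stop when no neighbour improves."""
    def delta(dy, dx):
        shifted = np.roll(prev, shift=(dy, dx), axis=(0, 1))
        return np.abs(curr - shifted).sum()
    dy = dx = 0
    best = delta(dy, dx)
    for _ in range(max_iter):
        moves = [(dy + sy, dx + sx) for sy in (-1, 0, 1) for sx in (-1, 0, 1)]
        cand = min(moves, key=lambda m: delta(*m))
        d = delta(*cand)
        if d >= best:
            break  # no neighbouring shift is better: minimum delta found
        best, (dy, dx) = d, cand
    return dy, dx

# Smooth synthetic frame (a Gaussian blob), shifted by (3, 2) pixels.
y, x = np.mgrid[0:64, 0:64]
a = np.exp(-((y - 32.0) ** 2 + (x - 32.0) ** 2) / 100.0)
b = np.roll(a, shift=(3, 2), axis=(0, 1))
print(track_shift(a, b))
```

On a busy floor texture the delta surface is bumpy, so a full search (or a coarse-to-fine pyramid) over the shift range is safer than pure hill climbing.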

Reply to
edward.ming.lee

It would be nice to know what you're trying to accomplish, instead of how you plan to do it. It's so much easier to offer suggestions when I know what problem you're trying to solve. In this case, I think you've taken a side road off the beaten path. I have some experience in navigation hardware, so I'll approach the problem from there.

Optical position locating has plenty of advantages, but not with a single viewport (camera) and no plane of reference. Moving sideways (assuming a constant height above the floor) is going to introduce parallax. The position indicated on a 2D image plane (the camera sensor) doesn't correctly indicate distances in 3D. You can do it by triangulation, using two cameras and a known altitude above the floor, but not with only one camera.

I've run into what I would presume to be a similar problem flying RC airplanes, navigating robots, and tracking race cars. Things work so much better if one knows their exact position, preferably in 3D. The common GPS doesn't provide enough accuracy (even with WAAS) so some better form of differential correction (DGPS) is needed. The technology is quite common. For example, it's used to automatically steer a tractor when plowing row crops. The system can operate in 7 dimensions (x, y, z, time, yaw, pitch, and roll). Once set up, it knows the position, speed, and direction, relative to a pseudolite or DGPS xmitter, in 2D or 3D, to within centimeters.

There are also various TDOA (time difference of arrival) systems. You could build something using radio frequencies, ultrasonic pulses, or light pulses. Essentially it's Loran-C, where the lines of equal delay form multiple hyperbolas on a 2D surface. While not as neat as GPS/DGPS, such schemes respond to changes in direction faster than GPS and also work indoors. (Bug me if you're interested in how I used 3 mountain-top receivers to build a vehicle location system using hyperbolic navigation in about 1979.)

Enough generalities (and time for dinner) for now.

So, what problem are you trying to solve?

Reply to
Jeff Liebermann

But don't forget that a mouse is used within a feedback loop. Absolute positional accuracy is not needed nor provided.

tm

Reply to
Tom Miller

Wouldn't it be easier to take a regular mouse, upgrade the light source, and change the optics to focus at a longer distance?

--
umop apisdn 


Reply to
Jasen Betts
