Python example code for Kinect on R-Pi?

I idly browsed through some Linux magazines in a newsagent, and saw an article on how to add vision to a Raspberry Pi using a Kinect for an Xbox 360. There was also Python code for some analysis of the depth-mapped image. That seemed interesting, but I had no intention of buying the magazine (I forget which it was) because Linux magazines are overpriced, and I didn't plan to buy a Kinect just to try it out.

On the way home, however, I looked in the window of a "CeX" second-hand goods shop, spotted a used Kinect going cheap, and bought it.

I plugged it into my main computer and ran "xawtv" which is a good program for finding video devices, and it just worked, showing a video colour feed from the Kinect. Amazing.

I then Googled for how to get the depth-map when using Ubuntu 14.04, and the first link told me to install "freenect" from the Ubuntu repository, add "blacklist gspca_kinect" to "/etc/modprobe.d/blacklist.conf" to stop the standard video driver from grabbing the device, reboot, then run "freenect-glview". And it just worked, showing the depth map and the standard video feed side by side. So it was up and running on my main computer in 10 minutes.
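
Spelled out, in case anyone wants to replicate it (this assumes an Ubuntu/Debian-style system; the package name may differ elsewhere):

sudo apt-get install freenect
echo "blacklist gspca_kinect" | sudo tee -a /etc/modprobe.d/blacklist.conf
sudo reboot
freenect-glview   # run after the reboot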

Now, being cheap, I'd rather not go back into town to buy a magazine for a short Python code listing. Anybody know of example Python code on the net that would run on a Pi for analysis of a Kinect depth map?

Reply to
Dave Farrance

All computer magazines are expensive now, as they get very little advertising revenue. And one wonders why quality magazines go out of business...

So, you'd rather other people do your work for you rather than fork out for the magazine?

Reply to
chris

Reply to
The Natural Philosopher

An initial Google didn't seem to show anything suitable, so I thought maybe someone might already happen to know; this being Usenet, discussion groups exist for such purposes. I'm continuing to Google, of course.

Reply to
Dave Farrance

Indeed. Actually, I'm well aware of the issues facing the print media, having looked into it following the disappearance of some magazines that I had been buying. It's essentially the loss of advertising revenue. The Newspaper Association of America publishes figures, so I charted them with Python/Matplotlib here:

formatting link

That probably also explains why the magazines are so expensive now. Well, nothing lasts forever, and I guess that much of the "legacy" media deserves to come to an end.

Reply to
Dave Farrance

The days when I browsed a news stand rather than the internet for information ended about ten years ago, I'm afraid.

It is the end of the specialist (paper) magazine I think.

--
Everything you read in newspapers is absolutely true, except for the
rare story of which you happen to have first-hand knowledge. -- Erwin Knoll
Reply to
The Natural Philosopher

OK. Never mind. I've found enough to be going on with.

Googling around "freenect" and "python" got me a way to import images into Python for analysis. The quality is a bit rubbish and noisy, though; as it stands, a fairly large object is needed before its shape shows up. I'll probably look into averaging several frames to reduce the noise another time.
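
As a first stab at the averaging, something like this ought to do it (an untested sketch; ten frames is an arbitrary choice):

import freenect, numpy

frames = 10  # arbitrary; more frames = smoother but slower to grab
acc = numpy.zeros((480, 640))  # Kinect depth frames are 640x480
for _ in range(frames):
    depth, _ = freenect.sync_get_depth()
    acc += depth
# Average, then scale the 11-bit values down to 8 bits as usual
avg = numpy.clip(acc / frames / 4, 0, 255).astype(numpy.uint8)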

I normalised and greyscale-inverted the image in The Gimp and ran it through "openstereogram". Can you see what it is? It's one of those diverged-eye stereograms where you have to look "through" the image then carefully bring it back into focus -- and this one seems a bit tricky to hit the right level, but it does work:

formatting link
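
For the curious, the basic trick that openstereogram performs can be sketched in a few lines of Python. This is my rough reconstruction of the classic random-dot algorithm, not openstereogram's actual code, and the pattern size and maximum shift are arbitrary choices:

import numpy

def autostereogram(depth, pattern, max_shift=30):
    # depth: 8-bit greyscale map, brighter = nearer
    # pattern: small tileable greyscale texture
    h, w = depth.shape
    ph, pw = pattern.shape
    out = numpy.zeros((h, w), dtype=pattern.dtype)
    for y in range(h):
        for x in range(w):
            if x < pw:
                # Seed the first strip straight from the repeating pattern
                out[y, x] = pattern[y % ph, x % pw]
            else:
                # Nearer pixels repeat at a shorter interval, which
                # diverged "wall-eye" viewing reads as closer
                shift = int(depth[y, x]) * max_shift // 255
                out[y, x] = out[y, x - pw + shift]
    return out

# e.g.: dots = numpy.random.randint(0, 256, (64, 64)).astype(numpy.uint8)
#       stereo = autostereogram(depth8, dots)

Making nearer pixels repeat at a longer interval instead would, in principle, give the crossed-eye variant.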

The page that gave a useful sample code snippet was this one:

formatting link

So I tacked

cv2.imwrite('image.png', blur)
break

to the end of that "while" loop and thus captured and saved a depth map with Python.

Reply to
Dave Farrance

It's not what you asked for, but maybe of interest - I have this bookmarked for if I ever get round to using the Kinect for binaural head tracking.

formatting link

Reply to
Andy Furniss

Thanks. I'll bookmark that too and try it sometime. Working out the orientation of the head must be made difficult by some of the problems with the depth map.

This is a self-portrait. You can see that the fingers of my right hand have got closer than the minimum distance of 1 metre and have been cut off. The sides of my head are reflecting the infrared away from the Kinect rather than back towards it, so they also get cut away:

formatting link
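
Incidentally, the dropouts show up explicitly in the raw data. If I remember the libfreenect docs right, the 11-bit depth frames use 2047 as the "no reading" value, so the holes can be counted and masked directly:

import freenect, numpy

depth, _ = freenect.sync_get_depth()
invalid = (depth == 2047)  # pixels the Kinect couldn't range
print('dropouts: %d of %d pixels' % (invalid.sum(), depth.size))
depth = numpy.where(invalid, 0, depth)  # mask them out as plain black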

Here, I've boosted the contrast to show that there is a fair amount of depth detail in the parts that are shown:

formatting link

I added a 20-second countdown timer to the displayed image so that I could position myself while watching it. Code:

#!/usr/bin/env python2

import freenect, cv2, numpy, time

start = time.time()

while True:
    # Grab one 11-bit depth frame from the Kinect
    depth, timestamp = freenect.sync_get_depth()
    # Scale it down to 8 bits for display
    depth = numpy.clip(depth / 4, 0, 255).astype(numpy.uint8)
    timer = 20 + start - time.time()
    # Quit on any keypress, or when the 20-second countdown runs out
    if cv2.waitKey(20) != -1 or timer < 0:
        break
    cv2.putText(depth, ('%4.1f' % timer), (10, 100), 0, 3, (0, 0, 0), 10)
    cv2.imshow('image', depth)

cv2.imwrite('image.png', depth)

And to get the contrast-boosted image, I temporarily changed the depth-formatting line to:

depth = numpy.clip((depth - 450) * 2, 0, 255).astype(numpy.uint8)

which stretches raw values from 450 up to about 577 across the full 8-bit greyscale range.

Reply to
Dave Farrance

On 27.08.2014 at 18:24, Dave Farrance wrote:

I see a (wooden) chair.

Although I prefer stereograms where you cross your eyes. That way the image is much easier to focus, and much more stable against moving your head or looking around in the picture.

Nothing for you to bother with, I guess, but I just wanted to say it once. :-)

--
Robin Koch
Reply to
Robin Koch

That's cool - took me ages to see it though.

Reply to
Andy Furniss

Correct. Sorry, I meant to get back to this promptly but got diverted. Here's the depth map.

formatting link

I've tried again, this time with a less off-centre object, one with a mostly matt surface that reflects enough light back to the Kinect that it doesn't leave inappropriate holes. I've found an image of grass to use as a texture. This does seem easier to view:

formatting link

Openstereogram, which is in most Linux distros, can only create the wall-eye type "Magic Eye" autostereograms. In fact, a Google image search for stereograms gave page after page of the wall-eye type, with no crossed-eye types in sight. That does make sense, though: to see such images, rather than straining your eye muscles, you _relax_ them so that the eyes are still converging, but not as much as normal.

For anybody who doesn't know how to view them: start by moving too close to the screen and relaxing your eyes so that you look through it, then move back slowly, trying to bring your eyes into focus on the screen, but without converging them completely, so that they still converge at a point a few inches behind the screen.

Reply to
Dave Farrance
