Hi,
I've started designing this application (process an image, search for a barcode label in the field, decode it) and have been trying to identify and address the various ways a user can "go wrong" with such an application (jitter, focus, orientation, etc.).
Apparently, QR codes are commonly "photographed" with cell phone cameras (recall, I don't use a cell phone, so ignorance here). But I imagine this is a very "focused" activity -- hold the camera steady, orient it properly (more or less), make the image as large as possible in the field, etc.
As always, I want to loosen the constraints on the user and still keep the software reliable, robust, etc.
First question: how "casually" can you image a QR label and still get "reliable results" (power up camera/app, focus on the QR label, snap the picture, deal with the consequences)?
For example, imagine a QR code label affixed to the door of an office. You are passing the office and -- almost as an afterthought -- realize that *this* is where the "Big Meeting" next week will be. Imaging the QR code will magically get and store this information for you. (in a Land of Make-Believe)
Can you just hold up your camera phone "in passing"?
Do you have to stop, deliberately focus the image, center it, zoom to fill the screen, ensure it is "level" with the edges of the viewfinder, verify the camera is "normal" to the label, snap, and then *wait* (hope!) to verify that it was a good decode (lest you have to repeat the process)?
It's relatively easy to deal with scale and "skew" issues. But what about cases where the label is on a non-flat surface (e.g., wrapped around a cylinder)?
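FWIW, the flat-label scale/skew case reduces to a single 3x3 homography: once you've located the four corners of the symbol, you can solve for the projective transform that maps them onto an upright square and resample through it. A minimal sketch in plain NumPy (the function names are mine, not from any particular library -- real decoders like ZXing do essentially this internally):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 projective transform H mapping 4 src points to 4 dst points.

    Each correspondence (x,y) -> (u,v) gives two linear equations in the
    8 unknown entries of H (the ninth is fixed at 1), so 4 points pin it down.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply H to a point (homogeneous divide)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

# Example: a skewed quadrilateral (as found in the image) mapped to a
# 100x100 upright square (the "rectified" symbol grid).
src = [(10, 12), (110, 8), (118, 115), (6, 105)]   # hypothetical corner fixes
dst = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography(src, dst)
```

To rectify the label you'd iterate over the grid positions of `dst`, map each back through the *inverse* of `H`, and sample the source image there. The cylinder case breaks this model -- a single homography only handles planar surfaces -- which is why that question is the interesting one.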
Lastly, what's the typical (data) density of these labels? Do they "push the envelope" or just aim for "small and simple"?
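For reference, the spec defines 40 symbol "versions": version 1 is 21x21 modules (roughly 25 alphanumeric characters at the lowest error-correction level) and version 40 is 177x177 (on the order of 4,000 alphanumeric characters). The side length follows a simple formula, sketched here (capacity figures above are from memory of the spec, so treat them as ballpark):

```python
def qr_side(version):
    """Side length of a QR symbol in modules: 21 for version 1, +4 per version."""
    assert 1 <= version <= 40
    return 17 + 4 * version

print(qr_side(1), qr_side(40))  # 21 177
```

In practice, most labels in the wild seem to aim "small and simple" (a short URL fits comfortably in a low version), since lower versions mean bigger modules and a more forgiving decode.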
Thx,
--don