Several companies have tried to automatically tag images with metadata so end-users can easily search their pictures. The current techniques break down into three categories:
- Recognize the images through facial or image recognition (Riya)
- Allow others to tag them for you (TagCow/Google/Facebook)
- Inherit tags from objects that someone else tagged (Photosynth)
Image recognition can reach 80-90% accuracy. While that sounds pretty good, in practice it's frustrating, because 10-20% of the tags are wrong.
If you allow others to tag your photos, they only tag or recognize things that are obvious. The first image just looks like a building, while to me it's Coolidge Corner, Brookline. The second photo looks like a car, while to me it's a Ferrari I saw in Italy on my honeymoon. Strangers can solve part of the problem, but they lack the context to do it correctly. (Plus you give up privacy.)
Auto-tagging based on other people's photos has a ton of potential, but only for commonly shared items or locations. It may be able to tag the Golden Gate Bridge, but it'll be much harder to find photos taken indoors or photos of nondescript locations.
For true auto-tagging to work, facial recognition needs to improve. Cameras need to save their GPS coordinates, and those coordinates need to be translated into the names of locations and places. Historical data (email, Twitter, Facebook) can be correlated with photos by inference and proximity. A lot of tagging solutions focus on creating new metadata while ignoring the mountains of untapped data that already exist.
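The coordinates-to-place-names step above is essentially reverse geocoding against a gazetteer of named places. A minimal sketch in Python, assuming a hypothetical hand-built gazetteer and photo coordinates (a real system would use a large place database and the camera's EXIF GPS fields):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical gazetteer: (place name, latitude, longitude).
GAZETTEER = [
    ("Coolidge Corner, Brookline", 42.3417, -71.1212),
    ("Golden Gate Bridge", 37.8199, -122.4783),
]

def tag_photo(lat, lon, max_km=1.0):
    """Return the nearest gazetteer place within max_km of the photo, else None."""
    name, plat, plon = min(
        GAZETTEER, key=lambda p: haversine_km(lat, lon, p[1], p[2])
    )
    return name if haversine_km(lat, lon, plat, plon) <= max_km else None

print(tag_photo(42.3418, -71.1210))  # a photo taken near Coolidge Corner
print(tag_photo(0.0, 0.0))           # nowhere near any known place -> None
```

The same nearest-within-a-radius idea extends to the temporal axis: match a photo's timestamp against calendar entries, emails, or posts from the same window to infer who or what is in the shot.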