I spoke last night to a friend who lamented that his teenage daughter seemed fixated on taking selfies of her eyeball with his phone. He found these when he went to review his stored photos. I thought it was a little odd but suggested that maybe she was looking for just the right shot to send off to an ophthalmology lab for diagnosis.
Researchers at the Camera Culture Group, headed by Ramesh Raskar at the MIT Media Lab, have designed the eyeSelfie, an inexpensive hand-held device for photographing the retina, the optic nerve, and the vasculature, all of which lie at the very back of the eye.
Digital snapshots of the interior of the eye can help physicians detect and treat vision-threatening diseases such as glaucoma, macular degeneration, and diabetic retinopathy at an early stage. New research indicates that the snapshots can also be used to identify risk factors for hypertension, heart disease, multiple sclerosis, and Alzheimer’s disease.
Taking this back into the realm of speculation, let’s imagine that Google decides to use its new image recognition initiative to automatically analyze eyeball photos, like those of my friend’s daughter. If the technology improves enough, it could give Google vastly more insight into users’ health status. That goes way beyond my speculation from eight years ago (What if Google finds out you have cancer before you do?) about Google’s ability to guess a person’s illness from search logs, even if the user hasn’t been diagnosed yet.
I could take this further. Right now the MIT camera is a specialized unit, and it still takes a bit of jiggering to get a clear shot that can be analyzed. But as camera technology improves, perhaps we’ll get to the point where we can analyze even ordinary smartphone snapshots, zooming in on the eyeballs of everyone in the frame and assessing their health status.
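Just to make that last step a bit more concrete, here is a minimal sketch in Python of the "zoom in on the eyeballs" part, using OpenCV's stock Haar cascade detectors. The detectors, the group_photo.jpg filename, and the idea of handing the crops to some diagnostic model are purely my own illustration, not anything MIT or Google has built, and a crop of the visible eye is of course nothing like a true retinal photograph.

```python
import cv2

# OpenCV ships generic face and eye detectors; load them from its data folder.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eye_crops(image_path):
    """Return cropped eye regions found in an ordinary snapshot."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    crops = []
    # Find each face, then look for eyes only inside that face region.
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_gray = gray[fy:fy + fh, fx:fx + fw]
        face_color = img[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_gray):
            crops.append(face_color[ey:ey + eh, ex:ex + ew])
    return crops

if __name__ == "__main__":
    # "group_photo.jpg" is a placeholder; any snapshot with faces will do.
    for i, eye in enumerate(extract_eye_crops("group_photo.jpg")):
        cv2.imwrite(f"eye_{i}.png", eye)
        # A real system would hand each crop to a trained medical model here;
        # that model is the hard (and entirely hypothetical) part.
```

The cropping is the trivial piece; the speculative leap is a model that could read anything medically useful out of crops like these.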
That will take a while, but we should be prepared for when we get there.