Researchers working on facial recognition recently acquired millions of images from Flickr, raising the question of whether the platform is truly safe for private photos. These photos have since been downloaded by dozens of agencies and private companies, which are using them to develop their own facial recognition technology – monitoring and spying on private citizens in the process.
Neither the researchers who first obtained the images nor the organizations now using them ever sought permission or authorization from the people who originally uploaded the photos to Flickr. This may provide a cause of action for those whose privacy was violated and who want to sue in a court of law.
If Flickr Is Safe for Private Photos, How Did Researchers Get Hold of Them?
Flickr ostensibly gives users control over who is allowed to view their images by letting them flag each photo as either “public” or “private.” Images marked private are by default visible only to the uploader, who may choose to make them available to family and friends.
According to an investigation by the New York Times, the problem started when facial recognition technology was still in its infancy and most research was conducted at public universities. Volunteers were recruited, who gave consent for their images to be used.
However, researchers soon started taking instead of asking, gathering images from websites and surveillance cameras. Unfortunately, these images were often of low quality, and those “scraped” from the web tended to be primarily of celebrities and public figures.
In 2014, Yahoo, Flickr’s parent company at the time, announced that it would be releasing over 100 million images to the general public in order to “level the playing field” so that researchers at universities and small private companies could compete with Internet giants such as Google in the area of artificial intelligence.
However, these images were not made directly available: rather, they were accessed through hyperlinks. Thus, when a user deleted a photo or changed its privacy status, it could no longer be accessed.
There were two problems: (1) a “secret vulnerability” made photos accessible even after they had been “deleted,” and (2) researchers had freely passed their copies of the images on to others. Among the recipients was a team at the University of Washington (UW) in Seattle, which created “MegaFace.”
How Does MegaFace Play a Part?
Facial recognition technology doesn’t perform well on images of children because, until recently, children’s photos have been less available to researchers. Because families enjoy sharing photos of their children (and believed their privacy settings kept those photos safe), the Flickr collection provided a veritable gold mine of such images.
The collection has also made MegaFace the “go-to” source for law enforcement, the military, government agencies, private corporations, academic institutions and anyone else working on facial recognition technology. While these photos do not come with names, they contain digital markers that allow them to be traced.
Isn’t It Illegal?
Under federal law, no. However, residents of certain states with strong privacy laws, such as California or Illinois, may have cause to bring legal action.
Please note: Top Class Actions is not a settlement administrator or law firm. Top Class Actions is a legal news source that reports on class action lawsuits, class action settlements, drug injury lawsuits and product liability lawsuits. Top Class Actions does not process claims and we cannot advise you on the status of any class action settlement claim. You must contact the settlement administrator or your attorney for any updates regarding your claim status, claim form or questions about when payments are expected to be mailed out.