Facebook goes 'deep' in getting to know you

Facebook is laying odds that artificial intelligence (AI) can trump human intelligence, or, to be more precise, a lack of human intelligence.

When your judgment is compromised, whether by emotion, mind-altering substances or anything else, Facebook itself could be a better friend than any of the hundreds or thousands you allegedly have in that virtual world: it could warn you that something you're about to post might come back to haunt you, professionally or personally.

That capability is one of the major goals of the company's AI guru, Yann LeCun, the New York University professor and researcher who has also been director of Facebook's Artificial Intelligence Research lab since December 2013.

In an interview with Wired last month, LeCun said he hopes the form of AI called "deep learning" will be able to recognize your face when it's not looking like it usually does. That would mean if you're drunk out of your mind and start posting a selfie in that condition, Facebook will know, and at least attempt to save you from yourself.

Beyond that, LeCun says deep learning can help Facebook deliver essentially designer content to users, so they see more of what they want to see and less of what they don't. The site is already using facial recognition to help tag people in photos that are posted, and, according to LeCun, will soon analyze the text in posts to suggest Twitter hashtags.

All of which sounds benevolent: a free digital assistant that helps keep you out of trouble.

But it also raises the possibility of a computer program knowing users better than they may want to be known. Most might welcome a digital "nanny" to let them know they're about to get themselves in trouble, but what assurance is there that such relatively intimate knowledge will always be used for benevolent purposes?

Michele Fincher, chief influencing agent at Social-Engineer, noted that Facebook does offer users ways to manage their privacy.

"But the bottom line is that users are responsible for knowing the limits and their rights," she said. "After all, it is a public platform for the posting and sharing of information, which is in direct opposition to privacy."


LeCun himself told CSO that the goal of his research is more, not less, privacy for Facebook users, if that is what they want. The "drunken selfie" warning, he said, is not yet a reality.


"It is a fictitious example to illustrate the use of image recognition technology for privacy protection, not invasion," he said. "Although the technology is within the realm of what is possible today, it is not an actual product, or even a prototype. But given the amount of press it generated, largely positive, the idea struck a chord."

LeCun said such a system would not necessarily recognize specific faces, but would be based more on "facial expressions and the context of the scene: is it in a bar, do people hold drinks, etc."
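LeCun's description suggests a system that fuses several recognition signals rather than identifying a specific person. A minimal sketch of what that fusion step could look like, assuming hypothetical upstream detectors that emit labels (the labels, thresholds and function names here are invented for illustration, not any real Facebook API):

```python
# Hypothetical fusion step: combine outputs of separate recognizers
# (facial expression, scene type, detected objects) into one decision.
# All labels below are placeholders for illustration only.

def should_warn(expression, scene, objects, threshold=2):
    """Count risk signals and warn when enough of them co-occur."""
    signals = 0
    if expression in {"impaired", "eyes-closed"}:
        signals += 1
    if scene in {"bar", "nightclub", "party"}:
        signals += 1
    if any(obj in {"beer", "cocktail", "wine glass"} for obj in objects):
        signals += 1
    return signals >= threshold

# A selfie tagged "bar" with a drink in frame trips the warning;
# an ordinary outdoor photo does not.
print(should_warn("impaired", "bar", ["cocktail"]))  # True
print(should_warn("neutral", "park", ["frisbee"]))   # False
```

Requiring multiple signals to agree, rather than acting on any single detector, is one plausible way such a system could keep false alarms down.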

If it becomes an actual product and "you found it creepy or not useful, you would turn it off," he said, adding that Facebook's "blue privacy dinosaur" is activated when it thinks the privacy settings for a post may be too broad.

"Again, this is designed to help you protect your privacy, not to help anyone invade it," he said.

But, of course, the issue is not just privacy from, or visibility to, other users of Facebook, but from Facebook itself. In other words, does "deep learning" go deep enough to become invasive?

The Facebook press office did not respond to an email request for comment.

Fincher said it should be obvious to users that they control how much Facebook knows about them, through what they choose to post or even "like."

"If information is posted online, it's not private, period," she said. "Once information has left your hands, or your computer, you no longer have control over it."

That is the same basic message from Rob Yoegel, vice president of marketing at Gaggle. "No one should be putting anything online that they wouldn't say to someone in person," he said. "That's the issue at its core.

"The term digital footprint is used a lot, but it's really a digital tattoo. Parents are creating it at birth. The first time they post a photo of their newborn, their child at the park, their child holding an award-winning science fair project - this content will stay attached to them forever," he said.


And, of course, Facebook collects information beyond actual posts.


A study released earlier this month by researchers from the University of Cambridge in the UK and Stanford University in California concluded that a computer algorithm studying Facebook likes can predict behavior and five key personality traits better than a person's other Facebook friends and, in some cases, "even outperform the self-rated personality scores."
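The study's approach can be caricatured as a linear model over binary "like" indicators: each page a user likes nudges a trait score up or down by a learned weight. A toy sketch under that assumption (the pages and weights here are invented for illustration; the actual Cambridge/Stanford model was trained on a large corpus of real likes):

```python
# Toy linear model: predict an extraversion score from which pages a
# user has liked. Weights are invented for illustration only.

WEIGHTS = {
    "party planning": 0.9,
    "salsa dancing": 0.7,
    "programming": -0.2,
    "solo hiking": -0.6,
}

def extraversion_score(liked_pages):
    """Sum the weights of the pages the user liked; 0 is neutral."""
    return sum(WEIGHTS.get(page, 0.0) for page in liked_pages)

print(round(extraversion_score(["party planning", "salsa dancing"]), 1))  # 1.6
```

The point of the caricature is that no single like is revealing on its own; it is the aggregation of many small signals that lets such a model outperform human judges.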

So, the reality is that Facebook can collect a lot of information on users even if they never post a thing. That is one of the things that bother Rebecca Herold, CEO of The Privacy Professor. She said that, to Facebook's credit, its user privacy options are "significantly greater than other types of social media sites.

"However, there are still a lot of unknowns about how they share data with third parties," she said, "and all the many types of tracking data they use not only to control what is shown in each user's news feed, but also to determine what posts on a person's timeline they will show to others."

Herold said she has asked Facebook for details about its collection of user activities, metadata and other information the company says is not personal information.

"I've been told that they cannot tell me, because those questions relate to their intellectual property, not to privacy," she said.

The bottom line, she said, is that, "AI is inherently privacy invasive."


Yoegel agrees that it is invasive, but notes that so are a lot of other things we do without thinking about it, including giving a restaurant server our credit card, or giving a home address to a pizza delivery person.

"The bigger issue is how these companies present their terms and conditions to users," he said, noting that those on all social networking sites "aren't easy to consume. They never have been. I'm sure very few people ever read them. You just scroll down a page and click 'Accept.'

"And they're always getting changed, but email messages and site alerts about those changes are likely being ignored or dismissed," he said.

Herold said she believes the intent of Facebook's AI initiative is good. "But the possibilities for really bad things to happen are just as great as the possibilities for great good," she said. As HAL (from the movie "2001") said, 'I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.'

"And you know how that story ended."


Stories by Taylor Armerding
