Microsoft takes the ‘ass’ out of Glasshole with a people-blocker

Microsoft researchers think technology could do a better job than legislation or education of preventing people from becoming Glassholes.

A new prototype for Google’s Glass from Microsoft Research, dubbed “courteous Glass”, aims to stop would-be wearers from becoming Glassholes by fuzzing out human subjects until they have granted permission to be recorded.

The catch is that it would probably cripple Glass in public spaces and social settings where people could unwittingly walk in front of Glass while it’s recording.  

Nonetheless, it attempts to address two of the main concerns to come out of Google’s Glass beta program: the lack of outwardly visible signals to indicate when a wearer is filming someone, and the possibility that Glass could be equipped with facial recognition software.

In response to growing public resentment toward Glassholes, Google earlier this year issued a list of do’s and don’ts for Glass Explorers. Focusing on social norms, it encouraged participants to ask a subject’s permission before recording, not to expect to be ignored while wearing Glass, and not to be creepy or rude, aka “a Glasshole”. Some Explorers venturing into places like restaurants, however, simply didn’t get the message.

While a new program was created to block Glass from joining a Wi-Fi network, Microsoft Research members Jaeyeon Jung and Matthai Philipose take a different approach, entertaining the possibility of encoding social norms in Glass with the aid of infrared sensors attached to the device that block recording when people haven’t given their consent.

Focusing on “well-intended users”, they argue that “these privacy issues arise because wearable cameras violate social norms that people developed around the use of hand-held cameras.”

They note that “wearable cameras are hard to be noticed by people who are in FoV (Field of View), thus depriving them from an opportunity to opt out from recording (by walking outside FoV or covering the face). Second, as these cameras are often left on, even the wearer may not be aware of recording and fail to ask the consent to the people being recorded.”

Their answer to this problem is a protocol that recognises gestures signalling a person doesn’t want to be recorded, such as blocking the camera’s FoV or stepping out of view, coupled with “far-infrared (FIR) imagers” that report a temperature reading at each pixel and capture thermal images instead of full colour images when they detect a warm-blooded human.
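
The researchers’ detection details aren’t spelled out here, but a per-pixel temperature readout lends itself to a very simple presence check. The Python sketch below is purely illustrative: the skin-temperature band, the minimum blob size and the NumPy-based thresholding are assumptions, not details of their prototype.

# Hypothetical sketch of spotting a warm-blooded human in a far-infrared
# (FIR) frame. Assumptions (not from the paper): the imager returns a 2D
# NumPy array of per-pixel temperatures in degrees Celsius, and a "person"
# is any sufficiently large group of pixels in the skin-temperature band.
import numpy as np

SKIN_TEMP_RANGE_C = (28.0, 38.0)   # assumed band for exposed human skin
MIN_PERSON_PIXELS = 50             # assumed minimum number of warm pixels

def person_in_view(fir_frame: np.ndarray) -> bool:
    """Return True if the thermal frame appears to contain a person."""
    low, high = SKIN_TEMP_RANGE_C
    warm = (fir_frame >= low) & (fir_frame <= high)
    # Crude area test; a real detector would also check blob shape and
    # contiguity before treating the warm region as a person.
    return int(warm.sum()) >= MIN_PERSON_PIXELS

In warm weather the assumed skin band overlaps ambient temperatures, which is one of the accuracy problems the researchers themselves acknowledge.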

“The goal is to turn on recording devices only when no people are present in FoV or people in FoV have consented for recording,” they explain.

Under their protocol, if the device is recording and a new person who hasn’t consented to be recorded enters the FoV, recording is switched off. The same happens if the device is recording and detects an “off the record” gesture. If no people are in view, recording is permitted, but if a new person comes into view the device launches an opt-in process.
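
Read that way, the protocol amounts to a small state machine. The following sketch is a hypothetical rendering of those rules; the event names, class and consent handling are illustrative assumptions rather than the researchers’ implementation.

# Hypothetical sketch of the recording rules described above; names and
# structure are illustrative, not taken from the paper.
from enum import Enum, auto

class Event(Enum):
    NEW_PERSON = auto()              # someone who hasn't consented enters the FoV
    OFF_THE_RECORD_GESTURE = auto()  # a subject gestures "don't record me"
    FOV_EMPTY = auto()               # no people detected in the FoV

class CourteousCamera:
    def __init__(self) -> None:
        self.recording = False

    def handle(self, event: Event) -> None:
        if self.recording and event in (Event.NEW_PERSON,
                                        Event.OFF_THE_RECORD_GESTURE):
            self.recording = False    # non-consenting subject or opt-out gesture: stop
        elif not self.recording and event is Event.FOV_EMPTY:
            self.recording = True     # nobody in view, so recording is permitted
        elif event is Event.NEW_PERSON:
            self.request_opt_in()     # a newcomer triggers the opt-in process

    def request_opt_in(self) -> None:
        # Placeholder: in the mock-up, the colour camera stays covered until
        # the new person grants consent.
        pass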

The FIR imager’s role would focus on this opt-in process, giving the device a way to detect people entering its field of view without relying on the full colour camera. As they note, FIR imagers could also capture simple gestures “reasonably well”.

Their mock-up proposes that the colour camera on a wearable could be blinded by a physical cover that slides over the lens when the FIR imager detects new people who haven’t given their consent.

They believe the technology could be feasible thanks to the drastic fall in the cost of far-infrared imagers, which they estimate is in the “tens of dollars” range. While the extra sensor would add some cost to the device, they argue the security and privacy benefits it enables would outweigh that cost.

The system might play havoc with Glass’s use in social settings, but the researchers point out its functionality would be left intact for most other purposes.

“We believe that the protocol would have a minimal impact on many tasks that are proposed to run on these cameras if the tasks do not involve people (e.g., reminding people to take a pill before eating, tracking the last location where a car key is left, keeping a record of meals taken),” the researchers note.

Despite their confidence that FIR imagers could be used to achieve the privacy principle of “the least privilege”, they concede a few technical challenges remain, including poor detection accuracy in warm weather, and questions over what privacy rules should apply in public settings and how to enforce them automatically.
