Are your people who they say they are?

By Jeff Paine, CEO of ResponSight


Every technology user has a habitual relationship with their device: a unique way they use their laptop or smartphone. Each time a user engages with their device, they leave traces of activity data that show how they have interacted with the technology, networks and files, and how any information they access is shared.

What organisations forget is that this data can serve as valuable early-warning information about whether an employee’s behaviour is risky or insecure for the business. Unlike “creepy-tech” used to track and trace whether employees are doing their work, these habitual relationships can be monitored to detect when a user’s behaviour changes and to identify potentially suspicious and nefarious activities.

Identifying suspicious activity can be difficult when the login credentials on a device say the user is who they claim to be. However, specific changes in behaviour can reveal poor employee security practice, or worse: in cases of stolen credentials or a laptop left unattended, it may not be the expected user at all.

Monitoring for abnormal user behaviour, such as whether a user is accessing data outside their regular work pattern or sharing data to unusual destinations, is no longer sufficient, nor is using a user’s geolocation to detect suspicious activity. The main flaw in these traditional approaches is that the source data cannot be trusted: it is often incomplete, or can be easily manipulated by rogue insiders or malicious outsiders, producing high volumes of false positives.

The current challenge facing security technologies is that there is no clear and compelling link between the security credentials (username/password/MFA) and the person (or malware/bot) operating the device. Current technologies rely heavily on the username or hostname as the source of identity, and attackers have proven repeatedly that credentials are merely a hurdle to get past before the real nefarious activity begins.

With this in mind, abnormal user behaviour that is inconsistent with an employee’s historical activity profile can foreshadow an incoming threat. The premise is simple: we all use our technologies in much the same way over time. This habitual activity baseline becomes a yardstick against which current activity can be measured. Attackers and malware don’t have the time or patience to learn and then mirror a user’s behaviour, even if they could stay resident on the device long enough to attempt it.
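As a toy illustration of the baseline-as-yardstick idea: a user’s habitual activity can be summarised statistically, and each new observation scored by how far it deviates. The metric, numbers and threshold below are purely hypothetical, not drawn from any specific product:

```python
from statistics import mean, stdev

# Hypothetical daily activity metric for one user (say, files touched per day).
# The values and the metric itself are illustrative assumptions.
history = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]  # habitual baseline

def deviation_score(history, today):
    """How many standard deviations today's activity sits from the baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) / sigma

# A day consistent with habit scores low; a burst of unusual activity scores high.
print(deviation_score(history, 41))   # well under one standard deviation
print(deviation_score(history, 400))  # far outside the habitual range
```

In practice a real system would track many such metrics per user and combine their deviations into a single risk signal, but the principle is the same: habit is the yardstick.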

Access to behavioural data allows organisations to be proactive rather than reactive about security. Predictive algorithms are trained to encode the information present in historical data, enabling approximation of key activity and behavioural indicators. To identify anomalous activity on a machine or within an organisation, a series of predictive contextual models can be used, where prediction error corresponds to deviation from typical activity, regardless of the cause of the deviation. Investment in services and applications that provide early warning about potential dangers can lower a business’s risk profile while identifying the employees responsible and whether further action needs to be taken.
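A minimal sketch of the prediction-error approach, using a simple rolling-mean predictor as a stand-in for trained contextual models; all names and figures here are illustrative assumptions:

```python
from statistics import mean, stdev

def rolling_predict(series, window=3):
    """Predict each point as the mean of the preceding window -- a toy
    stand-in for the trained predictive models described above."""
    return [mean(series[i - window:i]) for i in range(window, len(series))]

def prediction_errors(series, window=3):
    preds = rolling_predict(series, window)
    return [abs(actual - p) for actual, p in zip(series[window:], preds)]

# Error statistics are learned on a clean historical period...
history = [40, 42, 41, 43, 40, 42, 41, 43, 40, 42]
errs = prediction_errors(history)
mu, sigma = mean(errs), stdev(errs)

# ...then recent activity is scored: a large prediction error marks a
# deviation from typical activity, whatever its cause.
recent = [41, 42, 40, 300]
flags = [i + 3 for i, e in enumerate(prediction_errors(recent)) if e > mu + 3 * sigma]
print(flags)  # -> [3]: the burst of unusual activity is flagged
```

Calibrating the error threshold on known-good history, rather than on the data being scored, is what keeps a single large anomaly from masking itself by inflating the error statistics.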

Organisations can use this data to develop a cybersecurity risk management strategy and to automate their prevention, detection and response systems, taking the load off the people tasked with detection. Such designs ensure the system carries out precursor research into the environment, pre-emptively examining behavioural changes in people to detect possible unknown threats. This approach enhances the effectiveness of existing security investments by leveraging existing capabilities and providing a method for setting priorities.

Detecting and investigating unusual user behaviour as an indicator of changes in the threat landscape can reduce the short- and long-term impact of impending cybercrime. Attackers would be less effective if early-warning systems assessed the risk associated with activity profiles rather than searching for new needles in ever-growing haystacks.

Organisations need to strike a balance between employee autonomy and a high level of protection against nefarious activity, while maintaining visibility of risk. Enterprises can do this by continually measuring the risk associated with behavioural and activity metrics in an objective way, and by proactively using these risk outcomes to make better business and security decisions.
