The role of artificial intelligence in improving security defences has expanded dramatically in recent years – but it may have expanded a bit too far, with a UK court blasting the provision of patient healthcare data to Google’s DeepMind for analysis.
A less contentious use of AI is in the security operations centre – where one developer is seeing good results by using machine learning to monitor and replicate the actions of human security analysts. It’s seen as one way to help narrow the huge security skills gap – but companies feeling the pinch should also know about four places to look for talent inside their own organisations.
Another healthcare breach hit closer to home as the Australian Federal Police were sicced onto a dark-web ‘Medicare Machine’ that could purportedly provide the Medicare number of any Australian – for a fee. It was yet another feather in the cap of the hacker community, which has also enjoyed success in pulling off a password-reset man-in-the-middle attack using a sneaky account registration process.
CopyCat malware was said to have infected 14 million Android devices and netted $1.5m for its creators, while security investigators discovered new ‘not-WannaCry’ ransomware that was spread along with the recent NotPetya malware outbreak.
Meanwhile, a US defence contractor was arrested for passing US secrets to Chinese operatives – with whom, it seems, he had been working closely for years.
- CopyCat malware infects 14 million Android devices, nets $1.5m
- UK rules patient data shared with Google's DeepMind was illegal: AI is not a doctor
- An Infosec End of Financial Year
- Rethinking what it means to win in security
- What we can learn from the Lazarus Group attacks
- Google’s AI helps its human reviewers spot intrusive Android apps
- IoT messaging protocol is big security risk