Cognitive computing and the future of cyber-security

19 July, 2016 by Ouriel Weisz


Take the smartest people you know and show them 10 emails that appear to be from familiar brands and business partners.

Some are real, some fake.

How many of the bogus emails will your smart friends spot? 

The odds of 100% success are against them. When Intel-McAfee asked 100 cyber-security professionals to distinguish between a selection of real and bogus emails, only six of the experts got them all right. Most identified only six or seven of the 10 emails correctly. That's the problem: with cyber-security, one mistake opens the virtual door to malware or hackers. This roughly one-in-three chance of making the wrong choice explains why email phishing has proved to be an evergreen criminal tactic.

Bogus messages are still duping people into clicking away their confidential data, 20 years after phishing first emerged as a threat. And the costs continue to rise. The “starting point” for a business to recover from a security breach – counting the cost of business disruption, lost sales, recovery of assets, and fines and compensation – is now £1.46 million, up from £600,000 a year earlier.

And, according to the same research, “inadvertent human error” was behind about half of the worst security breaches suffered by British businesses last year.

We need a system that’s better at judging digital risk than the human brain.

False-positives: computers make mistakes too

There’s a problem: it’s not just the human brain that falls short. Computers are prone to misidentification too. Traditional cyber-security tools take time to correctly identify data breaches: malicious attacks take an average of 256 days to identify, according to the Ponemon Institute’s 2015 Cost of Data Breach Study: Global Analysis, while breaches caused by human error take an average of 158 days. Two-thirds of the time IT staff spend dealing with security alerts goes on false positives or false negatives. Misidentifying a potential threat wastes precious business time and resources.

What’s needed is a faster, more accurate way of detecting and interpreting threats and dealing with them – something faster and smarter than the technology that’s already in play.

Cognitive computing hits prime-time

So rewind to February 2011 … the moment a supercomputer grabbed headlines around the world for beating two all-time human champions on the American quiz show Jeopardy!. Take a bow, IBM Watson, winner of the $1m jackpot. The contest demonstrated Watson’s power to surpass the human brain in unravelling answers from fragments of information.

Watson showed an understanding of how colourful and complex human language can be, demonstrating comprehension of puns, double meanings and riddles. This was the beginning of the era of “cognitive computing”: machines built to interpret, learn and apply that knowledge to solve problems.

It’s a narrow form of artificial intelligence, not the super-intelligent machines of science fiction. Instead, this technology is designed to match and surpass human capability in a single, focused sphere. It can adapt to and interpret different input signals. It can weigh risks.

To date, one major obstacle to cognitive computing has been the fact that the majority of information online is “unstructured” – a sprawling jungle of knowledge in different formats and locations, from news and scholarly articles to blogs and eBooks. Cognitive computers are trying to make sense of that jungle of data and detect hitherto unseen patterns and connections.

Since Jeopardy!, Watson has become a practical tool across industries – starting with healthcare, where it is used both for medical research and for helping physicians diagnose patients and devise the best treatment plans, particularly in the case of cancer. Now Watson is learning to become a virtual Sherlock and address cyber-threats at a time when traditional tools such as firewalls and antivirus software are struggling to keep up.

Watson is drawing on IBM’s two decades’ worth of cyber-security research and a library containing eight million spam and phishing attacks and tens of thousands of known vulnerabilities. Eight universities in the US and Canada have been enlisted to help build its knowledge of cyber threats and tactics.

But IBM is far from the only major player in the field. 

Last month, Daniel Kaufman, head of advanced technology projects at Google, revealed at the company’s I/O developers conference that behavioural biometric authenticators would come to its Android mobile platform next year. That means the device could look at your location, your Wi-Fi network, the time of day and even your typing speed on the keyboard to assess risk based on your known patterns of behaviour. The resulting “trust score” would vary depending on the nature of the online activity: Android might decide to limit access to a banking app but, in the same circumstances, allow access to a gaming app.
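
To make that concrete, here is a minimal sketch of how a behavioural trust score might be assembled from such signals. Everything in it – the signal names, the weights and the per-app thresholds – is an assumption for illustration; Google has not published how Android’s scoring actually works.

```python
import math
from dataclasses import dataclass

@dataclass
class Signals:
    known_location: bool       # device is in a place the user frequents
    known_wifi: bool           # connected to a previously seen network
    typical_hour: bool         # activity falls in the user's usual hours
    typing_speed_delta: float  # |current - baseline| typing speed, chars/sec

def trust_score(s: Signals) -> float:
    """Combine behavioural signals into a 0..1 score (weights are illustrative)."""
    score = 0.0
    score += 0.35 if s.known_location else 0.0
    score += 0.25 if s.known_wifi else 0.0
    score += 0.20 if s.typical_hour else 0.0
    # Penalise typing that deviates from the user's baseline speed.
    score += 0.20 * math.exp(-s.typing_speed_delta)
    return score

# Different apps demand different confidence: a bank needs a higher
# score than a game before it unlocks (thresholds are hypothetical).
THRESHOLDS = {"banking_app": 0.8, "gaming_app": 0.3}

def allow_access(app: str, s: Signals) -> bool:
    return trust_score(s) >= THRESHOLDS[app]

s = Signals(known_location=True, known_wifi=False,
            typical_hour=True, typing_speed_delta=1.5)
print(round(trust_score(s), 2))        # ~0.59
print(allow_access("gaming_app", s))   # True
print(allow_access("banking_app", s))  # False
```

The point of the sketch is the asymmetry at the end: the same score can be good enough for a low-stakes app while falling short for a high-stakes one, which is exactly the behaviour Kaufman described.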

Smart support

So think of cognitive computers as augmenting your performance: virtual support that increasingly helps guide your way as you navigate the online environment, so you avoid the traditional costly missteps.

Your cognitive computer will monitor and learn your patterns of behaviour to create a baseline of what “normal” is for you as a unique user.

Your normal online pattern of behaviour will then be cross-referenced against a vast corpus of online knowledge.
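
As a rough illustration of that baselining idea, here is a minimal sketch in Python. It assumes a single hypothetical per-user metric (say, logins per hour) and a simple standard-deviation test; real cognitive systems model many signals and far richer distributions.

```python
from statistics import mean, stdev

class Baseline:
    """Learn what 'normal' looks like for one user, one metric at a time."""

    def __init__(self):
        self.history: list[float] = []

    def observe(self, value: float) -> None:
        # Each routine observation refines the picture of normal behaviour.
        self.history.append(value)

    def is_anomalous(self, value: float, threshold: float = 3.0) -> bool:
        """Flag values more than `threshold` standard deviations
        from this user's own historical mean."""
        if len(self.history) < 2:
            return False  # not enough data to judge yet
        mu, sigma = mean(self.history), stdev(self.history)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > threshold

logins = Baseline()
for v in [4, 5, 6, 5, 4, 5]:     # observations of routine activity
    logins.observe(v)
print(logins.is_anomalous(5))    # False -- within the baseline
print(logins.is_anomalous(40))   # True  -- worth an alert
```

A spike that would look unremarkable in aggregate traffic stands out sharply against one user’s own history – which is why per-user baselines catch things that fixed, one-size-fits-all rules miss.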

So what will be the long-term impact of cognitive systems like Watson and Google’s behavioural biometric analysis? According to IBM, cyber-criminals will “find the payoffs to be harder and harder to achieve.” 

But for you? You’ll have more time to focus on the work that matters – and less time worrying about whether or not you’re going to make a mistake.


Credit: Jon Simon/Feature Photo Service, published under Creative Commons via Flickr.