- Cyber-criminals can determine who and how to attack based on their own cognitive predictions
- AI will be the biggest game changer for security over the next several years
CONSIDER Volvo’s self-driving cars, Apple’s Siri, OCBC’s illegal financing detection system, and the Maritime and Port Authority of Singapore’s Project SAFER – today we’ve been seeing more of artificial intelligence (AI) technology complementing our already digitally-dependent lives. Businesses are likewise enticed by AI.
According to PwC’s report on the Global Impact and Adoption of AI, 72% of business decision makers believe that AI will be the business advantage of the future. But as sceptics have rightly pointed out, AI can be exploited to cause harm, and its potential to be abused for unethical ends calls for caution.
When AI threatens cyber-security
In the current cyber-threat landscape, the use of bots and other simple automated programmes to achieve malicious goals is already commonplace - so much so that the same technique is now sold as a service.
While attacks executed through AI, in part or otherwise, have not yet been discovered, we can be sure that as the technology continues to evolve, it is only a matter of time before such attacks surface.
Such a scenario played out in a Twitter social experiment in which data scientists set out to discover whether humans or AI bots were better at getting users to click on links that would eventually get their data phished. The grim result: the AI bot was significantly better than the human hacker at composing and distributing phishing tweets, and it lured more victims, as seen from the higher click-through rates.
AI is thus a double-edged sword that cuts both ways - it can use data to eliminate repetitive tasks and streamline processes, but it can also exploit data to hack and seize sensitive information with chilling efficiency.
Meanwhile, social media platforms such as Facebook could use AI to gather information about who we are, what we like, how we spend our time and money, and much more. In turn, relevant content is made easily available to us via our newsfeeds, without our having to search for it.
The downside? The same AI capabilities have been used to help terrorist groups such as ISIS disseminate propaganda to interested parties beyond their direct followings. AI has allowed such groups to form, with later discussions of crime and training then taking place offline. Considering this possibility, things could only get scarier, with cyber villains potentially exploiting AI vulnerabilities to launch smart attacks on personal devices.
Cyber-criminals can determine who and how to attack based on their own cognitive predictions, and subsequently programme attacks to destroy their targets. In a doomsday scenario, the damage may come not solely from the breach itself, but from how the attack adds to the AI’s learning capabilities, enabling more in-depth attacks in the future.
In a nutshell, the AI device accumulates knowledge over time, emerging stronger after each punch and potentially becoming uncontrollable, even by the original attackers themselves.
But that’s not the worst of it.
Enter the age of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), where technology becomes as smart as humans or surpasses our intelligence - like a boxer in the ring who knows his opponent’s next move and has a counter better than any professional’s. We will not be able to dodge the next punch or kick unless we develop better defence systems.
Moving forward: being a cyber-victor
According to a recent study, IT executives in the US believe that AI will be the biggest game changer for security over the next several years, enabling them to start winning the battle against external hackers and insider threats. Advanced intelligence platforms are needed to complement our own intelligence.
These platforms would ideally be able to forecast the entire cyber match, giving us a roadmap to combat each attack not just from one angle, but a holistic 360-degree approach. Likening this approach to the traditional Whac-A-Mole arcade game, if we cannot predict which hole the mole would emerge from, it is best we cover all holes.
But while such platforms are still in the works at cyber-security companies that utilise AI, what can we do now?
First, there is a need to be vigilant against constantly evolving and imminent threats. Process and software inefficiencies play a major role in slowing down the detection of and response to cyber threats. Companies therefore need to be selective in adopting the right technology - systems that analyse data in real time and provide meaningful insights by flagging the threats that matter most.
Threat alerts should not be drawn solely from abnormal behaviour on a single platform, but from the entire ecosystem of devices across an organisation. Smart platforms will also help focus your attention on real threats, not false alarms. Overall, the mean time to detect a real threat will be reduced - and you will have more time to fix the situation.
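The idea of corroborating signals across an organisation’s whole ecosystem, rather than alerting on a single platform, can be sketched in a few lines. This is a purely illustrative example - the event data, source names, and threshold are assumptions, not any vendor’s API:

```python
from collections import defaultdict

# Hypothetical telemetry: (host, source, anomalous) observations from
# different parts of the organisation's ecosystem.
events = [
    ("laptop-07", "endpoint", True),
    ("laptop-07", "network", True),
    ("laptop-07", "auth", True),
    ("server-02", "network", True),   # a lone anomaly on one platform
]

def prioritise(events, min_sources=2):
    """Alert only when anomalies on the same host are corroborated by
    multiple independent telemetry sources, cutting false alarms."""
    anomalous_sources = defaultdict(set)
    for host, source, anomalous in events:
        if anomalous:
            anomalous_sources[host].add(source)
    return [host for host, sources in anomalous_sources.items()
            if len(sources) >= min_sources]

print(prioritise(events))  # prints ['laptop-07']
```

Here the lone network anomaly on server-02 stays a low-priority signal, while the host flagged by three independent sources rises to the top of the analyst’s queue.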
How then can we respond rapidly? The way to go is to automate pre-staged investigatory and remediation actions tied to the activities observed. With this, responding to threats is taken care of, freeing up your capacity to focus on more important tasks.
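Pre-staged remediation amounts to mapping each observed activity type to an automated first response ahead of time. A minimal sketch, assuming entirely hypothetical activity names and action labels (not a real product’s playbook):

```python
# Illustrative playbook: each observed activity type is tied to a
# pre-staged automated response, with a safe default for anything new.
PLAYBOOK = {
    "credential_stuffing": "lock_account",
    "malware_beacon": "isolate_host",
    "data_exfiltration": "block_egress_and_page_analyst",
}

def respond(activity: str) -> str:
    """Return the pre-staged action for an observed activity.

    In a real platform this would call out to the relevant enforcement
    system; here we simply record the decision that would be taken.
    """
    return PLAYBOOK.get(activity, "open_ticket_for_review")

print(respond("malware_beacon"))      # prints isolate_host
print(respond("never_seen_before"))   # prints open_ticket_for_review
```

The design point is that human judgement is reserved for the default branch: known activity patterns are handled instantly, and only novel ones open a ticket for an analyst.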
Ultimately, even with more advanced intelligence platforms, the most important goal is clear - reduce the mean time to detect and the mean time to respond. Cyber threats are inevitable; we just have to be prepared and quick to take action.
Joanne Wong is senior regional director for Asia Pacific & Japan at LogRhythm.