AI: Who’s Afraid of the Big, Bad Robot?

Alain Penel, Regional Vice President – Middle East at Fortinet, discusses how safe AIs are against today’s cyber-attacks.

Tired of hearing about AI and the constant specter of intelligent robots and computers that are smarter than us, and that for some reason want to harm us? Me too. I prefer the movie Her because I believe it paints a more likely picture of the future of AI. AIs wouldn’t want to compete with us and rule the world. Instead, it seems more likely that they would find us to be a curiosity, and that they would eventually just lose interest in us and leave. We (and the Earth) would not be able to evolve as fast as they could, so they would probably go find something more interesting, like other robots. Like it or not, AI and robotics are the future.

In today’s world, computers already control a great many things, including trains, pipelines, automobiles, healthcare systems, manufacturing lines, oil drilling equipment and even the autopilot systems on commercial airliners.

The Middle East is witnessing significant traction in the field of AI and robotics. Just recently, Dubai Police welcomed the world’s first robot policeman, ushering in a new era of machine-controlled human interaction. The latest findings by PwC show that more than 60% of Middle East consumers are ready to embrace AI and robots for their healthcare needs. According to a PwC expert, a combination of clinical workforce shortages and a young, digitally savvy population means the region could leapfrog other countries in applying AI and robotics to healthcare.

Although AIs are being designed to both defend and attack cyber infrastructure, the question remains: how safe are AIs against today’s cyber-attacks?

Cybersecurity: Friend or Foe?

Security strategies need to undergo a radical evolution. Tomorrow’s security devices will need to see and interoperate with each other to recognize changes in the networked environment, anticipate new risks and automatically update and enforce policies. The devices must be able to monitor and share critical information and synchronize responses to detected threats.
Sound futuristic? Not really. One nascent technology that has been getting a lot of attention recently lays the foundation for such an automated approach: Intent-Based Network Security (IBNS). IBNS provides pervasive visibility across the entire distributed network and enables integrated security solutions to adapt automatically to changing network configurations and shifting business needs, with a synchronized response to threats.
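The intent-based idea above can be illustrated with a toy sketch: instead of writing per-device firewall rules by hand, an administrator declares a high-level intent for a network segment, and a policy engine re-derives and re-applies the rules whenever the segment changes. All class names, intents and rule strings here are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    intent: str                         # declared business intent, e.g. "pci"
    devices: set = field(default_factory=set)

class IntentPolicyEngine:
    """Toy intent-based policy engine: translates a declared intent into
    a rule set and re-applies it whenever the segment's membership changes."""

    INTENT_RULES = {
        "pci":   ["deny-internet", "encrypt-east-west", "log-all"],
        "guest": ["isolate-from-internal", "rate-limit"],
    }

    def __init__(self):
        self.active_policies = {}

    def sync(self, segment: Segment):
        # Recompute the rule set from the intent; any device added to or
        # removed from the segment automatically inherits the result.
        rules = self.INTENT_RULES.get(segment.intent, ["default-deny"])
        self.active_policies[segment.name] = {
            "rules": rules,
            "devices": sorted(segment.devices),
        }
        return self.active_policies[segment.name]

engine = IntentPolicyEngine()
seg = Segment("payments", intent="pci", devices={"fw-1", "sw-3"})
engine.sync(seg)

# A new device joins the segment: no per-device rule is written by hand --
# the intent is simply re-synchronized across the whole segment.
seg.devices.add("iot-camera-7")
policy = engine.sync(seg)
print(policy["devices"])   # the new device is covered automatically
```

The point of the sketch is the direction of control: policy follows declared intent, so the network can reconfigure itself without an administrator touching individual rules.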

Artificial intelligence and machine learning are becoming significant allies in cybersecurity. Machine learning will be bolstered by data-heavy Internet of Things devices and predictive applications to help safeguard the network. But securing these “things” and that data, which are ripe targets or entry points for attackers, is a challenge in its own right.

The Quality of Intelligence

One of the biggest challenges of using AI and machine learning lies in the caliber of intelligence. Cyber threat intelligence today is highly prone to false positives due to the volatile nature of the IoT. Threats can change within seconds: a machine can be clean one second, infected the next, and back to clean again, with the full cycle completing at very low latency.
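One consequence of that volatility, sketched minimally below, is that a threat verdict has to carry a timestamp and expire quickly: acting on a stale "infected" observation is exactly how false positives get turned into wrong automated actions. The class, field names and the TTL value are illustrative assumptions, not a description of any real threat feed.

```python
import time

class Verdict:
    """A threat verdict that expires: a host flagged as infected is only
    treated as infected while the observation is fresh."""

    TTL_SECONDS = 5.0   # illustrative -- real feeds tune this per source

    def __init__(self, host, status, observed_at=None):
        self.host = host
        self.status = status                          # "clean" or "infected"
        self.observed_at = observed_at if observed_at is not None else time.time()

    def current(self, now=None):
        now = now if now is not None else time.time()
        # A stale verdict falls back to "unknown" rather than triggering
        # a block -- this is what keeps false positives from propagating.
        if now - self.observed_at > self.TTL_SECONDS:
            return "unknown"
        return self.status

v = Verdict("10.0.0.7", "infected", observed_at=100.0)
print(v.current(now=102.0))   # "infected" -- observation still fresh
print(v.current(now=110.0))   # "unknown"  -- verdict has expired
```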

Enhancing the quality of threat intelligence is critically important as IT teams pass more control to artificial intelligence to do the work that humans otherwise would do. This is a trust exercise, and therein lies the unique challenge. We as an industry cannot pass full control to machine automation; we need to balance automated operational control with critical checkpoints that escalate decisions to humans. This working relationship is what will make AI and machine learning applications for cybersecurity defense truly effective.

Because the cybersecurity skills gap persists, products and services must be built with greater automation to correlate threat intelligence, determine the level of risk and automatically synchronize a coordinated response to threats. Often, by the time administrators try to tackle a problem themselves, it is too late, and the result is an even bigger issue or more work to be done. Response can instead be handled automatically using direct intelligence sharing between detection and prevention products, or with assisted mitigation, a combination of people and technology working together. Automation also allows security teams to put more time back into business-critical efforts instead of routine cybersecurity management.
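The correlate-then-respond flow described above can be sketched as a few lines of code: several independent detection signals are combined into a risk score, high-confidence correlations trigger automatic mitigation, and ambiguous ones are escalated to a human analyst (assisted mitigation). The signal names, weights and threshold are made-up illustrations, not a real product’s scoring model.

```python
def correlate(signals):
    """Combine independent detection signals into a single risk score.
    Weights are illustrative assumptions only."""
    weights = {"ids_alert": 3, "bad_reputation_ip": 2, "anomalous_traffic": 1}
    return sum(weights.get(s, 0) for s in signals)

def respond(score, auto_threshold=4):
    # High-confidence correlations are mitigated automatically;
    # ambiguous ones go to a human analyst (assisted mitigation).
    if score >= auto_threshold:
        return "quarantine-host"
    if score > 0:
        return "escalate-to-analyst"
    return "no-action"

score = correlate(["ids_alert", "bad_reputation_ip"])
print(respond(score))   # two corroborating signals: automatic quarantine
```

The design choice worth noting is the threshold: it encodes exactly the balance the article calls for, full automation only where confidence is high, with humans kept in the loop everywhere else.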

In the future, AI in cybersecurity will constantly adapt to the growing attack surface. Today, we are connecting the dots, sharing data and applying that data to systems, but humans still make the complex decisions that require intelligent correlation of all that information. In the future, a mature AI system could be capable of making many of those complex decisions on its own.