The new generation of artificial intelligence (AI) tools has been described as “intelligent assistants” and “co-pilots” – a way of reassuring us that we are still the ones in the cockpit. While that may be true today, it won’t be for long, at least in the world of cybersecurity.
More than 60 startups and major vendors, including my company, have recently launched AI-powered products. Notable as these announcements are, they are nothing compared with what’s coming: Forrester Research expects the artificial intelligence software market to grow to US$64 billion by 2024, with cybersecurity as the fastest-growing category at a compound annual growth rate of 22.3%.
In large part, this is good news for cybersecurity and the organizations we protect. Nationwide, there are only 69 cybersecurity professionals for every 100 open positions. Even if we could fill every one of those gaps, humans alone simply cannot defend everything we need to protect. We need artificial intelligence to respond with the speed and at the scale that complex IT environments demand.
But just because cybersecurity needs artificial intelligence doesn’t mean adapting to its use will be easy. For decades, white-hat defenders have been responsible for protecting everything from an organization’s printers to its most precious intellectual property. Human ingenuity, skill, and a great deal of blood, sweat, and tears have built, maintained, and protected the platforms on which we live, work, and play. Being asked to hand all of this over to a machine is no small ask. Cybersecurity faces an identity crisis: who we are, what we do, and the role we play will fundamentally change with the advent of artificial intelligence.
For years, only humans flew planes. In the next few years, we will fly with robotic co-pilots. But our time in the cockpit will inevitably come to an end, and we have to start planning our exit. We will still have a role to play after we stop flying, but what we do and the value we bring will be very different.
The threshold for artificial intelligence to replace human defenders
Artificial intelligence will still need our help to function in the coming years. Most of today’s cybersecurity breaches stem, at least in part, from human factors, including errors, privilege abuse, stolen passwords, and social engineering attacks.
Without getting too deep into the weeds, artificial intelligence plus some basic cybersecurity hygiene (such as multi-factor authentication, or MFA) can handle most of these incidents. AI can automate what happens when new users join an organization: provisioning the accounts they need to do their jobs, ensuring MFA is enabled, and monitoring account usage for violations. While AI manages the day-to-day, human cybersecurity professionals will oversee the more consequential decisions and handle the exceptions, such as what happens when a user needs a specific resource that no one else in the organization does.
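To make that division of labor concrete, here is a minimal sketch in Python of the kind of routine automation described above: provision a role’s default accounts, require MFA, and escalate out-of-pattern access requests to a human. The roles, resources, and policy here are hypothetical placeholders, not any vendor’s actual product or API.

```python
from dataclasses import dataclass, field

# Hypothetical default entitlements per role; a real deployment would pull
# these from an identity governance catalog rather than a hard-coded dict.
ROLE_DEFAULTS = {
    "engineer": {"email", "source-control", "ci-cd"},
    "sales": {"email", "crm"},
}

@dataclass
class User:
    name: str
    role: str
    mfa_enrolled: bool = False
    accounts: set = field(default_factory=set)

def onboard(user: User) -> None:
    """Automate the routine parts of joining: default accounts plus MFA."""
    # Provision the standard accounts for the user's role.
    user.accounts |= ROLE_DEFAULTS.get(user.role, set())
    # Enforce basic hygiene: trigger MFA enrollment before access is used.
    user.mfa_enrolled = True

def request_access(user: User, resource: str) -> str:
    """Grant routine requests automatically; escalate exceptions to a human."""
    if resource in ROLE_DEFAULTS.get(user.role, set()):
        user.accounts.add(resource)
        return "granted automatically"
    # The request falls outside the norm for this role: a human reviews it.
    return "escalated to the security team"

if __name__ == "__main__":
    alice = User(name="alice", role="engineer")
    onboard(alice)
    print(alice.accounts, alice.mfa_enrolled)
    print(request_access(alice, "crm"))  # exception -> escalated
```

In practice, the entitlement policy and the anomaly detection would come from an identity platform and its models rather than a hard-coded dictionary; the point is simply that the routine path is automated while the exceptions still reach a person.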
Eventually, artificial intelligence will also be able to handle those high-stakes decisions and exceptions. This is the threshold where we need to exit the cockpit and trust artificial intelligence to handle critical events faster and more efficiently than humans.
Even so, we will still play the role of training, supervising and monitoring artificial intelligence. Consider what happened to Microsoft’s Tay, which went from “innocent artificial intelligence chatbot” to a racist, misogynistic troll in less than 24 hours. AI requires humans to set, monitor, refine, and change its parameters, its adaptability, and the way it interacts with other AIs.
From teacher of artificial intelligence to its guardian
Humans won’t just need to set artificial intelligence’s parameters and dictate how it should operate. Humans will also need to decide why AI does what it does: we need to pose the right challenges to it and ask the right questions.
Today’s cutting-edge artificial intelligence represents a major leap forward in our ability to use new technology to solve complex problems. But that is only possible if human researchers ask the right questions in the right way and provide the AI with the right information. One of the oldest rules of artificial intelligence is “garbage in, garbage out.” If an AI is trained on low-fidelity, mislabeled, or inaccurate material, the output it produces will be worthless.
This points to the next role we will need to play once artificial intelligence takes over day-to-day cybersecurity operations. As AI becomes part of our cybersecurity architecture, threat actors will try to target it through data poisoning and prompt injection. They will work to make artificial intelligence hallucinate or turn against us. Artificial intelligence will protect us, and we will need to protect it.
The cybersecurity industry is always looking for ways to adapt to the latest threats and take into account the latest technologies, but artificial intelligence represents a new order that will test us and push us further than ever before.
Everything will change—and we must change with it.
Rohit Ghai is the CEO of RSA, the global leader in identity and access management (IAM) solutions for security-first organizations. More than 9,000 organizations worldwide rely on RSA to manage more than 60 million identities.
The opinions expressed in Fortune commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.