AI-coded smart contracts may be flawed and ‘fail miserably’ when attacked: CertiK

Using artificial intelligence tools such as OpenAI’s ChatGPT to write smart contracts and build cryptocurrency projects will create more problems, errors and attack vectors, according to an executive at blockchain security firm CertiK.

CertiK chief security officer Kang Li explained to Cointelegraph at Korea Blockchain Week on Sept. 5 that ChatGPT cannot pick up logical code errors the way experienced developers can.

ChatGPT can generate more errors than it recognizes, Li said, which could be disastrous for first-time or amateur-level programmers who want to build their own projects.

“ChatGPT is going to enable a group of people who never had all this training to jump in. They can start right now, and I’m starting to worry about the morphological design problems buried in there.”

“You write something and ChatGPT will help you build it, but with all these design flaws, it can fail miserably when attackers start showing up,” he added.

Instead, Li thinks ChatGPT should be used as an engineer’s assistant because it can better explain what a line of code actually means.

“I think ChatGPT is a very useful tool for people doing code analysis and reverse engineering. It is definitely a good assistant and it will greatly improve our efficiency.”

The Korea Blockchain Week crowd gathered for a keynote speech. Source: Andrew Fenton/Cointelegraph

He stressed that it shouldn’t be relied upon to write code, especially by inexperienced programmers looking to build something profitable.

Li said he would stand by his claim for at least the next two to three years, acknowledging that rapid advances in artificial intelligence could greatly improve ChatGPT’s capabilities.

AI tech is getting better at social engineering

Meanwhile, Richard Ma, co-founder and CEO of Web3 security firm Quantstamp, told Cointelegraph at KBW on Sept. 4 that AI tools are becoming increasingly successful at social engineering attacks, with many now on par with attempts by humans.

Ma said Quantstamp’s clients reported a surprising number of increasingly sophisticated social engineering attempts.

“With these recent ones, it looks like people have been using machine learning to compose emails and messages, and it’s more convincing than the social engineering attempts from a few years ago.”

While ordinary internet users have been plagued by AI-generated spam for years, Ma believes we are approaching a stage where we won’t know whether a malicious message was generated by an AI or a human.

Related: Twitter hack: ‘Social engineering attack’ on employee admin panel

“It’s going to be much more difficult to distinguish between a human messaging you (or) a very convincing AI messaging you and composing a personal message,” he said.

Crypto industry experts have been targeted, while others have been impersonated by AI bots. Ma believes the situation will only get worse.

“In the crypto space, there are a lot of databases that have all the contact information of key people on every project. So hackers can access that information (and) they have artificial intelligence and they can basically try to message people in different ways.”

“It’s quite difficult to train the whole company not to react to these things,” Ma added.

Better anti-phishing software is on the horizon to help businesses defend against potential attacks, Ma said.

Magazine: AI Eye: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4