Each day this week, we’re highlighting a real, no-bullshit, no-hype use case for artificial intelligence in crypto. Today: the potential of using AI for smart contract auditing and cybersecurity, which is so close and yet so far away.
One of the big future use cases at the intersection of artificial intelligence and cryptocurrency is auditing smart contracts and identifying cybersecurity vulnerabilities. There’s just one problem: at the moment, GPT-4 sucks at it.
Coinbase trialed ChatGPT for automated token security reviews earlier this year, and in 25% of cases it incorrectly classified high-risk tokens as low-risk.
James Edwards, lead maintainer of cybersecurity investigator Librehash, believes OpenAI isn’t keen on the bot being used for tasks like this.
“I firmly believe OpenAI has quietly chipped away at some of the bot’s capabilities when it comes to smart contracts, so that people don’t rely on it explicitly to produce a deployable smart contract,” he said, explaining that OpenAI likely doesn’t want to be held responsible for any bugs or exploits.
This is not to say that artificial intelligence has zero capabilities when it comes to smart contracts. AI Eye interviewed Melbourne-based digital artist Rhett Mankind back in May. He knew nothing about creating smart contracts, but through trial and error and multiple rewrites, he was able to get ChatGPT to create a meme coin called Turbo, which later reached a market cap of $100 million.
But as CertiK Chief Security Officer Kang Li points out, while you might get something working with ChatGPT’s help, it’s likely to be riddled with logical code errors and potential vulnerabilities:
“You write something and ChatGPT will help you build it, but because of all these design flaws, it may fail miserably when attackers start showing up.”
As such, it’s definitely not good enough for standalone smart contract auditing, where a tiny mistake could cost a project tens of millions of dollars, although Li says it can be “a useful tool for people doing code analysis.”
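As a rough illustration of Li’s “useful tool for people doing code analysis” framing, here is a minimal sketch, assuming the official OpenAI Python client and a hypothetical contract snippet, of using GPT-4 for a first pass that a human auditor then verifies:

```python
# Minimal sketch: GPT-4 as a first-pass reviewer, not a standalone auditor.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the contract snippet is hypothetical.
from openai import OpenAI

client = OpenAI()

contract_snippet = """
function withdraw(uint256 amount) public {
    require(balances[msg.sender] >= amount);
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;
}
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a smart contract security reviewer. "
                    "List potential vulnerabilities and reference the relevant lines."},
        {"role": "user", "content": contract_snippet},
    ],
)

# The output is a starting point for a human auditor, not a verdict.
print(response.choices[0].message.content)
```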
Richard Ma of blockchain security firm Quantstamp explained that a major problem with GPT-4’s current ability to audit smart contracts is that its training data is too general.
Also read: Real AI use cases in crypto, No. 1 – The best money for AI is crypto
“Because ChatGPT was trained on a lot of servers and there is very little data on smart contracts, it is better at attacking servers than smart contracts,” he explained.
As a result, there is a race to leverage years of data on smart contract vulnerabilities and hacks to train the model so that it can learn to spot them.
“There are some new models that allow you to input your own data, and that’s part of what we’ve been doing,” he said.
“We have a very large internal database of all the different types of vulnerabilities. I started the company over six years ago, and we have been tracking all the different types of hacks, so this data is very valuable for training AI,” he said.
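Quantstamp hasn’t detailed its pipeline, but the general approach Ma describes, feeding a proprietary database of past hacks into a model, usually begins by converting labeled incidents into training examples. The sketch below uses entirely hypothetical field names and file paths:

```python
# Minimal sketch: turning a labeled vulnerability database into JSONL
# fine-tuning examples. The record schema, field names and paths are
# hypothetical; a real pipeline would also deduplicate and redact data.
import json

def to_training_example(record: dict) -> dict:
    """Map one vulnerability record to a prompt/completion pair."""
    prompt = (
        "Audit the following Solidity code and describe any vulnerabilities:\n"
        f"{record['code']}\n"
    )
    completion = (
        f"Vulnerability: {record['vulnerability_type']}. "
        f"Affected lines: {record['affected_lines']}. "
        f"Explanation: {record['explanation']}"
    )
    return {"prompt": prompt, "completion": completion}

with open("vulnerability_db.json") as src, open("train.jsonl", "w") as dst:
    for record in json.load(src):
        dst.write(json.dumps(to_training_example(record)) + "\n")
```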
The race to create AI smart contract auditors has begun
Edwards is working on a similar project and has nearly finished building an open-source WizardCoder AI model that incorporates the Mando Project’s repository of smart contract vulnerabilities. It also uses Microsoft’s CodeBERT pretrained programming language model to help find problems.
Edwards said that in tests so far, artificial intelligence has been able to “review contracts with unprecedented accuracy, far exceeding what was expected and anticipated from GPT-4.”
Much of the work involved building a custom dataset of smart contract vulnerabilities that identifies the lines of code responsible for each one. The next big step was training the model to discover patterns and similarities.
“Ideally, you want the model to be able to piece together connections between functions, variables, context, etc. that a human might not draw when looking at the same data.”
While he admits it’s not yet as good as a human auditor, it can already take a strong first pass to speed up an auditor’s work and make it more comprehensive.
“It’s like how LexisNexis helps a lawyer, except more efficient,” he said.
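Edwards hasn’t published the full training setup, but the combination he describes, a labeled vulnerability dataset plus a pretrained code model such as CodeBERT, can be sketched roughly as follows using the public microsoft/codebert-base checkpoint from Hugging Face; the example snippets and labels are hypothetical placeholders:

```python
# Minimal sketch: fine-tuning CodeBERT to classify code snippets as
# vulnerable or not. Assumes the Hugging Face transformers library and
# PyTorch; the training data here is a toy placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=2  # 0 = clean, 1 = vulnerable
)

# Placeholder examples; a real dataset would hold thousands of labeled snippets.
snippets = [
    "msg.sender.call{value: amount}(\"\"); balances[msg.sender] -= amount;",  # call before state update
    "require(msg.sender == owner); paused = true;",
]
labels = torch.tensor([1, 0])

inputs = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**inputs, labels=labels)  # single illustrative training step
outputs.loss.backward()
optimizer.step()
```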
Don’t believe the hype
Near co-founder Illia Polushkin explains that smart contract vulnerabilities are often weird, niche edge cases with a one-in-a-billion chance of causing a contract to behave in an unexpected way.
But an LLM, which works by predicting the next word, approaches the problem from the opposite direction, Polushkin said.
“Current models are trying to find the statistically most likely outcome, right? When you think about smart contracts or protocol engineering, you need to account for all the edge cases,” he explained.
Polushkin said his programming background meant that, back when Near was focused on artificial intelligence, the team developed programs to try to identify these rare situations.
“It was more formal search procedures around the output of the code, so I don’t think it’s completely impossible, and there are startups now that are really investing in working with code and its correctness,” he said.
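That kind of exhaustive, edge-case-oriented checking is closer to property-based testing and formal methods than to next-word prediction. This isn’t Near’s actual tooling, but as a rough illustration, a property-based test with the hypothesis library can search generated inputs for the rare case that breaks an invariant in a deliberately simplified, hypothetical transfer function:

```python
# Minimal sketch: property-based search for edge cases, in the spirit of
# formal search procedures over a program's behavior. Uses the hypothesis
# library; the transfer logic is a simplified, hypothetical model with a
# planted bug to be discovered.
from hypothesis import given, strategies as st

def transfer(balances: dict, sender: str, receiver: str, amount: int) -> dict:
    """Toy transfer with a subtle bug: for self-transfers, the debit
    overwrites the credit, so funds are silently destroyed."""
    new = dict(balances)
    new[receiver] = new.get(receiver, 0) + amount
    new[sender] = balances.get(sender, 0) - amount
    if new[sender] < 0:
        raise ValueError("insufficient balance")
    return new

@given(start=st.integers(min_value=0, max_value=10**18),
       amount=st.integers(min_value=0, max_value=10**18),
       receiver=st.sampled_from(["alice", "bob"]))
def test_total_supply_is_conserved(start, amount, receiver):
    balances = {"alice": start, "bob": 0}
    try:
        new = transfer(balances, "alice", receiver, amount)
    except ValueError:
        return
    # Invariant: transfers move funds; they never create or destroy them.
    assert sum(new.values()) == sum(balances.values())
```

Run under pytest, hypothesis should quickly shrink to the self-transfer case (receiver equal to sender with a nonzero amount) where the supply invariant fails, exactly the kind of rare path a purely statistical model might never surface.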
But Polushkin believes artificial intelligence won’t be as good as humans at auditing “for the next few years. It’s going to take a little bit longer.”
Also read: Real AI use cases in crypto, No. 2 – AIs can run DAOs
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and a reporter for Melbourne Weekly.
Follow the author @AndrewFenton