Ethereum co-founder Vitalik Buterin has expressed his excitement about the potential of AI-assisted formal verification of code and bug finding.
In a Monday tweet, Buterin noted that Ethereum’s most significant technical risk stems from bugs in its code, and that anything that could significantly change the game on that front would be groundbreaking.
“One application of AI that I am excited about is AI-assisted formal verification of code and bug finding. Right now, Ethereum’s biggest technical risk probably is bugs in code, and anything that could significantly change the game on that would be amazing,” he wrote.
Notably, one of the major challenges confronting Ethereum protocols is the exploitation of bugs, which has cost investors millions of dollars in hacking incidents. According to a report by Chainalysis, crypto funds stolen in 2023 reached an astounding $1.7 billion, with approximately two-thirds of the total traced back to hacks targeting DeFi protocols. Although overall thefts have declined compared to previous years, DeFi hacking remains a significant concern.
Meanwhile, although Buterin is optimistic about AI’s potential, he has also urged caution. In a blog post dated January 30, the Ethereum co-founder advised prudence when integrating AI with blockchain technology, underscoring the need for a careful, measured approach, especially when deploying AI in contexts that carry significant value and risk.
Last year, smart contract development firm OpenZeppelin ran an experiment using OpenAI’s GPT-4 to detect security issues in Solidity smart contracts, with mixed results. According to the firm’s report, GPT-4 correctly identified vulnerabilities in 20 out of 28 challenges, but it also sometimes fabricated non-existent vulnerabilities.
Similarly, Kang Li, Chief Security Officer at blockchain security firm CertiK, has cautioned against relying exclusively on AI-powered tools for coding. Speaking at Korean Blockchain Week last September, Li noted that AI tools can introduce more security issues than they resolve if used without caution.
Li further emphasized that ChatGPT, for instance, may not detect logical code bugs as adeptly as experienced developers. Instead, he proposed using AI assistants as support tools for seasoned developers, helping them understand code more effectively.