Using AI for smart contract audits is tempting. The process seems entirely straightforward: just feed the codebase to an AI model and receive the risks listed in a convenient format.
With the rise in popularity of large language models (LLMs) like ChatGPT, people have started using these tools for smart contract development. At first glance, the models can produce very promising results: supplied with a simple prompt and the code of a smart contract, a model can, in seconds, deliver a well-written audit report with outlined security risks, recommendations on how to fix them, contract descriptions, and other information commonly included in audit reports. But is everything really as good as it seems? This guide shares the experience of using LLMs at a blockchain security company for smart contract auditing.
Problems with AI smart contract audits
Constraints of the context window
LLMs have a context window, which works like the model's memory. When you use an LLM for smart contract auditing, both your prompt and the smart contract code must fit into this window, which restricts the amount of code the model can analyze at once. This is usually not an issue with simple token contracts. However, blockchain projects are becoming increasingly complex, often comprising many smart contracts that interact with each other and therefore can't be analyzed separately.
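To see why this matters in practice, you can estimate the token count of a contract before sending it to a model. Below is a minimal sketch in Python, assuming OpenAI's open-source tiktoken tokenizer; the 8,192-token limit (GPT-4's original window) and the MyToken.sol file are illustrative, and real limits vary by model.

```python
# Minimal sketch: estimate whether a prompt plus contract source fits a model's
# context window. Assumes the `tiktoken` tokenizer; the 8,192-token limit is
# GPT-4's original window and varies by model. MyToken.sol is a hypothetical file.
import tiktoken

CONTEXT_WINDOW = 8_192  # tokens; model-dependent

def fits_context(prompt: str, contract_source: str, reserve_for_answer: int = 1_000) -> bool:
    """Return True if the prompt and code leave room for the model's reply."""
    enc = tiktoken.encoding_for_model("gpt-4")
    used = len(enc.encode(prompt)) + len(enc.encode(contract_source))
    return used + reserve_for_answer <= CONTEXT_WINDOW

prompt = "Audit the following Solidity contract and list security risks:"
with open("MyToken.sol") as f:  # hypothetical contract file
    source = f.read()

print("Fits in one request:", fits_context(prompt, source))
```

When a project's combined contracts exceed the window, they must be split across requests, and the model loses sight of how the pieces interact.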
AI is trained on existing attacks
Models are trained on existing data and known vulnerabilities. Whenever a new vulnerability is discovered, the model needs to be updated to detect it, and that can be tricky: most known vulnerabilities are thoroughly studied and well documented, while newly discovered issues don't yet have enough data for proper LLM training.
Current AI models find only simple attacks
Tests conducted by HashEx show that even the most advanced current models, such as GPT-4, Bard, and Claude 2, can only detect simple bugs in a smart contract. Although they may “understand” how a smart contract works, they often struggle to detect whether the contract is ruggable, i.e., whether privileged functions would let the team drain user funds. This is one of the simplest analyses performed on smart contracts, as the naive sketch below illustrates.
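As an illustration only (this is not HashEx's methodology), a crude ruggability check can boil down to spotting owner-privileged functions. The pattern names, regular expressions, and sample contract below are hypothetical and far simpler than what a real analysis involves:

```python
# Naive illustration: flag owner-privileged Solidity patterns that commonly
# enable rug pulls. The patterns and the sample contract are hypothetical and
# much cruder than what a real audit tool would use.
import re

RUG_PATTERNS = {
    "owner-only mint": r"function\s+mint\s*\([^)]*\)[^{]*onlyOwner",
    "owner withdraw": r"function\s+withdraw\w*\s*\([^)]*\)[^{]*onlyOwner",
    "pausable transfers": r"whenNotPaused",
    "modifiable fees": r"function\s+setFee\w*\s*\(",
}

def scan_for_rug_patterns(solidity_source: str) -> list[str]:
    """Return the names of suspicious patterns found in the source."""
    return [name for name, pattern in RUG_PATTERNS.items()
            if re.search(pattern, solidity_source)]

sample = """
contract SampleToken {
    function mint(address to, uint256 amount) external onlyOwner { /* ... */ }
    function setFeePercent(uint256 fee) external onlyOwner { /* ... */ }
}
"""
print(scan_for_rug_patterns(sample))  # ['owner-only mint', 'modifiable fees']
```

Even this kind of basic owner-privilege analysis is something current LLMs handle inconsistently.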
Another problem: if you ask the model to quote the exact string of code in which it found an issue, it may give you a similar piece but not the one you requested. This leads to a further issue with using LLMs as smart contract auditors. It's not always clear how the LLM finds an issue and, therefore, how to “debug” its findings when needed.
Lack of transparency
It's not always clear whether the code you feed to the model fits the context window. As a result, you sometimes don't know whether the model is answering based on all the information you've given it or only part of it. It produces an answer, but you have no idea what source information it used.
How AI could be used in smart contract security
With all the drawbacks of using AI for smart contract audits, one may think using it is a waste of time altogether: it only finds simple bugs, and even those findings may include false positives. But even now, at this early stage of development, AI tools can be very helpful. For example, a model can provide a brief overview of how a contract works and what it is supposed to do.
AI can help users quickly understand how a contract works, though they should keep the possibility of AI hallucinations in mind. These tools can sometimes offer insights into which areas of the code could be vulnerable. And since there is always a risk of human error, even with simple bugs, AI can provide an extra check.
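As one example of this assistive use, the sketch below asks a model for a plain-English overview of a contract. It assumes the official OpenAI Python SDK (v1+) with an OPENAI_API_KEY environment variable set; the model name and file path are illustrative, and the output should be treated as a starting point for manual review, not a verdict:

```python
# Minimal sketch: ask an LLM for an overview of a contract. Assumes the
# official OpenAI Python SDK (v1+) with OPENAI_API_KEY set; the model name
# and file path are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_contract(solidity_source: str) -> str:
    """Request a plain-English summary of what a contract does."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a smart contract reviewer. Summarize what the "
                    "contract does and flag areas worth a closer manual look. "
                    "Say so explicitly if you are unsure."
                ),
            },
            {"role": "user", "content": solidity_source},
        ],
    )
    return response.choices[0].message.content

with open("MyToken.sol") as f:  # hypothetical contract file
    print(summarize_contract(f.read()))
```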
LLMs are also great at writing text, which is their primary purpose. They can help auditors describe the issues they find, something especially useful for non-native English speakers.
Future of smart contract audits with AI
It must be noted that some of the problems described above relate only to using a general-purpose LLM for smart contract audits. Some of these issues, such as “forgetting” the start of the conversation due to the context window size, can be eliminated. It's clear that you can't completely rely on an automated smart contract audit, whether it's powered by AI or any other automated tool. But this doesn't mean these tools are unhelpful.
With all the weaknesses of AI smart contract audits in mind, we can see that even a general-purpose LLM can be quite useful for detecting common smart contract problems. AI will likely become a helpful assistant for manual audits in the future: models can be trained to detect specific vulnerabilities, while other automated tests can significantly lower the risk of a human auditor missing a known vulnerability.
Frequently asked questions
Can AI models perform smart contract audits?
They can produce audit-style reports in seconds, but current models only catch simple, well-known bugs, so they cannot replace a manual audit.
What are the issues with using AI for smart contract audits?
The main problems are limited context windows, training that covers only known vulnerabilities, shallow analysis that misses complex issues, hallucinations and false positives, and a lack of transparency about how findings are produced.
What are the main use cases for AI in smart contract audits?
AI can quickly summarize what a contract does, point auditors to potentially vulnerable areas of code, serve as an extra check against human error, and help write up findings, especially for non-native English speakers.
About the author
Gleb Zykov is the co-founder and CTO of HashEx. Zykov started his career as a software developer at a research institute, where he honed his technical and programming skills developing robots for the Russian Ministry of Emergency Situations. He then brought his expertise to the IT services company GTC-Soft, where he designed Android applications and became the lead developer and CTO. At GTC, Gleb led the development of several vehicle-monitoring services and a premium taxi service similar to Uber.
In 2017, Gleb co-founded HashEx, an international blockchain auditing and consulting company. As the CTO, he heads the development of blockchain solutions and smart-contract audits for the company’s clients.