The increasing investment in AI agents suggests a future of widespread automation, potentially even more transformative than the industrial revolution. As with any technological innovation, AI agents are bound to face problems in their development. Continuous improvement will be essential for responsible use and realizing AI agents’ full potential.
At Consensus Hong Kong, BeInCrypto interviewed Andrei Grachev, Managing Partner at DWF Labs, about the key challenges AI agents face in achieving mass adoption and what widespread use might look like.
Traditional Tech Sectors and Web3 Embrace AI
At this point, it’s safe to say that the widespread adoption of artificial intelligence (AI) is all but inevitable. Tech giants including Meta, Amazon, Alphabet, and Microsoft have already announced plans to invest up to $320 billion in AI and data centers in 2025.
During his first week in office, US President Trump announced Stargate, a new private joint venture focused on AI data center development. The venture, composed of OpenAI, SoftBank, and Oracle, plans to build up to 20 large AI data centers across the United States.
The initial investment is estimated to be $100 billion, and expansion plans could bring the total to $500 billion by 2029.
Web3 projects are also making similar investments in AI. In December, DWF Labs, a leading crypto venture capital company, launched a $20 million AI agent fund to accelerate innovation in autonomous AI technologies.
Earlier this month, the NEAR Foundation, which supports the NEAR protocol, also announced its own $20 million fund focused on scaling the development of fully autonomous and verifiable agents built on NEAR technology.
“History shows that everything that can be automated will be automated, and definitely some business and normal life processes will be overtaken by AI agents,” Grachev told BeInCrypto.
But as AI development accelerates, the potential for its misuse becomes a growing concern.
Malicious Use of AI Agents
In Web3, AI agents are already quickly becoming mainstream. They offer diverse capabilities, from market analysis to autonomous crypto trading.
However, their increasing integration also presents critical challenges. AI misuse by malicious actors is a major concern, encompassing scenarios ranging from simple phishing campaigns to sophisticated ransomware attacks.
The widespread availability of generative AI since late 2022 has fundamentally changed content creation while also attracting malicious actors seeking to exploit the technology. This democratization of computing power has enhanced adversary capabilities and potentially lowered the barrier to entry for less sophisticated threat actors.
According to an Entrust report, digital document forgeries facilitated by AI tools now outnumber physical counterfeits, with a 244% year-over-year increase in 2024. Meanwhile, deepfakes accounted for 40% of all biometric fraud.

“It’s already being used for scams. It’s used for video calls when misrepresenting people and misrepresenting their voices,” said Grachev.
Examples of this type of exploitation have already made news headlines. Earlier this month, a finance worker at a multinational company in Hong Kong was tricked into authorizing a $25 million payment to fraudsters using deepfake technology.
The worker attended a video call with individuals he believed to be colleagues, including the company’s chief financial officer. Despite initial hesitation, the worker proceeded with the payment after the other participants appeared and sounded authentic, according to reports. It was later discovered that all attendees were deepfake fabrications.
From Early Adoption to Mainstream Acceptance
Grachev believes such malicious uses are inevitable. He notes that technological development is often accompanied by initial errors, which decrease as the technology matures. Grachev gave two distinct examples to prove his point: the early stages of the World Wide Web and Bitcoin.
“We should remember that the Internet started from the porn sites. It was like the first Bitcoin, which started from drug dealers and then improved,” he said.
Several reports support Grachev’s view. They suggest the adult entertainment industry played a crucial role in the early adoption and development of the Internet. Beyond providing a consumer base, it pioneered technologies such as the VCR, video streaming, virtual reality, and new forms of online communication.
Adult entertainment acted as an onboarding tool. The industry has historically driven consumer adoption of new technologies, and its early embrace of innovations, particularly when they successfully met its audience’s demands, has often led to broader mainstream adoption.
“It started with fun, but fun onboarded a lot of people. Then you can build something on this audience,” Grachev said.
Over time, safeguards were also put in place to limit the frequency and accessibility of adult content. Nonetheless, it remains one of the many services the Internet provides today.
Bitcoin’s Journey From Darknet to Disruption
The evolution of Bitcoin closely mirrors the Internet’s earliest use cases. Bitcoin’s early adoption was significantly associated with darknet markets and illicit activities, including drug trafficking, fraud, and money laundering. Its pseudonymous nature and the ease of global fund transfers made it appealing to criminals.
Despite its continued use in criminal activities, Bitcoin has found numerous legitimate applications. The blockchain technology underpinning cryptocurrencies provides solutions for real-world problems and disrupts traditional financial systems.
Although cryptocurrency and blockchain are still nascent industries, their applications will continue to evolve. According to Grachev, the same will happen with the gradual adoption of AI technology. For him, mistakes must be welcomed so that developers can learn from them and adjust accordingly.
“We should always remember that fraud happens first and then people start thinking about how to prevent it. Of course it will happen, but it is a normal process, it’s a learning curve,” Grachev said.
However, knowing these situations will happen in the future also raises questions about who should be held accountable.
Liability Concerns
Determining responsibility when harm occurs due to an agent’s actions is a complex legal and ethical issue. The question of how to hold AI liable inevitably arises.
The complexity of AI systems creates challenges in determining liability for harm. Their “black box” nature, unpredictable behavior, and continuous learning capabilities make it difficult to apply traditional notions of fault when something goes wrong.
Furthermore, the involvement of multiple parties in AI development and deployment complicates liability assessments, making it difficult to assign culpability for AI failures.
Responsibility could lie with the manufacturer for design or production flaws, the software developer for issues with the code, or the user for not following instructions, installing updates, or maintaining security.
“I think the whole stuff is too new, and I think we should be able to learn from it. We should be able to stop some AI agents if it’s needed. But from my point of view, if there was no kind of bad intention to make it, no one is responsible for it because you are something really new,” Grachev told BeInCrypto.
However, according to him, these situations need to be carefully managed to avoid impacting continuous innovation.
“If you blame this entrepreneur, it would kill innovations because people would be afraid. But if it works in a bad way, right, it could eventually work. We need to have a way to stop it, learn, improve, and relearn,” Grachev added.
The fine line, however, remains razor-thin, especially in more extreme scenarios.
Addressing Trust Issues for Responsible AI Adoption
A common fear when discussing the future of artificial intelligence concerns situations in which AI agents become more powerful than humans.
“There are a lot of movies about it. If we are talking about, let’s say, police or government controls, or some army in some kind of war, of course automation is like a big scare. Some things can be automated to such a large level where they can hurt humans,” said Grachev.
When asked whether such a scenario could happen, Grachev said that, in theory, it could. However, he admitted that he could not know what the future holds.
However, situations like these are emblematic of the fundamental trust issues between humans and artificial intelligence. Grachev says the best way to approach this problem is by exposing humans to use cases where AI can actually be helpful.
“AI can be hard for people to believe. That’s why it should start with something simple, because trust to the AI agent won’t be built when someone explains that it’s trustworthy. People should get used to using it. For example, if you are talking about crypto, you can launch a meme, let’s say on Pump.fun, but why not launch it by voice message? With AI agents, just send a voice message that says ‘please launch this and that,’ and it’s launched. And then the next step [would be] to trust the agent with some more important decisions,” he said.
Ultimately, the journey toward widespread AI adoption will undoubtedly be marked by remarkable advancements and unforeseen challenges.
Balancing innovation with responsible implementation in this developing sector will be crucial for shaping a future where AI benefits all of humanity.
Disclaimer
In compliance with the Trust Project guidelines, this opinion article presents the author’s perspective and may not necessarily reflect the views of BeInCrypto. BeInCrypto remains committed to transparent reporting and upholding the highest standards of journalism. Readers are advised to verify information independently and consult with a professional before making decisions based on this content.
