
G7 Countries Push AI ‘Code of Conduct’ to Prevent Misuse of Technology

Updated by Ali M.

In Brief

  • G7 nations to establish voluntary 'Code of Conduct' for companies innovating in advanced AI systems.
  • OpenAI forms Preparedness team to manage risks posed by AI models, echoing global call for safety.
  • The 'Code of Conduct' and OpenAI's team represent steps towards harnessing the power of AI responsibly.

The Group of Seven (G7) industrial nations are poised to establish a voluntary ‘Code of Conduct’ for companies innovating in the field of advanced artificial intelligence (AI) systems. This initiative, emanating from the “Hiroshima AI process,” seeks to address potential misuse and risks associated with this transformative technology.

The G7, comprising Canada, France, Germany, Italy, Japan, Britain, and the United States, along with the European Union, initiated this process to set a precedent for AI governance.

G7 Nations Pitch Global AI Code of Conduct

Amid rising privacy concerns and security risks, the 11-point code arrives at a pivotal moment. According to a G7 document, the code of conduct,

“Aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems.”

The code encourages companies to identify, evaluate, and mitigate risks throughout the AI lifecycle. Furthermore, it recommends publishing public reports on AI capabilities, limitations, and usage, emphasizing robust security controls.

Read more: The 6 Hottest Artificial Intelligence (AI) Jobs in 2023

European Commission digital chief Vera Jourova, speaking at a forum on internet governance, said that a Code of Conduct was a strong basis to ensure safety and that it would act as a bridge until regulation is in place.

Ethical concerns surrounding AI systems. Source: State of AI in Enterprise

OpenAI Joins the Cause

OpenAI, the company behind ChatGPT, has also formed a Preparedness team to manage the risks posed by AI models. Spearheaded by Aleksander Madry, the team will address risks such as individualized persuasion, cybersecurity threats, and the propagation of misinformation.

This move is OpenAI’s contribution ahead of the upcoming UK AI Safety Summit, echoing the global call for safety and transparency in AI development.

The UK government defines Frontier AI as,

“Highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”

OpenAI’s Preparedness team will focus on managing these risks, further underscoring the need for a global AI ‘Code of Conduct.’

As AI continues to evolve, the G7’s proactive stance and OpenAI’s commitment to risk mitigation are timely responses. The voluntary ‘Code of Conduct’ and the formation of a dedicated Preparedness team represent significant steps towards harnessing the power of AI responsibly. The aim is to ensure its benefits are maximized while potential risks are effectively managed.

Disclaimer

In adherence to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news article aims to provide accurate, timely information. However, readers are advised to verify facts independently and consult with a professional before making any decisions based on this content.
This article was initially compiled by an advanced AI, engineered to extract, analyze, and organize information from a broad array of sources. It operates devoid of personal beliefs, emotions, or biases, providing data-centric content. To ensure its relevance, accuracy, and adherence to BeInCrypto’s editorial standards, a human editor meticulously reviewed, edited, and approved the article for publication.
