China’s internet watchdog will censor the information that artificial intelligence (AI) products draw on and hold developers responsible for their outputs as the nation battles for AI supremacy.
The Cyberspace Administration of China says organizations will have 10 days to register generative AI products after launch under regulations the government will finalize this month.
Will China’s Censorship Hurt Its Bid for AI Supremacy?
However, Beijing must resolve a conundrum: innovating under Communist-style censorship could blunt the usefulness of its AI tools.
An early draft of the new regulations said AI efforts should “embody core socialist values” and foster national unity. Time will tell whether companies consider compliance feasible as friendlier regimes compete with China for AI supremacy. Hong Kong professor Angela Zhang said companies must filter out non-compliant data or risk severe penalties.
Large language models behind generative AI tools consume text from the internet to create humanlike digital content in response to user prompts. Controversy surrounds which information these models should be allowed to access.
Recently, Elon Musk called scraping by AI bots a “galling” practice. The US Recording Academy has also barred artificially generated, non-human content from winning its awards.
On the flip side, China’s new rules hold developers almost solely responsible for the outputs their large language models produce. Recently, Chinese companies Baidu and Alibaba released generative tools that didn’t violate communist ideals.
Western Regulators Express Similar Concerns
Last month, UK Labour spokesperson Lucy Powell called for regulations requiring AI product developers to acquire a license. Prime Minister Rishi Sunak proposed laws modeled on the framework of the European Organization for Nuclear Research (CERN).
Meanwhile, European companies, including Renault and Airbus, have opposed the European Parliament’s draft AI rules. They argue that the proposed regulations threaten the development of foundation models without addressing AI’s risks.
While some believe AI could overpower humanity within roughly two years, its more pressing threats include the societal fallout from the spread of disinformation.
Across the Atlantic, Sam Altman, the CEO of ChatGPT creator OpenAI, has lobbied the US Congress for new regulations to address AI’s future rather than present risks. However, the technology has already forced financial institutions to rethink their approach to trading after a fake image of a Pentagon explosion tanked the S&P 500 in 20 minutes.