Restrictions on ChatGPT in Italy
Italy has become the first Western country to ban ChatGPT, the popular artificial intelligence chatbot from OpenAI. The move follows an order from the Garante, Italy’s data protection watchdog, requiring OpenAI to temporarily stop processing Italian users’ data amid an investigation into a suspected breach of Europe’s strict privacy regulations. The Garante cited a data breach at OpenAI that allowed some users to view the titles of other users’ conversations with the chatbot. It also flagged the lack of age verification on ChatGPT and the chatbot’s tendency to provide factually incorrect information in its responses. If OpenAI does not address these concerns within 20 days, it risks a fine of up to €20 million ($21.8 million) or 4% of its annual global turnover, whichever is greater.
Approaches to AI Regulation in the EU, UK, US, and China
EU
The European Union has proposed the European AI Act, which would heavily restrict the use of AI in critical infrastructure, education, law enforcement, and the judicial system. The proposed rules would operate alongside the EU’s General Data Protection Regulation (GDPR). Under the draft rules, ChatGPT is considered a general-purpose AI system that can be deployed in high-risk applications; the European Commission defines high-risk AI systems as those that could affect people’s fundamental rights or safety.
UK
The UK has taken a non-statutory approach to AI regulation. Rather than establishing new legislation, the government has asked regulators in different sectors to apply existing rules to AI. The UK’s proposals set out key principles for companies to follow when using AI in their products, including safety, transparency, fairness, accountability, and contestability. The UK is not proposing restrictions on ChatGPT, or on AI more broadly; instead, it wants to ensure that companies develop and use AI tools responsibly and give users enough information about how and why certain decisions are taken.
US
The United States has yet to enact laws or regulations focused solely on AI, but various government agencies have begun exploring the issue. In 2019, the White House released a set of principles for regulating AI, including promoting innovation and public trust, ensuring fairness, and protecting national security and privacy. However, these principles are voluntary and not enforceable. The US also has several laws and regulations related to data privacy and protection that can apply to AI. For example, the California Consumer Privacy Act (CCPA) requires companies to disclose what personal data they collect, how it is used, and to whom it is sold or shared, and to give California residents the right to opt out of having their data sold.
China
China has become a leader in AI research and development but has faced criticism for its lack of regulation in this area. In 2017, China released a plan to become the world leader in AI by 2030, and the government has invested heavily in the field. China has also developed a social credit system that uses AI and other technologies to monitor and score citizens based on their behavior; critics argue that the system violates privacy and human rights. In 2020, China released a draft set of regulations on AI use, including requirements for transparency, fairness, and data protection. However, the regulations have been criticized as too vague and lacking clear guidelines for enforcement.