Amid the growing popularity of emerging AI tools such as ChatGPT, the Group of Seven (G7) countries recently reaffirmed their commitment to adopting "risk-based" AI regulations, as reported by Reuters. The digital ministers of the G7 nations said in a joint statement that such regulation helps ensure "an open and enabling environment for AI development and deployment that is grounded in human rights and democratic values."
Italy, a G7 member, had blocked ChatGPT over privacy concerns relating to the model; the ban was lifted on Friday.
Created by OpenAI and backed by Microsoft, ChatGPT is a sibling model to InstructGPT, which is trained to follow instructions in a prompt and provide detailed responses.
G7 ministers stressed the importance of international discussions on AI governance and interoperability between AI governance frameworks, while “recognising that like-minded approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members.”
“Tools for trustworthy AI, such as regulatory and non-regulatory frameworks, technical standards and assurance techniques, can promote trustworthiness and can allow for the comparable assessment and evaluation of AI systems,” the joint statement said.