LEADING AI COMPANIES READY TO SHUT DOWN AI

Sixteen of the world's leading AI companies, including Google, Microsoft, IBM and OpenAI, have signed commitments to deactivate their technologies if they prove potentially dangerous. The signing took place at the AI Safety Summit in South Korea.


As part of the summit, the companies adopted new commitments on the safety of advanced AI technology. They will define thresholds for intolerable risk and the measures to be taken if those thresholds are exceeded. If the risks cannot be reduced below the established threshold, the companies undertake not to develop or deploy the relevant models and systems.

Although the commitment sounds promising, the details have not yet been worked out. They are to be discussed at the next AI summit, to be held in early 2025.

The companies that signed the document in Seoul also pledged to:

  • test their advanced AI models;
  • share information;
  • invest in cybersecurity and insider-threat protections to safeguard unreleased technologies;
  • encourage the discovery and reporting of vulnerabilities by third-party researchers;
  • label AI-generated content;
  • prioritize research on the societal risks associated with AI.

The Seoul Declaration was also adopted during the summit. The document stresses the importance of ensuring interoperability between AI governance frameworks, based on a risk-oriented approach, in order to maximize the benefits and address the broad range of risks associated with AI. This is necessary for the safe, reliable and trustworthy design, development, deployment and use of AI.

The session was attended by government representatives from the G7 countries, Singapore and Australia, along with representatives of the UN, the OECD, the EU and industry.
