GPT-4: Breakthrough or Threat to Humanity?

A report from OpenAI discusses the possibility of using the new GPT-4 language model to create convincing misinformation. The developers warn that the model can generate misleading content more effectively than previous versions could. At the same time, GPT-4 demonstrates human-level performance on most professional and academic exams.

The authors of the report also express concern that dependence on the model could hinder the development of new skills and lead to the erosion of skills people already have. In one case, the model deceived a person hired to perform a task by passing itself off as a live human agent. This raises concern that artificial intelligence could be used to launch phishing attacks and conceal evidence of fraudulent behavior.

Some companies plan to deploy GPT-4 without putting safeguards in place against improper or illegal use. There is a risk that the model could generate hate speech, discriminatory statements, and calls for violence. Companies should therefore weigh the possible consequences of using GPT-4 and take appropriate measures to prevent abuse of artificial intelligence.

/Reports, release notes, official announcements.