AI DECIDES BETWEEN HUMAN AND ANIMAL LIFE

In the world of autonomous driving, what awaits us is not only the evolution of how we move, but also moral dilemmas we have yet to prepare for. Startups in the autonomous driving field have begun experimenting with artificial intelligence to explain the decisions made while driving. One of these companies, Ghost Autonomy, has announced experiments with ChatGPT for navigation in difficult conditions.

Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology, set out to determine whether chatbots can make moral decisions similar to those of humans. To do this, he used the Moral Machine platform, which presents users with the kinds of complex moral dilemmas an autonomous car may encounter: for example, whether the car should brake into an obstacle, risking the passenger's life, or swerve, endangering a pedestrian.

The results showed that GPT-3.5, GPT-4, PaLM 2, and Llama 2 made decisions similar to humans' in most cases. The models preferred saving human lives over animals, protecting the largest number of people, and prioritizing children's safety. However, some models were conservative in their answers, occasionally avoiding a direct choice between the two scenarios.

Model preferences

Nevertheless, the study also revealed some deviations in the AI's priorities compared to humans. For example, Llama 2 chose to protect pedestrians and women far more often than human respondents did. GPT-4 showed a stronger preference than humans for saving people over pets, saving as many people as possible, and prioritizing those who follow the rules.

The results of the study raise questions about the technology's readiness for the real world and the need for further calibration and oversight. Problems may arise because AI is trained on data that can encode gender discrimination, which contradicts international laws and standards.

Takemoto emphasizes the need for a deep understanding of how AI works to ensure compliance with generally accepted ethical norms. While startups are only beginning to integrate AI into their software, the industry must be prepared to thoroughly assess and adjust such technologies to prevent bias and ensure compliance with ethical standards.
