GPT-4 Gives the University of Tokyo's Alter3 Humanoid Lifelike Gestures

The University of Tokyo has developed a humanoid robot called Alter3 whose distinctive feature is enhanced non-verbal communication: while conversing with users, the robot can strike various poses, realistically mimicking human behavior.

This was made possible by OpenAI's large language model GPT-4. Alter3 admittedly looks less polished than Tesla's counterpart, but the distinctive approach to its development could help make robots more humanlike and better adapted to society.

Alter3 uses OpenAI's model to generate a wide variety of poses dynamically, without each movement having to be pre-programmed in a database. Everything happens almost in real time.

According to the team's paper, published on the preprint server arXiv, "the ability of Alter3 to respond to the content of the conversation using facial expressions and gestures is significant progress in the field of humanoid robots."

The use of LLMs in robots has traditionally focused on improving basic communication skills and simulating realistic reactions. Researchers are also exploring the technology's potential for helping robots understand and carry out complex instructions, increasing their autonomy and functionality.

For example, a person interacting with Alter3 can give it a command such as "Take a selfie with your iPhone." The robot then queries GPT-4 for the necessary actions, and the language model converts them into Python code that lets the robot perform the required movements.
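The loop described above — natural-language command in, executable motion code out — can be sketched in a few lines. This is a minimal illustration, not Alter3's actual implementation: the GPT-4 call is stubbed with a canned response, and the actuator API (`set_actuator`) is a hypothetical name.

```python
# Sketch of the command -> LLM -> motion-code loop.
# The LLM call is stubbed; in the real system GPT-4 generates the code.

def ask_llm_for_motion_code(command: str) -> str:
    """Stand-in for a GPT-4 call that returns Python motion code."""
    canned = {
        "take a selfie with your iphone": (
            "set_actuator('right_shoulder', 0.8)\n"
            "set_actuator('right_elbow', 0.6)\n"
            "set_actuator('head_tilt', 0.1)\n"
        )
    }
    return canned.get(command.lower(), "")

actuator_state = {}

def set_actuator(name: str, value: float) -> None:
    """Hypothetical low-level API: record a normalized actuator target."""
    actuator_state[name] = max(0.0, min(1.0, value))

def execute_command(command: str) -> dict:
    code = ask_llm_for_motion_code(command)
    # Run the generated code in a restricted namespace that exposes only
    # the actuator API -- a common safety measure for LLM-generated code.
    exec(code, {"__builtins__": {}}, {"set_actuator": set_actuator})
    return dict(actuator_state)

print(execute_command("Take a selfie with your iPhone"))
```

The key design point is that the language model never touches hardware directly: it emits code against a narrow, whitelisted API, so an unexpected completion can at worst set actuator targets, not run arbitrary operations.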

So far, Alter3 can reproduce movements only with its upper body; the lower body remains motionless. The robot, which has 43 actuators controlling its facial expressions and limbs, is the third iteration in the Alter humanoid series, which dates back to 2016.

In previous studies, Alter3 demonstrated the ability to copy human poses using a camera and the OpenPose framework, adjusting its joints to mimic the observed movements.
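Pose copying of this kind boils down to geometry: OpenPose returns 2D keypoints (shoulder, elbow, wrist, and so on), and each robot joint target is derived from the angle formed at the corresponding keypoint. A hedged sketch of that conversion — the angle math is standard, but mapping it onto Alter3's actuators is an assumption:

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) between segments b->a and b->c.

    a, b, c are (x, y) pixel coordinates, e.g. shoulder, elbow, wrist
    from an OpenPose skeleton; the result could drive an elbow actuator.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# Fully extended arm: shoulder, elbow, wrist in a straight line.
print(joint_angle((0, 0), (1, 0), (2, 0)))  # 180.0
# Right-angle bend at the elbow.
print(joint_angle((0, 0), (1, 0), (1, 1)))  # 90.0
```

Repeating this for each joint triple in the skeleton yields a full set of angles the robot can track frame by frame.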

The progress scientists have achieved so far opens new avenues for studying imitation skills in robots, especially with advanced LLMs. Who knows what movements robots will learn to perform once GPT-5 arrives.
