AI Tools Face Threat from 36 Vulnerabilities

More than three dozen vulnerabilities have been discovered in various open-source artificial intelligence (AI) and machine learning (ML) tools, some of which could allow attackers to execute code remotely and steal data. The issues were reported through Huntr, Protect AI's bug bounty platform, and affect tools such as ChuanhuChatGPT, Lunary, and LocalAI.

The most significant threats include two critical vulnerabilities in Lunary, a production toolkit for large language models. The first, CVE-2024-7474 (CVSS: 9.1), is an insecure direct object reference (IDOR) flaw that enables unauthorized access to other users' data. The second, CVE-2024-7475 (CVSS: 9.1), is an improper access control issue that lets an attacker tamper with the SAML configuration and log in as a different user.

Another Lunary vulnerability, CVE-2024-7473 (CVSS: 7.5), is also an IDOR flaw: it permits attackers to modify other users' prompts by manipulating a parameter in the request.
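
The common thread in the Lunary issues is that a record identifier supplied by the client is trusted without an ownership check. The sketch below is a generic, hypothetical illustration of that IDOR pattern in Python; the function and data names are invented for the example and do not come from Lunary's codebase.

```python
# Hypothetical IDOR illustration (not Lunary's code): the vulnerable handler
# trusts the client-supplied record ID and never checks who owns the record.

PROMPTS = {
    1: {"owner": "alice", "text": "internal prompt for project A"},
    2: {"owner": "bob", "text": "bob's private prompt"},
}

def get_prompt_vulnerable(prompt_id: int, requesting_user: str) -> dict:
    """Returns any prompt by ID; the requesting user is ignored (IDOR)."""
    return PROMPTS[prompt_id]

def get_prompt_fixed(prompt_id: int, requesting_user: str) -> dict:
    """Returns the prompt only if it belongs to the requesting user."""
    prompt = PROMPTS[prompt_id]
    if prompt["owner"] != requesting_user:
        raise PermissionError("prompt does not belong to the requesting user")
    return prompt

if __name__ == "__main__":
    # "alice" can read bob's prompt through the vulnerable path...
    print(get_prompt_vulnerable(2, "alice"))
    # ...but the ownership check in the fixed path rejects the same request.
    try:
        get_prompt_fixed(2, "alice")
    except PermissionError as exc:
        print("blocked:", exc)
```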

ChuanhuChatGPT is also affected by a critical vulnerability, CVE-2024-5982 (CVSS: 9.1), a path traversal flaw in its file upload feature that could lead to arbitrary code execution and unauthorized access to sensitive information.
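
As a rough illustration of how an upload path traversal of this kind can escalate, the hypothetical Python sketch below joins a client-supplied filename directly onto the upload directory, letting "../" sequences escape it. The paths and function names are assumptions made for the example, not ChuanhuChatGPT's actual code.

```python
# Generic path traversal sketch (not ChuanhuChatGPT's code). Requires Python 3.9+
# for Path.is_relative_to.
from pathlib import Path

UPLOAD_DIR = Path("/tmp/uploads")

def save_upload_vulnerable(filename: str, data: bytes) -> Path:
    """Joins the raw filename directly, so '../' sequences escape UPLOAD_DIR."""
    target = UPLOAD_DIR / filename  # e.g. /tmp/uploads/../../somewhere/else
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target

def save_upload_fixed(filename: str, data: bytes) -> Path:
    """Resolves the path and rejects anything outside the upload directory."""
    target = (UPLOAD_DIR / filename).resolve()
    if not target.is_relative_to(UPLOAD_DIR.resolve()):
        raise ValueError("path traversal attempt rejected")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target

if __name__ == "__main__":
    # The hardened variant refuses a traversal-style filename.
    try:
        save_upload_fixed("../../outside.txt", b"payload")
    except ValueError as exc:
        print("blocked:", exc)
```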

Two serious vulnerabilities have been found in LocalAI. The first issue, CVE-2024-6983 (CVSS: 8.8), allows arbitrary code execution through the upload of a malicious configuration file, while the second, CVE-2024-7010 (CVSS: 7.5), makes it possible to guess valid API keys by analyzing server response times (a timing attack).
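
One common way a malicious configuration file turns into code execution is an unsafe deserializer. The snippet below is a generic PyYAML-based illustration of that class of bug, offered only as an assumed mechanism for the example; it is not LocalAI's code or configuration format.

```python
# Generic unsafe-deserialization sketch (not LocalAI's code). Requires PyYAML.
import yaml

MALICIOUS_CONFIG = "!!python/object/apply:os.system ['echo pwned']"

# Unsafe: yaml.unsafe_load can instantiate arbitrary Python objects, so a
# crafted "config" runs attacker-chosen code while it is being parsed.
# yaml.unsafe_load(MALICIOUS_CONFIG)  # would execute `echo pwned`

# Safe: yaml.safe_load only builds plain data types and rejects the payload.
try:
    yaml.safe_load(MALICIOUS_CONFIG)
except yaml.YAMLError as exc:
    print("rejected:", exc)
```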
