Experts Create BlackMamba AI-Powered Polymorphic Malware

The introduction of ChatGPT and other large language models marked a revolution of sorts: code synthesis has become simple, accessible, fast, and free for everyone. Among other things, this powerful and versatile tool can be used to create malware, which in the future could give rise to a new and extremely dangerous class of polymorphic cyberthreats.

Traditional security solutions, such as EDR (Endpoint Detection and Response), rely on multi-layered data analysis to counter some of today's most sophisticated threats. The developers of most automated security tools claim that their products catch new or anomalous behavior patterns virtually every day, but in practice this happens very rarely.

Using new methods of creating malicious software, such as neural-network-based code generation, attackers can combine a number of individually easy-to-detect actions into an unusual mix and effectively evade detection, because antivirus models simply cannot recognize such software as malicious.

The problem will only worsen once artificial intelligence is able to take the helm and conduct cyberattacks entirely on its own, since the methods it chooses may be highly atypical compared to those a human attacker would use. In addition, the staggering speed at which such attacks can be carried out makes the threat even more dangerous.

To demonstrate what AI-based threats are capable of, HYAS specialists created a simple proof of concept: the AI-powered polymorphic malware dubbed BlackMamba.
