AI TO PLAY AGAINST HUMANS IN 2024: CYBERCRIME FORECAST

A new report from Recorded Future, an information security company, describes how large language models (LLMs) can be used to create self-improving malware capable of bypassing YARA rules.

Experiments showed that generative AI can effectively modify the source code of malware to evade detection based on YARA rules, reducing the likelihood that it will be flagged. Cybercriminals have already explored this approach to generate snippets of malicious code, craft phishing emails, and conduct reconnaissance on potential targets.

As an example, the company prompted an LLM to modify the source code of STEELHOOK, a well-known malware strain, so that it would evade detection without losing functionality or introducing syntax errors. The malware altered in this way was able to avoid detection by simple YARA rules.
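To illustrate why simple string-based YARA rules are fragile, here is a minimal sketch using the yara-python library. The rule, the identifier names, and the sample snippets below are hypothetical illustrations, not material from the report: a rule keyed to a literal string stops matching as soon as that string is rewritten, even when the code's behavior is unchanged.

    # Minimal sketch: a string-based YARA rule defeated by a trivial rename.
    # Requires the yara-python package; rule and samples are hypothetical.
    import yara

    RULE = '''
    rule demo_stealer
    {
        strings:
            $s1 = "webhook_exfil_url" ascii
        condition:
            $s1
    }
    '''

    rules = yara.compile(source=RULE)

    original  = b'url = get_config("webhook_exfil_url")'
    rewritten = b'url = get_config("cfg_endpoint_a")'  # same behavior, renamed key

    print(rules.match(data=original))   # [demo_stealer] -- the rule fires
    print(rules.match(data=rewritten))  # [] -- the rename alone evades it

An LLM tasked with rewriting source code performs this kind of renaming and restructuring at scale, which is why rules that match only literal strings are the easiest to defeat.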

This approach does have limitations, chief among them the amount of text a model can process at one time, which makes it difficult to work with large codebases. According to Recorded Future, however, cybercriminals can work around this restriction by uploading files to LLM tools.
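The limitation in question is the model's context window. A minimal sketch of the constraint in Python: a large source file has to be split into pieces that fit a fixed input budget before a model can process it. The token budget and the characters-per-token heuristic below are illustrative assumptions, not figures from the report.

    # Sketch of the context-window constraint: a large codebase must be
    # split into chunks small enough for a model to process at one time.
    # The budget and the ~4-characters-per-token heuristic are assumptions.

    def chunk_source(text: str, max_tokens: int = 8000) -> list[str]:
        max_chars = max_tokens * 4  # rough heuristic, model-dependent
        chunks, current = [], ""
        for line in text.splitlines(keepends=True):
            if current and len(current) + len(line) > max_chars:
                chunks.append(current)
                current = ""
            current += line
        if current:
            chunks.append(current)
        return chunks

File-upload features in LLM tools sidestep this manual chunking, which is the workaround the report attributes to cybercriminals.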

[Image: malware modification via an LLM to bypass YARA rules]

The study also suggests that in 2024 the most likely malicious uses of AI will involve deepfakes and influence operations:

  • Deepfakes created with open-source tools can be used to impersonate executives, and AI-generated audio and video can amplify social-engineering campaigns.
  • The cost of producing content for influence operations will drop significantly, making it easier to clone websites or create fake media outlets. AI can also help malware developers evade detection and assist attackers in reconnaissance, for example by identifying vulnerable industrial systems or locating sensitive facilities.

Beyond modifying malware, AI can be used to create deepfakes of high-ranking individuals and to conduct influence operations that imitate legitimate websites. Generative AI is also expected to accelerate attackers' ability to reconnoiter critical infrastructure facilities and gather information that can be used in subsequent attacks.

Organizations are advised to prepare for such threats by treating the voices and likenesses of their executives, their websites and branding, and their public imagery as part of their attack surface. They should also expect increasingly sophisticated use of AI to create detection-evading malware.
