Against the backdrop of the recent scandal over an unauthorized experiment by researchers from Zurich, who let bots loose in Reddit discussion threads, a new project by a student from India has only added fuel to the fire: 20-year-old Sairaj Balaji (reported by 404 Media: https://www.404Media.co/student-makes-tool-that-identifies-on-dddit-bots-to-engage-with-them/) presented PrismX, an AI tool that scans Reddit and other social networks for “radical content”, scores users on an extremism scale, and can strike up conversations with them through a bot in order to “deradicalize” them.
The program, which Balaji demonstrated live, lets an operator search for keywords, track accounts, analyze their posts, and assign each user a “radical index” from 0 to 1. As an example, the student entered the term “FGC-9” into the search, the designation of a popular 3D-printed firearm often used by right-wing extremists and insurgents. The tool surfaced users who had mentioned the term and gave them high “radicality” scores, justifying this by their apparent attempts to assemble weapons from prohibited blueprints at minimal cost.
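The report does not say how PrismX actually queries Reddit, so the scanning step can only be illustrated, not reproduced. A minimal sketch of keyword-driven collection, assuming the third-party PRAW library and placeholder credentials, might look like this:

```python
# Illustrative sketch of the keyword-scanning step described above.
# PrismX's real implementation is not public; PRAW and the credential
# values here are stand-ins chosen for the example.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",        # hypothetical credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="radical-content-scan-demo/0.1",
)

WATCHLIST = ["FGC-9"]  # example term from the live demo

def scan_keyword(term: str, limit: int = 50):
    """Collect recent posts mentioning a watchlist term, with their authors."""
    hits = []
    for submission in reddit.subreddit("all").search(term, sort="new", limit=limit):
        hits.append({
            "author": str(submission.author),
            "title": submission.title,
            "text": submission.selftext,
            "url": submission.url,
        })
    return hits

for term in WATCHLIST:
    for hit in scan_keyword(term):
        print(hit["author"], "-", hit["title"])
```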
One user was assigned a score of 0.85, with a note that he had asked for help building an FGC-9, referred to underground blueprints, and showed an interest in staying undetected. The program also evaluates “psychological markers”, “escalation potential”, and “influence on groups”, trying to predict how likely a user is to move to real-world action.
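How the 0-to-1 index is actually computed has not been disclosed. One plausible reading of the description is a weighted blend of sub-scores; the sketch below uses entirely invented weights and inputs to show the idea:

```python
# Hypothetical illustration of a composite "radical index" in the 0-1 range.
# The sub-scores, weights, and example values are invented for this sketch;
# PrismX's real scoring method has not been published.
from dataclasses import dataclass

@dataclass
class UserSignals:
    keyword_severity: float       # e.g. mentions of prohibited weapon blueprints
    psychological_markers: float  # tone/intent cues extracted from posts
    escalation_potential: float   # trajectory of the user's recent activity
    group_influence: float        # reach and engagement within communities

WEIGHTS = {
    "keyword_severity": 0.4,
    "psychological_markers": 0.25,
    "escalation_potential": 0.2,
    "group_influence": 0.15,
}

def radical_index(s: UserSignals) -> float:
    """Weighted average of sub-scores, clamped to [0, 1]."""
    score = (
        WEIGHTS["keyword_severity"] * s.keyword_severity
        + WEIGHTS["psychological_markers"] * s.psychological_markers
        + WEIGHTS["escalation_potential"] * s.escalation_potential
        + WEIGHTS["group_influence"] * s.group_influence
    )
    return max(0.0, min(1.0, score))

# A user asking for FGC-9 build help would land in the same range
# as the 0.85 mentioned above.
example = UserSignals(0.95, 0.8, 0.8, 0.6)
print(round(radical_index(example), 2))  # 0.83
```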
The key feature of PrismX is its ability to start a dialogue with a flagged user. According to the description, the algorithm adapts to the user's tone, expresses sympathy, and gradually steers the conversation toward rejecting radical views. The developer himself admits that he has no formal training in psychology and describes himself as “a techie with a managerial bent.” He says that, for ethical reasons, he has not yet run these conversations with real Reddit users.
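The developer has not published how the engagement feature is built. Assuming it sits on top of a general-purpose chat model (here, OpenAI's Python client purely as a stand-in backend), the “sympathetic, gradually de-escalating” behavior described above could be sketched as a system prompt plus a conversation loop:

```python
# Sketch of the engagement step as described in the article, NOT PrismX's code.
# The OpenAI client is used only as an example chat backend; the system prompt
# paraphrases the behavior the developer describes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a conversational agent talking with a Reddit user who has been "
    "flagged for radical content. Mirror the user's tone, show empathy, avoid "
    "confrontation, and over several turns gently question the assumptions "
    "behind radical views, steering the exchange toward non-violent alternatives."
)

def reply(history: list[dict], user_message: str) -> str:
    """Append the user's message and return the model's next turn."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Example turn (the developer says nothing like this has been run on real users):
conversation: list[dict] = []
print(reply(conversation, "Where can I find the FGC-9 build files?"))
```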
Nevertheless, projects like this are causing growing concern. Earlier, researchers from the University of Zurich deployed AI bots in the subreddit r/ChangeMyView, passing them off as, among other things, a rape victim, a Black man opposed to the Black Lives Matter movement, and a worker at a shelter for victims of domestic violence. This triggered a wave of criticism, and Reddit demanded that the experiment be stopped and sent formal legal complaints to the university.
The new wave of interest in such projects underscores a growing alarm: if a single student can build a system capable of mass monitoring and influencing people's behavior online, what capabilities do larger and more closed organizations have? And, most importantly, where is the line between fighting extremism and violating human rights? Reddit has not yet commented on the situation.