GENTOO PROJECT BANS AI-GENERATED CHANGES

The Gentoo Linux Council has approved rules prohibiting the acceptance of any content created using AI tools that process natural language queries, such as ChatGPT, Bard, and GitHub Copilot. Such tools must not be used when writing code for Gentoo components, creating ebuilds, preparing documentation, or submitting bug reports.

The main concerns behind the ban on AI tools in Gentoo:

  • Uncertainty about possible copyright violations in content created using models trained on large collections of copyrighted works. The inability to guarantee compliance with licensing requirements in code generated by AI tools is also mentioned: generated code can be considered a derivative work of the code that was used to train the model and that is distributed under specific licenses. Licenses such as GPL, MIT, and Apache require attribution of authorship, and generated code does not fulfill this requirement, which can be treated as a violation of most open licenses. There may also be license-compatibility problems when code generated by models trained on copyleft-licensed code is inserted into projects under permissive licenses.
  • Possible quality problems. The concern is that code or text produced by AI tools may look correct yet contain subtle defects and factual inaccuracies, so using such content without verification can lower the quality of a project. For example, generated code can reproduce errors from the code used to train the model, ultimately leading to vulnerabilities and to missing checks when processing external data (see the sketch after this list). When triaging automatically generated bug reports, developers are forced to waste considerable time analyzing useless reports and double-checking the information in them several times, since the polished presentation inspires trust in the content and creates the impression that the reviewer must have missed or misunderstood something.
  • Ethical issues: copyright questions around model training, the environmental impact of the large energy consumption required to build models, layoffs caused by replacing staff with AI services, degraded service quality after support teams are replaced with bots, and expanded opportunities for spam and fraud.
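As a hypothetical illustration of the "missing checks" concern mentioned above, the C sketch below contrasts a plausible-looking routine that copies externally supplied data without validating its length against one that rejects oversized input. The function names and the scenario are invented for this example and do not come from Gentoo's announcement.

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: assumes attacker-controlled `input` fits in 64 bytes.
     * Generated code often looks like this: syntactically clean,
     * but with no validation of external data. */
    void handle_request_unsafe(const char *input) {
        char buf[64];
        strcpy(buf, input);   /* no length check: buffer overflow risk */
        printf("handling: %s\n", buf);
    }

    /* Safer variant: bound the input before copying. */
    void handle_request_safe(const char *input) {
        char buf[64];
        if (strlen(input) >= sizeof(buf)) {
            fprintf(stderr, "input too long, rejected\n");
            return;
        }
        strcpy(buf, input);   /* length verified above */
        printf("handling: %s\n", buf);
    }

    int main(void) {
        handle_request_safe("short request");
        return 0;
    }

Both versions compile and behave identically on short input, which is exactly why such a defect can pass a casual review.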