Linus Torvalds has accepted a new document into the Linux kernel. It outlines the process for handling security-related bugs, defines a threat model, explains why bugs in the kernel are treated as vulnerabilities, and describes what to do when bugs are found with the help of artificial intelligence (AI). The document was prepared by Willy Tarreau, a seasoned Linux kernel developer known for maintaining several stable branches of the kernel and for his work on HAProxy.
The document is based on agreements reached during discussions of critical kernel vulnerabilities that became known before fixes were published; with the help of AI, working exploits for them were created quickly. The document emphasizes that most security-related bugs should be handled publicly, to engage a wide audience in finding effective solutions. However, a separate private mailing list is recommended for emergency notices about vulnerabilities that are easily exploitable and pose a significant threat to users.
Furthermore, the document encourages public discussion of vulnerabilities found using AI, since such issues are often discovered by several researchers at once. In these cases, the exploit itself should not be disclosed in the public report; instead, the report should note that an exploit exists and can be shared privately with the maintainer on request.
The document also sets out specific rules for submitting reports generated by AI assistants. While such reports have helped uncover bugs in some parts of the code, maintainers sometimes disregard them because of their low quality and inaccuracies. Key requirements for reports created with AI assistance include:
- Brevity, without unnecessary details or embellishments.
- Submission of plain text reports without Markdown tags or unnecessary formatting.
- Understanding the threat model and providing factual information on vulnerabilities.
- Thorough testing of any exploit generated by AI before submitting the report.
- Utilizing AI to develop and test fixes for identified issues.
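As an illustration of these guidelines, a conforming report might look roughly like the sketch below. The subsystem, function name, file location, and kernel version are invented for this example, not taken from the document itself:

```
Subject: out-of-bounds read in example_parse() (found with AI assistance)

An AI-assisted fuzzing session flagged an out-of-bounds read in
example_parse() when the input length field exceeds the buffer size.
Reproduced on v6.12 with the attached reproducer; KASAN reports the
invalid read at example.c:123. A working exploit exists and can be
shared privately with the maintainer on request. A candidate fix
(a bounds check before the copy) has been tested against the
reproducer and is included as a patch.
```

Note how the sketch follows the listed requirements: plain text with no Markdown formatting, brief, built on verifiable facts (a reproducer and a sanitizer report rather than theoretical speculation), with the exploit withheld from the public report and a tested fix offered alongside it.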