Confidential Data Saved by AI Assistants Leaks into Public Git Repositories

With the growing popularity of AI assistants among developers, leaks of confidential data into public Git repositories have become more frequent: sensitive information settles in the files that AI tools save inside a project's working tree. The AI assistants Claude Code, Cursor, Continue, Aider, OpenAI Codex, Copilot, Sourcegraph Cody and Amazon Q create local configuration files and directories in the project root that store, among other things, the history of operations and context data.

The files saved by AI assistants may include API access keys, DBMS connection strings, links to internal resources and credentials for cloud environments, obtained by the assistant while executing instructions, working with local settings or processing project-related context. For a developer, the presence of such data in the project's file hierarchy is not obvious, so many forget to list the directories created by AI assistants in the .gitignore file and, after publishing their changes, push them to a public Git repository.
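As an illustration, a settings file left behind by an assistant can look roughly like the excerpt below. The schema differs between tools and versions, so the field names and values here are purely illustrative assumptions, not the actual format of any specific assistant:

    {
      "env": {
        "DATABASE_URL": "postgres://admin:EXAMPLE_PASSWORD@db.internal:5432/prod",
        "AWS_SECRET_ACCESS_KEY": "EXAMPLE-ONLY-NOT-A-REAL-KEY"
      }
    }

Once such a file is committed and pushed, the secrets remain in the repository history even if the file is deleted in a later commit.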

To identify such leaks in public repositories on GitHub, the claudleak utility has been developed. A test scan of GitHub showed that approximately 2.4% of the repositories containing subdirectories with AI assistant settings hold live keys or credentials whose validity was confirmed by a separate check. The utility's author ran into the problem himself when he noticed a .claude/settings.local.json file in his own repository that contained, among other things, access keys and passwords passed through environment variables.
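The internals of claudleak are not described here, but the same class of leak can be caught locally before a push. As a minimal sketch, assuming the standard directory names used by these assistants and a rough regular expression for likely secrets, a pattern scan of the working tree might look like this:

    # Scan the AI assistant directories in the current project for
    # strings that look like credentials (rough heuristic; expect
    # both false positives and misses).
    grep -rniE '(api[_-]?key|secret|password|token)[[:space:]]*[:=]' \
        .claude/ .cursor/ .continue/ .copilot/ .aider/ 2>/dev/null

A hit does not necessarily mean a live credential, but it is a signal to review the file before it reaches a public repository.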

Developers using AI assistants are advised to add the .claude/, .cursor/, .continue/, .copilot/ and .aider/ directories to the .gitignore file. These directories can also be ignored across all repositories through a global exclude list configured with the command "git config --global core.excludesfile file_with_list".
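In concrete terms, the recommendation boils down to the following commands. The ~/.gitignore_global path is an arbitrary choice for this sketch; any file passed to core.excludesfile works:

    # Ignore AI assistant directories in the current repository.
    printf '%s\n' '.claude/' '.cursor/' '.continue/' '.copilot/' '.aider/' >> .gitignore

    # Or ignore them in every repository on the machine via a
    # global exclude list.
    printf '%s\n' '.claude/' '.cursor/' '.continue/' '.copilot/' '.aider/' >> ~/.gitignore_global
    git config --global core.excludesfile ~/.gitignore_global

Note that .gitignore only prevents future additions: files already committed must also be removed from the index (git rm --cached) and any exposed credentials rotated.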
