PyPI, the Python Package Index, has released a new feature called digital attestations to bolster supply chain security for Python packages. Attestations function as digital signatures that cryptographically link a package published on PyPI to the exact source code it was built from, giving users stronger assurance that a downloaded package has not been tampered with or injected with malicious code. Each attestation proves that a trustworthy build system generated and published the package, starting from its source code on GitHub, providing concrete evidence of a package's origin and authenticity. The feature is already available to projects that publish through the PyPI Trusted Publishers mechanism in GitHub Actions, and a new API lets consumers and installers verify published attestations, paving the way for broader adoption and greater confidence in package provenance across the Python community.
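As a rough illustration of how a consumer might query the new verification API, the sketch below builds a request against PyPI's Integrity API endpoint. The URL pattern and media type here are assumptions based on the announcement; consult PyPI's own documentation before relying on them.

```python
# Hedged sketch: querying PyPI's Integrity API for a file's provenance.
# The endpoint path and Accept header are assumptions, not a verified spec.
import json
import urllib.request

INTEGRITY_URL = "https://pypi.org/integrity/{project}/{version}/{filename}/provenance"


def provenance_url(project: str, version: str, filename: str) -> str:
    """Build the (assumed) Integrity API URL for one distribution file."""
    return INTEGRITY_URL.format(project=project, version=version, filename=filename)


def fetch_provenance(project: str, version: str, filename: str) -> dict:
    """Fetch and decode the attestation bundle for a published file."""
    req = urllib.request.Request(
        provenance_url(project, version, filename),
        headers={"Accept": "application/vnd.pypi.integrity.v1+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (requires network access):
# bundle = fetch_provenance("sampleproject", "4.0.0",
#                           "sampleproject-4.0.0-py3-none-any.whl")
```

A real installer would go further and verify the returned attestation against the publisher's Trusted Publisher identity, rather than merely fetching it.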
Researchers at Protect AI plan to release a free, open-source tool that finds zero-day vulnerabilities in Python codebases with the help of Anthropic’s Claude AI model. The tool uses an LLM to analyze code and flag potential security issues, which could make vulnerability detection faster and more efficient. By surfacing vulnerabilities early in the development cycle, it helps developers fix them before release, and it illustrates how AI can be applied proactively to strengthen the security posture of software applications.
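The general shape of LLM-assisted code auditing can be sketched as a prompt-building step plus a call to a model backend. Everything below is illustrative: the prompt wording and the `llm` callable are hypothetical and are not Protect AI's actual implementation or Claude's API.

```python
# Hypothetical sketch of LLM-assisted vulnerability triage.
# The `llm` parameter stands in for any model backend, e.g. a thin
# wrapper around the Claude API; here it is a caller-supplied callable.
from typing import Callable

AUDIT_PROMPT = (
    "You are a security auditor. Review the following Python code for "
    "vulnerabilities such as injection, path traversal, and unsafe "
    "deserialization. Report each finding with the code involved.\n\n{code}"
)


def build_audit_prompt(source: str) -> str:
    """Wrap a source snippet in the audit instructions."""
    return AUDIT_PROMPT.format(code=source)


def audit(source: str, llm: Callable[[str], str]) -> str:
    """Send one snippet to the model backend and return its report."""
    return llm(build_audit_prompt(source))


# Usage with a stub model standing in for a real LLM call:
report = audit("eval(input())", lambda prompt: "finding: eval of user input")
```

A production tool would add context gathering across files and validation of the model's findings, since LLM output alone can include false positives.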