The Urgent Need for Enhanced Security Practices in the AI Sector
As the race among AI companies heats up, a concerning trend has emerged: many organizations are neglecting foundational cybersecurity hygiene, leaving significant vulnerabilities exposed. A recent report by Wiz found that 65% of the top 50 AI firms analyzed had leaked verified secrets on platforms such as GitHub. This poses severe risks, with potentially damaging ramifications for the companies and their clients.
Exposed Secrets: A Growing Concern
Among the types of sensitive information at risk, API keys, tokens, and credentials have been found hidden within code repositories—often overlooked by standard security tools. Glyn Morgan, Country Manager for UK&I at Salt Security, aptly describes these lapses as preventable errors that clearly demonstrate a lack of effective governance and security configurations. According to Morgan, accidentally exposing API keys presents "a glaring avoidable security failure" that invites attackers to exploit otherwise secured systems.
Supply Chain Security Risks
Wiz’s report sheds light on the expanding complexities of supply chain security risks in the AI landscape. Given the increasing collaboration between established enterprises and AI startups, companies may inadvertently inherit the security flaws of their partners. The leaks discovered could expose organizational structures, sensitive training data, and even proprietary AI models.
The financial stakes are substantial: the affected firms have a combined valuation exceeding $400 billion. Notable exposures include:
- LangChain, which exposed multiple Langsmith API keys, some granting access to manage the organization and list its members.
- An enterprise-tier API key for ElevenLabs was discovered simply sitting in a plaintext file, creating a significant vulnerability.
- An unnamed AI 50 company had a HuggingFace token exposed in a deleted code fork, allowing access to around 1,000 private models. This same company also leaked keys for WeightsAndBiases, putting training data at risk.
Inadequacies of Traditional Security Scanning
Wiz’s findings emphasize that conventional security scanning techniques are no longer sufficient. The report criticizes the reliance on basic scans of a company’s main GitHub repositories as a "commoditized approach," which fails to uncover critical risks.
Researchers liken the situation to an iceberg, where the most visible issues represent only a fraction of the actual risks. To effectively identify hidden threats, Wiz implemented a unique three-dimensional scanning methodology known as Depth, Perimeter, and Coverage:
Depth
The depth scan scrutinizes the entire commit history, including forks, deleted code, and workflow logs—areas typically ignored by standard scanners.
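As a hedged illustration of why scanning only the current working tree misses secrets, the sketch below builds a throwaway git repository, commits a fake key, deletes it, and then recovers it from the commit history. The `sk-`-style key format and all helper names here are illustrative assumptions, not Wiz's actual tooling.

```python
import re
import subprocess
import tempfile
from pathlib import Path

# Illustrative pattern only; real scanners match many provider-specific formats.
SECRET_RE = re.compile(r"sk-[A-Za-z0-9]{20,}")

def git(repo: Path, *args: str) -> str:
    """Run a git command in `repo` with a throwaway committer identity."""
    return subprocess.run(
        ["git", "-C", str(repo), "-c", "user.email=dev@example.com",
         "-c", "user.name=dev", *args],
        capture_output=True, text=True, check=True,
    ).stdout

def scan_history(repo: Path) -> set[str]:
    """Scan the patch of every commit, not just the current working tree."""
    found: set[str] = set()
    for commit in git(repo, "rev-list", "--all").split():
        # `git show` prints the diff a commit introduced, so a secret is
        # still visible here even after a later commit deletes the file.
        found.update(SECRET_RE.findall(git(repo, "show", commit)))
    return found

with tempfile.TemporaryDirectory() as tmp:
    repo = Path(tmp)
    git(repo, "init")
    (repo / "config.env").write_text("API_KEY=sk-abcdefghij0123456789\n")
    git(repo, "add", "-A")
    git(repo, "commit", "-m", "add config")
    (repo / "config.env").unlink()  # the "fix": delete the leaked file...
    git(repo, "add", "-A")
    git(repo, "commit", "-m", "remove config")
    leaked = scan_history(repo)
    print(leaked)  # ...but the key is still recoverable from history
```

A working-tree-only scan of this repository would come back clean, which is exactly the blind spot the depth dimension targets.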
Perimeter
The perimeter scanning encompasses not only the primary company organization but also its members and contributors. Individual contributors may inadvertently expose company-related secrets in their own public repositories. Wiz’s team utilized techniques to trace code contributors and identify related accounts on platforms like HuggingFace and npm.
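A minimal sketch of the perimeter idea, assuming GitHub's public REST endpoints for organization members and user repositories. The fetcher is injected so the logic runs without network access, and the org and user names are hypothetical, not Wiz's actual pipeline.

```python
from typing import Callable

# Real GitHub REST endpoints (api.github.com); a fetcher is injected so the
# enumeration logic can be exercised offline with canned responses.
MEMBERS_URL = "https://api.github.com/orgs/{org}/public_members"
REPOS_URL = "https://api.github.com/users/{user}/repos"

def perimeter_repos(org: str,
                    fetch_json: Callable[[str], list[dict]]) -> list[str]:
    """List public repos owned by an org's members -- the 'perimeter' where
    contributors may accidentally publish company-related secrets."""
    repos: list[str] = []
    for member in fetch_json(MEMBERS_URL.format(org=org)):
        for repo in fetch_json(REPOS_URL.format(user=member["login"])):
            repos.append(repo["full_name"])
    return repos

# Offline demo with canned, hypothetical API responses.
canned = {
    MEMBERS_URL.format(org="example-ai"): [{"login": "alice"}, {"login": "bob"}],
    REPOS_URL.format(user="alice"): [{"full_name": "alice/side-project"}],
    REPOS_URL.format(user="bob"): [{"full_name": "bob/dotfiles"}],
}
result = perimeter_repos("example-ai", canned.get)
print(result)  # ['alice/side-project', 'bob/dotfiles']
```

Each repository found this way would then be fed into the same history-deep secret scan, and the member handles could likewise be checked against other registries such as HuggingFace and npm, as the report describes.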
Coverage
Finally, the coverage aspect of the scan focuses on identifying new AI-related secret types often overlooked by conventional detection methods, such as keys for services like WeightsAndBiases, Groq, and Perplexity.
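The coverage idea can be sketched as a set of provider-specific patterns layered on top of a generic scanner. The shapes below are illustrative assumptions (HuggingFace tokens do begin with `hf_`; the Groq and WeightsAndBiases shapes here are guesses, not published specifications):

```python
import re

# Illustrative patterns; real key formats vary and should be confirmed
# against each provider's own documentation before use.
AI_SECRET_PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "groq": re.compile(r"\bgsk_[A-Za-z0-9]{20,}\b"),   # assumed shape
    "wandb": re.compile(r"\b[a-f0-9]{40}\b"),          # assumed: 40 hex chars
}

def classify_secrets(text: str) -> dict[str, list[str]]:
    """Return matches per provider for AI-specific key shapes that
    generic scanners often lack rules for."""
    return {name: pat.findall(text)
            for name, pat in AI_SECRET_PATTERNS.items()
            if pat.findall(text)}

sample = "token = hf_" + "x" * 34 + "\napi = gsk_" + "y" * 24
found = classify_secrets(sample)
print(sorted(found))  # ['groq', 'huggingface']
```

The point is not the specific regexes but the rule set itself: a scanner whose patterns stop at the well-known cloud providers will walk straight past keys for newer AI services.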
Security Maturity Gaps
The expanded attack surface revealed in the report is alarming, particularly given the evident lack of security maturity at many fast-moving companies. When Wiz attempted to disclose the leaks, nearly half of its notifications went unanswered or failed to reach the intended recipients; many firms lack an official disclosure channel at all, indicating poor responsiveness to security issues.
Immediate Action Items for AI Firms
Wiz offers critical recommendations for enterprise technology executives to bolster their security infrastructures amid emerging risks. These include:
- Incorporating Security Awareness During Onboarding: Security leaders should treat employees as part of the organization’s attack surface. A well-defined Version Control System (VCS) member policy should be established during onboarding, including mandatory multi-factor authentication for personal accounts and a strict separation between personal and official activity on platforms like GitHub.
- Evolving Internal Secret Scanning: Companies need to move beyond basic repository checks and adopt the Depth, Perimeter, and Coverage approach to uncover hidden threats.
- Extending Scrutiny to the AI Supply Chain: When evaluating tools from AI vendors, Chief Information Security Officers (CISOs) should ask about the vendors’ secret management and vulnerability disclosure practices. The report notes a trend of AI service providers leaking their own API keys, so detection of their specific secret types should be prioritized.
The current landscape underscores a critical reality: the tools and technologies driving the next generation of innovation are often developing at a pace that outstrips security governance. For leaders in the AI sector, speed must not come at the expense of security; for the enterprises relying on this rapid progress, the same caution applies.
By addressing gaps in security practices and enhancing governance measures, the AI community can better protect its assets and maintain trust with its users and partners.

