Orestas Research Issue #01
A deep dive on vendor security in the age of AI, and more.

Research Deep Dive
Vendor Security in the Age of AI
There is an aspect of AI that most people have not considered: how does the security relationship between companies and their vendors change in the age of AI?
As expected, most corporations and large companies will approach integrating AI into their services and offerings cautiously and slowly. This is understandable. Most of these companies do not have internal models or full-fledged AI teams, and they cannot simply stand up a comprehensive AI department today; that takes years to develop. Consequently, most of them will rely on third-party companies and SaaS solutions to build upon. Both financially and in terms of timing, this is the right decision.
78% of global companies and 82% in some regions are already using or actively exploring AI within their operations. Meanwhile, the global AI market is projected to reach $1.85 trillion by 2030.
Under this pressure, companies that have not invested years in AI will try to accelerate the process to avoid being left behind. That means many security flaws are likely to be overlooked or misunderstood, since much of what comes with AI is still unknown. This becomes evident when you consider how rapidly attackers and malware are evolving with AI. These companies’ security teams will need to expand quickly to manage both the swift transition to AI and its impact on internal systems.
Organizations frequently leave AI-related API access keys visible in their code repositories and commit histories. Orca’s report reveals that 20% of companies had exposed OpenAI keys, 35% had leaked API keys for Hugging Face’s machine learning platform, and 13% had exposed keys for Anthropic, the creator of the Claude language models.
Orca Security
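Exposed keys of this kind are typically found with simple pattern matching over a repository’s full history, because a key that was committed and later deleted still lives in old diffs. Below is a minimal sketch of that idea in Python. The key prefixes (sk-, sk-ant-, hf_) are publicly documented, but the regexes and redaction logic are simplified assumptions; production scanners such as gitleaks or trufflehog are far more thorough.

```python
import re
import subprocess

# Rough token patterns for common AI providers. The prefixes are publicly
# documented; everything else about these regexes is a simplification.
KEY_PATTERNS = {
    "Anthropic": re.compile(r"sk-ant-[A-Za-z0-9_\-]{20,}"),
    # Negative lookahead so Anthropic keys are not double-reported as OpenAI.
    "OpenAI": re.compile(r"sk-(?!ant-)[A-Za-z0-9]{20,}"),
    "Hugging Face": re.compile(r"hf_[A-Za-z0-9]{20,}"),
}

def scan_git_history(repo_path: str = ".") -> list[tuple[str, str]]:
    """Grep every patch ever committed, not just the current tree."""
    # `git log -p --all` replays the full diff history, so a key that was
    # committed and later "deleted" still appears in an old hunk.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout

    findings = []
    for provider, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(log):
            findings.append((provider, match.group()[:10] + "..."))  # redact
    return findings

if __name__ == "__main__":
    for provider, redacted in scan_git_history():
        print(f"[!] possible {provider} key in history: {redacted}")
```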
How can corporations collaborate with third-party vendors while still maintaining and conducting their security compliance and data protection in the age of AI?
Corporations can effectively collaborate with third-party vendors in the age of AI by implementing a multilayered risk-management approach that includes pre-contract due diligence, clear contractual obligations, and continuous monitoring. Initially, firms should thoroughly assess vendor AI usage—how data is processed and stored and whether models are trained on client data—ensuring transparency on model architecture, data retention, and incident response protocols. These considerations must be embedded within contracts as enforceable clauses covering access controls, encryption standards, breach notification timelines, and compliance certifications like ISO 27001 or GDPR. Once partnerships are underway, organizations should deploy AI-enabled vendor risk management platforms to automate ongoing risk assessments, perform continuous monitoring, and dynamically re-score vendor trust levels—enabling rapid identification and mitigation of emerging issues.
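To make “dynamically re-score vendor trust levels” concrete, here is a toy scoring function. The signals, weights, and thresholds are illustrative assumptions rather than an industry-standard model; only the 72-hour breach-notification window comes from GDPR.

```python
from dataclasses import dataclass

# A toy vendor trust score. Signal names and weights are invented for
# illustration; a real platform would combine far more inputs.
@dataclass
class VendorAssessment:
    name: str
    has_iso_27001: bool
    trains_models_on_client_data: bool
    breach_notification_sla_hours: int   # from the contract
    open_monitoring_findings: int        # from continuous monitoring

def trust_score(v: VendorAssessment) -> float:
    """Return 0-100; re-run whenever monitoring produces new findings."""
    score = 100.0
    if not v.has_iso_27001:
        score -= 20
    if v.trains_models_on_client_data:
        score -= 30   # hardest risk to unwind after the fact
    if v.breach_notification_sla_hours > 72:
        score -= 15   # GDPR expects breach notification within 72 hours
    score -= min(v.open_monitoring_findings * 5, 25)
    return max(score, 0.0)

vendor = VendorAssessment("ExampleAI", has_iso_27001=True,
                          trains_models_on_client_data=False,
                          breach_notification_sla_hours=24,
                          open_monitoring_findings=2)
print(trust_score(vendor))  # 90.0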
How can vendors or small companies adjust their security postures to ensure they align with these large organizations they work with and pass the required security audits?
In the AI era, vendors face heightened scrutiny as corporations race to integrate AI quickly and safely. To meet the rigorous security and compliance benchmarks of their larger customers, small vendors must embed robust, audit-ready controls into their operations, and automation is critical. Platforms like Vanta offer a compelling solution: over 4,000 businesses use it to automate up to 90% of evidence collection for SOC 2 and ISO 27001, freeing teams to focus on risk rather than paperwork. Vanta’s Vendor Risk Management module alone cuts time spent on vendor reviews by up to 50%, reduces evidence collection delays by 62%, and boosts productivity by 54% through AI-powered document parsing and continuous monitoring. Vanta is just one of the tools vendors can lean on during this technology shift. By proactively adopting such platforms, small vendors can demonstrate AI-aware compliance and security rigor, simplifying audits, sealing partnerships, and competing in the AI-driven marketplace.
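In practice, “automated evidence collection” usually means a scheduled job that tests a control and stores a timestamped record an auditor can review. The sketch below checks S3 bucket encryption with boto3 as a generic example of such a check; it is not Vanta’s API, and the evidence schema is a made-up illustration.

```python
import json
import datetime
import boto3
from botocore.exceptions import ClientError

# One automated evidence check: test a control, then emit a timestamped
# record. Generic sketch only -- not any vendor's actual API or schema.
def collect_s3_encryption_evidence() -> dict:
    s3 = boto3.client("s3")
    evidence = {
        "control": "S3 buckets enforce server-side encryption",
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "results": [],
    }
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            # Raises ClientError if no encryption configuration exists.
            s3.get_bucket_encryption(Bucket=name)
            evidence["results"].append({"bucket": name, "encrypted": True})
        except ClientError:
            evidence["results"].append({"bucket": name, "encrypted": False})
    return evidence

if __name__ == "__main__":
    # A real platform would run this on a schedule and push the record
    # into an evidence store for the next audit.
    print(json.dumps(collect_s3_encryption_evidence(), indent=2))
```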
In summary, as AI adoption accelerates, both corporations and vendors must rethink their security responsibilities. While large organizations need to manage third-party risks with greater precision, small vendors must rise to the challenge by embedding strong, automated compliance practices. The age of AI doesn’t just demand innovation and speed; it demands trust, and that trust begins with secure, transparent collaboration.
Briefs
Dangerous vulnerability in GitLab Ultimate Enterprise Edition
The 20 biggest data breaches of the 21st century
China-linked hackers target cybersecurity firms and governments in global espionage campaign
Managing the Rising Security Risks of Non-Human Identities
Funding, Acquisition, and Spotlight
$12m for Impart Security from San Francisco, US (Computer Hardware, Computer Security), with funding from 8-Bit Capital, CRV - Charles River Ventures, and 1 more.
$16m for Infisical from San Francisco, US (API, Cyber Security), with funding from Dynamic Fund US, Gradient Ventures, and 1 more.
$25m for Nooks (nooks.works) from Washington, D.C., US (Defense, Manufacturing), with funding from Lockheed Martin, SAIC, and 2 more.
Nok Nok Labs from San Jose, US (Computer Networking, Cyber Security), acquired by OneSpan.
$4m for Unbound Security AI from San Francisco, US (B2B/Enterprise, Computer Security), with funding from Y Combinator Summer 2024, Y Combinator, and 9 more.
$40m for Cerby from Alameda, US (Computer Security, Cyber Security), with funding from Two Sigma Ventures, Salesforce Ventures, and 4 more.
Fletch (fletch.ai) from San Francisco, US (Cyber Security, Security), acquired by F5 Inc.
Stories We Are Reading
Major food wholesaler says cyberattack impacting distribution systems.
Meta AI’s public feed has raised serious privacy concerns. Business Insider criticized the app’s “Discover” stream, labeling it the internet’s most depressing corner after uncovering that users were unknowingly sharing deeply personal conversations—ranging from legal issues to medical conditions and expressions of grief.