Whitepaper: A Zero Trust Approach to AI and your CRM
The Trojan Horse in Your CRM: Why Embedded AI is a Security Ticking Time Bomb
Executive Summary
The rush to embed Artificial Intelligence (AI) directly into Customer Relationship Management (CRM) and business applications, championed by industry giants like Salesforce, HubSpot, and Zendesk, has created a new and formidable attack surface for enterprises. While promising unprecedented productivity gains, this "inside-the-SaaS" approach to AI deployment introduces a host of critical security and compliance risks that are often overlooked in the pursuit of innovation. These risks are not theoretical; they are being actively exploited, with recent security incidents affecting over 700 organizations and exposing millions of customer records [1, 2].
This whitepaper will demonstrate that the very architecture of embedded CRM AI—characterized by vendor-controlled vector stores, inherited permissions, and opaque data processing—fundamentally undermines an organization's ability to govern its own data and maintain a robust security posture. We will analyze the inherent security flaws of this model, including the dangers of vendor lock-in, the vulnerabilities introduced by third-party integrations, and the significant compliance gaps that arise from a lack of direct control over data.
In contrast, we will advocate for an API-first, external AI architecture as the demonstrably safer and more compliant alternative. By treating AI agents as untrusted external applications that interact with CRM systems through strictly-scoped, auditable APIs, organizations can retain full control over their data, enforce their own security policies, and meet their regulatory obligations without sacrificing the benefits of AI. This approach, grounded in the principles of Zero Trust, provides the necessary framework for secure and responsible AI adoption in the enterprise.
Through a detailed analysis of recent security breaches, a comparison of architectural models, and an examination of the compliance landscape, this whitepaper will provide a clear and compelling case for why your AI should live outside your business applications.
The New Frontier of Risk: Embedded CRM AI
The security risks associated with embedded CRM AI are not merely an extension of traditional cybersecurity threats; they represent a new and more complex frontier of vulnerability. The following risk matrix, based on an analysis of recent security incidents and vendor-disclosed information, highlights the most pressing threats posed by the "inside-the-SaaS" AI model.
As the matrix clearly illustrates, the most severe risks—including Third-Party Integration Vulnerabilities, Vendor Lock-in, and Compliance Blind Spots—are all characterized by a high likelihood and a high impact, making them critical concerns for any organization that relies on a major CRM platform.
The Anatomy of a CRM AI Breach: A Pattern of Exploitation
Recent security incidents have revealed a consistent pattern of exploitation that targets the inherent weaknesses of the embedded CRM AI model. The widespread campaign that affected over 700 organizations in August 2025 serves as a stark case study [1]. In this attack, threat actors did not need to breach the CRM platform itself; instead, they targeted a third-party AI tool, the Salesloft Drift AI chat agent, and used its compromised OAuth tokens to gain access to the Salesforce instances of hundreds of organizations. This highlights a critical flaw in the embedded model: the security of the entire CRM ecosystem is only as strong as its weakest link, and the proliferation of third-party integrations creates a vast and often unmonitored attack surface.
"A threat actor... targeted Salesforce instances using compromised OAuth tokens that were associated with the customer engagement vendor Salesloft’s Drift AI chat agent." [1]
The Allianz Life data breach, which exposed over 2.8 million customer and partner records, further underscores the dangers of this interconnected ecosystem [2]. In this case, the attackers used sophisticated vishing techniques to gain initial access to the Salesforce environment, and then moved laterally to exfiltrate massive amounts of data. This incident demonstrates that even with robust technical safeguards, the human element remains a significant vulnerability, and the broad permissions often granted to embedded AI agents create a massive blast radius in the event of a compromise.
"The groups focus on Salesforce environments for initial access and lateral movement, exfiltrating large datasets of customer and partner information for sale, extortion or public leak." [2]
These incidents are not isolated; they are part of a broader trend of attackers targeting SaaS applications, with a reported 300% year-over-year increase in SaaS-related attacks [2]. This new threat landscape requires a fundamental rethinking of how we approach AI security in the enterprise.
Architectural Flaws: Why Embedded CRM AI is Inherently Insecure
The security vulnerabilities of embedded CRM AI are not the result of poor implementation; they are a direct consequence of the architectural model itself. By ceding control of data processing, vector store management, and security policies to the CRM vendor, organizations create a security posture that is fundamentally misaligned with the principles of Zero Trust and data sovereignty.
The Illusion of Control: The Shared Responsibility Model in Practice
CRM vendors often tout a "shared responsibility model" as the solution to security concerns, but in practice, this model often creates more problems than it solves. While the vendor is responsible for securing the underlying infrastructure, the customer is left with the complex and often overwhelming task of managing permissions, ensuring data quality, and monitoring for threats—all within a system that they do not fully control.
"In this model, Salesforce is responsible for securing the infrastructure, platform, and services that enable AI... At the same time, customers are responsible for securing the applications and configurations that connect to the AI" [3]
This division of responsibility creates a dangerous gap in which critical security functions can be overlooked or mismanaged. For example, while Salesforce provides the tools to create data zones and restrict access, "it is up to the customer to determine what that data is and where it lives" [3]. In a large enterprise with thousands of users and millions of records, this is a monumental task that is often beyond the capabilities of even the most sophisticated IT teams.
The Black Box: Vendor-Controlled Vector Stores
At the heart of the embedded CRM AI model is the vendor-controlled vector store. This is where customer data is transformed into embeddings and used to train the AI models. However, this process is often a "black box," with little to no visibility into how the data is being used, who has access to it, or how it is being secured.
This lack of transparency creates a number of significant risks:
No Data Deletion Control: When a customer deletes a record from their CRM, there is often no guarantee that the corresponding embeddings will be deleted from the vendor's vector store. This leaves a persistent security risk, as sensitive data can remain in the vendor's possession long after it has been deleted from the customer's system.
Vendor Lock-in: By entrusting their data to a vendor's proprietary vector store, organizations create a significant barrier to switching CRM providers. The cost and complexity of migrating this data to a new platform can be prohibitive, effectively locking the customer into a long-term relationship with the vendor.
Limited Audit Trail: The audit trails provided by CRM vendors are often incomplete, providing little visibility into how data is being used by the AI models. This makes it difficult to investigate security incidents, demonstrate compliance with regulatory requirements, and ensure that data is being used in accordance with corporate policies.
As the comparison chart above illustrates, the embedded CRM AI model provides customers with very little control over their own data. This is in stark contrast to the API-first model, which empowers customers with full control over every aspect of the AI data lifecycle.
Data Flow: Following the Path of Risk
The data flow diagrams below illustrate the fundamental differences in how data is handled in embedded versus API-first CRM AI architectures. The embedded model is characterized by a lack of control and visibility, while the API-first model is designed to provide granular control and a complete audit trail at every step of the process.
The Compliance Quagmire: Navigating the Regulatory Landscape
The lack of control and visibility inherent in the embedded CRM AI model creates a significant compliance challenge for organizations, particularly those in regulated industries. The shared responsibility model, combined with the opacity of vendor-controlled vector stores, makes it difficult to demonstrate compliance with a wide range of regulations, including GDPR, HIPAA, SOX, and CCPA.
The compliance comparison chart above highlights the significant gaps in the embedded CRM AI model. The API-first model, by contrast, provides the tools and controls necessary to meet even the most stringent regulatory requirements.
The Path Forward: An API-First Approach to CRM AI
Given the significant security and compliance risks associated with embedded CRM AI, we strongly advocate for an API-first, external AI architecture. This model, which treats all AI agents as untrusted external applications, provides the necessary framework for secure and responsible AI adoption in the enterprise.
Key Principles of an API-First CRM AI Architecture:
AI Agents as External Applications: All AI agents, regardless of their origin, should be treated as untrusted third-party applications that connect to the CRM system through a secure API gateway.
Scoped API Access: AI agents should only be granted access to the specific data and functions that are necessary for them to perform their tasks. This is in stark contrast to the broad, inherited permissions of the embedded model.
Customer-Controlled Vector Stores: Organizations should maintain full control over their own vector stores, allowing them to manage their own embeddings, enforce their own security policies, and meet their regulatory obligations.
Comprehensive Audit Trails: Every API call, data access event, and AI-generated response should be logged and auditable, providing a complete and immutable record of all AI-related activities.
Data Classification and Governance: All data should be classified and governed before it is exposed to an AI agent, ensuring that sensitive information is protected and that all data processing is in compliance with corporate policies and regulatory requirements.
By adopting an API-first approach, organizations can reap the benefits of AI without sacrificing security, compliance, or control. This model empowers organizations to build a secure and governable AI ecosystem that is tailored to their specific needs and risk tolerance.
Recommendations for a Secure CRM AI Strategy:
Reject the Embedded Model: Do not deploy AI agents that are embedded directly into your CRM or business applications. The risks are too great, and the lack of control is unacceptable.
Embrace the API-First Approach: Adopt an API-first, external AI architecture that treats all AI agents as untrusted third-party applications.
Build Your Own Vector Store: Do not entrust your data to a vendor-controlled vector store. Build and manage your own vector store to maintain full control over your data.
Implement a Robust Governance Framework: Establish a comprehensive AI governance framework that includes data classification, security policies, approval workflows, and a complete audit trail.
Partner with a Trusted AI Platform: Work with an AI platform provider, such as raiaAI.com, that is committed to an API-first, customer-controlled approach to AI.
Conclusion
The allure of embedded CRM AI is undeniable, but the security and compliance risks are too significant to ignore. The "inside-the-SaaS" model, with its vendor-controlled vector stores, inherited permissions, and opaque data processing, creates a security posture that is fundamentally at odds with the principles of Zero Trust and data sovereignty.
The recent wave of security breaches targeting CRM AI implementations is a clear and present danger, and it is only a matter of time before we see a catastrophic failure of this model. The only responsible path forward is to adopt an API-first, external AI architecture that empowers organizations with the control, visibility, and governance they need to safely and securely deploy AI in the enterprise.
By treating AI as an untrusted external application and enforcing strict security controls at the API layer, organizations can unlock the transformative potential of AI without sacrificing their security, their compliance, or their peace of mind. The future of AI is not inside your CRM; it is outside, connected by a secure and auditable API.
References
[1] Jones, D. (2025, August 26). Hackers steal data from Salesforce instances in widespread campaign. Cybersecurity Dive. https://www.cybersecuritydive.com/news/hackers-steal-data-salesforce-instances/758676/
[2] Obsidian Security. (2025). Allianz Life Salesforce Data Breach: Scattered Spider & ShinyHunters Attack. https://www.obsidiansecurity.com/resource/allianz-data-leaked-in-major-wave-of-salesforce-attacks
[3] Varonis. (2024, December 13). Generative AI Security: Preparing for Salesforce Agentforce. https://www.varonis.com/blog/salesforce-agentforce-security
[4] Huble. (2024, November 29). HubSpot AI security FAQ: what CTOs and CIOs need to know. https://huble.com/blog/hubspot-ai-security
[5] Salesforce. (2024, May 2). New Salesforce White Paper Tackles LLM Security Risks. https://www.salesforce.com/news/stories/llm-security/
Appendix: Research Findings
Varonis Research: "Generative AI Security: Preparing for Salesforce Agentforce" (2024)
Source: https://www.varonis.com/blog/salesforce-agentforce-security Publication: Varonis, December 13, 2024
Key Findings Supporting CRM AI Security Concerns:
1. Permission Inheritance Issues
Key Finding: "Agentforce agents inherit the access and permissions of the Salesforce user, so it's imperative to mitigate risk by locking down critical data, ensuring that each user (and thereby agents) can only access what they need to do their job."
Key Finding: "Agentforce agents will surface all organizational data that an individual user can access"
This demonstrates the over-permissioning risk when AI is embedded directly in CRM systems
2. Complex Permission Management
Key Finding: "Salesforce permissions are highly complex and require significant effort to analyze and understand — especially considering a large enterprise can have up to 1,000 Permission Sets with dozens of permissions in each one."
Key Finding: "Security teams must rely on Salesforce teams to help them complete this process, and because Salesforce admins have their plates full with keeping the business running, completing this process can be overwhelming."
Shows the governance challenges of embedded AI in SaaS platforms
3. Data Quality and Control Issues
Key Finding: "Agentforce relies on your internal documentation and data to ground generative AI prompts with helpful context and provide accurate and relevant information."
Key Finding: "Agentforce pulls data from the Salesforce Data Cloud, which unifies multiple data sources, including your Salesforce environment and cloud storage (like AWS and Snowflake)."
Key Finding: "There is bound to be data in your environment that you don't want agents to be trained on or surface answers from; with Salesforce, you can create zones that section off data you don't want agents to access. However, it is up to the customer to determine what that data is and where it lives."
Highlights the challenge of data governance when AI is embedded in the platform
4. Shared Responsibility Model Limitations
Key Finding: "In this model, Salesforce is responsible for securing the infrastructure, platform, and services that enable AI... At the same time, customers are responsible for securing the applications and configurations that connect to the AI"
Key Finding: "Customers are responsible for: Permissions, Data, Usage"
Shows how embedded AI shifts security burden to customers without giving them full control
5. Prompt Injection Vulnerabilities
Key Finding: "This will also help you safeguard against prompt injection attacks, in which a malicious actor tries to provide instructions that trick the model into giving a response it shouldn't."
Confirms that embedded CRM AI is vulnerable to the same attacks as other AI systems
6. Use Cases Showing Broad Data Access
Use Cases: "Helping sales reps find leads, create opportunities, update records, schedule and summarize meetings"
Use Cases: "Enabling service agents to resolve cases faster, quickly access knowledge articles, and escalate issues"
Use Cases: "Assisting marketers in creating campaigns, writing emails, segmenting audiences, and analyzing results"
Shows how embedded AI agents access broad swaths of organizational data across multiple functions
Huble Research: "HubSpot AI security FAQ: what CTOs and CIOs need to know" (2024)
Source: https://huble.com/blog/hubspot-ai-security Publication: Huble, November 29, 2024
Key Findings Supporting CRM AI Security Concerns:
1. Data Security Challenges in Enterprise AI
Key Finding: "As enterprise teams increasingly use AI tools, data security becomes an important issue. Many external AI tools often lack security measures, putting sensitive company data at risk."
Key Finding: "For CTOs and CIOs, the biggest challenge lies in empowering their teams to effectively use AI tools while ensuring adherence to data protection regulations such as GDPR and CCPA."
Shows the fundamental challenge of balancing AI benefits with security requirements
2. Organizational Struggles with AI Risk Management
Key Finding: "Research by MIT Sloan Management Review underscores this challenge, revealing that over 70% of organisations are struggling to manage the emerging risks of AI."
Key Finding: "These findings highlight the critical need for organisations to secure sensitive information when integrating AI tools, ensuring that data remains protected and compliant with regulatory standards."
Demonstrates widespread industry challenges with AI governance
3. HubSpot AI Security Risks
Key Finding: "Potential risks include unauthorised access or misuse of data processed by AI."
Key Finding: "Data exposure: Mitigated through end-to-end encryption and isolated processing environments"
Key Finding: "Unauthorised access: Controlled via granular RBAC and continuous access monitoring"
Key Finding: "Third-party risk: Managed through strict contractual controls and zero-retention policies"
Key Finding: "AI output validation: Implemented through automated verification protocols"
Shows that even well-designed CRM AI systems face significant security challenges
4. Built-in AI Features vs External Tools
Key Finding: "In contrast, many CRM platforms, including HubSpot, offer built-in AI features"
Key Finding: "HubSpot mitigates these risks by: Restricting third-party providers from retaining data for training; Enforcing encryption and robust access controls; and Ensuring users have visibility into and control over their data, including the ability to opt out of certain features."
Highlights the vendor's perspective on why embedded AI is supposedly safer, but also reveals the control limitations
5. Third-Party Security Dependencies
Key Finding: "It is also crucial to work with trusted partners, such as Huble, who are ISO 27001 accredited. This certification ensures that your partners prioritise security, as you are only as strong as your weakest link, which is often third-party providers."
Key Finding: "Third-party vendors without proper security accreditation will be your biggest vulnerable entry points into your ecosystem"
Shows how embedded CRM AI creates complex third-party security dependencies
Salesforce Research: "New Salesforce White Paper Tackles LLM Security Risks" (2024)
Source: https://www.salesforce.com/news/stories/llm-security/ Publication: Salesforce, May 2, 2024
Key Findings Supporting CRM AI Security Concerns:
1. Prompt Injection Attacks
Key Finding: "Prompt injections: Bad actors can manipulate an LLM through malicious insertions within prompts and cause the LLM to act as a 'confused deputy' for the attacker."
Key Finding: "Safeguarding against these threats involves a two-pronged strategy – using machine learning defense strategies to intelligently detect and prevent malicious insertions, and using heuristic, learning-based strategies to safeguard against potential threats to prompts, such as deny list-based filtering and instruction defense."
Confirms that CRM AI systems are vulnerable to prompt injection attacks
2. Training Data Poisoning
Key Finding: "Training data poisoning: Attackers can manipulate training data or fine-tuning procedures of an LLM."
Key Finding: "Companies can protect against this by checking that training data inputted does not contain poisoned information, such as malicious code payloads, which could compromise the model's security and effectiveness, or lead to privacy violations and other security breaches."
Shows how embedded AI in CRM systems can be compromised through data poisoning
3. Supply Chain Vulnerabilities
Key Finding: "Supply chain vulnerabilities: Vulnerabilities can affect the entire application lifestyle, including traditional third-party libraries/packages, docker containers, base images, and service suppliers."
Key Finding: "Organizations can guard against these by ensuring that every part of the lifestyle meets the company's established security standards."
Highlights the complex supply chain risks in embedded CRM AI systems
4. Model Theft and Unauthorized Access
Key Finding: "Model theft: Only authenticated and authorized clients should be able to access a company's LLM. This prevents actors from compromising, physically stealing, and copying proprietary models."
Key Finding: "Businesses can also adopt measures such as requiring Just in Time (JIT) credentials, Multi-Factor Authentication (MFA), strong audit trails, and logging to prevent model theft."
Shows the intellectual property risks of embedded AI models
5. Training Environment Security
Key Finding: "Safe training grounds: Companies should hold the training environments — controlled settings where AI systems can learn and improve their capabilities — to the same security standards as the data environment itself."
Key Finding: "This is especially important as companies increasingly view training environments as a development environment and treat them with less security."
Reveals how embedded AI creates additional attack surfaces through training environments
Real-World CRM Security Breach Examples
Cybersecurity Dive: "Hackers steal data from Salesforce instances in widespread campaign" (2025)
Source: https://www.cybersecuritydive.com/news/hackers-steal-data-salesforce-instances/758676/ Publication: Cybersecurity Dive, August 26, 2025
Key Findings - Salesforce Drift AI Breach:
Scale of Attack: "Google's Threat Intelligence Group is aware of over 700 potentially impacted organizations"
Attack Method: "A threat actor that Google tracks as UNC6395 targeted Salesforce instances using compromised OAuth tokens that were associated with the customer engagement vendor Salesloft's Drift AI chat agent"
Automation: "The threat actor used a Python tool to automate the data theft process for each organization that was targeted"
Data Stolen: "After stealing the data, the hackers looked for sensitive credentials, including access keys and passwords for Amazon Web Services as well as access tokens for the Snowflake cloud platform"
Timeline: "The attacks largely occurred between Aug. 8 and Aug. 18"
Third-Party Risk: "The attacks did not involve any vulnerability in the Salesforce platform... A threat actor... targeted Salesforce instances using compromised OAuth tokens that were associated with the customer engagement vendor Salesloft's Drift AI chat agent"
Obsidian Security: "Allianz Life Salesforce Data Breach: Scattered Spider & ShinyHunters Attack" (2025)
Source: https://www.obsidiansecurity.com/resource/allianz-data-leaked-in-major-wave-of-salesforce-attacks Publication: Obsidian Security, 2025
Key Findings - Allianz Life Breach:
Scale: "hackers linked to Scattered Spider and ShinyHunters (also known as UNC6040) exfiltrated over 2.8 million Allianz Life customer and partner records"
Campaign Scope: "Approximately 20 organizations across Europe and the Americas—including sectors like retail, education, hospitality, aviation, fashion, luxury, finance, and tech—have been affected. Confirmed victims include Google, Adidas, LVMH (Dior, Louis Vuitton, Tiffany & Co.), Pandora, Qantas, and Air France"
Attack Method: "The ShinyHunters threat group is notorious for advanced vishing attacks that exploit human error to gain Salesforce CRM access, bypassing technical defenses"
Collaboration: "According to ShinyHunters, 'They provide us with initial access and we conduct the dump and exfiltration of the Salesforce CRM instances'"
Target Focus: "The groups focus on Salesforce environments for initial access and lateral movement, exfiltrating large datasets of customer and partner information for sale, extortion or public leak"
Industry Impact Analysis:
SaaS Blind Spot: "SaaS is a massive blindspot for most organizations. While investments flow into traditional defenses like zero trust architecture and IdP, attackers are targeting SaaS, where visibility is low and controls are fragmented"
AI-Enhanced Threats: "Threat actors are increasingly sophisticated, and with the rise of AI tools, it is likely that attacks will become more frequent and harder to detect. AI can enable more convincing phishing campaigns, automate reconnaissance, and scale attacks"
Attack Trend: "SaaS attacks are up 300% year over year, highlighting the need for proactive security"
Human Factor: "Humans are often the weakest link in the security chain. Despite robust technical safeguards, social engineering tactics like vishing exploit human vulnerabilities"
Summary of Research Validation for CRM AI Whitepaper
The research comprehensively validates the risks of embedded AI in CRM systems:
Over-Permissioning Risks: Confirmed by Varonis research on Salesforce Agentforce inheriting user permissions
Third-Party Integration Vulnerabilities: Validated by real-world breaches involving Drift AI and other third-party tools
Vendor Control Limitations: Demonstrated by shared responsibility models that shift security burden to customers
Scale of Impact: Proven by incidents affecting 700+ organizations and millions of records
AI-Enhanced Attack Sophistication: Confirmed by security experts noting AI's role in scaling attacks
Complex Permission Management: Validated by Salesforce's complex permission structure with up to 1,000 Permission Sets
Lack of Visibility and Control: Confirmed by industry reports noting SaaS as a "massive blindspot"
Last updated