Every AI tool processes your data somehow. Before you paste sensitive information into a chatbot or connect your email to an AI assistant, here is what to check.
2026/04/17
Security evaluation is the step most people skip when adopting AI tools, and it is the one most likely to cause serious problems. Unlike traditional SaaS software, AI tools interact with your most sensitive information in distinctive ways: they process natural language inputs that may contain confidential business data, they generate outputs that may be influenced by other users' data if models are not properly isolated, and they often retain conversation history in ways that are not immediately obvious. Getting security evaluation right before you subscribe is not paranoia — it is professional due diligence.
When you use an AI tool, you are not just storing data on a vendor's servers — you are feeding that data through a model inference pipeline that may involve multiple third-party components. The vendor's infrastructure, the model provider (which may be a different company), the cloud provider hosting the compute, and any third-party integrations all represent points where your data could be exposed, retained, or used in ways you did not intend. This is a longer and more complex supply chain than most traditional software.
The training data risk is specific to AI and often misunderstood. Most consumer AI tools reserve the right to use your inputs and outputs to improve their models. This means confidential information you enter — client names, strategic plans, proprietary code, personally identifiable information — could potentially appear in future model outputs for other users. This is not a theoretical risk; it has happened with multiple AI coding assistants when sensitive code was entered and later surfaced in suggestions to unrelated users.
The first document to read before subscribing to any AI tool is the privacy policy's section on training data. Look for explicit statements about whether your inputs are used to train models, under what circumstances, and how to opt out. Paid enterprise plans almost always include training data opt-outs that consumer plans do not. If you cannot find a clear statement on training data use, assume the worst and contact the vendor for written clarification before sharing sensitive information.
Data retention policies are the second key area. How long does the vendor keep your conversation history, uploaded files, and generated outputs? Under what circumstances are they deleted? Who within the vendor's organization can access your data? Can you request deletion and is it actually honored? OpenAI, Anthropic, Google, and Microsoft all publish reasonably detailed answers to these questions for their enterprise products; consumer products from smaller vendors often do not.
SOC 2 Type II certification is the baseline security credential to look for in any AI tool you plan to use for business purposes. It indicates that an independent auditor has examined the vendor's security controls over a period of time (typically six to twelve months) and found them to be effective. SOC 2 Type I, by contrast, covers controls only at a single point in time and provides far weaker assurance; Type II is the meaningful credential.
ISO 27001 certification provides equivalent assurance under an international standard, which matters if you operate across jurisdictions or work with European clients who prefer ISO frameworks. FedRAMP authorization is required for tools deployed in US federal government contexts. HITRUST certification is common in healthcare. Ask vendors for their current certification status, their certificate validity dates, and copies of their most recent audit reports if you need to satisfy your own organization's vendor management requirements.
If you handle data from European Union residents — customers, employees, or anyone in the EU — your AI tool vendor must be able to process that data in compliance with GDPR. Key requirements: a signed Data Processing Agreement (DPA) with the vendor; clarity on which legal basis the vendor relies on for processing (legitimate interest, contract necessity, or consent); and information about any sub-processors the vendor uses (cloud providers, model providers) and their GDPR compliance status.
Data residency — where your data is physically stored and processed — is a specific concern for regulated industries and government entities. Some AI vendors offer EU-specific deployment options where data never leaves EU infrastructure. Others process all data in the US by default with Standard Contractual Clauses (SCCs) as the cross-border transfer mechanism. Know which arrangement your vendor uses, and whether it satisfies your specific regulatory requirements before signing.
Look for encryption in transit (TLS 1.2 or 1.3) and encryption at rest (AES-256 is the current standard). These are table stakes; any vendor without them should be immediately disqualified. More meaningful differentiators are: whether customer data is encrypted with customer-managed encryption keys (CMEK), which lets you revoke the vendor's ability to decrypt your data; whether encryption keys are rotated on a defined schedule; and whether encryption applies to backup data as well as primary storage.
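You can verify the transit half of this yourself. The sketch below, in Python with a placeholder hostname, opens a connection to a vendor endpoint while refusing anything older than TLS 1.2, then reports the negotiated protocol and cipher. Encryption at rest and CMEK cannot be checked from the client side; those you verify through documentation and audit reports.

```python
# A minimal transit-layer check: connect to a vendor endpoint, refuse
# anything older than TLS 1.2, and report what was negotiated.
# The hostname is a placeholder; substitute the vendor's API domain.
import socket
import ssl

def check_tls(hostname: str, port: int = 443) -> None:
    # The default context validates certificates against the system CAs.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.1 and older

    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            protocol = tls.version()            # e.g. "TLSv1.3"
            cipher_name, _, bits = tls.cipher()
            print(f"{hostname}: {protocol}, {cipher_name} ({bits}-bit)")

check_tls("api.example-ai-vendor.com")  # placeholder hostname
```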
For tools that offer API access, evaluate the security of API key management. Are API keys generated with appropriate entropy? Can you set expiry dates and scope limitations on keys? Is there an audit log of API key usage? For tools integrated into development workflows, compromised API keys are a significant attack vector — the vendor's key management practices matter as much as their infrastructure security.
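As a concrete illustration of the properties to check for, here is a minimal sketch of key hygiene: 256 bits of entropy, an explicit expiry, and scopes that fail closed. The field names and scope strings are illustrative, not any particular vendor's schema.

```python
# Illustrative key hygiene: high entropy, explicit expiry, scoped access.
# Field names and scope strings are examples, not any vendor's schema.
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class ApiKey:
    token: str
    scopes: frozenset
    expires_at: datetime

    def allows(self, required_scope: str) -> bool:
        # Fail closed: an expired or under-scoped key is rejected outright.
        return (datetime.now(timezone.utc) < self.expires_at
                and required_scope in self.scopes)

def issue_key(scopes: set, ttl_days: int = 90) -> ApiKey:
    # token_urlsafe(32) draws 32 random bytes, i.e. 256 bits of entropy,
    # which is the level to expect from any vendor's key generation.
    return ApiKey(
        token=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(days=ttl_days),
    )

key = issue_key({"read:completions"})
print(key.allows("read:completions"))  # True
print(key.allows("admin:billing"))     # False: out of scope
```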
Most AI vendors do not operate every component of their service independently. They use cloud providers (AWS, Azure, Google Cloud), model providers (OpenAI, Anthropic, open-source models), analytics vendors, customer support platforms, and billing processors. Each of these is a potential data touchpoint. Request a full sub-processor list from vendors before signing, particularly for enterprise agreements. This list should include what data each sub-processor accesses, why, and what security agreements govern that access.
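It also helps to capture those disclosures in a structured form you can review and diff when the vendor updates its list. A minimal sketch, with illustrative field names and example entries:

```python
# Recording sub-processor disclosures in a reviewable, diffable form.
# Field names and entries are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class SubProcessor:
    name: str           # the sub-processor, e.g. a cloud or model provider
    data_accessed: str  # which customer data it touches
    purpose: str        # why that access exists
    agreement: str      # the security terms governing the access

disclosed = [
    SubProcessor("AWS", "all customer data at rest", "hosting", "DPA + SCCs"),
    SubProcessor("OpenAI", "prompt and output text", "model inference",
                 "DPA, no-training clause"),
]

for sp in disclosed:
    print(f"{sp.name}: {sp.data_accessed} ({sp.purpose}; {sp.agreement})")
```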
Advertising-supported AI products represent a specific third-party sharing risk. If the tool is free and ad-supported, it may share behavioral data with advertising networks. This is a categorical red flag for any business use. The presence of advertising in the business model is itself a signal that the vendor's primary incentive is not your data security.
Privacy policies are written by lawyers for legal protection, not for user clarity. Key sections to find and interpret: the data collection section (what they collect beyond what you explicitly provide, including metadata and behavioral data); the data use section (how they use your data, including any uses beyond service delivery); the data sharing section (who they share data with, under what circumstances, including law enforcement requests); and the user rights section (your rights to access, correct, delete, and export your data).
Watch for permissive language that sounds specific but is not: 'we may share data with trusted partners' without defining what makes a partner trusted; 'we use data to improve our services' without specifying which services or how; 'we store data only as long as necessary' without specifying a time period. Vague language in privacy policies is almost always intentional.
Enterprise and consumer versions of the same product often have fundamentally different security architectures, not just different feature sets. Enterprise plans typically include: dedicated tenancy or at minimum stronger logical data isolation; data processing agreements; training data opt-out as a contractual commitment; SSO and directory integration; role-based access controls; audit logging; and a dedicated account team that can answer security questions directly. If you are using a consumer plan for business purposes, you may be accepting security risks you are not aware of.
The enterprise security upgrade is almost always worth it for any organization subject to regulatory requirements, handling client data, or operating in a competitive industry where intellectual property protection matters. The price premium for enterprise plans is typically justified by the security controls alone, before accounting for the additional features and support.
Research the vendor's historical security incidents. Have they experienced data breaches? How quickly did they detect and disclose them? What was the scope of exposure? How did they communicate with affected users? A vendor with a clean record is reassuring, but how a vendor handles an incident is a stronger signal of their security culture than the absence of incidents. Companies that respond quickly, communicate transparently, and make affected users whole are demonstrably more trustworthy than those that minimize incidents or delay disclosure.
Check the vendor's security page and responsible disclosure policy. Organizations serious about security publish detailed information about their security practices and have a clear process for receiving vulnerability reports from external researchers. The absence of a security page or a responsible disclosure program is a yellow flag for newer vendors and a red flag for established ones.
Understand exactly what happens to your data when you cancel your subscription or explicitly request deletion. Some vendors delete data immediately upon cancellation; others retain it for a defined period (30 to 90 days is common) to handle disputes or accidental cancellations; some retain it indefinitely in anonymized form. Verify whether deletion applies to backup copies as well as primary storage — backup retention is a common source of data that persists longer than users expect.
Test the data deletion process before you need it. Request deletion of a conversation or uploaded document and verify that it actually disappears from the interface and cannot be retrieved. This sounds paranoid until you need to assure a client that their confidential information shared with an AI tool has been deleted — at which point having tested the deletion process becomes a meaningful assurance rather than a vendor promise.
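Where the vendor exposes an API, the test can be scripted. The sketch below assumes a hypothetical REST API with GET and DELETE endpoints for conversations; real endpoints and authentication schemes will differ, so treat it as the shape of the test rather than a working client.

```python
# Shape of a deletion smoke test against a hypothetical conversations API.
# BASE, the endpoints, and the auth header are placeholders.
import requests

BASE = "https://api.example-ai-vendor.com/v1"       # placeholder
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder

def verify_deletion(conversation_id: str) -> bool:
    url = f"{BASE}/conversations/{conversation_id}"

    # 1. Confirm the resource exists before deleting it.
    assert requests.get(url, headers=HEADERS, timeout=10).status_code == 200

    # 2. Request deletion and fail loudly if the vendor rejects it.
    requests.delete(url, headers=HEADERS, timeout=10).raise_for_status()

    # 3. The resource should now be gone (404), not merely hidden from the UI.
    return requests.get(url, headers=HEADERS, timeout=10).status_code == 404
```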
For organizations with the highest security requirements — defense contractors, financial institutions, healthcare organizations subject to HIPAA — cloud AI tools may not meet data sovereignty requirements regardless of the vendor's security posture. On-premise deployment options, where the model runs on your own infrastructure, are available from several vendors including Microsoft (Azure OpenAI with private endpoints), Anthropic (Claude for Enterprise with VPC deployment), and through open-source models deployed via platforms like Ollama or vLLM.
On-premise deployment trades convenience for control. You bear responsibility for model updates, infrastructure maintenance, scaling, and security of the deployment environment. For most organizations, this tradeoff is not justified. But for those where data sovereignty is a non-negotiable requirement, it is the only viable path to using AI tools at all.
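To make the open-source path concrete: with Ollama installed, a model pulled locally (for example via `ollama pull llama3`), and `ollama serve` running, inference is a single HTTP call to localhost, and no data leaves your machine. A minimal sketch:

```python
# Local inference via Ollama's HTTP API: assumes `ollama serve` is running
# and a model has been pulled, e.g. `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local port
    json={
        "model": "llama3",  # any locally pulled model tag
        "prompt": "Summarize the attached incident report in three bullets.",
        "stream": False,    # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # generated text; nothing left your machine
```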
HIPAA (healthcare): AI tools used to process protected health information (PHI) must sign a Business Associate Agreement (BAA) with covered entities. Not all AI vendors offer BAAs, and among those that do, the scope of features covered varies. Verify that the specific feature you intend to use, not just the platform generally, is covered under the vendor's BAA. Microsoft Copilot for Healthcare, AWS HealthScribe, and Google Cloud Healthcare AI have established BAA frameworks; many smaller AI tools do not.
FERPA (education): AI tools used in educational institutions that handle student records must comply with FERPA. This means having appropriate data handling agreements and student privacy policies in place. Several AI vendors have published FERPA compliance documentation for educational institutions; others have not addressed educational use specifically in their compliance frameworks.
Use this checklist before committing to any AI tool for business use:

- Does the vendor offer a DPA or BAA appropriate to your industry?
- Is training data opt-out available on the plan you intend to use?
- What SOC 2 or equivalent certifications does the vendor hold, and are they current?
- Where is data stored and processed, and is that compliant with your data residency requirements?
- What is the full sub-processor list, and are those sub-processors' security commitments documented?
- What are the data retention and deletion policies?
- Is encryption at rest with customer-managed keys available?
- Does the vendor have an established incident response history?
Score vendors against this checklist rather than relying on their marketing claims. Vendors that cannot answer these questions clearly and in writing are not ready for business use regardless of how impressive their AI capabilities are. Security evaluation may feel like friction in the adoption process, but it is far less friction than a data breach investigation, a regulatory fine, or a client relationship destroyed by a privacy incident that was preventable.
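One way to operationalize that scoring is a simple weighted tally that counts only the answers a vendor will confirm in writing. The items and weights below are illustrative; adjust them to your own risk profile.

```python
# Scoring vendors against the checklist. Items and weights are illustrative;
# only count an item if the vendor has confirmed it in writing.
CHECKLIST = {
    "dpa_or_baa_for_your_industry": 2,
    "training_opt_out_on_your_plan": 2,
    "current_soc2_type_ii_or_equivalent": 2,
    "data_residency_meets_requirements": 1,
    "subprocessor_list_documented": 1,
    "retention_and_deletion_policy_clear": 1,
    "cmek_available": 1,
    "credible_incident_response_history": 1,
}

def score_vendor(answers: dict) -> float:
    # Fraction of the total weight the vendor has earned.
    earned = sum(w for item, w in CHECKLIST.items() if answers.get(item))
    return earned / sum(CHECKLIST.values())

vendor = {
    "dpa_or_baa_for_your_industry": True,
    "training_opt_out_on_your_plan": True,
    "current_soc2_type_ii_or_equivalent": True,
}
print(f"{score_vendor(vendor):.0%}")  # 55%: strong on paper, gaps remain
```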