GDPR doesn't use the term "vendor risk assessment," but it requires the equivalent. Article 28(1) requires controllers to use only processors providing "sufficient guarantees" of compliance. Article 32 requires risk-appropriate security measures, including those involving processors. Article 24 requires the controller to be able to demonstrate ongoing compliance.
The compound effect: a controller has to assess every processor's risk, document the assessment, and revisit it. A questionnaire at onboarding is necessary but not sufficient.
What "risk" means here
Risk under GDPR is risk to the rights and freedoms of data subjects — not to the controller's business. The two often correlate (a vendor breach harms both), but the perspective is different. A vendor that handles a small volume of low-sensitivity data may be low-risk to your business and to the data subjects. A vendor that handles a small volume of health data may be low-risk to your business but high-risk to data subjects.
The risk assessment has to take both views — but if they conflict, the data subject view governs.
A working risk model
A useful starting frame:
Risk = Likelihood × Impact
= (Vendor security posture, threat landscape) × (Data sensitivity, volume, identifiability, potential harm)
Both sides have to be assessed; both sides change over time.
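A minimal sketch of this model in Python, assuming illustrative 1-5 scales for each side (the scales and the example scores are assumptions, not part of any standard):

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Risk = Likelihood x Impact; each side scored 1 (low) to 5 (high)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores must be between 1 and 5")
    return likelihood * impact

# The health-data example above: low likelihood side, high impact side.
print(risk_score(2, 5))  # 10
```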
The likelihood side
Factors that affect how likely a vendor is to suffer a security incident or compliance failure:
- Security maturity. Current, independently audited SOC 2 Type II or ISO 27001 certification. (Lower likelihood.)
- Track record. Public breach history, regulatory enforcement.
- Subprocessor sprawl. Each subprocessor is a separate point of failure. A vendor with 8 subprocessors is more exposed than one with 2.
- Geographic exposure. Operations and subprocessors in jurisdictions with active surveillance regimes.
- Personnel access. Number of employees with production access; geographic distribution.
- Change cadence. A vendor that updates their privacy stack monthly has a different risk profile than one that updates annually.
The impact side
Factors that affect how bad it would be if something went wrong:
- Data sensitivity. Special categories > financial > contact > behavioural.
- Volume. Number of data subjects affected.
- Identifiability. Direct identifiers > pseudonymised > aggregated.
- Potential harms. Discrimination, financial loss, reputational damage, physical harm.
- Reversibility. Some harms (medical disclosures) cannot be undone.
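One way to turn the two factor lists into scores is to rate each factor on the same 1-5 scale and average each side. A sketch, with hypothetical ratings:

```python
from statistics import mean

# Hypothetical ratings, 1 (low) to 5 (high), mirroring the factor lists above.
likelihood_factors = {
    "security_maturity": 2,    # SOC 2 Type II and ISO 27001 in place
    "track_record": 1,         # no public breaches or enforcement
    "subprocessor_sprawl": 4,  # eight subprocessors
    "geographic_exposure": 3,
    "personnel_access": 2,
    "change_cadence": 3,
}
impact_factors = {
    "data_sensitivity": 5,     # special-category data
    "volume": 2,
    "identifiability": 4,
    "potential_harms": 4,
    "reversibility": 5,        # medical disclosure cannot be undone
}

likelihood = mean(likelihood_factors.values())  # 2.5
impact = mean(impact_factors.values())          # 4.0
print(round(likelihood * impact, 1))            # 10.0
```

Averaging is one of several defensible aggregations; a max-of-factors rule is more conservative where a single factor, such as special-category data, should dominate the score.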
Tiering
Three tiers is enough for most organisations:
Tier 1 — Critical. Special-category data, large volumes of identifiable data, or systems integral to operations. Examples: payroll, identity verification, customer-facing CRM, cloud infrastructure providers.
Tier 2 — Important. Routine personal data, moderate volumes, business-critical but recoverable. Examples: marketing platforms, analytics tools, support tooling.
Tier 3 — Low. Minimal personal data, low volumes, replaceable. Examples: project management with no customer data, internal documentation tools.
The tiering decision is documented. It's the basis for differentiating monitoring frequency, contract requirements, and review depth.
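Tier assignment can then be a simple threshold function over the combined score. The thresholds below are illustrative, not prescribed:

```python
def assign_tier(risk_score: float) -> int:
    """Map a combined score (1-25) to a tier. Thresholds are assumptions."""
    if risk_score >= 15:
        return 1  # Critical: deepest review, tightest contract terms
    if risk_score >= 8:
        return 2  # Important: standard review cadence
    return 3      # Low: lightweight monitoring

print(assign_tier(10.0))  # 2
```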
What a risk assessment captures
A complete vendor risk record holds:
- Identification. Vendor name, contracting entity, primary contact.
- Business purpose. Why this vendor, what they do, who depends on them.
- Data inventory. Specific categories of personal data, volume estimate, source.
- Lawful basis. What basis applies to your processing through this vendor.
- Transfer mechanism. SCCs / BCRs / adequacy / Article 49 derogation, with version and module.
- Subprocessor exposure. Snapshot of subprocessor list at time of assessment.
- Security posture. Latest attestation, scope, observation period, bridge letters.
- Risk score. Tier and rationale.
- Mitigations. Contractual, technical, and operational controls reducing risk.
- Open questions. Items pending follow-up.
- Next review date.
The mistake to avoid
The most common failure is treating the risk assessment as a procurement-time artefact. The questionnaire gets completed, the score gets calculated, the vendor gets onboarded, and the record sits in a SharePoint folder until the next vendor review eighteen months later.
Between those two events, the vendor will:
- Add and remove subprocessors.
- Change retention periods.
- Update their security commitments.
- Possibly experience a breach you read about in the news.
- Possibly be acquired by a company in a different jurisdiction.
If the risk assessment doesn't have a mechanism for catching these between formal reviews, the assessment is stale within months.
Continuous risk assessment
The minimum for a working continuous-assessment model:
- Document monitoring. Daily automated checks on privacy policy, DPA, subprocessor list, ToS, trust page. Tooling like Thorgate handles this.
- Material change triage. A defined process for evaluating each detected change against the risk model — does this change move the vendor's tier or risk score?
- Incident integration. A breach reported by the vendor, or a regulatory action against them, automatically reopens the risk record for reassessment.
- Annual full review. Even with continuous monitoring, an annual deep review covers things automation misses (changes in management, financial health, scope of services used).
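The triage step can be expressed as a small rule table. A sketch; the change-type names and routing are assumptions about how such a pipeline might be wired, not a description of any particular tool:

```python
# Hypothetical change types a document monitor might emit.
MATERIAL_CHANGES = {
    "subprocessor_added",
    "subprocessor_removed",
    "retention_period_changed",
    "security_commitment_changed",
}
INCIDENTS = {"breach_reported", "regulatory_action"}

def triage(change_type: str) -> str:
    """Route a detected change: incidents reopen the record immediately,
    material changes trigger a rescore, everything else is logged."""
    if change_type in INCIDENTS:
        return "reopen_risk_record"
    if change_type in MATERIAL_CHANGES:
        return "rescore_against_model"
    return "log_only"

print(triage("subprocessor_added"))  # rescore_against_model
```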
Documenting decisions
Every risk-assessment decision needs to be recorded with reasoning. "Tier 1, accepted" is not a decision — it's a conclusion without evidence. The auditor or regulator wants to see:
- What facts were considered.
- What the risk score was.
- What mitigations are in place.
- Why the residual risk is acceptable.
- Who decided, on what authority, on what date.
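A decision entry satisfying this list might look like the following. The field names and contents are illustrative, not a regulatory template:

```python
from datetime import date

# Hypothetical decision record; every field name is an assumption.
decision = {
    "facts_considered": ["eight subprocessors", "SOC 2 Type II current"],
    "risk_score": 10.0,
    "tier": 2,
    "mitigations": ["SCCs Module 2", "encryption at rest", "annual pen test"],
    "residual_risk_rationale": (
        "Mitigations reduce likelihood; data minimisation bounds impact."
    ),
    "decided_by": "Data Protection Officer",
    "authority": "Vendor risk policy, section 4.2",  # hypothetical policy ref
    "decision_date": date(2025, 3, 1).isoformat(),
}
```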
This is the part that, in the worst case, becomes evidence in an enforcement action. It needs to read well in that context.