The peer comparator lets you compare structured properties of vendors side by side — jurisdictions, subprocessors, retention periods, breach notification windows, and other facts extracted from the tracked documents.
Opening the comparator
From the sidebar, click Compare. The comparator shows a table with vendors as columns and structured properties as rows. By default it includes all your tracked vendors; use the filter to narrow to a subset.
What's in the comparator
The columns are populated by fact extraction — a pass that reads each vendor's most recent document versions and pulls out specific structured fields:
- Jurisdictions where data is stored, processed, or accessed.
- Subprocessors named in the vendor's most recent subprocessor list.
- Retention period stated in the privacy policy or DPA.
- Breach notification window stated in the DPA.
- AI training disclosures — whether the vendor uses customer data to train models.
- Subprocessor change notice period.
- Adequacy / SCC version referenced in the DPA.
Cells that couldn't be extracted appear as dashes. This usually means either the vendor doesn't disclose that property, or the document parser couldn't reliably find it.
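To make the table's behaviour concrete, the facts for one vendor can be thought of as a record of optional fields, where anything that couldn't be extracted renders as a dash. This is an illustrative sketch only; the field names and the dash character are assumptions, not the product's actual schema:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class VendorFacts:
    # Hypothetical field names; the real schema may differ.
    jurisdictions: Optional[list] = None
    subprocessors: Optional[list] = None
    retention_period: Optional[str] = None
    breach_notification_window: Optional[str] = None
    ai_training_disclosure: Optional[str] = None

def render_cell(value) -> str:
    """Unextracted or undisclosed values render as a dash."""
    if value is None:
        return "—"
    if isinstance(value, list):
        return ", ".join(value)
    return str(value)

# A vendor whose DPA yielded two fields; the rest show as dashes.
facts = VendorFacts(jurisdictions=["EU", "US"], retention_period="90 days")
row = {f.name: render_cell(getattr(facts, f.name)) for f in fields(facts)}
```

A dash therefore doesn't distinguish "not disclosed" from "not found by the parser"; both surface the same way in the table.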
When facts are refreshed
Facts are extracted automatically when:
- A vendor is first added.
- A document changes (the change-detection pipeline triggers a re-extraction).
- A manual "Refresh comparison" is triggered on the vendor.
The 60-minute manual cooldown that applies to crawls also applies to fact-extraction refreshes.
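The manual cooldown check can be sketched as follows. This is a simplified illustration of the rule described above; the function name and per-vendor bookkeeping are assumptions:

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(minutes=60)  # same window as manual crawls

def can_refresh(last_manual_refresh, now):
    """A manual fact-extraction refresh is allowed once per vendor per 60 minutes."""
    if last_manual_refresh is None:
        return True
    return now - last_manual_refresh >= COOLDOWN

now = datetime(2024, 1, 1, 12, 0)
first_time = can_refresh(None, now)                          # never refreshed
too_soon = can_refresh(now - timedelta(minutes=30), now)     # within the cooldown
cooled_down = can_refresh(now - timedelta(minutes=60), now)  # cooldown elapsed
```

Automatic triggers (a new vendor, or a detected document change) aren't subject to this window; it only throttles manual refreshes.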
Sourcing and caching
Fact extraction uses Anthropic's Claude with a deterministic prompt against the most recent document versions. The result is cached at the catalog level (shared across all customers tracking the same vendor) keyed by the document content hash. When a customer triggers a refresh on a vendor whose facts haven't changed since the last extraction, the cached result is returned without a fresh API call.
This means:
- Refreshing a vendor whose documents haven't changed is fast and free.
- Refreshing a vendor whose documents have changed triggers a new extraction.
- Multiple customers tracking the same vendor share the extraction work.
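The caching behaviour above amounts to a content-addressed cache: the key is a hash of the document contents, so an unchanged document hits the cache and a changed one triggers a fresh extraction. A minimal sketch, in which the extraction call and the in-memory cache are stand-ins for the real service:

```python
import hashlib

# Catalog-level cache shared across all customers tracking the same vendor.
_fact_cache: dict[str, dict] = {}

def extract_facts(document_text: str) -> dict:
    """Stand-in for the real model-based extraction pass."""
    return {"retention_period": "90 days"}  # placeholder result

def get_facts(document_text: str) -> tuple[dict, bool]:
    """Return (facts, cache_hit), keyed by the document content hash."""
    key = hashlib.sha256(document_text.encode()).hexdigest()
    if key in _fact_cache:
        return _fact_cache[key], True    # unchanged document: no fresh API call
    facts = extract_facts(document_text)  # changed document: new extraction
    _fact_cache[key] = facts
    return facts, False

_, hit1 = get_facts("same policy text")  # first request: cache miss
_, hit2 = get_facts("same policy text")  # identical content: cache hit
```

Because the key is derived from content rather than from the customer or the request time, the same cached result serves every customer tracking that vendor.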
Limits and accuracy
Fact extraction is a best-effort pass. It performs well on:
- Well-structured DPAs and privacy policies in English.
- Clean subprocessor lists in HTML tables.
- Standard breach notification language.
It performs less well on:
- Heavily customised legal language.
- Documents that mix marketing text with policy text.
- PDF subprocessor lists that have been OCR'd from images.
- Languages other than English.
Where extraction fails, the cell shows a dash. We continuously improve the extraction prompts, so refreshing a vendor a few weeks later may pick up values that an earlier pass missed.
How to use it
The comparator is most useful for:
- Pre-renewal evaluation. Before renewing a vendor's contract, compare them against current peers to see how they stack up.
- Vendor selection. When choosing between candidates, structured comparison surfaces differences that aren't obvious from individual document review.
- Audit preparation. A peer comparison snapshot is a useful artefact for auditor walkthroughs of how you evaluate vendors.
What the comparator doesn't do
The comparator does not:
- Score or rank vendors. The cells are facts, not opinions.
- Remember historical comparisons. Each refresh produces a current snapshot; older snapshots aren't preserved.
- Update in real time. Cells reflect the most recent extraction, which may lag a recent document change by a few minutes.