Everything You Need to Know About the Persona and OpenAI Identity Screening Investigation

An independent technical investigation published by vmfunc, MDLcsgo, and DziurwaF alleges that OpenAI’s identity verification vendor, Persona, has operated dedicated watchlist screening infrastructure tied to OpenAI since November 2023. It further alleges that a FedRAMP-branded Persona government deployment publicly exposed approximately 53MB of JavaScript source maps, enough to reconstruct a large internal dashboard codebase.

According to the researchers, that reconstructed codebase includes modules for:

  • Watchlist screening
  • Biometric face list management
  • Politically Exposed Person facial similarity scoring
  • Suspicious Activity Report workflows
  • Recurring risk re-screening

The report frames these findings as evidence of a large identity-screening architecture operating behind mainstream AI access.

It also draws clear lines about what it cannot prove.

This article breaks down:

  • What the investigation claims
  • What can be independently confirmed from public sources
  • What remains unproven
  • Why this matters for users in 2026
  • What you should do next

Original research source:
https://vmfunc.re/blog/persona

Research accounts:
https://x.com/vmfunc

What the Watchers Report Claims (Technical Breakdown)

1. A Dedicated OpenAI Watchlist Screening Service

The investigation references two Persona subdomains:

  • openai-watchlistdb.withpersona.com
  • openai-watchlistdb-testing.withpersona.com

The naming strongly suggests a dedicated watchlist database service associated with OpenAI identity verification.

In compliance architecture, a watchlist engine typically performs:

  • Sanctions screening
  • Politically Exposed Person (PEP) matching
  • Adverse media screening
  • Custom list comparisons
  • Recurring re-screening at defined intervals

How This Would Work Technically

  1. A user completes ID verification.
  2. Structured identity data and biometric media are collected.
  3. Data is sent to a screening engine.
  4. The engine returns allow, review, or deny decisions based on configurable thresholds.
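The four-step flow above can be sketched as a simple decision function. Everything here is illustrative: the function names, list contents, and thresholds are hypothetical assumptions for explanation, not Persona’s or OpenAI’s actual logic.

```python
# Illustrative sketch of a watchlist screening decision, following the
# four-step flow described above. All names, list contents, and
# thresholds are hypothetical -- not Persona's or OpenAI's actual code.
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    decision: str   # "allow", "review", or "deny"
    matches: list   # names of the lists that matched


def screen(full_name: str, watchlists: dict[str, set[str]],
           review_threshold: int = 1, deny_threshold: int = 2) -> ScreeningResult:
    """Compare a verified name against configured lists and return a decision."""
    normalized = full_name.strip().lower()
    matches = [list_name for list_name, entries in watchlists.items()
               if normalized in entries]
    if len(matches) >= deny_threshold:
        return ScreeningResult("deny", matches)
    if len(matches) >= review_threshold:
        return ScreeningResult("review", matches)
    return ScreeningResult("allow", matches)


# Example: a single match on a PEP list routes the user to manual review.
lists = {
    "sanctions": {"example sanctioned person"},
    "pep": {"example public official"},
}
print(screen("Example Public Official", lists).decision)  # review
print(screen("Ordinary User", lists).decision)            # allow
```

Real screening engines use fuzzy name matching and per-list risk weights rather than exact comparison, but the allow/review/deny shape of the output is the standard pattern.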

The researchers state that certificate transparency logs indicate this infrastructure has been operational since late 2023, which predates some public disclosures around expanded identity verification requirements.
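The timeline reasoning is mechanical: crt.sh returns JSON records for every logged certificate on a domain, each with a not_before field, and the earliest such date bounds when the subdomain first received a certificate. The records below are placeholders shaped like crt.sh output, not the investigation’s actual data.

```python
# Sketch of how certificate transparency logs support a timeline claim.
# crt.sh (https://crt.sh/?q=<domain>&output=json) returns records with a
# "not_before" field; the earliest one bounds when the subdomain first
# received a certificate. These records are placeholders, not real data.
import json
from datetime import datetime

sample_crtsh_response = json.dumps([
    {"common_name": "openai-watchlistdb.withpersona.com",
     "not_before": "2024-05-02T00:00:00", "not_after": "2024-07-31T00:00:00"},
    {"common_name": "openai-watchlistdb.withpersona.com",
     "not_before": "2023-11-14T00:00:00", "not_after": "2024-02-12T00:00:00"},
])


def earliest_issuance(crtsh_json: str) -> datetime:
    """Return the oldest not_before timestamp across all logged certificates."""
    records = json.loads(crtsh_json)
    return min(datetime.fromisoformat(r["not_before"]) for r in records)


print(earliest_issuance(sample_crtsh_response).date())  # 2023-11-14
```

Because CT logs are append-only and publicly auditable, a certificate date is hard evidence that a hostname existed, even though it says nothing about what the service behind it did.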

That timeline is one of the report’s most significant claims.

2. The Persona Government Platform and the “ONYX” Deployment

The report also describes a Persona government environment under withpersona-gov.com, identified as FedRAMP-related.

It further highlights a recently observed subdomain:

  • onyx.withpersona-gov.com

The name ONYX overlaps with Fivecast ONYX, a surveillance platform referenced in public reporting about ICE procurement.

However, the researchers explicitly state:

  • The extracted code did not contain direct references to ICE
  • No deportation workflows were identified
  • No confirmed Fivecast integration was found

Critical Technical Clarification

A naming overlap does not prove operational linkage.
It establishes correlation, not confirmation.

The responsible stance is to treat the ONYX overlap as a red flag worth investigating, not a concluded integration.

3. Exposed Source Maps and Dashboard Code Reconstruction

The most serious operational claim involves publicly accessible JavaScript source map files served under a development-style asset path.

The report describes the moment as:

“Download 53 megabytes of source code from a government endpoint that forgot to lock the door.”

What Is a Source Map?

A source map is a JSON file that lets browsers and debugging tools translate minified production JavaScript back to the original source code.

If the build is configured with sourcesContent enabled, the map file embeds complete copies of the original source, often TypeScript, inside itself.

If served publicly in production, this may allow reconstruction of:

  • Internal file structure
  • Frontend logic
  • Permission models
  • Feature flags
  • API route references
  • Compliance workflow modules
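The reconstruction technique itself is mechanical. A version-3 source map is plain JSON, and when sourcesContent is present, recovering files is just pairing two parallel arrays. A minimal sketch, using a toy map rather than the exposed Persona assets:

```python
# Minimal sketch of recovering original files from a v3 source map.
# "sources" and "sourcesContent" are parallel arrays defined by the
# Source Map v3 format; this toy map stands in for the real assets.
import json

toy_source_map = json.dumps({
    "version": 3,
    "file": "dashboard.min.js",
    "sources": ["webpack://app/src/screening/watchlist.ts"],
    "sourcesContent": ["export const RESCREEN_INTERVAL_DAYS = 30;\n"],
    "mappings": "AAAA",
})


def reconstruct(source_map_json: str) -> dict[str, str]:
    """Pair each source path with its embedded original content."""
    source_map = json.loads(source_map_json)
    contents = source_map.get("sourcesContent") or []
    return {path: text
            for path, text in zip(source_map["sources"], contents)
            if text is not None}


files = reconstruct(toy_source_map)
print(list(files))  # ['webpack://app/src/screening/watchlist.ts']
```

This is why shipping source maps with sourcesContent to a public production endpoint is treated as an exposure: no exploit is required, only downloading JSON files and unzipping two arrays.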

According to the report, approximately 53MB of source maps enabled reconstruction of thousands of source files from a Persona government dashboard environment.

What This Does Not Automatically Mean

  • It does not automatically expose backend databases.
  • It does not automatically reveal encryption keys.
  • It does not prove that specific workflows are active for OpenAI users.

It does expose architectural detail that normally remains private.

Modules Allegedly Identified in the Codebase

The report cites modules relating to:

  • Suspicious Activity Report (SAR) workflows
  • Suspicious Transaction Report (STR) schemas
  • Watchlist and adverse media screening configuration
  • Politically Exposed Person facial similarity scoring
  • Biometric list retention controls
  • Hundreds of enumerated verification checks
  • Crypto address screening integrations
  • An OpenAI-powered operator assistant

The Legal and Technical Distinction

The presence of a module demonstrates platform capability.
It does not prove deployment for a specific customer such as OpenAI.

This distinction is critical.

Capability is not evidence of use.

What Can Be Independently Confirmed from Public Sources

Separate from the source map claims, several facts are publicly documented:

Persona Markets OpenAI as a Customer

Persona’s own OpenAI case study references large-scale screening and identity verification workflows.

Persona Publicly States FedRAMP Authorization

Persona has announced FedRAMP Authorized status at the Low impact level and FedRAMP Ready at Moderate.

OpenAI’s Privacy Policy References Vendor-Based ID Verification

OpenAI’s privacy disclosures state that identity and age verification may be conducted by vendors operating on its behalf, and that collected information may be disclosed to government authorities for the legal reasons the policy describes.

None of this proves the more serious allegations.

It does confirm the general shape of a vendor-based identity flow.

What Remains Unproven

Even if source maps were exposed, several high-stakes claims require external confirmation:

  • Whether OpenAI identity checks trigger SAR filings
  • Whether biometric data is shared beyond legally compelled processes
  • Whether the “watchlistdb” contains proprietary user watchlists versus standard sanctions screening
  • Whether ONYX naming overlap reflects operational integration

The researchers explicitly state they did not hack systems and cannot prove where data ultimately flows.

This is the difference between:

Evidence of capability
and
Evidence of deployment

That distinction matters.

Why This Matters for Normal Users

Identity systems do not have to be malicious to become dangerous.

When platforms normalize:

  • Biometric capture
  • Recurring re-screening
  • Opaque denials without explanation

Access to digital tools becomes conditional on risk engines users cannot see or audit.

Even lawful systems carry risk.

The blast radius of:

  • Configuration mistakes
  • Vendor leaks
  • False positives
  • Over-broad data sharing

can be significant, especially when biometric identifiers are involved.

Unlike passwords, faces cannot be rotated.

Explained Simply: What This Means for a High School Student

Imagine signing up for an AI tool and being asked to upload your ID and record a selfie.

The system checks:

  • If your ID is real
  • If your selfie matches your ID
  • If your name matches certain lists
  • If your face resembles certain public figures

The report claims that developer blueprints for part of this system were publicly accessible.

That does not automatically mean your data was sent to the government.

It means the system is more complex than most users realize, and that parts of its structure may have been visible when they should not have been.

The Bigger Trend: Identity-Based Internet Access

Regardless of where this specific case settles, one thing is clear:

  • Biometric verification is normalizing
  • Age and identity gates are expanding
  • Screening systems are becoming automated and recurring

Access to digital platforms increasingly depends on proving who you are.

That shift has long-term implications for anonymity, speech, and participation.

How to Protect Your Online Privacy in 2026

You cannot always avoid identity verification requirements.

You can reduce unnecessary exposure.

Consider:

  • Using unique passwords with a password manager
  • Enabling hardware-based two-factor authentication
  • Minimizing public exposure of personal identifiers
  • Being cautious with ID upload prompts and phishing attempts

Strengthen Your Privacy With BuycatVPN

BuycatVPN encrypts your internet traffic, protects you on public WiFi, and reduces passive metadata visibility from internet service providers and network intermediaries.

A VPN does not bypass identity verification.
It does not override platform screening.

What it does is protect your connection layer.

As identity-based access expands, protecting everything outside those checkpoints becomes more important.

Download BuycatVPN:
https://www.boycat.io/vpn

Take control of your digital privacy before it becomes an afterthought.

The Bottom Line

The Watchers investigation raises serious architectural questions about Persona’s infrastructure and its relationship to OpenAI identity verification.

Some claims require confirmation.
Others describe capabilities common in compliance platforms.

What is undeniable is the direction of the internet.

Identity-conditional access is expanding.
Biometric verification is normalizing.
Screening is increasingly automated.

Understanding that shift is step one.
Protecting yourself is step two.

Sources

Primary Investigation

  1. vmfunc, MDLcsgo, DziurwaF
    The Watchers: How OpenAI, the US Government, and Persona Built an Identity Surveillance System
    https://vmfunc.re/blog/persona
  2. @vmfunc (Research Updates and Correspondence)
    https://x.com/vmfunc

Public Documentation and Corporate Disclosures

  1. Persona OpenAI Case Study
    https://withpersona.com/customers/openai
  2. Persona FedRAMP Announcement
    https://withpersona.com/blog/personas-fedramp-status
  3. OpenAI Privacy Policy
    https://openai.com/policies/row-privacy-policy/
  4. OpenAI Organization Verification Documentation
    https://platform.openai.com/docs/guides/organization-verification

Government and Regulatory Context

  1. FedRAMP Marketplace
    https://marketplace.fedramp.gov
  2. FinCEN (Financial Crimes Enforcement Network) SAR Overview
    https://www.fincen.gov
  3. FINTRAC (Financial Transactions and Reports Analysis Centre of Canada)
    https://fintrac-canafe.canada.ca
  4. EFF Reporting on ICE and Fivecast ONYX
    https://www.eff.org/deeplinks/2026/01/ice-going-surveillance-shopping-spree

Technical Artifacts Referenced in Investigation

  1. Certificate Transparency Logs (crt.sh)
    https://crt.sh
  2. Shodan Infrastructure Search
    https://www.shodan.io
  3. Google Trust Services Certificate Authority
    https://pki.goog