The UK’s Online Safety Act is no longer theoretical. In the space of six months, three of the four largest gaming platforms — Microsoft Xbox, Sony PlayStation, and Valve’s Steam — have rolled out mandatory age verification for UK users. Nintendo has tightened parental controls on the Switch 2 to match. And Ofcom, the UK’s online safety regulator, has already issued over £2.3 million in fines across 90+ investigations, making it clear that enforcement is real.
For any company operating in the gaming space — from AAA publishers to indie studios with user-generated-content features — this isn’t a UK-only story. The Online Safety Act is setting the precedent that other jurisdictions will follow. Here’s the landscape, what each platform chose, and what you should be building toward.
What the UK Online Safety Act Requires from Gaming Platforms
The Online Safety Act 2023 (OSA) places legal obligations on platforms that host user-generated content or allow user interaction — and that includes nearly every modern game with online features. The requirements fall into three categories:
Illegal content duties require platforms to proactively minimise exposure to illegal content, including child sexual exploitation material (CSAM), terrorism, and fraud. These duties are already in force.
Child safety duties require platforms where children are likely to be users to implement age verification or age estimation, restrict children’s access to harmful content, and conduct risk assessments. Ofcom’s codes of practice for these duties became enforceable in early 2026.
Categorised service duties apply to larger platforms (Category 1 services) and include additional transparency, user empowerment, and identity verification requirements. The Category 1 register is expected mid-2026.
The practical upshot for gaming: if your platform has chat, voice comms, user content sharing, or a marketplace, you’re in scope. If children are a likely audience, you must implement age assurance — not just parental controls, but a mechanism that can reliably distinguish adults from minors.
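As a rough sketch, the scope test above can be expressed as a simple predicate over a platform's feature set. The feature names and function shapes here are our own illustrative shorthand, not Ofcom's taxonomy:

```python
# Illustrative sketch of the OSA scope test described above.
# Feature names are shorthand, not Ofcom's legal categories.

IN_SCOPE_FEATURES = {"chat", "voice", "ugc_sharing", "marketplace"}

def osa_in_scope(features: set[str]) -> bool:
    """A platform is in scope if it hosts user interaction or user content."""
    return bool(features & IN_SCOPE_FEATURES)

def needs_age_assurance(features: set[str], children_likely: bool) -> bool:
    """Age assurance is required when the platform is in scope
    and children are a likely audience."""
    return osa_in_scope(features) and children_likely
```

For example, a game with chat and leaderboards where children are a likely audience would need age assurance; a purely single-player title would not.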
How the Big Three Responded
Each of the major platforms chose a different age verification strategy. The divergence is instructive.
Xbox: Early Mover, Minimal Friction
Microsoft was the first console platform to act, rolling out age verification for UK Xbox users in late 2025. The approach:
- What triggers it: Users must verify their age to retain access to social features — voice chat, text messaging, game invites, and party systems.
- What doesn’t change: Game purchases, achievements, gameplay history, and single-player access remain unaffected.
- Verification methods: Users can verify via a credit card (Ofcom recognises UK credit cards as a “highly effective age assurance measure” since you must be 18+ to hold one) or by uploading a government-issued ID.
- One-time check: Verification is a one-time event; once confirmed, the status persists.
Microsoft’s approach is pragmatic: gate the features that create child safety risk (communication), leave core gaming untouched, and use the least-friction verification method Ofcom will accept.
PlayStation: Yoti Partnership, June 2026 Deadline
Sony has taken a different path, partnering with identity verification provider Yoti and setting a hard deadline: UK and Ireland users must verify by June 2026 or lose access to messaging, voice chat, party systems, and live broadcasting features (including streaming to YouTube and Twitch).
Sony offers three verification methods:
- Mobile number verification — The lowest-friction option, linking age to a registered phone number.
- Government-issued ID upload — Passport or driving licence.
- Facial age estimation — Yoti’s AI-powered selfie scan, which estimates the user’s age without storing biometric data.
The facial age estimation option is notable. Yoti’s technology estimates age from a single selfie, processes the image on-device or in a secure enclave, and deletes the image immediately after estimation. No biometric template is stored. This is the same approach Discord initially selected (and later delayed) for its global rollout.
Like Xbox, PlayStation’s verification is one-time, and core gameplay, purchases, and trophies remain accessible without verification.
Steam: Credit Card as Age Gate
Valve took the most minimalist approach. UK Steam users are now required to have a valid UK credit card stored on their account to access mature-rated content — games, community hubs, and discussion forums tagged as adult.
Valve’s reasoning is transparent: “Among all age assurance mechanisms reviewed by Valve, this process preserves the maximum degree of user privacy.” No biometric data, no ID upload, no third-party verification provider. Just a payment instrument that, by UK law, requires the holder to be 18+.
The trade-off is obvious: credit card verification enforces the 18+ boundary but can’t distinguish a 14-year-old from a 16-year-old, and it’s trivially bypassed by anyone with access to a parent’s card. Ofcom has accepted it as a “highly effective” measure for now, but whether this standard holds as enforcement matures remains to be seen.
Nintendo: Parental Controls, Not Direct Verification
Nintendo hasn’t implemented direct age verification in the Xbox/PlayStation sense. Instead, the Switch 2 requires:
- Users under 16 to have parental controls configured via the Nintendo Switch Parental Controls app before accessing GameChat.
- Phone number registration for Nintendo Accounts to use communication features.
- Parent or guardian approval for child accounts to access chat features, including per-session approval for video chat.
This approach relies on the parent-child account structure rather than verifying the player’s age directly — a model that works when parents are engaged but falls short for unsupervised minors creating accounts independently.
The Enforcement Reality
This isn’t just platforms being cautious. Ofcom is actively enforcing. As of March 2026:
- 90+ investigations have been launched into platform compliance.
- Six fines have been issued, including £800,000 against Kick (the livestreaming platform) and £520,000 against 4chan.
- £1 million was levied against an adult content operator.
The fines are still small relative to the maximum penalties the OSA allows (up to £18 million or 10% of global revenue, whichever is greater), but the pace and breadth of investigations signal that Ofcom is building toward larger enforcement actions as the Category 1 register and additional codes of practice take effect.
For gaming companies specifically, Ofcom published guidance in early 2026 making clear that games with online social features are in scope and should be conducting regular OSA risk assessments, especially where children are a likely audience.
What This Means Beyond the UK
The UK is the first major market to force console and PC gaming platforms into mandatory age verification, but it won’t be the last. The regulatory direction is clear:
EU Digital Services Act (DSA): Article 28 already requires online platforms accessible to minors to ensure a high level of privacy, safety, and security for them, and Articles 34 and 35 require very large online platforms (VLOPs) to assess and mitigate systemic risks to minors, including through age verification. The EU’s own age verification app, while technically troubled at launch, signals the Commission’s intent to build centralised identity infrastructure.
Australia’s Online Safety Act: Already in force, with enforcement extending to gaming platforms that host user interactions. Australia’s eSafety Commissioner has shown willingness to pursue international platforms.
US state laws: Over 20 US states have passed or are advancing age verification mandates covering platforms where minors interact. While enforcement varies, the patchwork is converging toward federal action.
Turkey: Just this week (April 23, 2026), the Turkish parliament passed a bill banning social media use for children under 15 and requiring e-Government identity verification — gaming platforms with social features will likely fall under enforcement scope.
The pattern is consistent: communication and social features in games trigger the same regulatory obligations as social media. If your game has chat, voice, UGC, or marketplace features, you’re a regulated platform in an increasing number of jurisdictions.
Implementation Lessons for Game Developers and Publishers
Based on how Xbox, PlayStation, and Steam have navigated the OSA, here’s what to build toward:
1. Gate Social Features, Not Gameplay
Xbox and PlayStation followed the same playbook: age verification gates communication and social interaction, not core gameplay. (Steam gates mature-rated content instead, but likewise leaves general gameplay and purchases untouched.) This is both a compliance strategy and a UX decision: it minimises friction for the majority of users while addressing the specific risk vector (minor-to-adult communication) that regulators care about.
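The gating pattern can be sketched in a few lines. The feature lists and the `User` shape below are illustrative assumptions, not any platform's actual model:

```python
# Sketch of the "gate social, not gameplay" pattern described above.
# Feature names and the User type are illustrative assumptions.
from dataclasses import dataclass

SOCIAL_FEATURES = {"voice_chat", "text_chat", "party", "broadcast"}
CORE_FEATURES = {"singleplayer", "purchases", "achievements"}

@dataclass
class User:
    age_verified: bool

def can_access(user: User, feature: str) -> bool:
    if feature in CORE_FEATURES:
        return True                # core gameplay is never gated
    if feature in SOCIAL_FEATURES:
        return user.age_verified   # communication requires verification
    return True                    # everything else defaults to open
```

The design choice worth noting: the default is open, and only the explicitly risky features check verification status, which keeps unverified adults playing rather than bouncing.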
2. Offer Multiple Verification Methods
Sony’s three-method approach (phone, ID, facial estimation) is the gold standard for conversion optimisation. Different users have different comfort levels. A single verification method — especially document upload — will produce abandonment rates of 10-20%. Offering a low-friction alternative (phone number, credit card) alongside high-assurance options (ID, biometrics) lets users self-select.
3. Make It One-Time
Both Xbox and PlayStation implemented verification as a one-time event. This is critical. Repeated verification creates ongoing friction that drives users away from legitimate platforms and toward unregulated alternatives — exactly the opposite of what regulators want.
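A minimal sketch of that one-time behaviour, assuming a simple user-keyed store (the in-memory dict stands in for a database, and the provider hook is illustrative):

```python
# Sketch of one-time verification: persist the outcome so the user
# is never re-prompted. The store and provider hook are illustrative.

_verified: dict[str, bool] = {}  # user_id -> verified (stand-in for a DB)

def is_verified(user_id: str) -> bool:
    return _verified.get(user_id, False)

def verify_once(user_id: str, provider_check) -> bool:
    """Run the (potentially high-friction) provider check only if we
    have no stored result for this user."""
    if is_verified(user_id):
        return True
    result = provider_check(user_id)  # e.g. ID upload, facial estimation
    if result:
        _verified[user_id] = True
    return result
```

On the second call for the same user, the stored status short-circuits the flow and the provider is never contacted again.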
4. Plan for Multi-Jurisdictional Compliance
The UK’s OSA is the strictest gaming-specific mandate today, but it’s not the only one. Building an age verification integration that can swap verification providers, adjust age thresholds (18 in the UK, 16 in some EU contexts, 13 for COPPA in the US), and support both age estimation and document verification will save expensive re-engineering when the next jurisdiction comes online.
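One way to keep that flexibility is to hold thresholds and accepted methods in data rather than code. The values below are illustrative and mirror the examples above; real deployments need per-market legal review:

```python
# Sketch of jurisdiction-driven configuration: thresholds and accepted
# methods live in data, not code. Values are illustrative only.

AGE_POLICY = {
    "GB": {"threshold": 18, "methods": ["credit_card", "doc", "face_estimate"]},
    "DE": {"threshold": 16, "methods": ["doc", "face_estimate"]},
    "US": {"threshold": 13, "methods": ["doc"]},  # COPPA consent flow differs
}

def policy_for(country: str) -> dict:
    # Fall back to the strictest common policy for unknown markets.
    return AGE_POLICY.get(country, {"threshold": 18, "methods": ["doc"]})
```

When the next jurisdiction comes online, compliance becomes a dictionary entry rather than a code change.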
5. Use an API-Based Verification Provider
Integrating directly with Yoti, or building a bespoke verification flow, creates vendor lock-in and maintenance burden. An API-first age verification provider like Xident lets you implement once and support multiple verification methods — facial age estimation, document verification, NFC chip reading, and reusable age tokens — across any jurisdiction. When a new market adds age verification requirements, you update configuration, not code.
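The abstraction can be sketched as a narrow interface that call sites depend on, with vendor adapters behind it. The interface, class names, and signatures here are hypothetical, not Xident's or Yoti's actual API:

```python
# Sketch of a provider-agnostic verification interface. The names and
# signatures are hypothetical, not any vendor's real API.
from typing import Protocol

class AgeVerifier(Protocol):
    def verify(self, user_id: str, threshold: int) -> bool: ...

class StubProvider:
    """Stand-in provider; a real adapter would call a vendor's API."""
    def __init__(self, known_ages: dict[str, int]):
        self.known_ages = known_ages

    def verify(self, user_id: str, threshold: int) -> bool:
        return self.known_ages.get(user_id, 0) >= threshold

def gate_social_access(verifier: AgeVerifier, user_id: str, threshold: int) -> bool:
    # Swapping markets or vendors means swapping the verifier instance,
    # not rewriting call sites.
    return verifier.verify(user_id, threshold)
```

Because `gate_social_access` depends only on the `AgeVerifier` protocol, changing providers or adding a second method is an adapter swap, not a re-engineering project.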
How Xident Fits
Xident is purpose-built for exactly this scenario: platforms that need to verify age across multiple jurisdictions, using multiple methods, without storing personal data they don’t need.
For gaming platforms specifically:
- Age threshold classification — Verify against 12+, 15+, 16+, 18+, or 21+ thresholds depending on jurisdiction and content rating, all through a single API call.
- On-device liveness detection — Prevent spoofing without requiring users to leave the game client.
- Reusable age tokens — Verify once, prove everywhere. A player who verified on your platform can carry that verified status to your partner services without re-verifying.
- No PII storage — Xident’s architecture returns a yes/no age threshold result and a verification token. We don’t store facial images, document scans, or personal data on your behalf — which simplifies your GDPR, UK DPA, and data minimisation obligations.
- Multi-method support — Facial age estimation, document OCR, NFC chip verification, and credit card checks, all through a single integration.
The UK Online Safety Act is the first domino. The platforms that build flexible, privacy-preserving age verification now will have a structural advantage as the EU, Australia, the US, and Turkey follow. The ones that bolt on minimum-viable compliance for each jurisdiction individually will spend the next five years in perpetual re-engineering.
If you’re building or operating a gaming platform with online social features, talk to us about integration. The deadline isn’t theoretical anymore.