Great question. It gets into a grey area, but here's the clearest breakdown under the UK Online Safety Act 2023.
The scenario: a user on your site clicks a safe embedded YouTube video, then keeps browsing YouTube and eventually encounters harmful content that was never embedded, linked, or curated by you. The key facts:
- The embedded video itself is appropriate.
- You are not recommending harmful content or using algorithms/plugins that do.
- The user’s journey into harmful content happens after leaving your site’s control, even if it started with an embedded video.
- You don’t have editorial control over what YouTube recommends next.
The Online Safety Act targets user-to-user services (platforms that host user-generated content) that fail to mitigate risks on their own platform. YouTube's recommendation system and its content fall under YouTube's own responsibility (and its obligations under the OSA, since it is a regulated service).
What Would Change That:
You're more exposed if:
- Your site intentionally embeds content to bait users into rabbit holes of harmful stuff.
- You're embedding harmful channels, playlists, or videos as a pattern.
- You’re monetising or algorithmically curating these user journeys toward harmful content.
Best Practice:
Even though you're unlikely to be liable in that scenario, it's good to:
- Have clear terms stating you don’t control external content (YouTube, etc.).
- Allow users to report links or embeds if something becomes inappropriate later.
- Avoid embedding videos from dodgy creators, even if the individual video looks safe.
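On the technical side, YouTube's own embed options can reduce (though not eliminate) the rabbit-hole risk. Here's a minimal sketch in TypeScript, assuming a standard browser environment; `createYouTubeEmbed` and the placeholder video ID are illustrative, while `youtube-nocookie.com` (privacy-enhanced mode) and the `rel=0` player parameter are real YouTube features, with the caveat that since 2018 `rel=0` only restricts related videos to the uploader's channel rather than hiding them:

```typescript
// Sketch: build a privacy-enhanced YouTube embed element.
// Assumes a browser DOM; `videoId` is an 11-character YouTube video ID.
function createYouTubeEmbed(videoId: string): HTMLIFrameElement {
  const iframe = document.createElement("iframe");
  // rel=0 limits related videos to the same channel (it no longer hides
  // them entirely); youtube-nocookie.com avoids setting cookies until
  // the user actually plays the video.
  iframe.src = `https://www.youtube-nocookie.com/embed/${encodeURIComponent(videoId)}?rel=0`;
  iframe.width = "560";
  iframe.height = "315";
  iframe.title = "Embedded YouTube video";
  iframe.allowFullscreen = true;
  return iframe;
}

// Usage (hypothetical ID): document.body.appendChild(createYouTubeEmbed("VIDEO_ID_HERE"));
```

None of this controls what YouTube recommends once the user clicks through to youtube.com, which is the point above: that journey happens on YouTube's regulated service, not yours.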