What the Online Safety Act does 

The Online Safety Act 2023 (the Act) is a new set of laws that protects children and adults online. It puts a range of new duties on social media companies and search services, making them more responsible for their users’ safety on their platforms. The Act will give providers new duties to implement systems and processes to reduce the risk that their services are used for illegal activity, and to take down illegal content when it does appear. 

The strongest protections in the Act have been designed for children. Platforms will be required to prevent children from accessing harmful and age-inappropriate content and provide parents and children with clear and accessible ways to report problems online when they do arise. 

The Act will also protect adult users, ensuring that major platforms will need to be more transparent about which kinds of potentially harmful content they allow, and give people more control over the types of content they want to see. 

Ofcom is the independent regulator of online safety. It will set out, in codes of practice, the steps providers can take to fulfil their safety duties. It has a broad range of powers to assess and enforce providers’ compliance with the framework. 

Providers’ safety duties are proportionate to factors including the risk of harm to individuals, and the size and capacity of each provider. This makes sure that while safety measures will need to be put in place across the board, we aren’t requiring small services with limited functionality to take the same actions as the largest corporations. Ofcom is required to take users’ rights into account when setting out the steps providers can take, and providers themselves have duties to pay particular regard to users’ rights when fulfilling their safety duties.

Who the Act applies to 

The Act’s duties apply to search services and services that allow users to post content online or to interact with each other. This includes a range of websites, apps and other services, including social media services, consumer cloud storage and file-sharing sites, video-sharing platforms, online forums, dating services, and online instant messaging services. 

The Act applies to services even if the companies providing them are based outside the UK, provided they have links to the UK. A service has links to the UK if it has a significant number of UK users, if the UK is a target market, or if it is capable of being accessed by UK users and there is a material risk of significant harm to such users.

How the Online Safety Act is being implemented

The Act passed into law on 26 October 2023. Now work is being carried out to bring its protections into effect as quickly as possible. Ofcom published an updated roadmap setting out its implementation plans on 17 October 2024. 

Ofcom is leading work to implement the Act’s provisions and is taking a phased approach to bringing duties into effect. The government also needs to make secondary legislation in some areas to enable elements of the framework. 

The Act requires Ofcom to develop guidance and codes of practice that will set out how online platforms can meet their duties. Ofcom must carry out public consultations on draft codes of practice before finalising them, and the codes must be laid before Parliament before they take effect. The main phases are set out below:

Duties about illegal content – on 16 December 2024 Ofcom published its policy statement about protecting people from illegal harms online, alongside draft codes of practice which were laid in Parliament on the same day. Ofcom also published its illegal content risk assessment guidance, meaning that in-scope service providers will have three months to assess the risks of illegal content appearing on their service. We expect the illegal content duties to be in effect from early 2025, and Ofcom will then be able to start enforcing the regime.

Duties about content harmful to children – Ofcom has published draft guidance about use of age assurance to prevent children accessing online pornography. The consultation on this closed on 5 March 2024 and we expect the final guidance to be published in January 2025. The corresponding duty in the Act (section 81) is scheduled to come into force on 17 January 2025.

Ofcom has also published draft codes of practice and guidance about protecting children from harmful content such as promotion of self-harm or suicide. The consultation on these closed on 17 July 2024. Platforms will have to risk assess for harms to children from Spring 2025 and the child safety regime will be fully in effect by Summer 2025.

Duties for categorised services – some platforms will have to comply with additional requirements to protect users. The Act created categories of service (Category 1, 2A and 2B); the thresholds for each category will be defined through secondary legislation.

Once the regulations to set the thresholds have been laid and approved by Parliament, Ofcom will publish a register setting out which services fall into which categories and will publish further codes of practice for consultation.

New offences introduced by the Act

The criminal offences introduced by the Act came into effect on 31 January 2024. These offences cover: 

  • encouraging or assisting serious self-harm
  • cyberflashing
  • sending false information intended to cause non-trivial harm
  • threatening communications
  • intimate image abuse
  • epilepsy trolling

These new offences apply directly to the individuals who send such content, and convictions have already been secured under the cyberflashing and threatening communications offences. 

Types of content that the Act tackles

Illegal content

The Act requires all companies to take robust action against illegal content and activity. Platforms will be required to implement measures to reduce the risk that their services are used for illegal activity. They will also need to put in place systems for removing illegal content when it does appear. Search services will also have new duties to take steps to reduce the risk that users encounter illegal content via their services.

The Act sets out a list of priority offences. These reflect the most serious and prevalent illegal content and activity, against which companies must take proactive measures.

Platforms must also remove any other illegal content where there is an individual victim (actual or intended), whether it is flagged to them by users or they become aware of it through any other means.

The illegal content duties are not just about removing existing illegal content; they are also about stopping it from appearing at all. Platforms need to think about how they design their sites to reduce the likelihood of them being used for criminal activity in the first place.

The kinds of illegal content and activity that platforms need to protect users from are set out in the Act, and this includes content relating to:

  • child sexual abuse
  • controlling or coercive behaviour
  • extreme sexual violence
  • extreme pornography
  • fraud
  • racially or religiously aggravated public order offences
  • inciting violence
  • illegal immigration and people smuggling
  • promoting or facilitating suicide
  • intimate image abuse
  • selling illegal drugs or weapons
  • sexual exploitation
  • terrorism

Content that is harmful to children 

Protecting children is at the heart of the Online Safety Act. Some content, although not illegal, can be harmful or age-inappropriate for children, and platforms need to protect children from it. 

Companies with websites that are likely to be accessed by children need to take steps to protect children from harmful content and behaviour.

The categories of harmful content that platforms need to protect children from encountering are set out in the Act. Children must be prevented from encountering Primary Priority Content, and must be protected from Priority Content in an age-appropriate way. The types of content which fall into these categories are set out below.

Primary Priority Content

  • pornography
  • content that encourages, promotes or provides instructions for:
    • self-harm
    • eating disorders
    • suicide

Priority Content

  • bullying
  • abusive or hateful content
  • content which depicts or encourages serious violence or injury
  • content which encourages dangerous stunts and challenges
  • content which encourages the ingestion or inhalation of, or exposure to, harmful substances

Age-appropriate experiences for children online

The Act requires social media companies to enforce their age limits consistently and protect their child users. 

Services must assess any risks to children from using their platforms and set appropriate age restrictions, ensuring that child users have age-appropriate experiences and are shielded from harmful content. Websites with age restrictions need to specify in their terms of service what measures they use to prevent underage access and apply these terms consistently. 

Different technologies can be used to check people’s ages online. These are called age assurance technologies.

The new laws mean social media companies will have to say what technology they are using, if any, and apply these measures consistently. Companies can no longer say their service is for users above a certain age in their terms of service and do nothing to prevent younger children accessing it.

Adults will have more control over the content they see

User-to-user online platforms over a designated threshold, known as Category 1 services, will be required to offer adult users tools to give them greater control over the kinds of content they see and who they engage with online.

Adult users of such services will be able to verify their identity and access tools which enable them to reduce the likelihood that they see content from non-verified users and prevent non-verified users from interacting with their content. This will help stop anonymous trolls from contacting them.

Following the publication of guidance by Ofcom, Category 1 services will also need to proactively offer adult users optional tools to help them reduce the likelihood that they will encounter certain types of legal content. These categories of content are set out in the Act and include content that does not meet a criminal threshold but encourages, promotes or provides instructions for suicide, self-harm or eating disorders. The tools will also apply to abusive or hateful content, including content that is racist, antisemitic, homophobic or misogynistic. The tools must be effective and easy to access.

The Act already protects children from seeing this content.

The Act will tackle suicide and self-harm content 

Any site that allows users to share content or interact with each other is in scope of the Online Safety Act. These laws also require sites to rapidly remove illegal suicide and self-harm content and proactively protect users from content that is illegal under the Suicide Act 1961. The Act has also introduced a new criminal offence for encouraging or assisting serious self-harm.

Services that are likely to be accessed by children must prevent children of all ages from encountering legal content that encourages, promotes or provides instructions for suicide and self-harm. 

The Act also requires major services (Category 1 services) to uphold their terms of service where they say they will remove or restrict content or suspend users. If a service says it prohibits certain kinds of suicide or self-harm content, the Act requires it to enforce these terms consistently and transparently. These companies must also have effective reporting and redress mechanisms in place, enabling users to raise concerns if they feel a company is not enforcing its terms of service.

How the Act will be enforced

Ofcom is now the regulator of online safety and must make sure that platforms are protecting their users. Once the new duties are in effect, following Ofcom’s publication of final codes and guidance, platforms will have to show they have processes in place to meet the requirements set out by the Act. Ofcom will monitor how effective those processes are at protecting internet users from harm. Ofcom will have powers to take action against companies which do not follow their new duties.

Companies can be fined up to £18 million or 10 percent of their qualifying worldwide revenue, whichever is greater. Criminal action can be taken against senior managers who fail to ensure that their companies comply with information requests from Ofcom. Ofcom will also be able to hold companies and senior managers (where they are at fault) criminally liable if a provider fails to comply with Ofcom’s enforcement notices in relation to specific child safety duties or to child sexual abuse and exploitation on their service.
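
To illustrate how that cap works in practice, the minimal sketch below computes the greater of the two figures. The revenue amount used is a hypothetical example, not a figure taken from the Act or from Ofcom.

```python
# A minimal illustrative sketch (not from the Act or Ofcom): the maximum penalty
# is the greater of GBP 18 million and 10% of qualifying worldwide revenue.

def maximum_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    """Return the upper limit of a fine: the greater of £18m or 10% of revenue."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue_gbp)

# Hypothetical example: a provider with £1 billion qualifying worldwide revenue
# could face a fine of up to £100 million.
print(maximum_fine(1_000_000_000))  # 100000000.0
```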

In the most extreme cases, with the agreement of the courts, Ofcom will be able to require payment providers, advertisers and internet service providers to stop working with a site, preventing it from generating money or being accessed from the UK.

How the Act affects companies that are not based in the UK 

The Act gives Ofcom the powers it needs to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users, or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.

How the Act tackles harmful algorithms

The Act requires providers to specifically consider how algorithms could impact users’ exposure to illegal content – and children’s exposure to content that is harmful to them – as part of their risk assessments.

Providers will then need to take steps to mitigate and effectively manage any identified risks. This includes considering their platform’s design, functionalities, algorithms and any other features, in order to meet the illegal content and child safety duties.

The law also makes it clear that harm can arise from the way content is disseminated, such as when an algorithm repeatedly pushes content to a child in large volumes over a short space of time.
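
By way of illustration only, the sketch below shows one way a service might monitor that kind of dissemination pattern internally, by counting how often a category of content is pushed to a child’s account within a short window. The class name, threshold and window are assumptions made for the sketch; neither the Act nor Ofcom’s codes of practice prescribes this implementation.

```python
from collections import deque
from time import time

# Hypothetical sketch: flag when a recommender has pushed the same category of
# content to a child account more times than `max_pushes` within `window_seconds`.
# Thresholds and names are illustrative assumptions, not values from the Act.

class DisseminationMonitor:
    def __init__(self, max_pushes: int = 5, window_seconds: int = 3600):
        self.max_pushes = max_pushes          # pushes allowed per window
        self.window_seconds = window_seconds  # length of the monitoring window
        self._pushes: dict[tuple[str, str], deque] = {}

    def record_push(self, child_user_id: str, content_category: str) -> bool:
        """Record a recommendation push; return True if recent volume looks excessive."""
        key = (child_user_id, content_category)
        now = time()
        timestamps = self._pushes.setdefault(key, deque())
        timestamps.append(now)
        # Discard pushes that fall outside the monitoring window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        return len(timestamps) > self.max_pushes
```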

Some platforms will be required to publish annual transparency reports containing online safety-related information, such as information about the algorithms they use and their effect on users’ experience, including the experience of children.

How the Act protects women and girls

The most harmful illegal online content disproportionately affects women and girls, and the Act requires platforms to proactively tackle this. Illegal content includes harassment, stalking, controlling or coercive behaviour, extreme pornography, and revenge pornography.

All user-to-user and search services have duties to put in place systems and processes to remove this content when it is flagged to them. The measures companies must take to remove illegal content will be set out in Ofcom’s codes of practice.

When developing these codes, Ofcom is required to consult the Victims’ Commissioner and the Domestic Abuse Commissioner to ensure that the voices and views of women, girls and victims are reflected. 

The Act also requires Ofcom to produce guidance that summarises in one clear place the measures that can be taken to tackle the abuse that women and girls disproportionately face online. This guidance will ensure it is easy for platforms to implement holistic and effective protections for women and girls across their various duties. We expect Ofcom’s draft guidance to be published in February 2025. 

How the Act tackles misinformation and disinformation

The Online Safety Act takes a proportionate approach to mis- and disinformation by focusing on addressing the greatest risks of harm to users, whilst protecting freedom of expression.

Mis- and disinformation will be captured by the Online Safety Act where it is illegal or harmful to children. Services will be required to take steps to remove illegal disinformation if they become aware of it on their services. This includes illegal, state-sponsored disinformation covered by the Foreign Interference Offence, meaning companies must take action against a range of state-sponsored disinformation and state-linked interference online. Companies must also assess whether their service is likely to be accessed by children and, if so, deliver additional protections for them. This includes protections against in-scope mis- and disinformation.

Category 1 services will also need to remove certain types of mis- and disinformation if they are prohibited in their terms of service.

Independent Review of Pornography Regulation, Legislation and Enforcement

The past two decades have seen a dramatic change in the way we consume media and interact with content online. We need to ensure pornography regulation and legislation reflects this change.  

Separate to the Online Safety Act, the Independent Pornography Review was announced to assess the regulation, legislation and enforcement of online and offline pornographic content.

It investigates how exploitation and abuse are tackled in the industry and examines the potentially harmful impact of pornography. The review will help ensure the laws and regulations governing a dramatically changed pornography industry are once again fit for purpose.

The review has been led by the Independent Lead Reviewer, Baroness Gabby Bertin, since December 2023, and will conclude and present its recommendations to government by the end of 2024.
