AI update for 2026

Adapting to the evolving AI regulatory landscape

As AI continues to transform the business landscape, staying ahead of legal developments in this space has never been more critical. Global regulation poses a particular challenge, with jurisdictions adopting divergent approaches amid fierce international competition and concerns that excessive regulation could hinder innovation. At the same time, the way AI systems are trained and operate, and the way the market itself functions, create unique legal issues, prompting new legislation, guidance and case law.

In this update, we will help you navigate this evolving web of digital regulation by examining key developments across:

  • AI-specific regulation
  • Intellectual property
  • Data privacy
  • AI litigation
  • Competition law

AI-specific regulation

Regulators worldwide share concerns about AI risks. However, their approaches to regulation vary significantly, reflecting today’s complex geopolitical landscape. At one end of the spectrum, the US pursues a strongly pro-innovation, light-touch stance at federal level (despite some states passing new AI laws). At the other, the EU has a comprehensive AI legislative package, although its implementation is being refined through the current drive to simplify its digital rules. Countries like the UK arguably sit somewhere in between.

Diving a little deeper - in the EU, the AI Act has been in force since August 2024, with staged implementation over more than two years. The current focus on the EU’s competitiveness and the publication of its digital omnibus mean that some of the rules around high-risk AI that were due to apply from this summer are being slightly delayed. Their entry into application is also being linked (in part) to the availability of tools (including the necessary standards) to help organisations comply. Other proposed changes to the Act include extending some of the exemptions granted to SMEs, widening the availability of sandboxes, and reinforcing the AI Office’s powers to oversee AI systems built on General Purpose AI (GPAI) models.

Meanwhile, the UK has maintained its sector-specific approach to AI regulation. The UK government has discussed introducing an AI Bill, although its scope and timing remain unclear. It is expected to be broader than originally planned, covering AI safety and possibly also IP, but is not expected to replicate the EU model.

Intellectual property

2025 was another busy year for AI and IP. This trend will continue into 2026, with much of the focus (again) on copyright.

While progress in the UK will no doubt continue to take time, we can expect some development this year. The UK government is due to publish two AI and copyright-focussed reports by 18 March 2026 under the Data (Use and Access) Act 2025. The outcome of the UK consultation on copyright and AI is also expected later this year, with the UK government due to outline its plans on: (i) balancing the rights of AI developers and rights holders for AI-training purposes and (ii) UK copyright protection for AI-generated outputs. Further guidance may cover treatment of AI models trained abroad (particularly relevant in light of Getty Images v Stability AI), infringement and liability relating to AI-generated outputs, and whether individuals have sufficient control over use of their likeness.

On the disputes front, the UK Court of Appeal is expected to hear Getty’s appeal on secondary copyright infringement in its dispute with generative AI provider Stability AI.

Developments are also expected in the EU, with the European Commission currently consulting on protocols for reserving rights from text and data mining, the Court of Justice of the European Union expected to hand down its first decision in this space (in Like Company v Google) in late 2026 or early 2027, and further copyright and AI-related decisions expected in Germany and France.

Data privacy

AI remains a major focus for data privacy regulators and legislators as they seek to balance promoting innovation with protecting individuals. For example, provisions in the UK’s Data (Use and Access) Act 2025 will relax the data protection rules for AI, likely from January, particularly around automated decision-making (ADM), while maintaining important guardrails for the riskiest use cases.

Regulators on both sides of the Channel are developing guidance to support AI uptake:

  • The UK’s Information Commissioner’s Office (ICO) has promised updated guidance on ADM this winter, with a new AI code of practice to follow, designed to give organisations clear and certain guidance.
  • The ICO is collaborating closely with other UK regulators, including via the Digital Regulation Cooperation Forum, to provide organisations with welcome regulatory consistency around AI, particularly in financial services.
  • The European Data Protection Board is developing guidance to help organisations navigate the interaction between the EU General Data Protection Regulation and the EU AI Act.

While the importance of incentivising innovation is front of mind, UK and EU data protection authorities are also stepping up their AI enforcement activity. They are focusing on both developers and corporate deployers of AI solutions where tools or models pose real privacy risks to individuals, which may mean further fines, potentially of higher value, in 2026.

AI litigation

With AI becoming ever more widespread, the risk of litigation when it goes wrong continues to grow.

The opacity of AI models, the potential for AI to produce inaccurate outputs (or “hallucinations”), and the ability for AI to replicate errors quickly at scale all create fertile ground for substantial claims against developers and the businesses deploying these technologies.

Regulators are also keeping a keen eye on so-called “AI washing” - the practice of making false or exaggerated claims about the use of AI - with, for example, the US Federal Trade Commission having underscored its focus on “ensuring the promise of new technology isn’t misused as a means to mislead consumers”. Closer to home, the UK Financial Conduct Authority (FCA) is keen to ensure the “safe and responsible use of AI in UK financial markets.” Adverse regulatory findings may also serve as a catalyst for follow-on civil claims.

Fundamental questions of legal liability also remain unresolved: should responsibility for AI-driven errors rest with the developer, the deploying organisation, or even the AI model itself? How can one prove the underlying cause and mechanism of a “hallucination”? As AI-related claims reach the courts, judges will inevitably be required to address these types of questions, with the answers potentially providing some clarity on where the risks inherent in AI deployment ultimately lie.

Competition law

Competition authorities around the world are keeping a close eye on AI markets, recognising both the innovation potential and the risk of entrenched market positions.

They are continuing to monitor AI partnerships under merger control and antitrust rules, having tested the boundaries of their jurisdiction to review such transactions over the last couple of years. The UK Competition and Markets Authority (CMA), for example, has used its flexible jurisdictional thresholds to review non-traditional transaction structures such as acqui-hires, commercial partnerships and non-controlling minority acquisitions.

On the antitrust front, authorities are moving beyond theoretical discussion of algorithmic pricing concerns to bring real enforcement cases – algorithmic collusion is at the centre of the RealPage litigation in the US, and the European Commission has indicated that it has several algorithmic pricing investigations underway. Classic forms of unilateral conduct, such as self-preferencing, price discrimination, predation or tying also remain on the radar - the European Commission has recently announced new probes into whether Google and Meta are favouring their own AI services.

Looking ahead, we can expect 2026 to bring some clarity on whether and how new digital markets regimes might be deployed to maintain contestability in AI markets. While the UK regime is already sufficiently flexible to include AI products and services in its scope if and when the time is right, the position under the Digital Markets Act (DMA) is less clear. With the AI sector a focus of the Commission’s current review of the DMA, we can expect some further clarity when that review wraps up in March.

Adapting to an AI age

As AI continues to reshape industries and challenge established legal frameworks, organisations must adopt practical AI governance frameworks that fit within their risk appetite, manage the specific risks linked to their particular AI use cases, and are agile enough to adapt to a changing regulatory and technological landscape.

This material is provided for general information only. It does not constitute legal or other professional advice.