How artificial intelligence is regulated and litigated, and how it is reshaping the practice of law itself.
Law firms are deploying AI tools across their operations. Contract review platforms use natural language processing to extract and compare clauses across thousands of documents in a fraction of the time it would take a human team. Legal research tools powered by large language models can summarise case law, identify relevant precedents, and draft first-pass memoranda. Due diligence is being partly automated, with AI flagging change-of-control provisions, unusual liability clauses, and missing documents in data rooms. The profession is moving from scepticism to strategic adoption, but the critical question remains: how do you supervise AI output effectively, and where does professional liability sit when the machine gets it wrong?
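To make the mechanics concrete, here is a minimal sketch of the kind of clause-flagging pass such platforms run, assuming the OpenAI Python SDK purely as the model interface; the model name, prompt, and flag_clauses helper are hypothetical illustrations, not a description of any vendor's actual product.

```python
# Minimal sketch of an LLM-assisted clause-flagging pass (illustrative only).
# Assumes the OpenAI Python SDK (pip install openai) with an API key set in
# the OPENAI_API_KEY environment variable; model name and prompt are
# placeholders, not any vendor's actual pipeline.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are reviewing a contract extract for due diligence. Quote verbatim "
    "any change-of-control or unusual liability clause, or reply NONE.\n\n"
    "Extract:\n{text}"
)

def flag_clauses(extract: str) -> str:
    """Ask the model to flag notable clauses in one document extract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(text=extract)}],
        temperature=0,  # keep output as repeatable as possible for review work
    )
    return response.choices[0].message.content

# Everything the model flags still goes to a human reviewer; the tool
# surfaces candidates, it does not sign off on them.
```

The design point is in the closing comment: the value of these tools is triage at scale, and the supervision question above is about what happens after the model has flagged something.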
The EU AI Act — which entered into force in August 2024 with phased implementation through 2027 — is the world's first comprehensive AI regulation. It classifies AI systems by risk level: unacceptable risk (banned outright, e.g., social scoring), high risk (subject to conformity assessments, human oversight, and transparency obligations), limited risk (transparency duties, such as telling users they are interacting with an AI system), and minimal risk (largely unregulated). The UK has taken a deliberately different path, adopting a pro-innovation, principles-based framework that empowers existing sectoral regulators (FCA, Ofcom, CMA, ICO) to apply AI-specific guidance within their domains rather than creating a single AI regulator. For firms operating in both the UK and EU, navigating these divergent approaches is a growing compliance challenge.
AI raises fundamental questions about intellectual property. Can an AI system be an inventor for patent purposes? The UK Supreme Court said no in Thaler v Comptroller-General (2023), holding that only natural persons can be inventors under the Patents Act 1977. Who owns the copyright in AI-generated works? Under section 9(3) of the Copyright, Designs and Patents Act 1988, copyright in a computer-generated work belongs to the person who made the arrangements necessary for its creation — but this provision was drafted long before generative AI and its application to modern systems is deeply uncertain. Meanwhile, the use of copyrighted material as training data for AI models is the subject of intense debate, proposed legislation, and litigation on both sides of the Atlantic.
When an AI system causes harm — a flawed medical diagnosis, a discriminatory hiring decision, a self-driving car accident — determining legal liability is complex. Traditional negligence principles require identifying a duty of care and a breach, but where does fault lie when the decision-making process is opaque? Product liability under the Consumer Protection Act 1987 may apply if AI is treated as a product, but its application to software and AI-as-a-service models is unsettled. The EU's proposed AI Liability Directive would introduce a presumption of causation where a defendant has failed to comply with AI-specific regulations, easing the burden on claimants. The UK has not signalled plans for equivalent legislation, leaving liability to develop through existing common law principles and case-by-case judicial interpretation.
The financial sector is one of the most intensive users of AI, from algorithmic trading systems executing thousands of trades per second to credit scoring models that determine who can borrow. The FCA and PRA are focused on model risk management — ensuring firms understand, validate, and can explain the AI models they rely on. Robo-advisers offering automated investment recommendations must comply with the same suitability and disclosure requirements as human advisers. The emergence of AI-driven fraud — deepfake audio for CEO impersonation, synthetic identity creation — is prompting regulators to consider whether existing financial crime frameworks are adequate for the AI age.
Generative AI — systems like large language models that create text, code, images, and audio — has moved from novelty to enterprise deployment in under three years. Law firms are building bespoke tools on top of foundation models, and the question has shifted from "should we use AI?" to "how do we govern it?" The proliferation of AI governance frameworks — internal policies covering procurement, use, data handling, and human oversight of AI tools — is creating a new area of advisory work. Deepfakes and AI-generated misinformation pose challenges for evidence law, defamation, and electoral regulation. Meanwhile, competition regulators globally are scrutinising the market structure of foundation model development, where a small number of companies control the compute, data, and distribution layers.
AI is transforming both the substance and the practice of law simultaneously. Firms want trainees who understand the legal questions AI raises — IP ownership, liability, regulatory classification — and who can engage intelligently with the AI tools the firm itself is adopting. This is no longer a niche topic: it appears in client work across every practice area, and being conversant with the EU AI Act, the UK's approach, and the key IP and liability questions gives you a genuine edge in interviews and on the job.
“What are the main legal issues that arise from the development and deployment of AI?”
What they're assessing
Awareness of AI as a legal and commercial topic — and the ability to identify specific legal disciplines rather than just speaking in generalities.
Answer skeleton
AI raises distinct issues across several legal areas. Intellectual property is fundamental: who owns the output of an AI system — the developer, the user, the data provider — remains contested, and whether AI-generated works attract copyright protection is still being resolved by the courts. Liability is complex: if an AI system causes harm, is the developer, the deployer, or the user responsible? Data protection is central: training AI on personal data raises GDPR compliance questions, and the ICO has produced guidance on this. Competition law concerns are emerging: dominant AI platforms may face scrutiny over access to data and algorithmic fairness. Regulatory classification matters — whether an AI system is categorised as a medical device, a financial service, or a general-purpose tool determines which regulatory regime applies.
“What is the EU AI Act and how does it approach AI regulation?”
What they're assessing
Knowledge of the most significant piece of AI legislation globally — and whether you understand what it actually does.
Answer skeleton
The EU AI Act, which entered into force in 2024, is the world's first comprehensive legal framework for AI. It adopts a risk-based approach: AI systems are classified by risk level, with different obligations at each tier. Prohibited AI (such as social scoring and, subject to narrow exceptions, real-time remote biometric identification in publicly accessible spaces) is banned outright. High-risk AI — systems used in critical infrastructure, employment, law enforcement, or education — must meet requirements including transparency, human oversight, and conformity assessments. General-purpose AI models such as large language models face transparency obligations. Lower-risk AI has lighter requirements. The Act has extraterritorial effect — it applies to any AI system placed on the EU market, regardless of where the developer is based. For UK lawyers, it matters because most UK firms advising EU clients or deploying AI in EU contexts will need to understand compliance obligations.
“How are law firms currently using AI and what does this mean for trainees?”
What they're assessing
Practical awareness of AI adoption in legal practice — and a realistic, balanced view of implications rather than either panic or dismissal.
Answer skeleton
Law firms are actively deploying AI across several areas: document review and due diligence (using large language models to identify relevant clauses and flag issues across thousands of documents); contract analysis (AI tools that extract and compare key terms); legal research (AI-assisted case law and legislation search); and first-draft document generation for standard templates. The implications for trainees are nuanced. Some tasks that juniors traditionally did — reading every document in a data room, for example — are increasingly AI-assisted, which changes the nature of trainee work. The expectation is rising: trainees must add value at a higher level more quickly, with stronger analytical and client-facing skills relative to their experience. The firms doing this well are investing in training and integrating AI as a tool that amplifies junior capability rather than replacing it.
“A client wants to use a large language model to draft first-cut legal documents — what legal and professional responsibility issues should their law firm consider before agreeing?”
What they're assessing
The ability to apply professional regulation and legal risk thinking to a highly topical scenario — demonstrating awareness that AI raises distinct issues for lawyers specifically, not just technology users generally.
Answer skeleton
Context: law firms are deploying LLM tools to assist with document drafting, due diligence, and legal research — but the SRA's 2023 guidance makes clear that the use of AI does not reduce solicitors' professional obligations, including accuracy, confidentiality, and the duty not to mislead the court. Commercial implication: a firm that deploys AI tools must invest in quality control workflows to catch hallucinations and errors, and must assess the confidentiality implications of sending client data to external model providers — a commercial risk as well as a regulatory one. Legal angle: the professional responsibility issues include supervision obligations (a trainee cannot rely on AI output without checking it), confidentiality (client data processed by a third-party AI provider may breach SRA duties without appropriate data processing agreements), and potential liability if incorrect AI-generated output causes client loss. Current hook/your view: I think the SRA's guidance correctly places the accountability on the supervising solicitor — AI is a tool, not a defence — and I expect the professional indemnity insurance market to develop AI-specific underwriting criteria that will create further incentives for firms to invest in oversight protocols.
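To make the quality-control point concrete, here is a minimal sketch of one check a firm might run before AI-drafted text is relied on: flagging case citations that do not appear on a human-verified list. The regex, the placeholder authorities, and the unverified_citations helper are all illustrative assumptions, not the SRA's guidance or any firm's actual workflow.

```python
# Toy sketch of one hallucination check (illustrative only): flag case
# citations in an AI-generated draft that are not on a list of authorities
# a human has already verified.
import re

# In practice this set would come from the firm's research system; these
# entries are invented placeholders.
VERIFIED_AUTHORITIES = {
    "Smith v Jones [2010] UKSC 1",
    "Example Ltd v Sample plc [2015] EWCA Civ 100",
}

# Deliberately simple pattern: "Name v Name [Year] COURT Number".
CITATION = re.compile(
    r"[A-Z][\w.&' ]+? v [A-Z][\w.&' ]+? \[\d{4}\] [A-Z]+(?: [A-Za-z]+)? \d+"
)

def unverified_citations(draft: str) -> list[str]:
    """Return citations found in the draft that are not on the verified list."""
    return [c for c in CITATION.findall(draft) if c not in VERIFIED_AUTHORITIES]

# Anything returned here goes back to the supervising solicitor for manual
# verification before the draft leaves the firm.
```

A check like this catches fabricated citations but not subtler errors of reasoning, which is why the supervision obligation sits with the solicitor rather than the tool.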
“How is intellectual property law adapting — or struggling to adapt — to AI-generated content, and what does this mean for commercial clients?”
What they're assessing
Understanding of a live IP law debate with commercial stakes — the candidate should be able to identify the key legal uncertainty and its practical consequences for clients, not just state that 'the law is developing'.
Answer skeleton
Context: AI systems generate text, images, code, and music that may be commercially valuable, but UK copyright law requires a human author — the CDPA 1988 s.9(3) provision for computer-generated works is narrow and contested, and there is no settled answer on whether AI training on copyrighted material constitutes infringement. Commercial implication: clients investing in AI-generated content face uncertainty about whether they own what the system produces and whether the training data they used creates infringement liability — this is a live M&A due diligence issue for any acquisition of an AI company. Legal angle: the IPO's 2021–22 consultation on AI and IP confirmed that the government will not create a new sui generis right for AI-generated works; the US Copyright Office has declined to register AI-only works; and litigation in both jurisdictions (Getty Images v Stability AI in the UK; multiple US cases) will shape the law over the next few years. Current hook/your view: I think the lack of a clear ownership regime creates a genuine commercial gap — clients building AI-generated content businesses are operating in legal uncertainty, and I expect the first major court judgment to be a significant commercial event that will either validate or unsettle a large number of existing business models.