Key points from the article:
A new artificial intelligence model trained on the medical records of 57 million people in England has sparked both excitement and ethical debate. The AI, called Foresight, was developed by Chris Tomlinson and his team at University College London and detailed in recent press coverage. Foresight is designed to predict individual diagnoses and broader healthcare trends by analysing roughly 10 billion data points from the National Health Service (NHS). It uses de-identified information, including hospital visits, vaccinations, and outpatient records, collected between 2018 and 2023.
Foresight builds on an earlier version developed in 2023 using GPT-3. The new model is built on Meta’s open-source Llama 2 and is said to be the world’s first generative AI trained on national-scale health data. The goal is to shift toward more preventative healthcare by forecasting complications before they occur, potentially transforming patient care.
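To make the reported approach more concrete: systems of this kind are generally described as treating each patient’s record as a chronological sequence of event tokens and training a generative model to predict the next event. The sketch below illustrates only that framing, using a toy frequency model on synthetic data; the event names, sequences, and `predict_next` helper are hypothetical and say nothing about Foresight’s actual pipeline.

```python
from collections import Counter, defaultdict

# Hypothetical, synthetic event sequences; no real patient data.
records = [
    ["GP_VISIT", "DIAG_HYPERTENSION", "RX_ACE_INHIBITOR", "HOSPITAL_ADMISSION"],
    ["GP_VISIT", "DIAG_HYPERTENSION", "RX_ACE_INHIBITOR", "GP_VISIT"],
    ["VACCINATION", "GP_VISIT", "DIAG_TYPE2_DIABETES", "RX_METFORMIN"],
]

# Count transitions between consecutive events: a deliberately trivial
# stand-in for the next-token objective a transformer would be trained on.
transitions = defaultdict(Counter)
for seq in records:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def predict_next(event):
    """Return the most frequently observed event following `event`."""
    counts = transitions.get(event)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("DIAG_HYPERTENSION"))  # -> RX_ACE_INHIBITOR
```

In a system at Foresight’s reported scale, a large transformer such as Llama 2 would play the role of this frequency table, learning far richer patterns across tens of millions of such sequences.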
However, the project has raised serious privacy concerns. Experts such as Luc Rocher and Caroline Green, both of the University of Oxford, warn that even “de-identified” data carries a risk of re-identification, especially in datasets this rich. NHS England acknowledges there is no absolute guarantee of anonymity. Critics argue that the public has little control over, or understanding of, how their health data is used, and existing opt-out systems don’t apply because the data is technically not classified as personal data under current GDPR interpretations.
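A small worked example helps explain the re-identification worry: even without names or NHS numbers, a handful of attributes can make a record unique. The field names and rows below are hypothetical, chosen only to illustrate the kind of uniqueness check privacy researchers apply to “de-identified” datasets.

```python
from collections import Counter

# Hypothetical de-identified records: no name or NHS number, yet each row
# may still be distinctive enough to point to one person.
records = [
    {"birth_year": 1957, "postcode_district": "NW1", "admission_week": "2021-W03"},
    {"birth_year": 1957, "postcode_district": "NW1", "admission_week": "2021-W07"},
    {"birth_year": 1983, "postcode_district": "M14", "admission_week": "2021-W03"},
]

# Quasi-identifiers: fields that are not identifying alone but can be
# identifying in combination.
quasi_identifiers = ("birth_year", "postcode_district", "admission_week")
keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
counts = Counter(keys)

# A record with k = 1 is unique: anyone who already knows these three
# facts about a person can single out that person's record.
unique = sum(1 for key in keys if counts[key] == 1)
print(f"{unique}/{len(records)} records are unique on {quasi_identifiers}")
```

Here every row is unique on just three attributes; the richer the dataset (full hospital histories, prescriptions, vaccination dates), the more likely such unique combinations become, which is the core of the warning.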
The Foresight team has not yet tested whether the model can unintentionally reveal specific patient data, a step experts consider crucial. While the AI currently runs only inside a secure NHS research environment and is limited to COVID-19-related research, the project has reignited calls for stronger ethical frameworks and greater transparency around the use of AI in healthcare.
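The kind of test experts are calling for can be sketched as an extraction probe: prompt the model with the prefix of a genuine training record and check whether it reproduces the remainder verbatim. Everything below is hypothetical, including the `generate` interface and the toy memorising model; it is a minimal illustration of the idea, not an evaluation protocol used on Foresight.

```python
def extraction_probe(generate, training_records, prefix_len=3):
    """Flag training records whose tail the model reproduces verbatim."""
    leaks = []
    for record in training_records:
        prefix, suffix = record[:prefix_len], record[prefix_len:]
        continuation = generate(prefix, max_events=len(suffix))
        if continuation == suffix:  # verbatim regurgitation = potential leak
            leaks.append(record)
    return leaks

# A toy model that has memorised one (hypothetical) rare-disease record.
memorised = ["GP_VISIT", "DIAG_RARE_DISEASE", "RX_ORPHAN_DRUG", "HOSPITAL_ADMISSION"]

def toy_generate(prefix, max_events):
    # Regurgitates the memorised continuation whenever the prefix matches.
    if memorised[: len(prefix)] == list(prefix):
        return memorised[len(prefix) : len(prefix) + max_events]
    return []

# The probe catches the regurgitation: the record is flagged as leaked.
print(extraction_probe(toy_generate, [memorised]))
```

Passing such probes does not prove a model safe, but running them is widely seen as a minimum step before granting wider access.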