Key points from the article:
Artificial intelligence is rapidly transforming how doctors might use faces as diagnostic tools. Harvard Medical School’s FaceAge algorithm, developed by radiologist Dr. Raymond Mak, aims to estimate a person’s “biological age” from facial photos as a quick measure of overall health. It’s part of a growing trend of AI-powered apps designed to detect conditions such as nasal congestion, seasonal allergies, autism, and PTSD, and even to gauge pain levels in patients unable to communicate. By analysing subtle facial changes—like deepening folds or sagging skin—these systems promise to spot illness earlier, personalise treatments, and perhaps even predict lifespan.
Humans have long relied on facial cues to judge health, from rosy cheeks signalling vitality to sallow or greenish skin suggesting sickness. Science backs up this intuition: lifestyle factors such as smoking, stress, and diet visibly affect skin, while “superagers” often look decades younger than their chronological age. Early medical uses of facial analysis, like the genetic-diagnosis tool Face2Gene and the dementia pain-assessment app PainChek, have already shown promise. FaceAge builds on this by targeting specific facial regions to detect premature ageing as a potential warning sign of internal health decline.
A personal test of FaceAge revealed how lighting, makeup, and photo clarity can dramatically skew results: the same person can appear a decade younger or older depending on the image. Experts note that both humans and AI rely on visual details—such as wrinkles and sharp edges—to estimate age, and these cues can be masked or exaggerated by environmental factors. While intriguing, the technology’s precision in tracking biological ageing over time remains unproven.
The rise of face-reading AI also raises serious ethical concerns. Critics warn of parallels with discredited pseudosciences like physiognomy, and past AI missteps—such as controversial algorithms claiming to detect sexual orientation or criminality—show the dangers of biased or context-blind systems. Malihe Alikhani, a machine learning ethics expert, cautions that without safeguards, placing diagnostic decisions in AI’s hands could erode patient involvement and trust in healthcare. The technology’s potential is vast, but so are the risks if it advances without rigorous oversight.