Building an AI Scientist
The Pauling Principle podcast: Andrew White's team at Future House is building an AI scientist
In this episode of The Pauling Principle, the host talks with Dr. Andrew White, co-founder of Future House, about his pioneering work at the intersection of AI and chemistry and his mission to build an AI scientist.
Key Points:
This episode explores how AI is beginning to “think scientifically,” from predicting molecular behaviour to designing new experiments. Dr. White envisions a near future where AI systems act as scientific collaborators—powerful tools that could accelerate discovery, provided humans stay ethically and intellectually in control.
- From Chemistry to AI Innovation: Andrew White traces his journey from chemical engineering to AI, including the development of "ChemCrow", the first system to use language models and tools for real chemistry tasks, and the founding of Future House, a nonprofit focused on AI-driven scientific discovery.
- The Birth of ChemCrow and AI for Chemistry: "ChemCrow" combined large language models with scientific tools, allowing AI to perform chemistry by handling both reasoning and computation (see the sketch after this list). The name nods to crows' intelligence and tool use, a metaphor for AI that can now both "think" and "act" in experiments.
- Building the AI Scientist: White contrasts Future House’s approach with Google’s “Co-Scientist.” While Google demonstrates capability with Gemini, Future House is creating specialized reasoning models and workflows to solve concrete problems in chemistry and biology.
- Grand Challenges in Science and AI: He outlines the next big scientific frontiers beyond protein folding: predicting protein function, modeling a “virtual cell,” and advancing retrosynthesis — all problems requiring data-rich, tool-integrated AI systems rather than just larger models.
- Philosophical and Practical Questions: White and the host debate whether AI needs to understand science or merely predict outcomes. They discuss “the unreasonable effectiveness of data,” the trade-off between physics-based and data-driven models, and how science might evolve as AI advances.
- Ethics, Risks, and Human Autonomy: On biosecurity and AI misuse, White believes current risks are overstated—nature remains the best bioengineer—but warns about a more subtle threat: humans relinquishing autonomy and judgment to AI systems in decision-making and research.
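
To make the "reasoning plus tools" idea from the ChemCrow discussion concrete, here is a minimal sketch of a tool-using language-model loop. The toy tools, the dispatch format, and the `call_llm` stub are illustrative assumptions for this example only; they are not ChemCrow's actual implementation or API.

```python
# Minimal sketch of an LLM-plus-tools loop in the spirit of the pattern
# discussed in the episode. The tool names, dispatch format, and call_llm
# stub are illustrative assumptions, not ChemCrow's actual code.

from typing import Callable, Dict

# --- Toy "chemistry" tools the model can invoke --------------------------
MOLAR_MASSES = {"H2O": 18.015, "CO2": 44.01, "NaCl": 58.44}  # g/mol

def molar_mass(formula: str) -> str:
    """Look up a molar mass from a small hard-coded table."""
    mass = MOLAR_MASSES.get(formula)
    return f"{mass} g/mol" if mass else f"unknown formula: {formula}"

def moles_from_grams(args: str) -> str:
    """Convert 'formula, grams' to moles using the same table."""
    formula, grams = args.split(",")
    mass = MOLAR_MASSES.get(formula.strip())
    if mass is None:
        return f"unknown formula: {formula}"
    return f"{float(grams) / mass:.4f} mol"

TOOLS: Dict[str, Callable[[str], str]] = {
    "molar_mass": molar_mass,
    "moles_from_grams": moles_from_grams,
}

# --- Stand-in for a real language-model call ------------------------------
def call_llm(transcript: str) -> str:
    """Placeholder: a real system would send the transcript to a model and
    get back either a tool call ('TOOL: name | input') or a final answer."""
    if "OBSERVATION" not in transcript:
        return "TOOL: moles_from_grams | H2O, 36.03"
    return "FINAL: roughly 2 moles of water"

def agent_loop(question: str, max_steps: int = 5) -> str:
    """Alternate between model 'reasoning' and tool execution."""
    transcript = f"QUESTION: {question}"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        name, tool_input = (s.strip() for s in reply.removeprefix("TOOL:").split("|"))
        observation = TOOLS[name](tool_input)  # run the requested tool
        transcript += f"\n{reply}\nOBSERVATION: {observation}"  # feed result back
    return "no answer within step budget"

if __name__ == "__main__":
    print(agent_loop("How many moles are in 36.03 g of water?"))
```

Run as-is, the stubbed loop answers the example question in two steps; a real system would replace `call_llm` with an actual model call and swap the toy lookup table for real chemistry software.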
Watch the episode: https://www.youtube.com/watch?v=VHhBEcHDg6U
Details last updated: 19-Oct-2025


