AI in Drug Discovery: The Part Where the Humans Turn Out to Still Be Essential

CAS this month published a recap of its late-2025 Life Sciences Summits, and it is worth reading if you care about where AI is actually landing in high-stakes, regulated environments. The summits brought together pharmaceutical companies, biotech firms, CROs, and academic institutions across Europe, South Korea, and Japan to talk about AI in drug discovery. The honest conversations that came out of those rooms are more useful than most of the breathless AI coverage you usually encounter.

The headline finding, which will surprise exactly no one who has tried to implement AI at scale in a complex organization, is that the challenge is not data volume. It is data readiness. Organizations have enough data. They do not have data that is properly curated, contextualized, and aligned with specific research questions. Poorly curated datasets produce unreliable predictions regardless of how sophisticated the model is. That is a lesson that transfers directly from pharmaceutical R&D to IT operations. The garbage-in, garbage-out problem does not care what domain it is operating in. If your data foundation is weak, your AI outputs are going to reflect that weakness faithfully and at scale.

The second major theme is organizational readiness, which is a polite way of saying that culture, incentive structures, and change management determine whether AI tools actually get used, far more than the technical capabilities themselves. This is something I have seen in every technology implementation I have been part of. You can build the most elegant automated workflow in the world and it will not move the needle if the people who need to use it do not trust it, do not understand it, or are not rewarded for adopting it. Technology is the easy part. People are the interesting part.

What I appreciated most about the CAS summary is the honest reckoning with where human expertise still matters and why. The consensus across all three summits was that AI expands what researchers can explore and accelerates hypothesis testing. But expert judgment about biological plausibility, chemical feasibility, and strategic direction cannot be automated. Researchers still interpret results, validate predictions, and make the calls about which directions to pursue. One participant asked why CAS still uses human indexers given all the focus on automation. The answer was that AI can process large datasets and identify patterns, but domain expertise is still essential to curate, validate, and interpret what comes back. That is exactly right and it applies to every field AI is touching right now.

The takeaway for me, sitting on the IT side of the world and pursuing an AI concentration in my graduate program, is that the most durable professional value right now is not the ability to use AI tools. It is the ability to know which questions to ask those tools, evaluate what comes back critically, and translate the output into something that actually improves how decisions get made. The people who can do that, who have deep domain expertise and can work effectively alongside AI systems without ceding their judgment to them, are the people who will matter most as this technology matures. I intend to be one of them.

https://www.cas.org/resources/cas-insights/ai-drug-discovery-practical
