The intersection of chemistry and artificial intelligence (AI) is a fascinating area that attracts a lot of attention in both research and industry. We talked to people working in the field about the potential of AI to revolutionize chemical research, but also about concerns, (current) limitations, and ethical implications for chemical applications. We also asked for ideas to try or experiment with, as well as useful articles and videos for beginners and advanced users.
Professor Connor W. Coley, Massachusetts Institute of Technology (MIT), Cambridge, USA, works in the field of AI for small molecule design & synthesis.
What fascinates you about AI?
I’ve always enjoyed thinking about the use of algorithmic decision making in research, connecting all the way back to the statistical design of experiments for choosing the “most informative” experiments to run. AI represents one of the computational toolkits we can use to accomplish this.
In our own work on chemistry and molecular discovery, we think about the use of AI for inferring structure-property relationships, designing molecular structures as therapeutic candidates, proposing synthetic pathways to various molecules of interest, and even analyzing complex mixtures, among other tasks.
A fun aspect is that many of our goals are not new and have existed for decades, with published proofs of concept in the literature. AI and the use of organized datasets have let us change the way we approach some of these problems, and the perceived potential for the community to develop useful laboratory assistants has never been higher.
Is there anything we should fear?
Because I entered this field in the context of the DARPA Make-It program [automating small molecule discovery and synthesis], I’ve always had dual-use concerns at the back (and front) of my mind. Some applications of AI in the chemical sciences could certainly contribute to misuse, but how AI affects the overall threat landscape is more complex than that. Rick Mullin recently published a nice piece on AI ethics in C&EN that may be of interest.
Do you have something for our readers to try out or experiment with?
Over several years, I’ve helped develop the ASKCOS program, a suite of tools relevant to synthesis planning. It covers retrosynthesis, reaction condition recommendation, product prediction, solvation prediction, and other tasks. ASKCOS is essentially a wrapper for various cheminformatics and AI models.
We recently re-released a public version of the tool. Folks new to the field can try out a public deployment at https://askcos.mit.edu/, while others who might be interested in deploying their own instance or contributing their own models can look at the code and instructions at https://gitlab.com/mlpds_mit/askcosv2/askcos-docs/-/wikis/01-Introduction.
Synthesis planning is just one of many ways AI can be brought into the chemistry lab.
Can you recommend a good article for beginners and one you enjoyed recently?
A few of our recent articles summarize some of the goals and opportunities in our subfield of AI for chemistry. I might point readers interested in synthetic chemistry to:
- The promise and pitfalls of AI for molecular and materials synthesis,
N. David, W. Sun, C. W. Coley,
Nat. Comput. Sci. 2023, 3, 362–364.
- Predictive chemistry: machine learning for reaction deployment, reaction development, and reaction discovery,
Z. Tu, T. Stuyver, C. W. Coley,
Chem. Sci. 2023.
for two overviews, the first briefer and the second more in-depth.
Is there anything else you would like to share with readers of ChemistryViews?
It’s okay to be open-minded and skeptical at the same time! Don’t ever believe that AI is a panacea, and be suspicious of what you’re being told when you read about a model or tool. Still, it’s important to recognize that models do not have to be perfect to provide value in our chemical research.
Human experimentalists make mistakes—running reactions that fail to yield the expected product being just one example—and there is no reason to hold programs to an artificially high standard where they are not allowed to make mistakes. In fact, chemists will often look at AI tools and hope to see something that is new and creative, while at the same time seeming plausible and well-grounded in precedent. But if we hope to have new and creative suggestions, whether of reactions, catalysts, pathways, or something else entirely, we’ll need to tolerate some fraction of “false positives” due to model uncertainty when generalizing from the known to the unknown.
Thank you very much for the insights.