Pittcon 2023

On my third try, I was finally able to participate in Pittcon last week. This year Pittcon took place in person in Philadelphia, US. I was lucky enough to present our recent work in two talks, one on quantitative nontarget screening (NTS) and the other on toxicity prediction for unidentified chemicals. You can find my presentations here.

Finally at Pittcon!

In the session Non-Targeted and Suspect Screening Analyses Using High Resolution-Mass Spectrometry for the Identification of Unknowns: Toward More Reliable, Reproducible, and Understandable Methods and Results, organized by Ann Knolhoff, the focus was on different aspects of NTS: identification, prioritization, and finally quantification. Ann herself kicked off the session by taking us through a number of amazing case studies where NTS has made a difference. What impressed and surprised me most was the identification of the source of a food poisoning outbreak in Uganda with NTS. Still, in her talk Ann stressed that the key here is to correctly prioritize the detected peaks and candidate structures. She also admitted that the success of this NTS study was a combination of robust analytical methods, good background knowledge of the chemistry behind the samples, and pure luck! The big question is: how do we remove the luck component and deliver NTS studies that make a difference by pinpointing the chemicals that matter, day in and day out?

Jon Sobus presented results from ENTACT, a collaborative nontarget screening trial in which 19 labs analyzed different samples containing up to a few hundred chemicals. My take-home message from Jon’s presentation was that it is actually extremely hard to evaluate NTS results, even at the identification stage. The first challenge is that the traditional accuracy evaluation based on a confusion matrix of true positives, false positives, false negatives, and true negatives is not meaningful in NTS. Especially hard to pin down are the true negatives: chemicals that were neither spiked nor reported. Essentially every chemical in the world that was not spiked or reported would fall into this category, and with millions of chemicals registered in databases such as PubChem, this is not a reasonable measure in NTS. Taking it from there, Evan Bolton discussed how PubChem can be used more efficiently in the NTS workflow. Among many other examples, he highlighted PubChemLite, a ~460,000-compound database that specifically aims to capture chemicals relevant for exposure studies.
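To make the true-negative problem concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are purely hypothetical, not ENTACT results; the point is simply that once the candidate space is as large as PubChem, accuracy and specificity are dominated by the enormous true-negative count and look near-perfect regardless of how the workflow actually performed, while precision and recall stay informative.

```python
# Hypothetical illustration of why confusion-matrix metrics mislead in NTS evaluation.
# All counts are made up for the sake of the example.

candidate_space = 100_000_000   # roughly "all chemicals in a database like PubChem"
spiked = 300                    # chemicals actually present in the sample
reported = 250                  # chemicals reported by the NTS workflow
correct = 200                   # reported chemicals that were truly spiked

tp = correct                          # spiked and reported
fp = reported - correct               # reported but not spiked
fn = spiked - correct                 # spiked but missed
tn = candidate_space - tp - fp - fn   # neither spiked nor reported -> enormous

precision   = tp / (tp + fp)                   # 0.80 -> informative
recall      = tp / (tp + fn)                   # ~0.67 -> informative
accuracy    = (tp + tn) / candidate_space      # ~0.999999 no matter what
specificity = tn / (tn + fp)                   # ~1.0 no matter what

print(f"precision={precision:.2f} recall={recall:.2f} "
      f"accuracy={accuracy:.6f} specificity={specificity:.6f}")
```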

Catching up with Ann Knolhoff and Jon Sobus on NTS developments over lunch.

In the Sunday afternoon session Application of Automation and Machine Learning for Analytical Sciences Challenges in Pharmaceutical Research and Development, organized by Michael Wleklinski and Matthew Bahr, an amazing combination of developments in analytical techniques and machine learning was presented. One of my key take-home messages was the stark difference in the rate-limiting steps of analysis in pharma vs. environmental screening. In pharma, data processing is fast, almost instant; the analysis time itself is the limiting step, and cutting data acquisition from a few minutes to a couple of seconds makes a true difference, as explained by Debopreeti Mukherjee from Merck. In environmental analysis, in contrast, acquisition times of a few tens of minutes are widely accepted, especially in nontarget screening, because the data processing can easily take several days if not longer. In my understanding, the difference largely comes from the complexity of the samples and therefore of the data. In environmental samples it is quite common to detect thousands of chemicals, and their sources can be very different, ranging from naturally occurring molecules all the way to anthropogenic ones such as pharmaceuticals, additives in consumer products, pesticides, or even illicit chemicals.

I was also blown away by Benjamin Kline’s presentation about a next-to-fully automated lab they are developing at Emerald Cloud Lab. This lab aims to accelerate the often limiting R&D step by providing a platform where testing and optimization of new analysis strategies can be rigorously planned in silico and then automatically executed and monitored. I see great possibilities here; in particular, I feel this could make the execution of experimental designs and iterative process optimization more reliable, as the subjective human factor in the loop would be minimized.

This time my stay at Pittcon was cut short due to teaching in Stockholm, but I am sure everyone enjoyed the rest of the conference, and I hope to be back next year!