Optimising a Transformer-Based Model for Metabolite Prediction in Drug Discovery
Type
Master's Thesis
Abstract
Drug metabolism plays a crucial role in drug discovery, affecting the safety and efficacy of medications. Experimental methods for predicting the metabolic reactions of potential drugs have long been costly in both time and resources. Computational methods have emerged as more cost-effective and time-efficient approaches for predicting drug metabolites, but they are often hindered by a reliance on rigid rules. Large language models provide a more adaptable, rule-free alternative. In this project, the transformer-based language model Chemformer, previously employed in various chemical tasks, was optimised for drug metabolism prediction. To this end, a dataset of drugs and their corresponding metabolites was compiled and preprocessed, and the Chemformer model was fine-tuned and evaluated on this dataset. Further optimisation methods to enhance the model's performance included additional pre-training of the Chemformer model, randomisation of the input SMILES (Simplified Molecular Input Line Entry System) strings, augmenting the dataset, annotating the data with chemical information, employing ensemble models, and optimising the prediction space through further pre-training. The most promising of the optimisation methods attempted were the additional pre-trainings and the randomisation of the SMILES strings, both of which yielded a significant increase in performance. Benchmarking showed that the best-performing model outperformed the existing models GLORYx and SyGMa in precision and F1 score. These results suggest that Chemformer is a promising tool for metabolite prediction.
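To illustrate the SMILES randomisation mentioned in the abstract, the sketch below shows how non-canonical SMILES strings for the same molecule can be generated with RDKit and used to augment input data for a SMILES-based transformer. This is a minimal, illustrative example, not the thesis's actual implementation; the function and parameter names are assumptions made for this sketch.

```python
from rdkit import Chem

def randomise_smiles(smiles: str, n_variants: int = 5) -> list[str]:
    """Return up to n_variants randomised (non-canonical) SMILES for one molecule.

    Each randomised SMILES encodes the same molecule with a different atom
    ordering, which is one common way to augment data for SMILES-based models.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    # doRandom=True makes RDKit start the traversal from a random atom,
    # producing a non-canonical string; duplicates are removed with a set.
    variants = {
        Chem.MolToSmiles(mol, canonical=False, doRandom=True)
        for _ in range(n_variants)
    }
    return sorted(variants)

# Example: several randomised encodings of caffeine
print(randomise_smiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C"))
```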
Subject/keywords
drug discovery, metabolism, language model, transformer, computational chemistry, drugs, metabolites, optimisation