Learning on compressed molecular representations

Abstract

Last year, a preprint attracted considerable attention by proposing that a k-nearest-neighbour classifier operating on compressed text, with the normalised compression distance (NCD) as its metric, can outperform large language models. In chemistry and biochemistry, molecules are often represented as strings, such as SMILES for small molecules or single-letter amino acid sequences for proteins. Here, we extend the previously introduced approach with support for regression and multitask classification and subsequently apply it to the prediction of molecular properties and protein–ligand binding affinities. We further propose converting numerical descriptors into string representations, enabling the integration of text input with domain-informed numerical descriptors. Finally, we show that the method can achieve performance competitive with chemical fingerprint- and GNN-based methodologies in general, and can outperform comparable methods on quantum chemistry and protein–ligand binding affinity prediction tasks.
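The core idea from the preprint can be sketched in a few lines: compress each string individually and concatenated, compute the NCD, and classify a query by majority vote among its k nearest training strings. The sketch below is illustrative only, assuming gzip as the compressor and using toy SMILES strings with made-up labels; it is not the authors' implementation.

```python
import gzip


def ncd(x: str, y: str) -> float:
    """Normalised compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)


def knn_predict(query: str, train: list[tuple[str, int]], k: int = 3) -> int:
    """Classify `query` by majority vote over the k training strings nearest in NCD."""
    nearest = sorted(train, key=lambda item: ncd(query, item[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)


# Toy data: SMILES strings with hypothetical binary labels (e.g. aromatic vs. not).
train = [("CCO", 0), ("CCCO", 0), ("CCN", 0),
         ("c1ccccc1", 1), ("c1ccccc1O", 1)]
prediction = knn_predict("c1ccccc1N", train, k=3)
```

A regression variant, as described in the abstract, would replace the majority vote with, for example, a distance-weighted average of the neighbours' numerical labels. Note that gzip's fixed header overhead makes NCD values on very short strings noisy; in practice longer molecular representations behave better.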

Graphical abstract: Learning on compressed molecular representations

Article information

Article type
Paper
Submitted
18 Jun 2024
Accepted
09 Oct 2024
First published
04 Nov 2024
This article is Open Access
Creative Commons BY license

Digital Discovery, 2025, Advance Article

J. Weinreich and D. Probst, Digital Discovery, 2025, Advance Article, DOI: 10.1039/D4DD00162A

This article is licensed under a Creative Commons Attribution 3.0 Unported Licence. You can use material from this article in other publications without requesting further permissions from the RSC, provided that the correct acknowledgement is given.
