
Wei-Tse Hsu

@weitse-hsu

- Postdoc in Drug Design at Oxford Biochemistry (Biggin Lab)
- Ph.D. from the Shirts Group at CU Boulder
- Keen on compchem, deep learning & education
- Rookie runner
- Originally from Taiwan
- Check my MD tutorials: https://weitsehsu.com/

61 Followers · 308 Following · 6 Posts · Joined 11.12.2025

Latest posts by Wei-Tse Hsu @weitse-hsu


Now out in JACS! πŸŽ‰ "Computing Solvation Free Energies of Small Molecules with Experimental Accuracy"! It's been a pleasure to collaborate on this with Harry Moore (@jhmchem.bsky.social) & GΓ‘bor CsΓ‘nyi pubs.acs.org/doi/10.1021/...

27.01.2026 19:28 πŸ‘ 29 πŸ” 8 πŸ’¬ 1 πŸ“Œ 0

New Preprint!! We show that binding entropy can be quantitatively predicted from crystallographic ensemble models, accounting for both protein conformational entropy and solvent entropy! www.biorxiv.org/content/10.6...

21.01.2026 20:49 πŸ‘ 39 πŸ” 14 πŸ’¬ 1 πŸ“Œ 2
Preview: "Can AI-Predicted Complexes Teach Machine Learning to Compute Drug Binding Affinity?" We evaluate the feasibility of using co-folding models for synthetic data augmentation in training machine learning-based scoring functions (MLSFs) for binding affinity prediction. Our results show th...

πŸš€ Bottom line:
With careful filtering, co-folding predictions can indeed teach ML about binding affinity.

πŸ‘‰ Read the full JCIM paper: pubs.acs.org/doi/full/10....

Work with Aniket Magarkar
@boehringerglobal.bsky.social and @philbiggin.bsky.social @ox.ac.uk

(6/6)

20.01.2026 19:27 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

πŸ”Ž SI highlights:
- AEV-PLIG beats Boltz-2 in 4 target classes in the FEP benchmark (loses 1, ties 6); both are competitive with FEP+ in some cases.
- ipLDDT & ligand pLDDT are also effective filters; pTM, PAE, PDE are not
- Boltz confidence seems to generalize better than its structure module
(5/6)

20.01.2026 19:27 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

❓ Are co-folding predictions good enough to train scoring functions?

πŸ‘‰ Yes β€” with careful filtering. We see no performance difference between models trained on:
- experimental structures
- corresponding co-folding predictions

This holds across AEV-PLIG, EHIGN, and RF-Score.
(4/6)

20.01.2026 19:27 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

❓ When can we trust a co-folding prediction?

πŸ‘‰ From reproducing HiQBind with Boltz-1x, a few simple heuristics are recommended for selecting high-quality co-folding predictions for augmentation:
1️⃣ single-chain systems
2️⃣ Boltz confidence > 0.9
3️⃣ train–test similarity > 60%

(3/6)
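As an illustration, the three heuristics in this post can be expressed as a simple keep/drop filter. This is a minimal sketch, not the paper's pipeline: the record structure and field names (`n_chains`, `confidence`, `train_test_sim`) are hypothetical; only the thresholds come from the post.

```python
# Hypothetical filter applying the three heuristics from the thread.
# Field names are illustrative assumptions, not the paper's actual schema.

def keep_for_augmentation(pred: dict) -> bool:
    """Return True if a predicted complex passes all three heuristics."""
    return (
        pred["n_chains"] == 1              # 1) single-chain systems only
        and pred["confidence"] > 0.9       # 2) Boltz confidence > 0.9
        and pred["train_test_sim"] > 0.6   # 3) train-test similarity > 60%
    )

predictions = [
    {"n_chains": 1, "confidence": 0.95, "train_test_sim": 0.7},  # passes
    {"n_chains": 2, "confidence": 0.98, "train_test_sim": 0.8},  # multi-chain
    {"n_chains": 1, "confidence": 0.85, "train_test_sim": 0.9},  # low confidence
]
kept = [p for p in predictions if keep_for_augmentation(p)]
print(len(kept))  # 1
```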

20.01.2026 19:27 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

❓ How much can data augmentation actually improve scoring?

πŸ‘‰ Short answer: only if the added data are high-quality. Adding BindingNet v1 clearly improved performance, but v2 did notβ€”despite being 10x largerβ€”due to its substantially lower quality.

Quality beats quantity.
(2/6)

20.01.2026 19:27 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸ“’ Can AI-Predicted Complexes Teach Machine Learning to Compute Drug Binding Affinity?

In our recent JCIM work, we tested whether co-folding models can be used for data augmentation when training ML-based scoring functions (SFs).

We asked 3 simple but critical questions. πŸ‘‡
(1/6)

20.01.2026 19:27 πŸ‘ 6 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0