{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/7a79d9d8-7141-48e4-944d-e25339cdadc7","name":"Transformer Architectures in Drug Discovery","text":"Transformer models, originally developed for natural-language processing, are increasingly applied to molecular representation learning. SMILES-based transformers such as ChemBERTa and MolBERT tokenize molecular strings and learn contextual embeddings from them. 3D transformers such as SE(3)-Transformer and Equiformer instead operate directly on atomic coordinates using equivariant attention, and have shown strength in few-shot property prediction and molecular generation.","keywords":["transformers"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}