New preprint on AI safety investigating the transferability of adversarial triggers in LLMs.
Our paper on evaluating code generation in LLMs has been accepted at NAACL 2024!
Our paper on understanding in-context learning in Transformers and LLMs has been accepted at ICLR 2024 for oral presentation (top 1.2%)!
Presented our paper on evaluating the acquisition of novel interpretations in LLMs at EMNLP 2023!
Our paper studying simplicity bias in Transformers has been accepted at ACL 2023!
Universal Adversarial Triggers Are Not Universal
Nicholas Meade, Arkil Patel, Siva Reddy
Preprint
pdf | code | abstract
Evaluating In-Context Learning of Libraries for Code Generation
Arkil Patel, Siva Reddy, Dzmitry Bahdanau, Pradeep Dasigi
NAACL'24
pdf | code | abstract
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
Satwik Bhattamishra, Arkil Patel, Phil Blunsom, Varun Kanade
ICLR'24 [Oral]
pdf | code | abstract
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
Arkil Patel, Satwik Bhattamishra, Siva Reddy, Dzmitry Bahdanau
EMNLP'23 [Oral]
pdf | code | abstract
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
Satwik Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom
ACL'23
pdf | code | abstract
When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks
Ankur Sikarwar, Arkil Patel, Navin Goyal
EMNLP'22 [Oral]
pdf | code | abstract
Revisiting the Compositional Generalization Abilities of Neural Sequence Models
Arkil Patel, Satwik Bhattamishra, Phil Blunsom, Navin Goyal
ACL'22
pdf | code | abstract
Are NLP Models really able to Solve Simple Math Word Problems?
Arkil Patel, Satwik Bhattamishra, Navin Goyal
NAACL'21
pdf | code | abstract | article
On the Computational Power of Transformers and its Implications in Sequence Modeling
Satwik Bhattamishra, Arkil Patel, Navin Goyal
CoNLL'20
pdf | code | abstract
VehicleChain: Blockchain-based Vehicular Data Transmission Scheme for Smart City
Arkil Patel, Naigam Shah, Trupil Limbasiya, Debasis Das
IEEE SMC'19
pdf