I am a Research Engineer at Google, working on the Gemini family of foundation models. I am a core contributor to Gemini 3.0, Gemini 2.5, and Gemini 2.0 on the Post-Training for Code team, developing novel post-training strategies (SFT, distillation, RLVR) that have pushed Gemini to #1 on WebDev Arena and achieved state-of-the-art results on LiveCodeBench Pro, Terminal Bench 2.0, and SWE-bench. My focus areas are RL for code, reward modeling, and long-horizon agentic evaluations.

Previously, I built enterprise LLMs at Capital One (Llama 2, Mixtral, DPO/RLHF), worked with Google DeepMind on multimodal LLMs for document extraction, and published research on multimodal fact-checking (SIGIR, Best Paper Honorable Mention), hate speech detection (EMNLP), and efficient prompt tuning (ACL). I hold an M.S. from Virginia Tech and a B.S. from D.J. Sanghvi College of Engineering.

Highlights

Selected Papers

Experience

Education

Virginia Tech
M.S. Computer Science (Research) · 2021–2023
Thesis: NLP-based Episodic Future Thinking, funded by NIH
D.J. Sanghvi College of Engineering
B.S. Computer Science · 2016–2020

Service & Honors