Hi 🤝 I am Rahul, a grad student at Stanford 🔴🌲. Before this, I worked as a Research Fellow at Microsoft Research with Yashoteja Prabhu & Manik Varma on Transformer compression & extreme classification. My current interests are in RL+LLMs and AI alignment. I am currently working at the Scaling Intelligence Lab at Stanford on post-training of LLMs, and at the ILIAD group on applying RL to robotics. In the future, I look forward to getting replaced by AGI and working as a Wallace design protein farmer.

In my previous life, I did my undergrad thesis at VAL, Indian Institute of Science, on Capsule Networks, and before that spent 3 months at IIRS (ISRO) as a research intern. I graduated with a B.Tech in Computer Science from BITS Pilani. For more details, check my CV. Here is a list of papers I am currently reading: [List]

Publications

Enhancing Tail Performance in Extreme Classifiers by Label Variance Reduction
Anirudh Buvanesh*, Rahul Chand*, Yashoteja Prabhu, Manish Gupta, Manik Varma (* = Equal Contribution)
ICLR'24 | International Conference on Learning Representations
pdf | abstract

DSFormer: Effective Compression of Text-Transformers by Dense-Sparse Weight Factorization
Rahul Chand, Yashoteja Prabhu, Pratyush Kumar
pdf | abstract

CapsFlow: Optical Flow Estimation with Capsule Networks
Rahul Chand, Rajat Arora, Ram Prabhakar, Venkatesh Babu
pdf | abstract

Open Source

gpu_poor (1300+ stars)
Tool to estimate the VRAM and tokens/s requirements of any LLM on consumer hardware. It supports GGML, HuggingFace, bitsandbytes, QLoRA & gradient checkpointing. Used 120k+ times by 20k+ users.
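The core of such an estimate is simple accounting: model weights plus KV cache plus runtime overhead. The sketch below is a hypothetical, simplified illustration of that kind of calculation, not gpu_poor's actual implementation (which handles quantization formats, training memory, and much more); the default parameters roughly correspond to a LLaMA-7B-style model in fp16.

```python
def estimate_inference_vram_gb(n_params_b, bytes_per_param=2,
                               context_len=2048, n_layers=32,
                               hidden_dim=4096, batch_size=1):
    """Back-of-envelope VRAM estimate for LLM inference.

    n_params_b: model size in billions of parameters.
    bytes_per_param: 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit.
    """
    weights = n_params_b * 1e9 * bytes_per_param
    # KV cache: 2 tensors (K and V) per layer, one vector per token.
    kv_cache = 2 * n_layers * context_len * hidden_dim * bytes_per_param * batch_size
    # Activations and framework buffers, roughly ~10% of weight memory.
    overhead = 0.1 * weights
    return (weights + kv_cache + overhead) / 1e9

# A 7B model in fp16 comes out around 16-17 GB, which matches the
# common observation that it does not fit on a 16 GB consumer GPU.
print(f"{estimate_inference_vram_gb(7):.1f} GB")
```

Quantizing to 4-bit (`bytes_per_param=0.5`) drops the same model to roughly a quarter of that, which is why 4-bit inference is the usual route on consumer cards.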

llama2.c-for-dummies (200+ stars)
Starter tutorial for inference with LLaMA in C

Microsoft Research
2021 - 2023
IISC
S2019
IIRS
S2018
BITS Pilani
2015 - 2019