Akshay Agrawal

A picture of Akshay.

Ph.D. Candidate in Electrical Engineering, Stanford University
M.S., B.S. in Computer Science, Stanford University
[email protected]

Github / Google Scholar / Twitter / LinkedIn / Resume / Blog


I am a second-year Ph.D. student at Stanford University, where I am advised by Professor Stephen Boyd. I conduct research in convex optimization and machine learning, and I'm passionate about building foundational tools for both. I am a principal developer of CVXPY, a domain-specific language for convex optimization that is used by many universities and companies. Previously, I was a full-time software engineer on the Google Brain team, where I worked on TensorFlow 2 and the core TensorFlow runtime.

If computer science is my first passion, then writing is my second: I served as a writer and investigative news editor for The Stanford Daily, and I blog at debugmind.com.

I graduated from Stanford in 2017 with a B.S. and M.S. in computer science and a minor in mathematics. I am currently supported by a Stanford Graduate Fellowship.



Learning convex optimization models. [bibtex] [code]
A. Agrawal, S. Barratt, and S. Boyd. Pre-print.
Differentiating through log-log convex programs. [bibtex] [code]
A. Agrawal and S. Boyd. Pre-print.
Learning convex optimization control policies. [bibtex] [code]
A. Agrawal, S. Barratt, S. Boyd, and B. Stellato. Learning for Dynamics and Control (L4DC), oral presentation.
Disciplined quasiconvex programming. [bibtex] [code]
A. Agrawal and S. Boyd. Optimization Letters.


Differentiable convex optimization layers. [bibtex] [code] [blog post]
A. Agrawal, B. Amos, S. Barratt, S. Boyd, S. Diamond, and J. Z. Kolter. In Advances in Neural Information Processing Systems (NeurIPS).
Presented at the TensorFlow Developer Summit 2020, Sunnyvale [slides] [video]
Differentiating through a cone program. [bibtex] [code]
A. Agrawal, S. Barratt, S. Boyd, E. Busseti, and W. Moursi. Journal of Applied and Numerical Optimization.
TensorFlow Eager: A multi-stage, Python-embedded DSL for machine learning. [bibtex] [slides] [blog post] [code]
A. Agrawal, A. N. Modi, A. Passos, A. Lavoie, A. Agarwal, A. Shankar, I. Ganichev, J. Levenberg, M. Hong, R. Monga, and S. Cai. Systems for Machine Learning (SysML).
Disciplined geometric programming. [bibtex] [tutorial] [poster] [code]
A. Agrawal, S. Diamond, and S. Boyd. Optimization Letters.
Presented at ICCOPT 2019, Berlin [slides]


A rewriting system for convex optimization problems. [bibtex] [code]
A. Agrawal, R. Verschueren, S. Diamond, and S. Boyd. Journal of Control and Decision.


YouEDU: Addressing confusion in MOOC discussion forums by recommending instructional video clips. [bibtex] [dataset] [code]
A. Agrawal, J. Venkatraman, S. Leonard, and A. Paepcke. Educational Data Mining.
Presented at EDM 2015, Madrid [slides]


I always enjoy speaking with people working on real problems. If you're in industry and would like to chat about convex optimization, CVXPY, or TensorFlow, don't hesitate to reach out to me over email.

I have industry experience in designing and building software for machine learning (TensorFlow 2.0), motion planning and control for autonomous vehicles, and performance analysis of Google-scale software systems.

From 2017 to 2018, I worked on TensorFlow as an engineer on the Google Brain team. Specifically, I developed a multi-stage programming model that lets users enjoy eager (imperative) execution while giving them the option to optimize blocks of TensorFlow operations via just-in-time compilation.
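The multi-stage idea is exposed in TensorFlow 2 as tf.function: code runs eagerly by default, and decorating a Python function stages its ops into a graph that TensorFlow can optimize and compile. A minimal sketch (the function and inputs below are made up for illustration):

```python
import tensorflow as tf

# Eager execution runs each op immediately; tf.function traces this
# block into a graph so repeated calls can be optimized and compiled.
@tf.function
def scaled_sum(x, y):
    return tf.reduce_sum(x * y)

a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
print(float(scaled_sum(a, b)))  # 32.0
```

The same function can be called like ordinary Python, which is what makes the staging "multi-stage" rather than a separate graph-building API.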

I honed my technical infrastructure skills over the course of four summer internships at Google, where I:


I spent seven quarters as a teaching assistant for the following Stanford courses:


Technical Reports