$$ \newcommand{\pmi}{\operatorname{pmi}} \newcommand{\inner}[2]{\langle{#1}, {#2}\rangle} \newcommand{\Pb}{\operatorname{Pr}} \newcommand{\E}{\mathbb{E}} \newcommand{\RR}{\mathbf{R}} \newcommand{\script}[1]{\mathcal{#1}} \newcommand{\Set}[2]{\{{#1} : {#2}\}} \newcommand{\argmin}[2]{\underset{#1}{\operatorname{argmin}} {#2}} \newcommand{\optmin}[3]{ \begin{align*} & \underset{#1}{\text{minimize}} & & #2 \\ & \text{subject to} & & #3 \end{align*} } \newcommand{\optmax}[3]{ \begin{align*} & \underset{#1}{\text{maximize}} & & #2 \\ & \text{subject to} & & #3 \end{align*} } \newcommand{\optfind}[2]{ \begin{align*} & {\text{find}} & & #1 \\ & \text{subject to} & & #2 \end{align*} } $$

Key idea: Chu et al. describe a *parser/generator* that takes a parametrized
convex program, converts it to an equivalent SOCP, and generates code that can
be used to solve the problem.

- SOCPs encompass a large class of convex programs; previous canonicalization suites targeted smaller problem classes, like QPs or QCQPs. Recall that linear programs, QPs, and QCQPs are all SOCPs.
- Problem parameters must enter through specific functions; this interface allows the code generator to circumvent all floating point operations.

Problem convexity is verified via disciplined convex programming.

The canonicalization takes as input a convex optimization problem and outputs an equivalent SOCP.

**Input**: convex optimization problem description.

**Output**: equivalent second-order cone program

$$ \optmin{x}{c^T x}{Ax = b \\ & & & x \in \script{K}} $$

where $\script{K} = \script{Q}_1 \times \cdots \times \script{Q}_p$ is the cross product of second-order cones and $\script{Q}_i = \Set{(t, y) \in \RR \times \RR^{n_i}}{\|y\|_2 \leq t}$.
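As a quick numerical check on the target cone, here is a sketch of membership tests for a second-order cone and a product of such cones (`in_soc` and `in_product_cone` are illustrative helpers, not part of the paper's tooling):

```python
import math

def in_soc(t, y, tol=1e-9):
    """Membership test for the second-order cone Q = {(t, y) : ||y||_2 <= t}."""
    return math.sqrt(sum(v * v for v in y)) <= t + tol

def in_product_cone(blocks, tol=1e-9):
    """Membership in K = Q_1 x ... x Q_p, given as a list of (t, y) blocks."""
    return all(in_soc(t, y, tol) for t, y in blocks)

# (3, [1, 2]) lies in Q since sqrt(1 + 4) <= 3, but (1, [1, 1]) does not.
```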

The three building blocks of DCP are atoms, expressions, and convex problems; atoms are the atomic units, expressions build from atoms, and convex problems build from expressions.

An atom is a built-in function like plus, minus, sum, maximum, square, or norm. There are three classes of atoms: parametric atoms, scalar atoms, and vector atoms. All atoms return scalars, with one exception: a scalar atom evaluated at a vector argument returns the vector obtained by applying the atom elementwise.

Atoms have three key properties: (1) output sign, (2) monotonicity per argument, and (3) curvature. The output sign can depend upon the signs of inputs; a vector is positive if and only if every entry is positive. The monotonicity of parametric atoms depends upon the sign of their parameters.
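A minimal sketch of how these three properties might be recorded per atom (the `Atom` dataclass and the two atoms shown are illustrative, not the paper's actual atom library):

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Atom:
    name: str
    curvature: str                 # "affine", "convex", or "concave"
    monotonicity: Tuple[str, ...]  # per argument: "increasing", "decreasing", "none"
    sign: Callable[[str], str]     # output sign as a function of the argument's sign

# square is convex, non-monotone, and always positive regardless of its input's sign
SQUARE = Atom("square", "convex", ("none",), lambda s: "positive")
# sum is affine, increasing, and its output sign follows its input's sign
SUM = Atom("sum", "affine", ("increasing",), lambda s: s)
```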

An expression is a vector variable or an atom evaluated at a subexpression (note the recursive definition); the leaves of expressions are the variables, internal nodes are atoms, and each atom is evaluated at its children. For example, in $\operatorname{square}(x + y)$, $\operatorname{square}$ is an atom, while $x + y$ is a subexpression. The sign of an expression is computed bottom-up; variables have unknown sign (but some atoms, like the square atom, have a sign independent of their inputs).

Convexity is also verified in a bottom-up fashion. All variables are affine; $f(e_1, \ldots, e_n)$ is convex if $f$ is convex and one of the following holds for each argument $e_i$: (1) $e_i$ is affine, (2) $f$ is increasing in the $i$th argument and $e_i$ is convex, or (3) $f$ is decreasing in the $i$th argument and $e_i$ is concave.

An expression's curvature can thus be determined from the curvatures of its subexpressions and the properties of its atoms. Note that DCP-compliance is sound but not complete: every problem the rules accept is convex, but some convex problems are rejected.
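The composition rule can be sketched as a recursive check over expression trees (a simplified illustration; the tuple encoding of expressions as `(curvature, monotonicities, children)` with `"var"` leaves is an assumption of this sketch, and sign-dependent monotonicity is omitted):

```python
def is_affine(expr):
    """Variables are affine; affine atoms of affine children are affine."""
    if expr == "var":
        return True
    curv, monos, children = expr
    return curv == "affine" and all(is_affine(c) for c in children)

def is_convex(expr):
    """DCP composition rule: f convex, and each argument affine,
    or (f increasing in it and it convex), or (f decreasing and it concave)."""
    if expr == "var":
        return True
    curv, monos, children = expr
    if curv not in ("convex", "affine"):
        return False
    return all(
        is_affine(c)
        or (m == "increasing" and is_convex(c))
        or (m == "decreasing" and is_concave(c))
        for m, c in zip(monos, children)
    )

def is_concave(expr):
    """Mirror image of is_convex."""
    if expr == "var":
        return True
    curv, monos, children = expr
    if curv not in ("concave", "affine"):
        return False
    return all(
        is_affine(c)
        or (m == "increasing" and is_concave(c))
        or (m == "decreasing" and is_convex(c))
        for m, c in zip(monos, children)
    )
```

Under this encoding, $\operatorname{square}(x)$ is verified convex, but $\operatorname{square}(\operatorname{square}(x))$ is not, even though $x^4$ is convex: square is not monotone, so rule (2) cannot fire. This is the incompleteness noted above.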

The first step is to convert the original problem into **Smith form**.
A new variable $t_i$ is introduced for each node in the expression
tree, with $t_0$ mapping to the root of the tree; then, the problem
is written as “minimize $t_0$” subject to $t_i$ equals the $i$-th
subexpression, for each subexpression $i$. In order to convexify
the Smith problem, the equality constraints $t_i = f_i(t_{j_1}, \ldots, t_{j_{k_i}})$
for convex $f_i$ are replaced with inequality constraints
$f_i(t_{j_1}, \ldots, t_{j_{k_i}}) \leq t_i$; this representation is known as *relaxed*
Smith form. It can be shown (this is key) that the optimal solution of
the relaxed Smith form of a DCP-compliant problem is optimal for the original
problem as well.
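The Smith transformation can be sketched as a tree walk (a hypothetical encoding: a node is either a variable name or an `(atom, children)` pair; each atom node gets a fresh variable, with `t0` at the root):

```python
import itertools

def smith_form(root):
    """Flatten an expression tree into Smith form: introduce a fresh
    variable t_i per atom node and one constraint 't_i = atom(args)' per node.
    Constraints for convex atoms would then be relaxed to 'atom(args) <= t_i'."""
    counter = itertools.count()
    constraints = []

    def visit(node):
        if isinstance(node, str):        # leaf variable: no new constraint
            return node
        atom, children = node
        t = f"t{next(counter)}"          # root is visited first, so it gets t0
        args = [visit(c) for c in children]
        constraints.append((t, atom, args))
        return t

    objective = visit(root)              # "minimize t0"
    return objective, constraints
```

For instance, minimizing $\operatorname{sum}(\operatorname{square}(x),\, y)$ yields the objective `t0` with constraints `t1 = square(x)` and `t0 = sum(t1, y)`.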

In the final step, nonlinear functions in the relaxed Smith form are replaced with the “optimal value of a partially-specified convex optimization problem.” This is known as the graph implementation for convex problems. Finally, the resulting SOCP is converted into standard form.
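For instance (these are the standard epigraph constructions, not reproduced verbatim from the paper): the absolute value atom has the graph implementation $|x| = \inf \Set{t}{-t \leq x \leq t}$, so a relaxed Smith constraint $|x| \leq t$ becomes the two linear constraints $-t \leq x \leq t$. Similarly, $x^2 \leq t$ is equivalent to the second-order cone constraint

$$ \left\| (2x,\; 1 - t) \right\|_2 \leq 1 + t, $$

since squaring both sides gives $4x^2 + (1-t)^2 \leq (1+t)^2$, i.e. $4x^2 \leq 4t$; this is how the square atom lowers to SOCP form.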

*Note that the only work done is copying problem data into SOCP data.*

CVXPY (version < 1.0) more or less implements this paper: it canonicalizes problems by converting them into cone programs, including SOCPs. You can see this in the source code (available on GitHub): each atom implements a function named `graph_implementation`, and this function is invoked during canonicalization.
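A toy version of this per-atom dispatch pattern (the function names, string-based constraints, and registry here are illustrative only, not CVXPY's actual API):

```python
def abs_graph_impl(arg, t):
    # |arg| <= t becomes two linear constraints
    return [f"-{t} <= {arg}", f"{arg} <= {t}"]

def square_graph_impl(arg, t):
    # arg^2 <= t becomes a second-order cone constraint
    return [f"norm2([2*{arg}, 1 - {t}]) <= 1 + {t}"]

# registry mapping each atom to its graph implementation
GRAPH_IMPLS = {"abs": abs_graph_impl, "square": square_graph_impl}

def canonicalize(atom, arg, t):
    """Replace 'atom(arg) <= t' with the atom's cone constraints."""
    return GRAPH_IMPLS[atom](arg, t)
```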