In a constrained optimization problem, the goal is to find the best (maximum or minimum) value of an objective function by selecting real variables that satisfy a set of equality and inequality constraints.
A general constrained optimization problem is to select real decision variables from a given feasible region in such a way as to optimize (minimize or maximize) a given objective function
\[f(x_0,x_1,\dots,x_{n-1}).\]
We usually let \(x\) denote the vector of real decision variables, that is, \(x=(x_0,x_1,\dots,x_{n-1})\), and write the general nonlinear program as:
\[\begin{aligned}\text{ Maximize }&f(x)\\\text{ Subject to }&g_i(x)\leq b_i&&\qquad(i=0,1,\dots,m-1)\\&x\in\mathbb{R}^n\end{aligned}\]
where each of the constraint functions \(g_0\) through \(g_{m-1}\) is given, and each \(b_i\) is a constant (Bradley et al., 1977).
This is only one possible way to write the problem. Minimizing \(f(x)\) is equivalent to maximizing \(-f(x)\). Likewise, an equality constraint \(g_i(x)=b_i\) can be expressed as the pair of inequalities \(g_i(x)\leq b_i\) and \(-g_i(x)\leq -b_i\). Moreover, by adding a slack variable, any inequality can be converted into an equality constraint (Bradley et al., 1977).
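For instance, with a nonnegative slack variable \(s_i\) (our notation), the inequality-to-equality conversion reads:
\[g_i(x)\leq b_i\quad\Longleftrightarrow\quad g_i(x)+s_i=b_i\;\text{ for some }s_i\geq0.\]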
These types of problems appear across many application areas, for example, in business settings where a company aims to maximize profit or minimize costs while operating under resource or funding constraints (Cherry, 2016).
If \(f\) is a linear function and the constraints \(g_i\) are linear, then the problem is called a linear programming problem (LP) (Cherry, 2016).
“The problem is called a nonlinear programming problem (NLP) if the objective function is nonlinear and/or the feasible region is determined by nonlinear constraints.” (Bradley et al., 1977, p. 410)
The assumptions and approximations used in linear programming can sometimes produce a suitable model for the decision variable ranges of interest. In other cases, however, nonlinear behavior in the objective function and/or the constraints is essential to formulate the application as a mathematical program accurately (Bradley et al., 1977).
Nonlinear programming refers to methods for solving an NLP. Although many mature solvers exist for LPs, NLPs, especially those involving higher-order nonlinearities, are typically more difficult to solve (Cherry, 2016).
Challenging nonlinear programming problems arise in areas such as electronic circuit design, financial portfolio optimization, gas network optimization, and chemical process design (Cherry, 2016).
One way to tackle nonlinear programming problems is to linearize them and use an LP solver to obtain a good approximation. This is attractive for several reasons: linear models are usually faster to solve, for instance, and tend to be more numerically stable.
To move from theory to a working Python model, we will walk through the following sections:
This text assumes some familiarity with mathematical programming. After defining Separable Functions, we present a separable nonlinear maximization Example and outline an approach based on piecewise linear (PWL) approximations of the nonlinear objective function.
Next, we define Special Ordered Sets of Type 2 and explain how they support the numerical formulation.
We then introduce the theoretical background in On Convex and Concave Functions, providing tools used throughout the remainder of this work on nonlinear programming (NLP).
Finally, we present a Python Implementation procedure that uses Gurobipy and PWL approximations of the nonlinear objective function, enabling LP/MIP solvers to obtain useful approximations for fairly large NLPs.
Separable Functions
This article’s solution procedure is for separable programs. Separable programs are optimization problems of the form:
\[\begin{aligned}\text{ Maximize }&\sum\limits_{j=0}^{n-1}f_j(x_j)\\\text{ Subject to }&\sum\limits_{j=0}^{n-1} g_{ij}(x_j)\leq 0\qquad(i=0,1,\dots,m-1)\\&x\in\mathbb{R}^n\end{aligned}\]
where each of the functions \(f_j\) and \(g_{ij}\) is known. These problems are called separable because the decision variables appear separately: each constraint function \(g_{ij}\) and each objective term \(f_j\) involves only the single variable \(x_j\) (Bradley et al., 1977).
Example
Consider the following problem from Bradley et al. (1977) that arises in portfolio selection:
\[\begin{aligned}\text{ Maximize }&f(x)=20x_0+16x_1-2x_0^2-x_1^2-(x_0+x_1)^2\\\text{ Subject to }&x_0+x_1\leq5\\&x_0\geq0,x_1\geq0.\end{aligned}\]

As stated, the problem is not separable because of the cross-term \((x_0+x_1)^2\) in the objective function. However, nonseparable problems can frequently be reduced to a separable form using various formulation tricks. In particular, by letting \(x_2=x_0+x_1\), we can re-express the problem above in separable form as:
\[\begin{aligned}\text{ Maximize }&f(x)=20x_0+16x_1-2x_0^2-x_1^2-x_2^2\\\text{ Subject to }&x_0+x_1\leq5\\&x_0+x_1-x_2=0\\&x_0\geq0,x_1\geq0,x_2\geq 0.\end{aligned}\]
Clearly, the linear constraints are separable, and the objective function can now be written as \(f(x)=f_0(x_0)+f_1(x_1)+f_2(x_2)\), where
\[\begin{aligned}f_0(x_0)&=20x_0-2x_0^2\\f_1(x_1)&=16x_1-x_1^2\end{aligned}\]
and
\[f_2(x_2)=-x_2^2.\]

To form the approximation problem, we approximate each nonlinear term by a piecewise linear function by using a specified number of breakpoints.
Each PWL function \(\hat f_j\) is defined over an interval and consists of line segments whose endpoints lie on the graph of the nonlinear function \(f_j\), starting at \((a_0,f_j(a_0))\) and ending at \((a_{r-1},f_j(a_{r-1}))\). We represent the PWL functions using the following parametrization: any \(x_j\) can be written as
\[x_j=\sum\limits_{i=0}^{r-1}\theta_j^i a_i,\quad \hat f_j(x_j)=\sum\limits_{i=0}^{r-1}\theta_j^if_j(a_i),\quad\sum\limits_{i=0}^{r-1}\theta_j^i=1,\quad\theta_j=(\theta_j^0,\theta_j^1,\dots\theta_j^{r-1})\in\mathbb{R}_{\geq0}^r\]
for each \(j\), where the PWL functions are specified by the points \((a_i,f_j(a_i))\) for \(i=0,1,\dots,r-1\) (Cherry, 2016). Here, we use \(r=4\) uniformly distributed breakpoints to approximate each \(f_j\). Also, since \(x_0+x_1\leq5\) and \(x_2=x_0+x_1\), we have that
\[0\leq x_0\leq5,\qquad0\leq x_1\leq 5,\quad\text{ and }\quad0\leq x_2\leq5.\]
And so, our approximations need not extend beyond these variable bounds. Therefore, each PWL function \(\hat f_j\) will be defined by the points \(a_0=0,\ a_1=\frac{5}{3},\ a_2=\frac{10}{3},\) and \(a_3=5\), for \(j=0,1,2\).
Any \(x_0\in[0,5]\) can then be written as
\[\begin{aligned}x_0&=\theta_0^0\cdot0+\theta_0^1\cdot\frac{5}{3}+\theta_0^2\cdot\frac{10}{3}+\theta_0^3\cdot5\\&=\theta_0^1\cdot\frac{5}{3}+\theta_0^2\cdot\frac{10}{3}+\theta_0^3\cdot 5\text{ s.t. }\sum\limits_{i=0}^3\theta_0^i=1\end{aligned}\]
and \(f_0(x_0)\) can be approximated as
\[\begin{aligned}\hat f_0(x_0)&=\theta_0^0f_0(0)+\theta_0^1f_0\left(\frac{5}{3}\right)+\theta_0^2f_0\left(\frac{10}{3}\right)+\theta_0^3f_0(5)\\&=\theta_0^1\cdot\frac{250}{9}+\theta_0^2\cdot\frac{400}{9}+\theta_0^3\cdot50.\end{aligned}\]
The same reasoning follows for \(x_1\) and \(x_2\). For example, evaluating the approximation at \(x_0=1.5\) gives:
\[\begin{aligned}\hat f_0(1.5)&=\frac{1}{10}f_0(0)+\frac{9}{10}f_0\left(\frac{5}{3}\right)\\&=\frac{1}{10}\cdot 0+\frac{9}{10}\cdot\frac{250}{9}\\&=25\end{aligned}\]
since
\[1.5=\frac{1}{10}\cdot0+\frac{9}{10}\cdot\frac{5}{3}.\]
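As a quick sanity check, the convex-combination evaluation above can be sketched in a few lines of Python (the helper name `pwl_eval` is our own illustration, not part of any library):

```python
import numpy as np

def pwl_eval(f, breakpoints, x):
    """Evaluate the PWL interpolant of f at x using the convex-combination
    weights of the two adjacent breakpoints enclosing x."""
    a = np.asarray(breakpoints, dtype=float)
    k = np.searchsorted(a, x, side="right") - 1
    k = min(max(k, 0), len(a) - 2)           # clamp to a valid segment
    theta_right = (x - a[k]) / (a[k + 1] - a[k])
    theta_left = 1.0 - theta_right           # the two weights sum to 1
    return theta_left * f(a[k]) + theta_right * f(a[k + 1])

f0 = lambda x: 20.0 * x - 2.0 * x**2
a = np.linspace(0.0, 5.0, 4)                 # breakpoints 0, 5/3, 10/3, 5
print(pwl_eval(f0, a, 1.5))                  # ≈ 25.0 = 1/10·f0(0) + 9/10·f0(5/3)
```

Note that picking only the two breakpoints enclosing \(x\) builds the adjacency condition directly into the evaluation.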

Such weighted sums are called convex combinations, i.e., linear combinations of points with non-negative coefficients that sum to 1. Now, since \(f_0\), \(f_1\), and \(f_2\) are concave functions, one may ignore additional restrictions, and the problem can be solved as an LP (Bradley et al., 1977). However, in general, such a formulation does not suffice. In particular, we still need to enforce the adjacency condition: at most two weights are positive, and if two weights are positive, then they are adjacent, i.e., of the form \(\theta_j^i\) and \(\theta_j^{i+1}\). A similar restriction applies to each approximation. For instance, if the non-adjacent weights \(\theta_0^0=\frac{2}{5}\) and \(\theta_0^2=\frac{3}{5}\) are positive, then the approximation gives
\[\begin{aligned}\hat f_0(x_0)&=\frac{2}{5}\cdot 0+\frac{3}{5}\cdot\frac{400}{9}=\frac{80}{3}\\x_0&=\frac{2}{5}\cdot0+\frac{3}{5}\cdot\frac{10}{3}=2.\end{aligned}\]
In contrast, at \(x_0=2\) with the adjacent weights \(\theta_0^1=\frac{4}{5}\) and \(\theta_0^2=\frac{1}{5}\), the approximation curve gives
\[\begin{aligned}\hat f_0(x_0)&=\frac{4}{5}\cdot\frac{250}{9}+\frac{1}{5}\cdot\frac{400}{9}=\frac{280}{9}\\x_0&=\frac{4}{5}\cdot\frac{5}{3}+\frac{1}{5}\cdot\frac{10}{3}=2.\end{aligned}\]
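These two weight choices are easy to verify exactly with Python's `fractions` module (our own check, not part of the referenced formulation):

```python
from fractions import Fraction as F

f0 = lambda x: 20 * x - 2 * x**2
a = [F(0), F(5, 3), F(10, 3), F(5)]   # the four breakpoints

# Non-adjacent weights 2/5 (at a_0) and 3/5 (at a_2)
non_adj = F(2, 5) * f0(a[0]) + F(3, 5) * f0(a[2])
# Adjacent weights 4/5 (at a_1) and 1/5 (at a_2)
adj = F(4, 5) * f0(a[1]) + F(1, 5) * f0(a[2])
print(non_adj, adj)  # 80/3 and 280/9
```

Both weight vectors represent \(x_0=2\), yet they produce different approximation values; only the adjacent pair traces the PWL curve.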

One can enforce the adjacency condition by introducing additional binary variables into the model, as formulated in Cherry (2016). However, we leverage Gurobipy's SOS (Special Ordered Set) constraint object.
The complete approximation program is as follows:
\[\begin{alignat}{2}\text{ Maximize }&\hat f_0(x_0)+\hat f_1(x_1)+\hat f_2(x_2)\\\text{ Subject to }&x_0=\sum\limits_{i=0}^{r-1}\theta_0^ia_i\\&x_1=\sum\limits_{i=0}^{r-1}\theta_1^ia_i\\&x_2=\sum\limits_{i=0}^{r-1}\theta_2^ia_i\\&\hat f_0=\sum\limits_{i=0}^{r-1}\theta_0^if_0(a_i)\\&\hat f_1=\sum\limits_{i=0}^{r-1}\theta_1^if_1(a_i)\\&\hat f_2=\sum\limits_{i=0}^{r-1}\theta_2^if_2(a_i)\\&\sum\limits_{i=0}^{r-1}\theta_j^i=1&&\qquad j=0,1,2\\&\{\theta_j^0,\theta_j^1,\dots\theta_j^{r-1}\}\text{ SOS type 2 }&&\qquad j=0,1,2\\&x_0,x_1,x_2\in[0,5]\cap\mathbb{R}\\&\hat f_0\in[0,50]\cap\mathbb{R}\\&\hat f_1\in[0,55]\cap\mathbb{R}\\&\hat f_2\in [-25,0]\cap\mathbb{R}\\&\theta_j^i\in[0,1]\cap\mathbb{R}&&\qquad j=0,1,2,\;i=0,1,\dots,{r-1}.\end{alignat}\]
Special Ordered Sets of Type 2
A set of variables \(\{\theta^0,\theta^1,\dots,\theta^{r-1}\}\) is a special ordered set of type 2 (SOS type 2) if at most two variables in the set can be nonzero, and, if two variables are nonzero, they must be adjacent in the set (de Farias et al., 2000).
In our formulation, the adjacency condition on the weights \(\theta_j^i\) exactly matches an SOS type 2 condition: at most two can be positive, and they must correspond to adjacent breakpoints. Using SOS type 2 constraints lets us enforce adjacency directly in Gurobi, which applies specialized branching strategies that capture the SOS structure automatically, instead of introducing extra binary variables by hand.
On Convex and Concave Functions
For the following discussion, we take a general separable constrained maximization problem, as previously defined in the Separable Functions section:
\[\begin{aligned}\text{ Maximize }&\sum\limits_{j=0}^{n-1}f_j(x_j)\\\text{ Subject to }&\sum\limits_{j=0}^{n-1} g_{ij}(x_j)\leq 0\qquad(i=0,1,\dots,m-1)\\&x\in\mathbb{R}^n.\end{aligned}\]
where each of the functions \(f_j\) and \(g_{ij}\) is known.
Throughout, the term piecewise-linear (PWL) approximation refers to the secant interpolant obtained by connecting ordered breakpoint pairs \((a_i,f_j(a_i))\) with line segments.
Convex and concave functions are among the most essential functional forms in mathematical optimization. Formally, a function \(f\) is called convex if, for every \(y\) and \(z\) in its domain and every \(0\leq\lambda\leq1\),
\[f[\lambda y+(1-\lambda)z]\leq\lambda f(y)+(1-\lambda)f(z).\]
This definition implies that the sum of convex functions is convex, and that nonnegative multiples of convex functions are also convex. A concave function is simply the negative of a convex function; the definition above applies with the inequality reversed. Of course, some functions are neither convex nor concave.

It is easy to see that linear functions are both convex and concave. This property is essential since it allows us to optimize (maximize or minimize) linear objective functions using computationally efficient techniques, such as the simplex method in linear programming (Bradley et al., 1977).
Additionally, a differentiable single-variable function \(f\) is convex on an interval exactly when its derivative \(f'\) is nondecreasing, meaning the slope does not go down. Likewise, differentiable \(f\) is concave on an interval exactly when \(f'\) is nonincreasing, meaning the slope does not go up.
From the definition above, one can conclude that PWL approximations overestimate convex functions, since every line segment joining two points on the graph lies on or above the graph between them. Similarly, PWL approximations underestimate concave functions.
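A quick numerical check of this over- and underestimation, using \(x^2\) and \(-x^2\) as a convex/concave pair over a single segment (our own sketch):

```python
import numpy as np

convex = lambda x: x**2          # convex: secants lie on or above the graph
concave = lambda x: -x**2        # concave: secants lie on or below the graph

a, b = 1.0, 3.0                  # one PWL segment joining (a, f(a)) and (b, f(b))
xs = np.linspace(a, b, 101)
secant = lambda f: f(a) + (f(b) - f(a)) * (xs - a) / (b - a)

print(np.all(secant(convex) >= convex(xs)))    # True: PWL overestimates
print(np.all(secant(concave) <= concave(xs)))  # True: PWL underestimates
```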
Nonlinear Objective Function
This notion helps us explain why we can omit the SOS type 2 constraints from a problem such as the one in the Example section. By concavity, the PWL approximation curve lies above the segment joining any two non-adjacent breakpoints, so the maximization will select only adjacent weights. A similar argument applies if three or more weights are positive (Bradley et al., 1977). Formally, suppose that each \(f_j\) is a concave function and each known \(g_{ij}\) is a linear function in \(x_j\). Then we can apply the same procedure as in the Example section: we let breakpoints satisfy
\[l= a_0\lt a_1\lt \dots\lt a_{r-1}= u,\qquad r\geq 2.\]
for real numbers \(l\leq u\). Also, let \(\theta_j\in\mathbb{R}_{\geq0}^r\) be such that
\[\sum\limits_{i=0}^{r-1}\theta_j^i=1\]
and assign
\[x_j=\sum\limits_{i=0}^{r-1}\theta_j^i\cdot a_i,\qquad\hat f_j=\sum\limits_{i=0}^{r-1}\theta_j^if_j(a_i)\]
for each \(j\). And so, the transformed LP is as follows:
\[\begin{alignat}{2}\text{ Maximize }&\sum\limits_{j=0}^{n-1}\hat f_j\\\text{ Subject to }&\sum\limits_{j=0}^{n-1} g_{ij}(x_j)\leq 0&&\qquad(i=0,1,\dots,m-1)\\&x_j=\sum\limits_{i=0}^{r-1}\theta_j^i\cdot a_i&&\qquad(j=0,1,\dots,n-1)\\&\hat f_j=\sum\limits_{i=0}^{r-1}\theta_j^if_j(a_i)&&\qquad(j=0,1,\dots,n-1)\\&\sum_{i=0}^{r-1}\theta_j^i=1&&\qquad(j=0,1,\dots,n-1)\\&\theta_j\in\mathbb{R}_{\geq 0}^r&&\qquad(j=0,1,\dots,n-1)\\&x\in\mathbb{R}^n.\end{alignat}\]
Now, let \(p_j\) be the PWL curve that goes through the points \((a_i,f_j(a_i))\), i.e., for each \(x_j\in[a_k,a_{k+1}]\),
\[p_j(x_j)=\frac{a_{k+1}-x_j}{a_{k+1}-a_k}f_j(a_k)+\frac{x_j-a_k}{a_{k+1}-a_k}f_j(a_{k+1})\]
\(f_j\) concave implies that its secant slopes are non-increasing, and since \(p_j\) is constructed by joining ordered breakpoints on the graph of \(f_j\), we have that \(p_j\) is also concave. And so, by Jensen’s inequality,
\[p_j(x_j)=p_j\left(\sum\limits_{i=0}^{r-1}\theta_j^i\cdot a_i\right)\geq\sum\limits_{i=0}^{r-1}\theta_j^i\cdot p_j(a_i)=\sum\limits_{i=0}^{r-1}\theta_j^if_j(a_i)=\hat f_j.\]
Moreover, for all \(j\), in any feasible solution one can choose \(k\) such that \(a_k\leq x_j\leq a_{k+1}\) and define the vector \(\hat\theta_j\) by
\[\hat\theta_j^k=\frac{a_{k+1}-x_j}{a_{k+1}-a_k},\qquad \hat\theta_j^{k+1}=\frac{x_j-a_k}{a_{k+1}-a_k},\qquad\hat\theta_j^i=0,\;(i\notin \{k,k+1\})\qquad\]
with \(\hat\theta_j^k,\hat\theta_j^{k+1}\geq0\) and \(\hat\theta_j^k+\hat\theta_j^{k+1}=1\). Then \(x_j=\sum_{i=0}^{r-1}\hat\theta_j^ia_i\), and evaluating on the segment \([a_k,a_{k+1}]\) gives
\[\hat f_j^\text{new}=\hat\theta_j^k f_j(a_k)+\hat\theta_j^{k+1}f_j(a_{k+1})=p_j(x_j)\geq\hat f_j.\]
Because the constraints depend on \(\theta_j\) only through the induced value \(x_j\), replacing each \(\theta_j\) by \(\hat\theta_j\) preserves feasibility (because it leaves each \(x_j\) unchanged) and weakly improves the objective function (because it increases each \(\hat f_j\) to \(p_j(x_j)\)). Therefore, the SOS type 2 constraints that enforce the adjacency condition do not change the optimal value in separable maximization problems with concave objective functions. A similar argument shows that they do not change the optimal value in separable minimization problems with convex objective functions.
Note, however, that this shows that there exists an optimal solution in which only weights corresponding to adjacent breakpoints are positive. It does not guarantee that all optimal solutions of the LP satisfy adjacency. Therefore, one may still include SOS type 2 constraints to enforce a consistent representation.
In practice, solving models with concave objective functions via PWL approximations can yield an objective value that underestimates the true optimal value. Conversely, solving models with convex objective functions via PWL approximations can yield an objective value that overestimates the true optimal value.
However, although these approximations can distort the objective value, in models where the original linear constraints remain unchanged, feasibility is not affected by the PWL approximations. Any variable assignments found by solving the model through PWL approximations satisfy the same linear constraints and therefore lie in the original feasible region. Consequently, one can evaluate the original nonlinear objective at the returned solution to obtain the objective value, even though this value need not equal the true nonlinear optimum.
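Concretely, for the Example above this post-solve evaluation is a one-liner: we plug the variable assignments returned by the 4-breakpoint model (\(x_0=\frac{5}{3}\), \(x_1=\frac{10}{3}\), \(x_2=5\)) into the original nonlinear objective:

```python
# Original nonlinear objective f(x) = f0(x0) + f1(x1) + f2(x2)
f0 = lambda x: 20.0 * x - 2.0 * x**2
f1 = lambda x: 16.0 * x - x**2
f2 = lambda x: -x**2

x0, x1, x2 = 5.0 / 3.0, 10.0 / 3.0, 5.0   # assignments from the PWL model
true_value = f0(x0) + f1(x1) + f2(x2)
print(true_value)  # ≈ 45.0; it matches the PWL objective here because the
                   # solution happens to land exactly on breakpoints
```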
Nonlinear Constraints
PWL approximations of nonlinear constraints require a different approach, since they can change the original feasible region. We define the feasible region (or feasible set) of a problem like the one above by:
\[F=\bigcap\limits_{i=0}^{m-1}\{x\in\mathbb{R}^n:G_i(x)\leq0\},\]
where
\[G_i(x):=\sum\limits_{j=0}^{n-1}g_{ij}(x_j).\]
Now, suppose (for notational simplicity) that all \(g_{ij}\) are nonlinear functions. Approximate each univariate \(g_{ij}\) by a PWL function \(\hat g_{ij}\) and define:
\[\hat G_i(x):=\sum\limits_{j=0}^{n-1}\hat g_{ij}(x_j).\]
Then
\[\hat F=\bigcap\limits_{i=0}^{m-1}\{x\in\mathbb{R}^n:\hat G_i(x)\leq0\},\]
is the feasible set of the transformed LP that uses PWL approximations. To avoid truly infeasible solutions, we want
\[\hat F\subseteq F.\]
If, instead, \(\hat F\not\subseteq F\), then there may exist \(x\in\hat F\setminus F\), meaning the model that uses PWL approximations could accept points that violate the original nonlinear constraints.

Given the way we construct the PWL approximations, a sufficient condition for \(\hat F\subseteq F\) is that each \(g_{ij}\) is convex for constraints of the form \(G_i(x)\leq0\), and each \(g_{ij}\) is concave for constraints of the form \(G_i(x)\geq0\).
To see why this holds, take any \(x\in\hat F\). First, consider constraints of the form \(G_i(x)\leq0\), where each \(g_{ij}\) is convex. Because PWL approximations overestimate convex functions, for any fixed \(i\) we have:
\[(\forall j,\;\hat g_{ij}(x_j)\geq g_{ij}(x_j))\rightarrow\sum\limits_{j=0}^{n-1}\hat g_{ij}(x_j)\geq\sum\limits_{j=0}^{n-1}g_{ij}(x_j)\rightarrow\hat G_i(x)\geq G_i(x)\]
Since \(x\in\hat F\), it satisfies \(\hat G_i(x)\leq0\). Together with \(\hat G_i(x)\geq G_i(x)\), this implies \(G_i(x)\leq0\).
Next, consider constraints of the form \(G_i(x)\geq0\), where each \(g_{ij}\) is concave. Because PWL approximations underestimate concave functions, for any fixed \(i\) we have:
\[(\forall j,\;\hat g_{ij}(x_j)\leq g_{ij}(x_j))\rightarrow\sum\limits_{j=0}^{n-1}\hat g_{ij}(x_j)\leq\sum\limits_{j=0}^{n-1}g_{ij}(x_j)\rightarrow\hat G_i(x)\leq G_i(x).\]
Again, since \(x\in\hat F\), it satisfies \(\hat G_i(x)\geq0\). Combined with \(\hat G_i(x)\leq G_i(x)\), this implies \(G_i(x)\geq0\).
This motivates the notion of a convex set: a set \(S\) is convex if, for any two points in \(S\), the entire line segment connecting them lies within \(S\). Formally, \(S\) is convex if for all \(y,z\in S\) and any \(\lambda\in[0,1]\), the point \(\lambda y+(1-\lambda)z\) is also in \(S\).

In particular, if \(G_i\) is convex, then
\[S_i:=\{x\in\mathbb{R}^n:G_i(x)\leq b\}\]
is convex for any \(b\in\mathbb{R}\). Proof: take any \(x_1,x_2\in S_i\); then \(G_i(x_1)\leq b\) and \(G_i(x_2)\leq b\). Since \(G_i\) is convex, for any \(\lambda\in[0,1]\) we have that
\[G_i(\lambda x_1+(1-\lambda)x_2)\leq\lambda G_i(x_1)+(1-\lambda)G_i(x_2)\leq\lambda b+(1-\lambda)b=b.\]
Hence, \(\lambda x_1+(1-\lambda)x_2\in S_i\), and so \(S_i\) is convex. A similar argument applies to show that
\[T_i:=\{x\in\mathbb{R}^n:G_i(x)\geq b\}\]
is convex for any \(b\in\mathbb{R}\) if \(G_i\) is concave.
And since one could easily show that the intersection of any collection of convex sets is also convex,
“for a convex feasible set we want convex functions for less-than-or-equal-to constraints and concave functions for greater-than-or-equal to constraints.” (Bradley et al., 1977, p.418)
Nonlinear optimization problems in which the goal is to minimize a convex objective (or maximize a concave one) over a convex set of constraints are called convex nonlinear problems. These problems are generally easier to tackle than nonconvex ones.
Indeed, if \(\hat F\not\subseteq F\), then variable assignments \(x\in\hat F\setminus F\) lead to constraint violations. In these cases, one would need to rely on other techniques, such as the simple MAP modification to the Frank-Wolfe algorithm provided in Bradley et al. (1977), and/or introduce tolerances that allow constraint violations to some degree.
Python Implementation
Before presenting the code, it is useful to consider how our modeling approach scales as the problem size increases. For a separable program with \(n\) decision variables and \(r\) breakpoints per variable, the model introduces \(nr\) continuous variables \(\theta_j^i\) (for the convex combinations) and \(n\) SOS type 2 constraints. Both the number of variables and the number of constraints grow linearly with \(n\) and \(r\), but their product \(nr\) can quickly lead to large models when either parameter is increased.
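As a back-of-the-envelope sketch of these counts (the helper name `pwl_model_size` is our own, not a library function):

```python
def pwl_model_size(n, r):
    """Rough size of the PWL model for a separable program with n decision
    variables and r breakpoints per variable."""
    return {
        "theta_vars": n * r,     # convex-combination weights
        "link_constrs": 2 * n,   # definitions of x_j and its f_j-hat value
        "convexity_constrs": n,  # each weight vector sums to 1
        "sos2_sets": n,          # one SOS type 2 set per variable
    }

print(pwl_model_size(n=3, r=4))  # the Example: 12 theta variables, 3 SOS sets
```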
As previously mentioned, we will use Gurobipy to numerically solve the program above. Generally, the procedure is as follows:
- Choose variable bounds \(l\leq x_j\leq u\)
- Choose breakpoints \(l=a_0\lt a_1\lt\dots\lt a_{r-1}=u\)
- Add convex-combination constraints
- Add SOS type 2 constraints
- Solve
- Evaluate the original nonlinear objective at the returned \(x\)
- Refine breakpoints if needed
After importing the necessary libraries and dependencies, we declare and instantiate our nonlinear functions \(f_0\), \(f_1\), and \(f_2\):
# Imports
import gurobipy as gp
from gurobipy import GRB
import numpy as np
# Nonlinear functions
f0 = lambda x0: 20.0 * x0 - 2.0 * x0**2
f1 = lambda x1: 16.0 * x1 - x1**2
f2 = lambda x2: -x2**2
Next, we choose the number of breakpoints. To construct the PWL approximations, we need to distribute them across the interval \([0,5]\). Many strategies exist to do this. For instance, one could cluster more points near the ends or even rely on adaptive procedures as in Cherry (2016). For simplicity, we stick to uniform sampling:
lb, ub = 0.0, 5.0
r = 4  # Number of breakpoints
a_i = np.linspace(start=lb, stop=ub, num=r)  # Distribute breakpoints uniformly
Then, we can compute the value of the functions at the chosen breakpoints to approximate the decision variables:
# Compute function values at breakpoints
f0_hat_r = f0(a_i)
f1_hat_r = f1(a_i)
f2_hat_r = f2(a_i)
The next step is to declare and instantiate a Gurobipy optimization model:
model = gp.Model()
After creating our model, we need to add the decision variables. Gurobipy provides methods to add multiple decision variables to a model at once, and we leverage this feature to declare and instantiate some of our decision variables, namely the coefficients of the PWL approximations. Note that all decision variables in our model are continuous and can be assigned any real value within the provided bounds. Such models can be solved efficiently using modern software packages such as Gurobi.
# Decision variables
x0 = model.addVar(lb=0.0, ub=5.0, name='x0', vtype=GRB.CONTINUOUS)
x1 = model.addVar(lb=0.0, ub=5.0, name='x1', vtype=GRB.CONTINUOUS)
x2 = model.addVar(lb=0.0, ub=5.0, name='x2', vtype=GRB.CONTINUOUS)
f0_hat = model.addVar(lb=0.0, ub=50.0, name='f0_hat', vtype=GRB.CONTINUOUS)
f1_hat = model.addVar(lb=0.0, ub=55.0, name='f1_hat', vtype=GRB.CONTINUOUS)
f2_hat = model.addVar(lb=-25.0, ub=0.0, name='f2_hat', vtype=GRB.CONTINUOUS)
theta = model.addVars(range(3), range(r), lb=0.0, ub=1.0, name='theta', vtype=GRB.CONTINUOUS)
Additionally, we need to add constraints that limit the values our decision variables may take. First, we add the constraints that tie the decision variables and the approximation values to their convex-combination weights:
# Constraints
model.addConstr(x0 == gp.quicksum(theta[0, i] * a_i[i] for i in range(r)))
model.addConstr(x1 == gp.quicksum(theta[1, i] * a_i[i] for i in range(r)))
model.addConstr(x2 == gp.quicksum(theta[2, i] * a_i[i] for i in range(r)))
model.addConstr(f0_hat == gp.quicksum(theta[0, i] * f0_hat_r[i] for i in range(r)))
model.addConstr(f1_hat == gp.quicksum(theta[1, i] * f1_hat_r[i] for i in range(r)))
model.addConstr(f2_hat == gp.quicksum(theta[2, i] * f2_hat_r[i] for i in range(r)))
model.addConstr(gp.quicksum(theta[0, i] for i in range(r)) == 1)
model.addConstr(gp.quicksum(theta[1, i] for i in range(r)) == 1)
model.addConstr(gp.quicksum(theta[2, i] for i in range(r)) == 1)
Second, we add the constraints that define the original problem, namely the restrictions from the nonlinear formulation:
model.addConstr(x0 + x1 <= 5.0)
model.addConstr(x0 + x1 - x2 == 0.0)
And finally, we add the SOS type 2 constraints to enforce the adjacency condition:
# Special ordered sets
model.addSOS(GRB.SOS_TYPE2, [theta[0, i] for i in range(r)])
model.addSOS(GRB.SOS_TYPE2, [theta[1, i] for i in range(r)])
model.addSOS(GRB.SOS_TYPE2, [theta[2, i] for i in range(r)])
All that remains is to define the objective function, which we would like to maximize:
model.setObjective(f0_hat + f1_hat + f2_hat, GRB.MAXIMIZE)  # Objective function
The model is now complete and we can use the Gurobi solver to retrieve the optimal value and optimal variable assignments. The code to do this is shown below:
model.optimize()  # Optimize!
obj_val = model.ObjVal
res = {}
all_vars = model.getVars()
values = model.getAttr('X', all_vars)
names = model.getAttr('VarName', all_vars)
# Retrieve values of variables after optimizing
for name, val in zip(names, values):
    res[name] = val
After executing the code above and printing attributes to the screen, the output is as follows:
- Status: 2
- Number of breakpoints: 4
- Optimal objective value: 45.00
- Optimal x0, f0(x0) value (PWL approximation): 1.67, 27.78
- Optimal x1, f1(x1) value (PWL approximation): 3.33, 42.22
- Optimal x2, f2(x2) value (PWL approximation): 5.00, -25.00

The difference between our computed objective value (45.00) and the true optimal value of 139/3 ≈ 46.33 is 4/3. To increase accuracy and obtain a better solution, one could add more breakpoints, thereby increasing the number of segments used to approximate the nonlinear functions. For instance, the output below shows that increasing the number of breakpoints to 16 leads to an approximation with practically no error relative to the true optimal value:
- Status: 2
- Number of breakpoints: 16
- Optimal objective value: 46.33
- Optimal x0, f0(x0) value (PWL approximation): 2.33, 35.78
- Optimal x1, f1(x1) value (PWL approximation): 2.67, 35.56
- Optimal x2, f2(x2) value (PWL approximation): 5.00, -25.00
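The effect of adding breakpoints can also be seen without a solver, by measuring the worst-case gap between \(f_0\) and its PWL interpolant as \(r\) grows (a sketch using NumPy's np.interp as the PWL evaluator):

```python
import numpy as np

f0 = lambda x: 20.0 * x - 2.0 * x**2

def max_pwl_error(r, lb=0.0, ub=5.0):
    """Worst-case |f0(x) - PWL(x)| over a dense grid, with r uniform breakpoints."""
    a = np.linspace(lb, ub, r)
    xs = np.linspace(lb, ub, 2001)
    pwl = np.interp(xs, a, f0(a))  # piecewise linear interpolation through breakpoints
    return np.max(np.abs(f0(xs) - pwl))

for r in (4, 8, 16):
    print(r, max_pwl_error(r))  # error shrinks roughly 4x per doubling of r
```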


You can find the complete Python code in this GitHub repository: link.
Further Readings
For a more advanced, robust algorithm that uses PWL functions to approximate the nonlinear objective function, Cherry (2016) is a valuable reference. It presents a procedure that begins with coarse approximations, which are refined iteratively using the optimal solution to the LP relaxation from the previous step.
For a deeper understanding of nonlinear programming, including methods, applied examples, and an explanation of why nonlinear problems are intrinsically more difficult to solve, the reader may refer to the Nonlinear Programming chapter of Applied Mathematical Programming (Bradley et al., 1977).
Finally, de Farias et al. (2000) present computational experiments that use SOS constraints within a branch-and-cut scheme to solve a generalized assignment problem arising in fiber-optic cable production scheduling.
Conclusion
Nonlinear programming is a relevant area in numerical optimization — with applications ranging from financial portfolio optimization to chemical process control. This article presented a procedure for tackling separable nonlinear programs by replacing nonlinear terms with PWL approximations and solving the resulting model using LP/MIP solvers. We also provided theoretical background on convex and concave functions and explained how these notions relate to nonlinear programming. With these elements, a reader should be able to model and solve LP/MIP-compatible approximations of many nonlinear programming instances without relying on dedicated nonlinear solvers.
Thank you for reading!
References
Cherry, A., 2016. Piecewise Linear Approximation for Nonlinear Programming Problems. Dissertation, Texas Tech University. Available at: https://ttu-ir.tdl.org/server/api/core/bitstreams/2ffce0e6-87e9-4e4c-b41a-60ea670c5a13/content (Accessed: 27 January 2026).
Bradley, S. P., Hax, A. C. & Magnanti, T. L., 1977. Nonlinear Programming. In: Applied Mathematical Programming. Reading, MA: Addison-Wesley Publishing Company, pp. 410–464. Available at: https://web.mit.edu/15.053/www/AMP-Chapter-13.pdf (Accessed: 27 January 2026).
de Farias, I. R., Johnson, E. L. & Nemhauser, G. L., 2000. A generalized assignment problem with special ordered sets: a polyhedral approach. Mathematical Programming, 89(1), pp. 187–203. Available at: https://doi.org/10.1007/PL00011392 (Accessed: 27 January 2026).