Generalized Reduced Gradient Method


The Reduced Gradient Method handles equality constraints only.

The Generalized Reduced Gradient (GRG) Method handles both equality and inequality constraints. Inequality constraints are converted to equalities by the use of slack variables.

g(\mathbf{x}) \le 0, \qquad \mathbf{x} = \{x_1, x_2, \ldots, x_n\}

g(\mathbf{x}) + x_{n+1} = 0, \qquad x_{n+1} \ge 0

(Slack variables are also used in Linear Programming)
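For example (an illustrative constraint, not from the notes), the inequality x_1^2 + x_2 - 4 \le 0 becomes the equality x_1^2 + x_2 - 4 + x_3 = 0 with slack variable x_3 \ge 0; the slack measures how far the constraint is from being active.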

The GRG method converts the constrained problem into an unconstrained one and solves it iteratively:

x^{q} = x^{q-1} + \alpha^{*} S^{q}

where S^{q} is the search direction at iteration q and \alpha^{*} is the step length found by a line search.

For S^{q} we use the generalized reduced gradient: a combination of the gradient of the objective function and a pseudo-gradient derived from the equality constraints. The search direction is chosen so that any active constraint remains precisely active for a small move in that direction. Since gradient information is used, the method is similar to steepest descent.

The GRG algorithm:

Split the variables into two categories: dependent and independent. (Excel calls them basic and non-basic variables; Papalambros and Wilde call them state and decision variables.)
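A standard sketch of the derivation, assuming m equality constraints h_j(\mathbf{x}) = 0 (slacks included) and writing [B] for the Jacobian of the constraints with respect to the dependent variables and [A] for the Jacobian with respect to the independent variables ([B] must be nonsingular):

Partition \mathbf{x} into m dependent variables \mathbf{y} and n - m independent variables \mathbf{z}. Linearizing the constraints about the current point,

    d\mathbf{h} = [B]\, d\mathbf{y} + [A]\, d\mathbf{z} = \mathbf{0} \quad \Rightarrow \quad d\mathbf{y} = -[B]^{-1}[A]\, d\mathbf{z}

Substituting into df = \nabla_{\mathbf{y}} f^{T} d\mathbf{y} + \nabla_{\mathbf{z}} f^{T} d\mathbf{z} eliminates d\mathbf{y} and gives

    df = \left( \nabla_{\mathbf{z}} f^{T} - \nabla_{\mathbf{y}} f^{T} [B]^{-1}[A] \right) d\mathbf{z}

so the generalized reduced gradient, the gradient of f with respect to the independent variables alone, is

    G_R = \nabla_{\mathbf{z}} f - \left( [B]^{-1}[A] \right)^{T} \nabla_{\mathbf{y}} f

The search direction in the independent variables is S^{q} = -G_R, and the dependent variables move by d\mathbf{y} = -[B]^{-1}[A]\, d\mathbf{z} so that the linearized constraints remain satisfied.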

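As a concrete illustration, here is a minimal numerical sketch of one reduced-gradient evaluation in Python. The toy objective, constraint, and variable split are assumptions made for illustration, not from the notes:

import numpy as np

# Assumed toy problem (illustrative only):
#   minimize   f(x) = x1^2 + x2^2 + x3^2
#   subject to h(x) = x1 + x2 + x3 - 3 = 0
# Take x3 as the dependent (basic) variable y and (x1, x2)
# as the independent (non-basic) variables z.

def grad_f(x):
    return 2.0 * x                      # gradient of the quadratic objective

def jac_h(x):
    return np.array([[1.0, 1.0, 1.0]])  # dh/dx (constant for this h)

x = np.array([0.5, 1.0, 1.5])           # feasible start: components sum to 3

J = jac_h(x)
B = J[:, [2]]                            # [B]: columns for dependent variables
A = J[:, [0, 1]]                         # [A]: columns for independent variables
g_y = grad_f(x)[[2]]                     # objective gradient w.r.t. y
g_z = grad_f(x)[[0, 1]]                  # objective gradient w.r.t. z

# Generalized reduced gradient: G_R = grad_z f - ([B]^{-1}[A])^T grad_y f.
# Solve linear systems with [B] rather than inverting it explicitly.
G_R = g_z - np.linalg.solve(B, A).T @ g_y

S_z = -G_R                               # move in the independent variables
S_y = -np.linalg.solve(B, A @ S_z)       # dependent move: keeps dh = 0

print("reduced gradient:", G_R)          # -> [-2. -1.]
print("direction  z:", S_z, " y:", S_y)  # -> z: [2. 1.]  y: [-3.]

Note that [B]^{-1} never appears explicitly: both the reduced gradient and the dependent-variable move are obtained by solving linear systems with [B], which is cheaper and numerically safer than forming the inverse.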

Compare the search direction to steepest descent:
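In the notation above, steepest descent uses the full objective gradient,

    S^{q} = -\nabla f(\mathbf{x}^{q-1})

while GRG uses the negative reduced gradient in the independent variables,

    S^{q} = -G_R = -\left[ \nabla_{\mathbf{z}} f - \left( [B]^{-1}[A] \right)^{T} \nabla_{\mathbf{y}} f \right]

with the dependent variables adjusted to stay on the linearized constraints. With no active constraints there are no dependent variables and the two directions coincide.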

GRG works from a linear approximation of the constraints at the current search point, so the procedure must iterate: after each move, the dependent variables are corrected to restore feasibility of the nonlinear constraints. Some difficulties with the GRG algorithm:

1) The inversion of the [B] matrix can be difficult, particularly when [B] is large or ill-conditioned. Algorithms have been developed to overcome this to some extent; in practice the linear systems are solved by factorization rather than explicit inversion (see the sketch after this list).

2) The addition of slack variables enlarges the problem: each inequality constraint contributes one more variable, which complicates matters when there are many inequality constraints.
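A minimal sketch of the factor-once, solve-many idea for point 1, assuming NumPy and SciPy are available (the 2x2 matrix and right-hand sides are purely illustrative):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor [B] once instead of forming [B]^{-1} explicitly;
# each subsequent right-hand side then costs only triangular solves.
B = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lu, piv = lu_factor(B)                              # O(m^3), done once
dy1 = lu_solve((lu, piv), np.array([1.0, 2.0]))     # O(m^2) per solve
dy2 = lu_solve((lu, piv), np.array([0.5, -1.0]))
print(dy1, dy2)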