# GLPK/Scaling

Scaling is the process of applying linear transformations to the problem constraint matrix in order to improve its numerical properties.

The type of scaling to apply can be selected using command-line options in the case of GLPSOL, or via the GLPK API when calling GLPK directly from a program.

## Overview

Scaling within GLPK is managed by the function `glp_scale_prob`. The original information on this page was derived from the source code comments for this function and its subsidiary functions `scale_prob`, `gm_scaling`, `gm_iterate`, and `eq_scaling`. On that note, the GLPK codebase is well commented and a good place to look for detailed information.

Scaling in GLPK may consist of the following steps, listed with their bitwise OR-able scaling options:

- skip if the problem is well scaled: `GLP_SF_SKIP`
- perform iterative geometric scaling: `GLP_SF_GM`
- perform equilibration scaling: `GLP_SF_EQ`
- round scale factors to the nearest power of two: `GLP_SF_2N`

The default scaling setting `GLP_SF_AUTO` undertakes the first three steps in the order: `GLP_SF_SKIP`, `GLP_SF_GM`, `GLP_SF_EQ`. Use of the MIP presolver may also add `GLP_SF_2N`.

Scaling normally also provides terminal output. More details can be found in the official documentation or directly from the source code.

## Check whether problem is well scaled

The problem is considered well scaled if

$$0.1 \leq \left|a_{i,j}\right| \leq 10, \quad \forall\, a_{i,j} \neq 0$$

where $a_{i,j}$ is the constraint matrix coefficient for row *i* and column *j*.
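
The test above can be sketched as a small helper (hypothetical code, not part of GLPK):

```python
def is_well_scaled(matrix):
    """Check 0.1 <= |a_ij| <= 10 for every nonzero coefficient."""
    return all(0.1 <= abs(a) <= 10.0
               for row in matrix for a in row if a != 0.0)

good = [[1.0, -2.5], [0.0, 9.9]]    # all nonzeros within [0.1, 10]
bad = [[1.0, 250.0], [0.001, 0.0]]  # 250 and 0.001 are out of range
print(is_well_scaled(good), is_well_scaled(bad))  # True False
```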

## Iterative geometric scaling

Geometric scaling can be applied to both rows and columns. It scales each row or column such that the product of its smallest and largest absolute nonzero values is unity:

$$a'_{i,j} = \frac{a_{i,j}}{\sqrt{\min\left(\left|a_{i,j}\right|\right) \times \max\left(\left|a_{i,j}\right|\right)}}$$

Iterative geometric scaling applies geometric scaling alternately to rows and columns until either a maximum number of iterations is reached or the scaling no longer improves significantly. The scaling quality *r* is the ratio of the largest to the smallest absolute nonzero value of the constraint matrix:

$$r = \frac{\max\left(\left|a_{i,j}\right|\right)}{\min\left(\left|a_{i,j}\right|\right)}, \quad a_{i,j} \neq 0$$

GLPK by default requires a round-to-round improvement in *r* of at least 10% and terminates after at most 15 rounds.
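A single geometric-scaling pass over one row or column can be sketched as follows (illustrative code, not GLPK's `gm_scaling` itself):

```python
from math import sqrt

def gm_scale(vector):
    """Divide a row or column by sqrt(min|a| * max|a|) over its nonzeros,
    so that the product of its smallest and largest magnitudes becomes 1."""
    nz = [abs(a) for a in vector if a != 0.0]
    if not nz:
        return list(vector)  # empty row/column: nothing to scale
    s = sqrt(min(nz) * max(nz))
    return [a / s for a in vector]

scaled = gm_scale([4.0, 0.0, -16.0])  # s = sqrt(4 * 16) = 8
print(scaled)  # [0.5, 0.0, -2.0]
```

An iterative implementation would repeat such passes alternately over all rows and all columns, recomputing *r* after each round.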

## Equilibration scaling

Equilibration scaling can be applied to both rows and columns. It divides each row or column by the largest absolute value present in that row or column:

$$a'_{i,j} = \frac{a_{i,j}}{\max\left(\left|a_{i,j}\right|\right)}$$

Equilibration scaling makes the infinity norm of each row and column equal to one.
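A single equilibration pass can be sketched as follows (illustrative code, not GLPK's `eq_scaling` itself):

```python
def eq_scale(vector):
    """Divide a row or column by its largest absolute value, making its
    infinity norm equal to one."""
    big = max((abs(a) for a in vector), default=0.0)
    return list(vector) if big == 0.0 else [a / big for a in vector]

print(eq_scale([3.0, -6.0, 1.5]))  # [0.5, -1.0, 0.25]
```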

## Nearest power of two scaling

Scaling may produce rounding errors. These can be avoided if all scaling factors are powers of two as this will only influence the exponent and not the mantissa of the IEEE 754 representation of double precision numbers.

The algorithm first iterates over all rows and then over all columns. The scaling factor *s* of each row or column is rounded to the nearest power of two:

$$s' = 2^{\operatorname{round}\left(\log_2 s\right)}$$
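Rounding a scale factor to the nearest power of two on a log scale can be sketched as follows (hypothetical helper, not GLPK's internal routine):

```python
from math import floor, log2

def round_pow2(s):
    """Round a positive scale factor to 2**round(log2(s)). A power-of-two
    factor changes only the exponent of an IEEE 754 double, so the
    mantissa is left untouched."""
    return 2.0 ** floor(log2(s) + 0.5)

print(round_pow2(3.0), round_pow2(0.3), round_pow2(6.0))  # 4.0 0.25 8.0
```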

## Recomputation

If you add, remove, or modify a row or column, you will need to call `glp_scale_prob` again. In the case of a modification, this call may change some row and/or column scale factors, and thereby the scaled basis matrix. This invalidates the basis factorization stored in your `glp_prob` object by the simplex solver, because the factorization is computed for the scaled matrix rather than the original one. This, however, is only a matter of efficiency, not accuracy or stability: if the basis factorization becomes invalid, the simplex solver simply computes a new one.