###############################################################################
# The Institute for the Design of Advanced Energy Systems Integrated Platform
# Framework (IDAES IP) was produced under the DOE Institute for the
# Design of Advanced Energy Systems (IDAES).
#
# Copyright (c) 2018-2023 by the software owners: The Regents of the
# University of California, through Lawrence Berkeley National Laboratory,
# National Technology & Engineering Solutions of Sandia, LLC, Carnegie Mellon
# University, West Virginia University Research Corporation, et al.
# All rights reserved. Please see the files COPYRIGHT.md and LICENSE.md
# for full copyright and license information.
###############################################################################
IDAES Model Diagnostics Toolbox Tutorial
Author: Andrew Lee
Maintainer: Andrew Lee
Updated: 2023-10-31
As you have likely discovered already, developing and solving models in an equation-oriented (EO) environment can be challenging and often takes a significant amount of effort. There are many pitfalls and mistakes that can be encountered when developing a model which can greatly impact the solvability and robustness of the final problem.
Model diagnosis and debugging is often more of an art than a science, and it generally relies on significant experience and an understanding of both general EO modeling techniques and the specific model and problem being solved. To assist with this process, IDAES has developed a model diagnostics toolbox that brings together a large number of tools for identifying potential issues in a model, to help guide the user through the process of finding and resolving them. Note, however, that whilst these tools can help identify the presence of an issue, remedying it always requires some degree of engineering knowledge about the system being modeled, and thus it is ultimately up to the user to find a solution to the problem.
This tutorial will take you through using the DiagnosticsToolbox to debug a number of issues in a simple Pyomo model, taking it from initially reporting a possible infeasible solution to returning the correct solution.
To get started, the DiagnosticsToolbox can be imported from idaes.core.util.
from idaes.core.util import DiagnosticsToolbox
To get some information on where to start, try using the Python help() function to see the documentation for the DiagnosticsToolbox.
help(DiagnosticsToolbox)
Help on class DiagnosticsToolbox in module idaes.core.util.model_diagnostics:
class DiagnosticsToolbox(builtins.object)
 DiagnosticsToolbox(model: pyomo.core.base.block._BlockData, **kwargs)

 The IDAES Model DiagnosticsToolbox.

 To get started:

 1. Create an instance of your model (this does not need to be initialized yet).
 2. Fix variables until you have 0 degrees of freedom. Many of these tools presume
 a square model, and a square model should always be the foundation of any more
 advanced model.
 3. Create an instance of the DiagnosticsToolbox and provide the model to debug as
 the model argument.
 4. Call the ``report_structural_issues()`` method.

 Model diagnostics is an iterative process and you will likely need to run these
 tools multiple times to resolve all issues. After making a change to your model,
 you should always start from the beginning again to ensure the change did not
 introduce any new issues; i.e., always start from the report_structural_issues()
 method.

 Note that structural checks do not require the model to be initialized, thus users
 should start with these. Numerical checks require at least a partial solution to the
 model and should only be run once all structural issues have been resolved.

 Report methods will print a summary containing three parts:

 1. Warnings - these are critical issues that should be resolved before continuing.
 For each warning, a method will be suggested in the Next Steps section to get
 additional information.
 2. Cautions - these are things that could be correct but could also be the source of
 solver issues. Not all cautions need to be addressed, but users should investigate
 each one to ensure that the behavior is correct and that they will not be the source
 of difficulties later. Methods exist to provide more information on all cautions,
 but these will not appear in the Next Steps section.
 3. Next Steps - these are recommended methods to call from the DiagnosticsToolbox to
 get further information on warnings. If no warnings are found, this will suggest
 the next report method to call.

 Args:

 model: model to be diagnosed. The DiagnosticsToolbox does not support indexed Blocks.

 Keyword Arguments
 
 variable_bounds_absolute_tolerance: float, default=0.0001
 Absolute tolerance for considering a variable to be close to its
 bounds.

 variable_bounds_relative_tolerance: float, default=0.0001
 Relative tolerance for considering a variable to be close to its
 bounds.

 variable_bounds_violation_tolerance: float, default=0
 Absolute tolerance for considering a variable to violate its bounds.
 Some solvers relax bounds on variables thus allowing a small violation
 to be considered acceptable.

 constraint_residual_tolerance: float, default=1e05
 Absolute tolerance to use when checking constraint residuals.

 variable_large_value_tolerance: float, default=10000.0
 Absolute tolerance for considering a value to be large.

 variable_small_value_tolerance: float, default=0.0001
 Absolute tolerance for considering a value to be small.

 variable_zero_value_tolerance: float, default=1e08
 Absolute tolerance for considering a value to be near to zero.

 jacobian_large_value_caution: float, default=10000.0
 Tolerance for raising a caution for large Jacobian values.

 jacobian_large_value_warning: float, default=100000000.0
 Tolerance for raising a warning for large Jacobian values.

 jacobian_small_value_caution: float, default=0.0001
 Tolerance for raising a caution for small Jacobian values.

 jacobian_small_value_warning: float, default=1e08
 Tolerance for raising a warning for small Jacobian values.

 warn_for_evaluation_error_at_bounds: bool, default=True
 If False, warnings will not be generated for things like log(x) with x
 >= 0

 parallel_component_tolerance: float, default=0.0001
 Tolerance for identifying near-parallel Jacobian rows/columns

 Methods defined here:

 __init__(self, model: pyomo.core.base.block._BlockData, **kwargs)
 Initialize self. See help(type(self)) for accurate signature.

 assert_no_numerical_warnings(self)
 Checks for numerical warnings in the model and raises an AssertionError
 if any are found.

 Raises:
 AssertionError if any warnings are identified by numerical analysis.

 assert_no_structural_warnings(self)
 Checks for structural warnings in the model and raises an AssertionError
 if any are found.

 Raises:
 AssertionError if any warnings are identified by structural analysis.

 display_components_with_inconsistent_units(self, stream=None)
 Prints a list of all Constraints, Expressions and Objectives in the
 model with inconsistent units of measurement.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_constraints_with_extreme_jacobians(self, stream=None)
 Prints the constraints associated with rows in the Jacobian with extreme
 L2 norms. This often indicates poorly scaled constraints.

 Tolerances can be set via the DiagnosticsToolbox config.

 Args:
 stream: an I/O object to write the output to (default = stdout)

 Returns:
 None

 display_constraints_with_large_residuals(self, stream=None)
 Prints a list of Constraints with residuals greater than a specified tolerance.
 Tolerance can be set in the class configuration options.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_external_variables(self, stream=None)
 Prints a list of variables that appear within activated Constraints in the
 model but are not contained within the model themselves.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_extreme_jacobian_entries(self, stream=None)
 Prints variables and constraints associated with entries in the Jacobian with extreme
 values. This can be indicative of poor scaling, especially for isolated terms (e.g.
 variables which appear only in one term of a single constraint).

 Tolerances can be set via the DiagnosticsToolbox config.

 Args:
 stream: an I/O object to write the output to (default = stdout)

 Returns:
 None

 display_near_parallel_constraints(self, stream=None)
 Display near-parallel (duplicate) constraints in model.

 Args:
 stream: I/O object to write report to (default = stdout)

 Returns:
 None

 display_near_parallel_variables(self, stream=None)
 Display near-parallel (duplicate) variables in model.

 Args:
 stream: I/O object to write report to (default = stdout)

 Returns:
 None

 display_overconstrained_set(self, stream=None)
 Prints the variables and constraints in the overconstrained subproblem
 from a Dulmage-Mendelsohn partitioning.

 This can be used to identify the overdefined part of a model and thus
 where constraints must be removed or variables unfixed.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_potential_evaluation_errors(self, stream=None)
 Prints constraints that may be prone to evaluation errors
 (e.g., log of a negative number) based on variable bounds.

 Args:
 stream: an I/O object to write the output to (default = stdout)

 Returns:
 None

 display_underconstrained_set(self, stream=None)
 Prints the variables and constraints in the underconstrained subproblem
 from a Dulmage-Mendelsohn partitioning.

 This can be used to identify the underdefined part of a model and thus
 where additional information (fixed variables or constraints) are required.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_unused_variables(self, stream=None)
 Prints a list of variables that do not appear in any activated Constraints.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_variables_at_or_outside_bounds(self, stream=None)
 Prints a list of variables with values that fall at or outside the bounds
 on the variable.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_variables_fixed_to_zero(self, stream=None)
 Prints a list of variables that are fixed to an absolute value of 0.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_variables_near_bounds(self, stream=None)
 Prints a list of variables with values close to their bounds. Tolerance can
 be set in the class configuration options.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_variables_with_extreme_jacobians(self, stream=None)
 Prints the variables associated with columns in the Jacobian with extreme
 L2 norms. This often indicates poorly scaled variables.

 Tolerances can be set via the DiagnosticsToolbox config.

 Args:
 stream: an I/O object to write the output to (default = stdout)

 Returns:
 None

 display_variables_with_extreme_values(self, stream=None)
 Prints a list of variables with extreme values.

 Tolerances can be set in the class configuration options.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_variables_with_none_value(self, stream=None)
 Prints a list of variables with a value of None.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 display_variables_with_value_near_zero(self, stream=None)
 Prints a list of variables with a value close to zero. The tolerance
 for determining what is close to zero can be set in the class configuration
 options.

 Args:
 stream: an I/O object to write the list to (default = stdout)

 Returns:
 None

 get_dulmage_mendelsohn_partition(self)
 Performs a Dulmage-Mendelsohn partitioning on the model and returns
 the over- and under-constrained subproblems.

 Returns:
 list-of-lists variables in each independent block of the under-constrained set
 list-of-lists constraints in each independent block of the under-constrained set
 list-of-lists variables in each independent block of the over-constrained set
 list-of-lists constraints in each independent block of the over-constrained set

 prepare_degeneracy_hunter(self, **kwargs)
 Create an instance of the DegeneracyHunter and store as self.degeneracy_hunter.

 After creating an instance of the toolbox, call
 report_irreducible_degenerate_sets.

 Returns:

 Instance of DegeneracyHunter

 Keyword Arguments
 
 solver: str, default='scip'
 MILP solver to use for finding irreducible degenerate sets.

 solver_options: optional
 Options to pass to MILP solver.

 M: float, default=100000.0
 Maximum value for nu in MILP models.

 m_small: float, default=1e05
 Smallest value for nu to be considered nonzero in MILP models.

 trivial_constraint_tolerance: float, default=1e06
 Tolerance for identifying nonzero rows in Jacobian.

 prepare_svd_toolbox(self, **kwargs)
 Create an instance of the SVDToolbox and store as self.svd_toolbox.

 After creating an instance of the toolbox, call
 display_underdetermined_variables_and_constraints().

 Returns:

 Instance of SVDToolbox

 Keyword Arguments
 
 number_of_smallest_singular_values: PositiveInt, optional
 Number of smallest singular values to compute

 svd_callback: svd_callback_validator, default=<function svd_dense at 0x7f97cabffd30>
 Callback to SVD method of choice (default = svd_dense). Callbacks
 should take the Jacobian and number of singular values to compute as
 options, plus any method specific arguments, and should return the u,
 s and v matrices as numpy arrays.

 svd_callback_arguments: dict, optional
 Optional arguments to pass to SVD callback (default = None)

 singular_value_tolerance: float, default=1e06
 Tolerance for defining a small singular value

 size_cutoff_in_singular_vector: float, default=0.1
 Size below which to ignore constraints and variables in the singular
 vector

 report_numerical_issues(self, stream=None)
 Generates a summary report of any numerical issues identified in the model provided
 and suggest next steps for debugging model.

 Numerical checks should only be performed once all structural issues have been resolved,
 and require that at least a partial solution to the model is available.

 Args:
 stream: I/O object to write report to (default = stdout)

 Returns:
 None

 report_structural_issues(self, stream=None)
 Generates a summary report of any structural issues identified in the model provided
 and suggests next steps for debugging the model.

 This should be the first method called when debugging a model and after any change
 is made to the model. These checks can be run before trying to initialize and solve
 the model.

 Args:
 stream: I/O object to write report to (default = stdout)

 Returns:
 None

 
 Readonly properties defined here:

 model
 Model currently being diagnosed.

 
 Data descriptors defined here:

 __dict__
 dictionary for instance variables (if defined)

 __weakref__
 list of weak references to the object (if defined)
The help() function gives us a lot of information on the DiagnosticsToolbox and all the methods it supports (and there are many). However, the important part to start with is the four steps outlined at the top of the docstring, which tell us how to get started.
Firstly, we need a model to test (and, for this tutorial at least, one that has a wide range of issues that we need to fix before it will solve). We then also need to fix some variables so that we have 0 degrees of freedom in our model. Whilst our ultimate goal is generally optimization (and thus a system with 1 or more degrees of freedom), all models conceptually derive from a square model representing a nominal state. If this nominal state is not well-posed, then any issues present will also be present in the resulting optimization (even if adding degrees of freedom means that the model is now easier to solve).
The cell below builds a demonstration model for this tutorial containing a number of issues that we will resolve using the DiagnosticsToolbox.
import pyomo.environ as pyo
m = pyo.ConcreteModel()
m.v1 = pyo.Var(units=pyo.units.m)
m.v2 = pyo.Var(units=pyo.units.m)
m.v3 = pyo.Var(bounds=(0, 5))
m.v4 = pyo.Var()
m.v5 = pyo.Var(bounds=(0, 10))
m.v6 = pyo.Var()
m.v7 = pyo.Var(units=pyo.units.m, bounds=(0, 1)) # Poorly scaled variable with lower bound
m.v8 = pyo.Var() # unused variable
m.c1 = pyo.Constraint(expr=m.v1 + m.v2 == 10) # Unit consistency issue
m.c2 = pyo.Constraint(expr=m.v3 == m.v4 + m.v5)
m.c3 = pyo.Constraint(expr=2*m.v3 == 3*m.v4 + 4*m.v5 + m.v6)
m.c4 = pyo.Constraint(expr=m.v7 == 1e8*m.v1) # Poorly scaled constraint
m.v4.fix(2)
m.v5.fix(2)
m.v6.fix(0)
Next, the instructions tell us to create an instance of the DiagnosticsToolbox and pass the model we wish to examine as an argument.
dt = DiagnosticsToolbox(m)
Finally, the instructions tell us to run the report_structural_issues() method. Structural issues are issues that exist solely in the form of the model equations and thus do not depend on the current values of the variables. This is useful because it means we can check for them before we even call a solver, which can be critical as these issues will sometimes cause a solver to fail without providing a useful solution.
dt.report_structural_issues()
====================================================================================
Model Statistics
Activated Blocks: 1 (Deactivated: 0)
Free Variables in Activated Constraints: 4 (External: 0)
Free Variables with only lower bounds: 0
Free Variables with only upper bounds: 0
Free Variables with upper and lower bounds: 2
Fixed Variables in Activated Constraints: 3 (External: 0)
Activated Equality Constraints: 4 (Deactivated: 0)
Activated Inequality Constraints: 0 (Deactivated: 0)
Activated Objectives: 0 (Deactivated: 0)

2 WARNINGS
WARNING: 1 Component with inconsistent units
WARNING: Structural singularity found
Under-Constrained Set: 3 variables, 2 constraints
Over-Constrained Set: 1 variables, 2 constraints

2 Cautions
Caution: 1 variable fixed to 0
Caution: 1 unused variable (0 fixed)

Suggested next steps:
display_components_with_inconsistent_units()
display_underconstrained_set()
display_overconstrained_set()
====================================================================================
Looking at the output from the report_structural_issues() method, we can see that it provides a fairly short summary containing four sections.
The first section is a summary of the size of the model, indicating things like the number of variables and constraints. The size of the model is often important for judging how difficult it will be to solve, and this information can also be useful for comparison with what is being sent to the solver. Most solvers will report the size of the model in their output logs, and if there is a difference between what is reported here and by the solver, then you should probably look into what is happening. This section also notes things such as whether you have any deactivated Blocks, Constraints or Objectives, or whether you have variables that appear in the constraints but are not part of the model; these are not necessarily wrong, but it is easy to have accidentally deactivated something you did not intend to, so you should always check that they are expected.
The second section provides a summary of any critical structural issues that were found - in this case we can see that there are 2 warnings we are going to need to look into. Warnings represent issues that need to be addressed before moving on, as these will likely cause the solver to fail or give an incorrect answer.
The third section lists a summary of any cautions that are found. Cautions represent issues that may or may not be problematic; in many cases these might be expected behaviors or borderline issues. However, these could also represent conceptual issues that should be addressed, so users should take the time to investigate these and determine if they need to be fixed or not.
Finally, there is a section that suggests the next steps to take to help guide you through the model diagnosis process. If any warnings were identified, this section will list methods that can help you get more information on each specific problem, and if no warnings are found then it will guide you onto the next step in the model diagnosis workflow.
Note: there are methods available to help investigate cautions as well, but these will not show up in the next steps in order to avoid cluttering the output. You can get more information on the available methods for investigating cautions via the documentation or the help() function.
In our current model, we have 2 critical issues (warnings) that we need to look into and resolve. The order in which we resolve these will generally not matter, but be aware that they can often be interrelated - fixing one warning might resolve other warnings as well (or create new ones), and sometimes you will need to look at multiple issues together to find the overall root cause.
To start with, let us look at the unit consistency issue. From the “Next Steps” section above, the toolbox is suggesting we run the display_components_with_inconsistent_units() method for more information.
dt.display_components_with_inconsistent_units()
====================================================================================
The following component(s) have unit consistency issues:
c1
For more details on unit inconsistencies, import the assert_units_consistent method
from pyomo.util.check_units
====================================================================================
This tells us that the issue lies in constraint c1. If we go back and look at this constraint, we can see that it says v1 + v2 == 10. v1 and v2 both have units of m, which is consistent, but the constant on the right-hand side of the expression is unitless. Thus, we need to correct this so that the right-hand side has units of m for the constraint to be consistent.
The cell below shows how to delete a constraint and replace it with a new one with the correct units.
# Delete the incorrect Constraint
m.del_component(m.c1)
# Recreate the Constraint with the correct units
m.c1 = pyo.Constraint(expr=m.v1 + m.v2 == 10*pyo.units.m)
It is strongly recommended that you keep a record of the changes you make at each step and why, along with a Git hash (or similar version control marker) corresponding to these changes. This will allow you to see what changed and why, and give you a way to go back to previous iterations if the current approach does not work out. The IDAES documentation contains recommendations on how to keep and maintain a modeling logbook.
Now, rerun the report_structural_issues() method and see if this change has fixed the unit consistency issue.
dt.report_structural_issues()
====================================================================================
Model Statistics
Activated Blocks: 1 (Deactivated: 0)
Free Variables in Activated Constraints: 4 (External: 0)
Free Variables with only lower bounds: 0
Free Variables with only upper bounds: 0
Free Variables with upper and lower bounds: 2
Fixed Variables in Activated Constraints: 3 (External: 0)
Activated Equality Constraints: 4 (Deactivated: 0)
Activated Inequality Constraints: 0 (Deactivated: 0)
Activated Objectives: 0 (Deactivated: 0)

1 WARNINGS
WARNING: Structural singularity found
Under-Constrained Set: 3 variables, 2 constraints
Over-Constrained Set: 1 variables, 2 constraints

2 Cautions
Caution: 1 variable fixed to 0
Caution: 1 unused variable (0 fixed)

Suggested next steps:
display_underconstrained_set()
display_overconstrained_set()
====================================================================================
The unit consistency issue has been resolved by the changes above, so now we need to look at the structural singularity. A structural singularity occurs when one subpart of the model is overconstrained (negative degrees of freedom), which generally means another part is underconstrained (positive degrees of freedom, assuming that there are 0 degrees of freedom overall).
The toolbox is suggesting we use the display_overconstrained_set() and display_underconstrained_set() methods to get more information on the singularity; for now, let us start with the overconstrained set.
dt.display_overconstrained_set()
====================================================================================
Dulmage-Mendelsohn Over-Constrained Set
Independent Block 0:
Variables:
v3
Constraints:
c2
c3
====================================================================================
From the output above, the toolbox is telling us that we have two constraints (c2 and c3) which only contain a single unfixed variable (v3); thus in this part of the model we have -1 degrees of freedom and the model is not well defined (structurally singular). If we go back and look at these constraints, we can see that they are:
c2: v3 == v4 + v5
c3: 2*v3 == 3*v4 + 4*v5 + v6
We can see that in addition to v3 these constraints actually contain three other variables (v4, v5 and v6); however, these are all variables we fixed to get our initial zero degrees of freedom. It looks like we have either accidentally fixed one too many variables or written one too many constraints.
For this example, let us assume that v4 was not supposed to be fixed and unfix it.
m.v4.unfix()
dt.report_structural_issues()
====================================================================================
Model Statistics
Activated Blocks: 1 (Deactivated: 0)
Free Variables in Activated Constraints: 5 (External: 0)
Free Variables with only lower bounds: 0
Free Variables with only upper bounds: 0
Free Variables with upper and lower bounds: 2
Fixed Variables in Activated Constraints: 2 (External: 0)
Activated Equality Constraints: 4 (Deactivated: 0)
Activated Inequality Constraints: 0 (Deactivated: 0)
Activated Objectives: 0 (Deactivated: 0)

2 WARNINGS
WARNING: 1 Degree of Freedom
WARNING: Structural singularity found
Under-Constrained Set: 3 variables, 2 constraints
Over-Constrained Set: 0 variables, 0 constraints

2 Cautions
Caution: 1 variable fixed to 0
Caution: 1 unused variable (0 fixed)

Suggested next steps:
display_underconstrained_set()
====================================================================================
We can see that the overconstrained set is now empty (0 variables and 0 constraints) but the underconstrained set still has 3 variables and only 2 constraints. We can also see that there is a new warning about having 1 degree of freedom in the model; however, this should not be surprising, as we have just unfixed v4 to resolve the overconstrained set and have thus added a degree of freedom to the model.
dt.display_underconstrained_set()
====================================================================================
Dulmage-Mendelsohn Under-Constrained Set
Independent Block 0:
Variables:
v2
v1
v7
Constraints:
c1
c4
====================================================================================
Looking at the output from the display_underconstrained_set() method, we can see that we have two constraints, c1 and c4, which contain three unfixed variables, v1, v2 and v7. Thus, we have one degree of freedom that needs to be addressed. To fix this, we could either fix one of the variables shown or add an additional equality constraint to the model.
For this example, let's fix v2 to a value of 5 and then rerun the report_structural_issues() method.
m.v2.fix(5)
dt.report_structural_issues()
====================================================================================
Model Statistics
Activated Blocks: 1 (Deactivated: 0)
Free Variables in Activated Constraints: 4 (External: 0)
Free Variables with only lower bounds: 0
Free Variables with only upper bounds: 0
Free Variables with upper and lower bounds: 2
Fixed Variables in Activated Constraints: 3 (External: 0)
Activated Equality Constraints: 4 (Deactivated: 0)
Activated Inequality Constraints: 0 (Deactivated: 0)
Activated Objectives: 0 (Deactivated: 0)

0 WARNINGS
No warnings found!

2 Cautions
Caution: 1 variable fixed to 0
Caution: 1 unused variable (0 fixed)

Suggested next steps:
Try to initialize/solve your model and then call report_numerical_issues()
====================================================================================
The toolbox is now telling us that no warnings were found, so we have resolved all the structural issues (for now at least). It is also telling us that there are 2 non-critical issues (cautions) that we should look at: one about an unused variable and one about a variable fixed to zero. If you wish, you can look into identifying and fixing these yourself; however, for this example we will move on to the next step (remember that the toolbox has methods to display more details for each of these, which you can find in the documentation or from the help() function).
For the Next Steps section, the toolbox is recommending we try to solve our model and then check for numerical issues.
solver = pyo.SolverFactory('ipopt')
solver.solve(m, tee=True)
As hinted at above, IPOPT has returned a warning that the problem may be infeasible. Before moving on, however, it is always good practice to look over the solver output and see what it is telling you.
dt.report_numerical_issues()
====================================================================================
Model Statistics
Jacobian Condition Number: 1.700E+01

2 WARNINGS
WARNING: 1 Constraint with large residuals (>1.0E05)
WARNING: 1 Variable at or outside bounds (tol=0.0E+00)

5 Cautions
Caution: 2 Variables with value close to their bounds (abs=1.0E04, rel=1.0E04)
Caution: 2 Variables with value close to zero (tol=1.0E08)
Caution: 1 Variable with extreme value (<1.0E04 or >1.0E+04)
Caution: 1 Variable with None value
Caution: 1 extreme Jacobian Entry (<1.0E04 or >1.0E+04)

Suggested next steps:
display_constraints_with_large_residuals()
display_variables_at_or_outside_bounds()
====================================================================================
The report_numerical_issues() method provides a summary similar to the one we saw for the structural issues. First, it reports the Jacobian condition number for our problem, which can give us an idea of how well-scaled the problem is, followed by a list of warnings, cautions, and suggested next steps.
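As a reference point for what the condition number measures: it is the ratio of the largest to the smallest singular value of the Jacobian, and a single badly scaled entry is enough to blow it up. A small numpy illustration (not the toolbox's implementation, which works on the actual model Jacobian):

```python
import numpy as np

# A well-scaled 2x2 "Jacobian"
J_good = np.array([[1.0, 1.0], [1.0, -1.0]])
print(np.linalg.cond(J_good))  # ~1, well-conditioned

# The same matrix with one entry scaled like c4's 1e8 coefficient
J_bad = np.array([[1.0, 1.0], [1e8, -1.0]])
print(np.linalg.cond(J_bad))  # ~1e8, badly conditioned
```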
Unsurprisingly, we are seeing a warning about a constraint with a large residual which we would expect when a solver reports a potentially infeasible problem. We are also seeing a warning about a variable with bound violations which might be contributing to the potential infeasibility.
For the next steps, the toolbox is suggesting some new methods to get more information on these issues; let us start by looking at the constraints with large residuals.
dt.display_constraints_with_large_residuals()
====================================================================================
The following constraint(s) have large residuals (>1.0E-05):
c2: 6.66667E-01
====================================================================================
The toolbox is telling us that the constraint which failed to converge is c2; however, this is generally only part of the story. Solvers work by trying to minimize the infeasibility in the model (the residuals of the constraints), which generally means they push any infeasibility onto the least sensitive constraint in the problem. Thus, the constraint which shows the infeasibility is often not the root cause of the problem, but only a symptom of the underlying issue.
If we look back at the constraints, we can see that the same variables also appear in c3, and that some of them have bounds, all of which could be contributing to the infeasibility. In this case the solver tried to minimize the residual in all the constraints and ended up pushing all the issues onto c2.
Next, let us also take a look at the variables at or outside their bounds. When a solver reports a potentially infeasible solution, the most common cause is an unexpected bound violation, so you should always check these first.
dt.display_variables_at_or_outside_bounds()
====================================================================================
The following variable(s) have values at or outside their bounds (tol=0.0E+00):
v3 (free): value=0.0 bounds=(0, 5)
====================================================================================
The toolbox is telling us that v3 is the variable with a potential issue. It also shows us the current value and bounds for v3, as well as whether it is a fixed or free variable, all of which is useful for diagnosing the issue.
We can see that v3 is a free variable with bounds between 0 and 5 and a current value of 0. As v3 is free, this suggests that the solver has pushed its value to the bound, where it cannot go any further, and this might be part of the cause of our infeasibility.
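The kind of test the toolbox applies here can be mimicked in a few lines of plain Python. This is a hypothetical sketch of the logic, not the actual DiagnosticsToolbox implementation:

```python
def at_or_outside_bounds(value, lb=None, ub=None, tol=0.0):
    """Flag a value that sits at or beyond its bounds (within tol)."""
    if lb is not None and value <= lb + tol:
        return True
    if ub is not None and value >= ub - tol:
        return True
    return False

# v3 from the tutorial: value 0.0 with bounds (0, 5) sits exactly at its
# lower bound, so it is flagged even with tol=0.
print(at_or_outside_bounds(0.0, lb=0, ub=5))  # True
print(at_or_outside_bounds(2.5, lb=0, ub=5))  # False
```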
Never arbitrarily change a bound just because it is causing your model to be infeasible without understanding the consequences of this decision. Often, a bound violation is an indication that you need to rethink some of the constraints in your model to find alternatives which are valid in the actual range of values you are trying to solve for.
For this example, let us assume that we made a mistake with the bounds on v3 and that the lower bound should instead be -5.
m.v3.setlb(-5)
Now that we have fixed the bounds issue, we should check whether our model is feasible. However, before we continue, we should recognize that we have just made a structural change to the model. If we were not careful, this could have introduced new structural issues, so we should start from the beginning just to be sure.
dt.report_structural_issues()
====================================================================================
Model Statistics
Activated Blocks: 1 (Deactivated: 0)
Free Variables in Activated Constraints: 4 (External: 0)
Free Variables with only lower bounds: 0
Free Variables with only upper bounds: 0
Free Variables with upper and lower bounds: 2
Fixed Variables in Activated Constraints: 3 (External: 0)
Activated Equality Constraints: 4 (Deactivated: 0)
Activated Inequality Constraints: 0 (Deactivated: 0)
Activated Objectives: 0 (Deactivated: 0)

0 WARNINGS
No warnings found!

2 Cautions
Caution: 1 variable fixed to 0
Caution: 1 unused variable (0 fixed)

Suggested next steps:
Try to initialize/solve your model and then call report_numerical_issues()
====================================================================================
Our change has not introduced any new structural issues, so we can move on and try to solve the model again.
solver.solve(m, tee=True)
Ipopt 3.13.2:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
This version of Ipopt was compiled from source code available at
https://github.com/IDAES/Ipopt as part of the Institute for the Design of
Advanced Energy Systems Process Systems Engineering Framework (IDAES PSE
Framework) Copyright (c) 2018-2019. See https://github.com/IDAES/idaes-pse.
This version of Ipopt was compiled using HSL, a collection of Fortran codes
for large-scale scientific computation. All technical papers, sales and
publicity material resulting from use of the HSL codes within IPOPT must
contain the following acknowledgement:
HSL, a collection of Fortran codes for large-scale scientific
computation. See http://www.hsl.rl.ac.uk.
******************************************************************************
This is Ipopt version 3.13.2, running with linear solver ma27.
Number of nonzeros in equality constraint Jacobian...: 7
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 0
Total number of variables............................: 4
variables with only lower bounds: 0
variables with lower and upper bounds: 2
variables with only upper bounds: 0
Total number of equality constraints.................: 4
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  0.0000000e+00 6.67e-01 0.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  0.0000000e+00 6.66e-03 2.97e+00  -1.0 2.00e+00    -  7.17e-01 9.90e-01h  1
   2  0.0000000e+00 6.27e-05 9.38e+00  -1.0 2.00e-02    -  1.00e+00 9.91e-01h  1
   3  0.0000000e+00 8.88e-16 1.13e-12  -1.0 1.88e-04    -  1.00e+00 1.00e+00h  1
Number of Iterations....: 3
(scaled) (unscaled)
Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 0.0000000000000000e+00 0.0000000000000000e+00
Constraint violation....: 8.8817841970012523e-16 8.8817841970012523e-16
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 8.8817841970012523e-16 8.8817841970012523e-16
Number of objective function evaluations = 4
Number of objective gradient evaluations = 4
Number of equality constraint evaluations = 4
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 4
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 3
Total CPU secs in IPOPT (w/o function evaluations) = 0.001
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
{'Problem': [{'Lower bound': -inf, 'Upper bound': inf, 'Number of objectives': 1, 'Number of constraints': 4, 'Number of variables': 4, 'Sense': 'unknown'}], 'Solver': [{'Status': 'ok', 'Message': 'Ipopt 3.13.2\\x3a Optimal Solution Found', 'Termination condition': 'optimal', 'Id': 0, 'Error rc': 0, 'Time': 0.02317023277282715}], 'Solution': [OrderedDict([('number of solutions', 0), ('number of solutions displayed', 0)])]}
IPOPT has now returned an optimal solution, so it looks like those bounds were what was causing the model to be infeasible. At this point the model is solving (for the current values at least), so you might think that it is now ready for optimization.
However, if we look at the solver logs we can see that it took around 3 iterations for IPOPT to solve our model (depending on minor variations in computer architecture). For a model this simple, we would generally expect it to solve in only one iteration, so there is still some room for improvement.
You should always take the time to look over the solver logs for signs of trouble, even if you get an optimal solution. While you might get an optimal solution for the current state, there may be advance warning signs of issues that will cause problems later when you try to solve the model at a different state.
Let us run the report_numerical_issues() method again to see if there are any other problems we need to address.
dt.report_numerical_issues()
====================================================================================
Model Statistics
Jacobian Condition Number: 1.700E+01

0 WARNINGS
No warnings found!

5 Cautions
Caution: 1 Variable with value close to their bounds (abs=1.0E-04, rel=1.0E-04)
Caution: 1 Variable with value close to zero (tol=1.0E-08)
Caution: 1 Variable with extreme value (<1.0E-04 or >1.0E+04)
Caution: 1 Variable with None value
Caution: 1 extreme Jacobian Entry (<1.0E-04 or >1.0E+04)

Suggested next steps:
If you still have issues converging your model consider:
prepare_svd_toolbox()
prepare_degeneracy_hunter()
====================================================================================
The toolbox is not reporting any warnings, which is good; however, it has identified 5 numerical cautions that might be contributing to the larger-than-expected number of iterations. As mentioned earlier, the toolbox does not suggest methods for investigating cautions, but methods are available. For example, we can look at the variable with an extreme value using the display_variables_with_extreme_values() method.
dt.display_variables_with_extreme_values()
====================================================================================
The following variable(s) have extreme values (<1.0E-04 or >1.0E+04):
v7: 4.9999999999999945e-08
====================================================================================
We can see that v7 is potentially causing problems due to having a very small value (on the order of magnitude of the solver tolerance). This can be especially problematic for interior-point solvers like IPOPT when there is a lower bound of 0 (as there is here). IPOPT tries to avoid bounds and perturbs solutions away from them if they get too close, which can make convergence slow (or fail) if the solution lies close to a bound.
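To see why a tiny value against a zero lower bound costs iterations, here is a rough sketch of the initial-point perturbation an interior-point method performs. The rule and the bound_push value of 0.01 are simplifications for illustration, not IPOPT's exact formula:

```python
def push_from_lower_bound(x0, lb, bound_push=0.01):
    """Move a starting value at least bound_push away from its lower bound."""
    return max(x0, lb + bound_push * max(1.0, abs(lb)))

# A start of 5e-8 against a lower bound of 0 is moved to 0.01 (enormous in
# relative terms), so the solver must work its way back toward the bound.
print(push_from_lower_bound(5e-8, 0.0))  # 0.01
print(push_from_lower_bound(5.0, 0.0))   # 5.0 (already well inside)
```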
We can address this by scaling the variable so that the value of the scaled variable is large enough that the solution is not close to the lower bound. Additionally, we should look at any constraint in which v7 appears (in this case c4) and ensure that those constraints are well scaled too (so that a residual of 1e-6 is reasonable for the terms involved).
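A quick back-of-envelope check (plain arithmetic, not toolbox code) shows why a factor of 1e8 is a sensible choice here:

```python
scale = 1e8
v7_value = 5e-8  # unscaled value reported by the diagnostics output

# The scaled variable takes a value near 5, well away from its lower bound.
scaled_value = scale * v7_value
print(scaled_value)

# A residual of 1e-6 on the scaled constraint corresponds to 1e-14 in the
# original units, so the scaled tolerance remains comfortably tight.
original_residual = 1e-6 / scale
print(original_residual)
```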
For this case, we can set a scaling factor of 1e8 for both v7 and c4, as shown below. Note that we also need to apply Pyomo’s scaling transformation to create a new scaled model to work with.
m.scaling_factor = pyo.Suffix(direction=pyo.Suffix.EXPORT)
m.scaling_factor[m.v7] = 1e8
m.scaling_factor[m.c4] = 1e8
scaling = pyo.TransformationFactory('core.scale_model')
scaled_model = scaling.create_using(m, rename=False)
Now that we have a scaled model, we can try to solve it and hopefully see better convergence than the unscaled model.
solver.solve(scaled_model, tee=True)
Ipopt 3.13.2:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
This version of Ipopt was compiled from source code available at
https://github.com/IDAES/Ipopt as part of the Institute for the Design of
Advanced Energy Systems Process Systems Engineering Framework (IDAES PSE
Framework) Copyright (c) 2018-2019. See https://github.com/IDAES/idaes-pse.
This version of Ipopt was compiled using HSL, a collection of Fortran codes
for large-scale scientific computation. All technical papers, sales and
publicity material resulting from use of the HSL codes within IPOPT must
contain the following acknowledgement:
HSL, a collection of Fortran codes for large-scale scientific
computation. See http://www.hsl.rl.ac.uk.
******************************************************************************
This is Ipopt version 3.13.2, running with linear solver ma27.
Number of nonzeros in equality constraint Jacobian...: 7
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 0
Total number of variables............................: 4
variables with only lower bounds: 0
variables with lower and upper bounds: 2
variables with only upper bounds: 0
Total number of equality constraints.................: 4
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  0.0000000e+00 5.33e-15 0.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
Number of Iterations....: 0
(scaled) (unscaled)
Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 0.0000000000000000e+00 0.0000000000000000e+00
Constraint violation....: 5.3290705182007514e-15 5.3290705182007514e-15
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 5.3290705182007514e-15 5.3290705182007514e-15
Number of objective function evaluations = 1
Number of objective gradient evaluations = 1
Number of equality constraint evaluations = 1
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 1
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 0
Total CPU secs in IPOPT (w/o function evaluations) = 0.000
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
{'Problem': [{'Lower bound': -inf, 'Upper bound': inf, 'Number of objectives': 1, 'Number of constraints': 4, 'Number of variables': 4, 'Sense': 'unknown'}], 'Solver': [{'Status': 'ok', 'Message': 'Ipopt 3.13.2\\x3a Optimal Solution Found', 'Termination condition': 'optimal', 'Id': 0, 'Error rc': 0, 'Time': 0.0058002471923828125}], 'Solution': [OrderedDict([('number of solutions', 0), ('number of solutions displayed', 0)])]}
As we can see, the scaled model solved in 0 iterations (indicating that it already had the right solution). However, had we re-solved the unscaled model, we would have found it required 2-3 iterations again due to IPOPT perturbing the initial (correct) solution away from the bounds.
Now that we have fixed the scaling issues, we can go back to the DiagnosticsToolbox and see if we still have any warnings. Note however that we now need to look at the scaled model rather than the original, so we need to create a new instance of the DiagnosticsToolbox with the scaled model as the model argument.
dt_scaled = DiagnosticsToolbox(scaled_model)
dt_scaled.report_numerical_issues()
====================================================================================
Model Statistics
Jacobian Condition Number: 1.800E+01

0 WARNINGS
No warnings found!

3 Cautions
Caution: 1 Variable with value close to their bounds (abs=1.0E-04, rel=1.0E-04)
Caution: 1 Variable with value close to zero (tol=1.0E-08)
Caution: 1 Variable with None value

Suggested next steps:
If you still have issues converging your model consider:
prepare_svd_toolbox()
prepare_degeneracy_hunter()
====================================================================================
We can see that applying scaling addressed two of the cautions we had before (the variable with an extreme value and the associated large entry in the model Jacobian). Whilst we were able to solve the unscaled model in this case, that was in part because it was a simple linear model. In more complex, nonlinear models, scaling becomes much more important, and good scaling often depends strongly on the current state of the model. That is, you can often find cases where an unscaled (or poorly scaled) model solves for a limited range of conditions but fails if you move too far from the current state. Whilst you might be able to solve the model at the current state, you should always check the solver logs and numerical cautions for advance warning signs of scaling issues that might manifest later when you try to solve the model for a different state (e.g., during optimization).
At this point, we have addressed all the issues that were preventing us from solving the demonstration model, and so we have reached the end of this tutorial. For cases where you are still having trouble solving a model, the toolbox suggests additional methods for further debugging; these advanced features will be the focus of separate tutorials.