As algorithms grow more advanced, their logic and results often become harder to understand. Combined with their widespread use in everyday decision-making, this raises questions, especially from the people affected by those decisions.

Why did the algorithm do this to me? 

In machine learning, various concepts related to this question have emerged. For example: 

  • Transparency: How did we end up with this trained model? 
  • Interpretability: What do the coefficients in my trained model mean? 
  • Explainability: Given one particular set of input and output, why exactly did we get that particular output? 

Since mathematical optimization is often the last step in a complex decision-making process supported by algorithms, similar questions could be raised about optimization models and their outputs. In this article, we’ll examine those questions from several perspectives. 

When the Magic Happens 

Before we look at “how,” let’s first ask ourselves “what” exactly needs explaining. Both machine learning and optimization consist of two major phases: one where a model is being created, and one where it is used to generate output. 

  • In machine learning, models are the result of applying algorithms to data. More advanced algorithms may yield higher accuracy, but result in models that are harder to interpret: What exactly do the coefficients in a multi-layer neural network represent? Similarly, explainability is affected: Although one could follow the path from inputs through a prediction/classification model to outputs, these mathematical operations hardly explain why the outcome makes sense from a human perspective. 
  • With mathematical optimization, the model itself is manually defined. Each element is linked directly to a real-life aspect. In this sense, the model can be considered a “digital twin” of the environment in which you want to make decisions. Interpreting the model is therefore not too hard: “This constraint shows that total production in a certain week cannot exceed capacity for that machine.” (A small sketch of such a constraint follows this list.) Then, we’re left to explain why a solver comes up with a particular solution to that model. 
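To make that concrete, here is a minimal gurobipy sketch of such a directly interpretable constraint. All names and numbers below (production, capacity, the machines) are hypothetical and only serve to illustrate the mapping from model elements to real-life rules.

```python
import gurobipy as gp

# Minimal sketch: each model element maps directly to a real-life rule.
# All names and numbers below are hypothetical illustrations.
m = gp.Model("digital_twin")

weeks = range(4)                   # planning horizon: weeks 0..3
machines = ["M1", "M2"]            # two machines
capacity = {"M1": 100, "M2": 80}   # weekly capacity per machine

# production[w, k]: units produced on machine k in week w
production = m.addVars(weeks, machines, name="production")

# "Total production in a certain week cannot exceed capacity for that machine"
m.addConstrs(
    (production[w, k] <= capacity[k] for w in weeks for k in machines),
    name="capacity",
)
```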

Butterfly Effect 

One approach could be to try and explain the underlying algorithms (the “transparency” question). However, these are typically not a simple linear recipe, but rather a toolbox of many interacting tools (branching, cutting planes, heuristics, presolve steps). Explaining the individual tools does not explain the full path to the results.  

Let’s make the typical questions a bit more specific. Given a particular solution to a model, questions about that solution often relate to individual decisions. Why did the solver choose to route supply flows through one particular warehouse more than expected? Why is the largest part of our budget in a portfolio model allocated to one specific stock? Why do I need to work three days in a row, while others don’t?  

It may be tempting to look for answers “locally,” ignoring the rest of the model. But we should not forget that there’s an important reason to solve the decision-making problem as a single model: We’re looking for a globally optimal solution while considering all relationships between decisions and objectives. So, most likely there is not a simple answer to the questions above. Deep understanding of the model is required to reason about cause and effect. 

What Can We Do? 

One way to tackle explainability is to work with counterexamples. Instead of asking, “Why did the solver choose X?” we could ask, “Why did the solver not choose Y?” Although that may seem abstract, it’s exactly what happens when you build an optimization model and show the results to a planner who has been doing the work for many years. Fortunately, that question is relatively easy to answer. There are three possible scenarios: 

  • Y just isn’t an option, given the rules you have specified. In technical terms, requiring Y as a constraint in the model renders the model infeasible. We will look at follow-up questions that dig into the “why” in more detail below. 
  • Y is an option, but it’s worse than solution X. In technical terms, any solution that assumes Y has an objective value worse than the optimal solution. 
  • X and Y might be equally valid and yield the same objective value. In that scenario, either outcome is possible, and in principle the algorithm won’t prefer one option over the other. The same counter-question applies here: if Y should really be preferred, this preference should be reflected in the model through the objective function. (A generic sketch of the pattern follows this list.) 
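These three scenarios suggest a generic recipe: require Y, re-optimize, and compare. Below is a minimal sketch of that pattern, assuming a gurobipy minimization model `m` that has already been solved, and a hypothetical linear expression `y_expr` that equals 1 exactly when alternative Y is chosen.

```python
from gurobipy import GRB

# Generic counter-question pattern. Assumes a solved minimization model `m`
# and a hypothetical expression `y_expr` (1 exactly when alternative Y is chosen).
best_obj = m.ObjVal                        # objective value of original solution X
m.addConstr(y_expr == 1, name="force_Y")   # "What if we did choose Y?"
m.optimize()

if m.Status == GRB.INFEASIBLE:
    print("Y is not an option given the rules in the model.")   # scenario 1
elif m.ObjVal > best_obj:
    print(f"Y is possible, but costs {m.ObjVal - best_obj:g} more than X.")  # scenario 2
else:
    print("Y ties with X; if Y should win, encode that preference in the objective.")  # scenario 3
```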

Not a Feasible Alternative 

Let’s take the netflow example included with Gurobi. It shows how to allocate production capacity in two cities to satisfy demand at warehouses in three other cities. To illustrate explainability, we will assume that flow from Detroit to Boston is not possible. The sketch below shows how to set up and optimize this model. In the result, you will see there is no flow from Denver to New York. 
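The following is a condensed sketch that follows the structure of the netflow.py example shipped with Gurobi; the data values are quoted from that example (consult the copy in your distribution for the authoritative version). Our assumption that the Detroit-to-Boston lane is unavailable is expressed by fixing the corresponding flow variables to zero.

```python
import gurobipy as gp
from gurobipy import GRB

# Condensed sketch of Gurobi's netflow.py multi-commodity flow example.
commodities = ["Pencils", "Pens"]
nodes = ["Detroit", "Denver", "Boston", "New York", "Seattle"]

arcs, capacity = gp.multidict({
    ("Detroit", "Boston"): 100, ("Detroit", "New York"): 80,
    ("Detroit", "Seattle"): 120, ("Denver", "Boston"): 120,
    ("Denver", "New York"): 120, ("Denver", "Seattle"): 120,
})

cost = {
    ("Pencils", "Detroit", "Boston"): 10, ("Pencils", "Detroit", "New York"): 20,
    ("Pencils", "Detroit", "Seattle"): 60, ("Pencils", "Denver", "Boston"): 40,
    ("Pencils", "Denver", "New York"): 40, ("Pencils", "Denver", "Seattle"): 30,
    ("Pens", "Detroit", "Boston"): 20, ("Pens", "Detroit", "New York"): 20,
    ("Pens", "Detroit", "Seattle"): 80, ("Pens", "Denver", "Boston"): 60,
    ("Pens", "Denver", "New York"): 70, ("Pens", "Denver", "Seattle"): 30,
}

# inflow > 0: supply at the node; inflow < 0: demand at the node
inflow = {
    ("Pencils", "Detroit"): 50, ("Pencils", "Denver"): 60,
    ("Pencils", "Boston"): -50, ("Pencils", "New York"): -50,
    ("Pencils", "Seattle"): -10,
    ("Pens", "Detroit"): 60, ("Pens", "Denver"): 40,
    ("Pens", "Boston"): -40, ("Pens", "New York"): -30,
    ("Pens", "Seattle"): -30,
}

m = gp.Model("netflow")
flow = m.addVars(commodities, arcs, obj=cost, name="flow")

# Arc capacities and flow conservation at every node
m.addConstrs((flow.sum("*", i, j) <= capacity[i, j] for i, j in arcs), "cap")
m.addConstrs(
    (flow.sum(h, "*", j) + inflow[h, j] == flow.sum(h, j, "*")
     for h in commodities for j in nodes), "node")

# Our assumption for this article: no flow from Detroit to Boston
for h in commodities:
    flow[h, "Detroit", "Boston"].UB = 0

m.optimize()
if m.Status == GRB.OPTIMAL:
    for (h, i, j), v in m.getAttr("X", flow).items():
        if v > 1e-6:
            print(f"{h}: {i} -> {j} = {v:g}")
```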

Now one of our planners might ask, “Why don’t we supply New York from Denver?” An easy way to answer that is by asking, “Well, what if we did?” 

By adding a single constraint that represents that assumption, we will find that our model has become infeasible. In other words, supplying New York from Denver is not an option (anymore). 
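Continuing the sketch above: New York’s Pencils demand is 50, so one way to express “what if we did?” is to require that this demand is met entirely from Denver.

```python
# "What if we did supply New York from Denver?" -- force New York's full
# Pencils demand of 50 to come from Denver, then re-optimize.
m.addConstr(flow["Pencils", "Denver", "New York"] == 50, name="force_denver_ny")
m.optimize()

print(m.Status == GRB.INFEASIBLE)  # True: this alternative is not an option
```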

Again, our planner might ask “Why?” And fortunately, Gurobi provides the answer. By computing an irreducible inconsistent subsystem (IIS), we obtain a reduced version of our model in which the infeasibility surfaces clearly. We can read that reduced version (netflow.ilp) as follows: If we supply 50 from Denver to New York, we can’t ship anything there from Detroit. But the model states that all supply must be matched to demand. So now all 50 units of supply from Detroit must be shipped to Seattle, which has a demand of only 10. 
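Continuing from the now-infeasible model above, this takes just two calls:

```python
# Compute an irreducible inconsistent subsystem (IIS) and write it out;
# the .ilp file contains only the constraints and bounds that conflict.
m.computeIIS()
m.write("netflow.ilp")
```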

Not an Optimal Alternative 

For the second option, let’s look at a slightly modified version of the workforce example included in every Gurobi distribution. This example assigns shifts to employees to cover demand while respecting employee availability. The original model has no solution, since not enough employees are available on particular days. The changed code can be viewed here. When we inspect the solution of the modified model, we notice that Amy works five days in a row. 

Naturally, Amy would ask, “Why?” The answer can be found through the inverse question: “What if we never allowed more than four consecutive shifts?” This is easy to add to the model (a sketch follows below). Whereas the model used to have an optimal solution with a cost of 480, we now see that the optimal cost is 487. In other words: A solver would not select a solution with at most four consecutive shifts because it would not be the optimal solution. By allowing five consecutive shifts, lower costs can be achieved. 
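Here is a hedged sketch of that extra restriction. It assumes the shift-assignment structure of Gurobi’s workforce examples: binary variables x[w, s] equal to 1 when worker w covers shift s, with index lists workers and shifts; names may differ in your copy of the modified model.

```python
import gurobipy as gp

# Cap consecutive shifts at four: in every window of five consecutive
# shifts, each worker may be assigned at most four. Assumes variables
# x[w, s] as in Gurobi's workforce examples (not every (w, s) pair
# exists, since unavailable combinations are not modeled).
max_consec = 4
for w in workers:
    for start in range(len(shifts) - max_consec):
        window = shifts[start:start + max_consec + 1]  # 5 consecutive shifts
        m.addConstr(
            gp.quicksum(x[w, s] for s in window if (w, s) in x) <= max_consec,
            name=f"max_consec_{w}_{start}",
        )

m.optimize()
print(m.ObjVal)  # 487 here, versus 480 without the restriction
```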

Unraveling Decision Complexity

Mathematical optimization problems are difficult by nature—that’s why you turn to advanced technology in the first place.

Since all decisions are connected, there is no easy explanation for the “why” behind every individual decision. Fortunately, individual concerns can typically be addressed by asking the right counter-question and letting the model itself answer it. 

Ready to see how Gurobi-powered optimization can help you solve your toughest decision-making challenges? Request a free evaluation license today.

AUTHOR

Ronald van der Velden

Technical Account Manager – EMEAI

Ronald van der Velden holds an MSc degree in Econometrics and Operations Research from Erasmus University Rotterdam. He started his career at Quintiq, where he fulfilled various roles, ranging from creating planning and scheduling models as a software developer, to business analysis and solution design for customers worldwide, to technical sales activities like value scans and "one week demo challenges". He also spent two years as a lead developer at a niche company focused on 3D graphics in the entertainment industry before returning to his mathematical roots at Gurobi. In his spare time he loves spending time with his wife and two sons, going for a run on the Veluwe, and working on hobby software projects.
