The Gurobi model library consists of over 10,000 models sourced from academia and from our industry prospects and customers. We test every optimization we make to the solver against this library, so we know that each new version of Gurobi delivers meaningful, powerful performance improvements to our users. The best way to know whether a solver will work for your needs is to use it. Request a free evaluation or academic license for yourself.
Benchmarking is an important part of evaluating a solver, and public benchmarks can provide a useful perspective during your evaluation. When looking at any benchmark test, there are a few critical points to consider in order to truly understand the results and select the solver that is best for you.
Gurobi and Benchmarks
We firmly believe that our software and our library are the most robust on the market, and we consistently win almost every major public benchmark test. Unfortunately, when we test competing solvers against our library, their licensing restrictions prevent us from publishing the results.
Benchmark results can fluctuate over time as companies introduce new versions of their solvers. With few exceptions, the Gurobi Optimizer consistently wins in public benchmark results, which also show that Gurobi keeps getting better with each version.
Because benchmark tests are usually run with a solver's default settings, it's important to understand what those defaults are. Defaults are chosen to provide the best overall performance across a wide range of models, so they're often not optimal for any particular model.
Understand benchmark tests in the context of their default settings. Use them as a starting point, and ultimately test solvers against your own models.
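To make that concrete, here is a minimal gurobipy sketch of the kind of before-and-after comparison you might run on your own models. The file name model.mps is a placeholder, and MIPFocus is just one example of a non-default parameter, not a recommendation.

import gurobipy as gp

# Solve the same model twice: once with all-default settings, once with
# a single non-default parameter, and compare the solve times.
m = gp.read("model.mps")            # placeholder name for one of your models
m.optimize()                        # solve with default settings
print(f"Defaults:   {m.Runtime:.1f}s")

m.reset()                           # discard solution info, keep the model
m.Params.MIPFocus = 1               # e.g., emphasize finding feasible solutions
m.optimize()
print(f"MIPFocus=1: {m.Runtime:.1f}s")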
Some benchmark tests can be misleading, intentionally or not. If a company cherry-picks models and tunes its solver for that subset, it may be able to claim superiority over recognized industry-leading solvers. On a deeper look, you may find that the selected models are only academic in nature and not reflective of the real world, or that tuning the competing solver would yield much better performance than the test parameters suggest.
Make sure the results you’re seeing aren’t being manipulated or misconstrued to appear more impressive than they are.
It’s important to determine whether a test measures something that is meaningful to you in practice. A test that measures the time required to produce poor-quality solutions isn’t relevant if your application requires high-quality solutions.
Evaluate the benchmark test and the solver’s performance based on the problems and models you need to solve.
When testing a solver, you need the opportunity to tune its performance on your specific models. Gurobi includes over 100 adjustable parameters, plus an Automatic Tuning Tool that intelligently explores parameter settings and reports the specific settings you can use to optimize the solver for your models.
Even at default settings, Gurobi has the fastest out-of-the-box performance. Using the Automatic Tuning Tool to tune the parameters for each individual model improves mean performance across the models by 68%, and our distributed tuning capability delivers a 152% performance improvement in the same amount of tuning time.
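As an illustration, here is a minimal sketch of driving the Automatic Tuning Tool through the gurobipy interface. The file name and the time limit are placeholder choices, not recommendations.

import gurobipy as gp

m = gp.read("model.mps")            # placeholder name for one of your models
m.Params.TuneTimeLimit = 600        # cap tuning at 10 minutes (example value)
m.Params.TuneResults = 1            # keep only the best parameter set found

m.tune()                            # run the Automatic Tuning Tool

if m.TuneResultCount > 0:
    m.getTuneResult(0)              # load the best settings into the model
    m.write("tuned.prm")            # save them for reuse on similar models
    m.optimize()                    # solve with the tuned settings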
Commercial Users: Free Evaluation Version
Choose the evaluation license that fits you best, and start working with our Expert Team for technical guidance and support. You can also request free trial hours to see how quickly and easily a model can be solved on the cloud.
Academic Users: Free Academic Version