Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to the define-by-run API, code written with Optuna enjoys high modularity, and users can dynamically construct the search spaces for the hyperparameters.
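
For illustration, here is a minimal sketch of the define-by-run style. The objective below uses placeholder scores instead of real model training, and the parameter names are arbitrary; only the `optuna` calls themselves are the actual API.

```python
import optuna


def objective(trial):
    # The search space is built dynamically ("define-by-run"): which
    # hyperparameters exist can depend on earlier suggestions.
    classifier = trial.suggest_categorical("classifier", ["svm", "random_forest"])
    if classifier == "svm":
        c = trial.suggest_float("svm_c", 1e-5, 1e5, log=True)
        score = 1.0 / (1.0 + c)  # placeholder for a real validation score
    else:
        max_depth = trial.suggest_int("rf_max_depth", 2, 32)
        score = max_depth / 32.0  # placeholder for a real validation score
    return score


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```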

Optuna also has an ecosystem of related tools, such as the optuna-dashboard web interface, that enhances the user experience.

Algorithmic Foundations / Components:

Optuna can be applied to various types of black-box optimization problems, such as multi-objective optimization, constrained optimization, batch optimization, distributed optimization, human-in-the-loop optimization (with optuna-dashboard), and multi-fidelity optimization. The supported interfaces, algorithms, and conditions for each setting are listed below.

  • Standard black-box optimization
    Interfaces: The `optuna.study.Study.optimize` method, where the search space and the objective function are defined eagerly (define-by-run), as in the example above. See the tutorial for more details. We also support the ask-and-tell interface; see the ask-and-tell tutorial for more details.
    Algorithms: We support random search, grid search, brute-force search, TPE, CMA-ES, a GP-based Bayesian optimization method, and quasi-Monte Carlo sampling.
  • Multi-objective optimization
    Interfaces: The `optuna.study.Study.optimize` method and the ask-and-tell interface are supported. See the multi-objective tutorial for more information; a brief sketch follows this list.
    Algorithms: We support MOTPE, NSGA-II, and NSGA-III as multi-objective-specific methods.
  • Constrained optimization
    Interfaces: The `optuna.study.Study.optimize` method and the ask-and-tell interface are supported. The constraints need to be specified as a function and passed as an argument to the sampler (the sampling algorithm in Optuna). See the section on constrained optimization in the FAQ for more information; a brief sketch follows this list.
    Algorithms: We support constrained TPE, a constrained GP-based Bayesian optimization method, constrained NSGA-II, and constrained NSGA-III as constraint-specific methods.
    Conditions: The constraints need to be specified as a function and given to the sampling algorithm.
  • Batch optimization
    Interfaces: Batch optimization is supported only through the ask-and-tell interface. See the section on ask-and-tell in the tutorial for more details; a brief sketch follows this list.
    Algorithms: We support TPE with the constant-liar heuristic as a batch-specific method. Note that sampling-based methods such as CMA-ES, NSGA-II, and quasi-Monte Carlo sampling can be used for batch optimization without any additional considerations.
    Conditions: Only supported with the ask-and-tell interface.
  • Distributed optimization
    Interfaces: The `optuna.study.Study.optimize` method and the ask-and-tell interface are supported. You can parallelize optimization across multiple threads with the `n_jobs` argument of `optuna.study.Study.optimize`, and across multiple processes by using an RDB-backed storage; a brief sketch follows this list. See the tutorial on distributed computation for more information, and the parallelization section of the FAQ for more detailed patterns such as RDB (MySQL) based parallelization or the use of file-based storage.
    Algorithms: We support TPE with the constant-liar heuristic as a distributed-optimization-specific method. Note that sampling-based methods such as CMA-ES, NSGA-II, and quasi-Monte Carlo sampling can be used for distributed optimization without any additional considerations.
    Conditions: You need to set up an independent RDB or use file-based storage for multi-process parallelization.
  • Human-in-the-loop optimization
    Interfaces: We support human-in-the-loop optimization through integration with optuna-dashboard, a real-time web dashboard for Optuna. You run the optimization script, launch optuna-dashboard, and enter your evaluations on the dashboard page, where the optimization process can also be monitored. See the human-in-the-loop tutorial for more information.
    Algorithms: We only support the GP-based Bayesian optimization method for human-in-the-loop optimization.
    Conditions: You need to use optuna-dashboard as the interface.
  • Multi-fidelity optimization
    Interfaces: The `optuna.study.Study.optimize` method and the ask-and-tell interface are supported. Note that only a single, integer-valued fidelity dimension is supported. See the tutorial on activating pruners for more information; a brief sketch follows this list.
    Algorithms: We support median/threshold pruning, successive halving, Hyperband, and a pruner based on the Wilcoxon signed-rank test.
    Conditions: Only a single, integer-valued fidelity dimension is supported. A typical use case is early stopping (pruning) of the evaluation of the objective function.
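
A minimal multi-objective sketch. The two objectives below are arbitrary toy functions, and `NSGAIISampler` is passed explicitly just for illustration.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 3.0)
    # Two conflicting toy objectives; the study keeps the Pareto-optimal trials.
    return x ** 2 + y, (x - 2) ** 2 + (y - 1) ** 2


study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=optuna.samplers.NSGAIISampler(),
)
study.optimize(objective, n_trials=100)
print(len(study.best_trials))  # number of Pareto-optimal trials found
```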
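
A minimal constrained-optimization sketch following the constraints-as-a-function pattern from the FAQ; the objective and the single constraint (feasible when less than or equal to zero) are toy examples.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    y = trial.suggest_float("y", -10.0, 10.0)
    # Store the constraint value(s) on the trial so the sampler can read them back.
    trial.set_user_attr("constraint", (x + y - 5.0,))  # feasible when <= 0
    return x ** 2 + y ** 2


def constraints(trial):
    return trial.user_attrs["constraint"]


sampler = optuna.samplers.TPESampler(constraints_func=constraints)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
```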
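
A minimal batch-optimization sketch with the ask-and-tell interface and TPE with constant liar; the objective is a toy function, and the batch is evaluated sequentially here even though in practice it would be dispatched to parallel workers.

```python
import optuna

# constant_liar=True lets TPE propose diverse points while earlier batch
# members are still pending (i.e. asked but not yet told).
study = optuna.create_study(sampler=optuna.samplers.TPESampler(constant_liar=True))

batch_size = 4
for _ in range(10):
    trials = [study.ask() for _ in range(batch_size)]
    # Evaluate the whole batch (here sequentially; in practice, in parallel).
    values = [(trial.suggest_float("x", -10.0, 10.0) - 2.0) ** 2 for trial in trials]
    for trial, value in zip(trials, values):
        study.tell(trial, value)

print(study.best_value)
```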
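
A minimal distributed-optimization sketch; the study name and the SQLite URL below are placeholders, and for real multi-process or multi-machine runs a server-based RDB (e.g. MySQL) or file-based storage is the more robust choice.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2


# Run this same script from several processes (or machines that can reach the
# storage); all workers share the same study through the storage backend.
study = optuna.create_study(
    study_name="distributed-example",  # placeholder name
    storage="sqlite:///example.db",    # placeholder URL; prefer MySQL/PostgreSQL or file-based storage
    load_if_exists=True,
)
study.optimize(objective, n_trials=100)
```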
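
A minimal multi-fidelity (pruning) sketch; the value reported at each integer step stands in for an intermediate validation score from real training.

```python
import optuna


def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    value = 1.0
    for step in range(100):  # the integer step is the (single) fidelity dimension
        value *= 1.0 - lr    # placeholder for one epoch of real training
        trial.report(value, step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return value


study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=50)
```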

Key Features and their Benefits

  • Lightweight, versatile, and platform agnostic architecture: Handle a wide variety of tasks with a simple installation that has few requirements.
  • Pythonic search spaces: Define search spaces using familiar Python syntax including conditionals and loops.
  • Efficient optimization algorithms: Adopt state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.
  • Easy parallelization: Scale studies to tens or hundreds of workers with little or no change to the code.
  • Quick visualization: Inspect optimization histories with a variety of plotting functions; a brief sketch follows this list.
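
As a quick sketch of the plotting functions (assuming the optional plotly dependency is installed; the objective is a toy function):

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=30)

# Each plotting function returns a Plotly figure that can be shown or saved.
optuna.visualization.plot_optimization_history(study).show()
optuna.visualization.plot_parallel_coordinate(study).show()
```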

Installation

Install Optuna with `pip install optuna`; see the official installation guide for more details.

Example Usage

Check out the quickstart guide and the full tutorial:
https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/001_first.html
https://optuna.readthedocs.io/en/stable/tutorial/index.html

External Projects using Optuna

Cite Optuna

@inproceedings{akiba2019optuna,
  title={{O}ptuna: A Next-Generation Hyperparameter Optimization Framework},
  author={Akiba, Takuya and Sano, Shotaro and Yanase, Toshihiko and Ohta, Takeru and Koyama, Masanori},
  booktitle={The 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
  pages={2623--2631},
  year={2019}
}

Contributed by Hideaki Imamura and the Optuna Team (January 2025)