Background

Hyperparameter optimization, hyperparameter tuning, model tuning, model optimization, and, occasionally, model selection all refer to roughly the same thing: we have a model, the model has hyperparameters, and we want to find “the best” hyperparameters to maximize the performance of our model.

Here at SigOpt, we provide a tuning platform for practitioners to develop machine learning models efficiently. To stay at the cutting edge, we regularly attend conferences such as the International Conference on Machine Learning (ICML).

This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced or published (along with work that cites us), and share interesting machine learning research that our research team has found. We hope you find these updates valuable and informative!

Today we announce the availability of SigOpt on AWS Marketplace. Now, with a single click, data scientists and researchers can access SigOpt’s powerful optimization-as-a-service platform designed to automatically fine-tune their machine learning (ML) and artificial intelligence (AI) workloads.

We have revisited one of the traditional acquisition functions of Bayesian optimization, the upper confidence bound (UCB), and looked to a slightly different interpretation to better inform how it drives the optimization. In our article, we present a number of examples of this clustering-guided UCB method applied to various optimization problems.
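For readers unfamiliar with UCB, here is a minimal sketch of the classic formulation that the reinterpretation builds on; the clustering-guided variant described in the article goes beyond this, and the function name and values below are purely illustrative.

```python
import numpy as np

def upper_confidence_bound(mu, sigma, kappa=2.0):
    """Classic UCB acquisition: posterior mean plus kappa standard
    deviations. kappa trades off exploitation (high mean) against
    exploration (high uncertainty)."""
    return mu + kappa * sigma

# Score three candidate points under a Gaussian process posterior
# (hypothetical values) and pick the most promising one to sample next.
mu = np.array([0.8, 1.1, 0.9])        # posterior means
sigma = np.array([0.30, 0.05, 0.20])  # posterior standard deviations
next_index = int(np.argmax(upper_confidence_bound(mu, sigma)))
```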

In this post, we analyze a more complex situation in which the parameters of a given model produce a random output, and our multicriteria problem involves maximizing the mean while minimizing the variance of that random variable.
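As a rough illustration of the setup (not the method from the post), one might estimate the two competing objectives from repeated evaluations of the stochastic model; `model` below is a hypothetical stand-in.

```python
import numpy as np

def mean_variance_metrics(model, params, n_repeats=30, seed=0):
    """Estimate the two competing criteria for a stochastic model:
    the sample mean (to be maximized) and the sample variance (to be
    minimized). `model(params, rng)` is a hypothetical function that
    returns one random realization of the metric."""
    rng = np.random.default_rng(seed)
    samples = np.array([model(params, rng) for _ in range(n_repeats)])
    # A multicriteria optimizer traces the Pareto frontier between
    # these two values rather than collapsing them into one score.
    return samples.mean(), samples.var(ddof=1)
```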

Today is a big day at SigOpt. Since the seed round we secured last year, we’ve continued to build toward our mission to ‘optimize everything,’ and are now helping dozens of companies amplify their research and development and drive business results with our cloud Bayesian optimization platform.

In this post, we review the history of some of the tools implemented within SigOpt, and then we discuss the original solution to this black box optimization problem, known as full factorial experiments or grid search. Finally, we compare both naive and intelligent grid-based strategies to the latest advances in Bayesian optimization, delivered via the SigOpt platform.
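To make the comparison concrete, here is a minimal sketch of the full factorial strategy; the parameter grid at the bottom is purely illustrative.

```python
import itertools

def grid_search(objective, grid):
    """Full factorial search: evaluate the objective at every point in
    the Cartesian product of the per-parameter value lists and keep the
    best result."""
    names = list(grid)
    best_params, best_value = None, float("-inf")
    for values in itertools.product(*(grid[name] for name in names)):
        params = dict(zip(names, values))
        value = objective(params)
        if value > best_value:
            best_params, best_value = params, value
    return best_params, best_value

# A 3 x 2 grid already costs 6 evaluations; the total grows
# exponentially with the number of parameters, which is exactly the
# cost that Bayesian optimization avoids.
grid = {"learning_rate": [0.01, 0.1, 1.0], "depth": [4, 8]}
```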

SigOpt gives customers the opportunity to build better machine learning and financial models by providing a path to efficiently maximize the key metrics that define their success. In this post, we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.

In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
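The heart of the integration is SigOpt's suggest/observe loop. The sketch below follows the shape of SigOpt's Python client from this era; `train_and_evaluate_cnn`, the parameter names, and the bounds are hypothetical placeholders for the TensorFlow-specific details covered in the post.

```python
from sigopt import Connection

# Connect with an API token and define the search space.
conn = Connection(client_token="YOUR_API_TOKEN")
experiment = conn.experiments().create(
    name="CNN tuning",
    parameters=[
        dict(name="log_learning_rate", type="double",
             bounds=dict(min=-5.0, max=-1.0)),
        dict(name="num_filters", type="int",
             bounds=dict(min=16, max=128)),
    ],
)

# Suggest/observe loop: SigOpt proposes hyperparameters, we train the
# TensorFlow CNN and report the metric back.
for _ in range(20):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    # Hypothetical helper that builds, trains, and scores the CNN.
    accuracy = train_and_evaluate_cnn(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id, value=accuracy)
```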


Unsupervised Learning with Even Less Supervision Using Bayesian Optimization

In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
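A rough sketch of the kind of objective involved, under stated assumptions: k-means features stand in for the unsupervised stage, and XGBoost scores them on the downstream classification task. An objective like this is what the SigOpt experiment would maximize; all names and parameters here are illustrative.

```python
import xgboost as xgb
from sklearn.cluster import MiniBatchKMeans
from sklearn.model_selection import cross_val_score

def objective(assignments, X, y):
    """Hypothetical objective: tune the unsupervised feature extractor
    and measure quality by how well an XGBoost classifier performs on
    the features it produces."""
    kmeans = MiniBatchKMeans(
        n_clusters=int(assignments["n_clusters"]), random_state=0)
    features = kmeans.fit_transform(X)  # distances to cluster centers
    clf = xgb.XGBClassifier(
        n_estimators=int(assignments["n_estimators"]),
        max_depth=int(assignments["max_depth"]),
        eval_metric="logloss",
    )
    return cross_val_score(clf, features, y, cv=3).mean()
```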

SigOpt allows experts to build the next great model and apply their domain expertise instead of searching in the dark for the best experiment to run next. With SigOpt you can conquer this tedious, but necessary, element of development and unleash your experts on designing better products with less trial and error.

In this first post on integrating SigOpt with machine learning frameworks, we’ll show you how to use SigOpt and scikit-learn to train and tune a model for text sentiment classification in under 50 lines of Python.
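A condensed sketch of the kind of objective such a loop would tune (the exact model and parameters in the post may differ): a tf-idf plus logistic regression pipeline scored by cross-validated accuracy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def sentiment_objective(assignments, texts, labels):
    """Hypothetical objective for the SigOpt loop: cross-validated
    accuracy of a tf-idf + logistic regression sentiment classifier."""
    model = make_pipeline(
        TfidfVectorizer(min_df=int(assignments["min_df"]),
                        ngram_range=(1, int(assignments["max_ngram"]))),
        LogisticRegression(C=assignments["log_reg_c"], max_iter=1000),
    )
    return cross_val_score(model, texts, labels, cv=5).mean()
```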

SigOpt gives customers the opportunity to report uncertainty alongside their observations; using this knowledge, we can balance observed results against their variance to make predictions and identify the true behavior behind the uncertainty.
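One way to see how reported uncertainty is used: in a Gaussian process model, a per-observation noise variance enters the diagonal of the training covariance, so noisier observations pull the posterior less. This is a generic textbook sketch, not SigOpt's internal implementation.

```python
import numpy as np

def gp_posterior_with_noise(K, k_star, k_star_star, y, noise_var):
    """Gaussian process posterior at one test point when each training
    observation carries its own noise variance.

    K: (n, n) covariance among training inputs; k_star: (n,) covariance
    between training inputs and the test point; k_star_star: prior
    variance at the test point; y: (n,) observed values; noise_var:
    (n,) per-observation noise variances reported with each value."""
    Kn = K + np.diag(noise_var)      # noise inflates the diagonal
    alpha = np.linalg.solve(Kn, y)
    v = np.linalg.solve(Kn, k_star)
    mean = k_star @ alpha            # posterior mean
    var = k_star_star - k_star @ v   # posterior variance
    return mean, var
```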

Gaussian processes are powerful because they allow you to exploit previous observations about a system to make informed, and provably optimal, predictions about unobserved behavior. They do this by defining an expected relationship between all possible situations; this relationship is called the covariance and is the topic of this post.
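As a concrete example, here is the squared-exponential (RBF) covariance, one of the most common choices; the inputs below are illustrative.

```python
import numpy as np

def rbf_covariance(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance: inputs close together are
    expected to behave similarly, and the expected similarity decays
    smoothly with squared distance."""
    sq_dist = np.subtract.outer(x1, x2) ** 2
    return variance * np.exp(-0.5 * sq_dist / length_scale ** 2)

# Covariance matrix among three observed inputs; the first two are
# nearby and covary strongly, the third is far away and covaries weakly.
x = np.array([0.0, 0.1, 2.0])
K = rbf_covariance(x, x)
```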

Rescale gives users the dynamic computational resources to run their simulations, and SigOpt provides the tools to optimize them. With Rescale’s platform and SigOpt’s tuning together, efficient cloud simulation is easier than ever.