The intersection of interests between machine learning and optimization
has engaged many leading researchers in both communities for some years
now. Both are vital and growing fields, and the areas of shared interest are
expanding too. This volume collects contributions from many researchers
who have been a part of these efforts.
We are grateful first to the contributors to this volume. Their cooperation
in providing high-quality material while meeting tight deadlines is highly
appreciated. We further thank the many participants in the two workshops
on Optimization and Machine Learning, held at the NIPS Workshops in
2008 and 2009. The interest generated by these events was a key motivator
for this volume. Special thanks go to S. V. N. Vishwanathan (Vishy)
for organizing these workshops with us, and to PASCAL2, MOSEK, and
Microsoft Research for their generous financial support for the workshops.
S. S. thanks his father for his constant interest, encouragement, and advice
towards this book. S. N. thanks his wife and family. S. W. thanks all
those colleagues who introduced him to machine learning, especially Partha
Niyogi, to whose memory his efforts on this book are dedicated.
The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.

Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.