
Polynomial optimization, as its name suggests, concerns optimizing a generic
multivariate polynomial function subject to suitable polynomial equality
and/or inequality constraints. The formulation dates back to the nineteenth
century, when Hilbert studied the relationship between nonnegative polynomials
and sums of squares (SOS). Polynomial optimization is one of the fundamental
problems in Operations Research and has applications in a wide range of areas,
including biomedical engineering, control theory, graph theory, investment science,
material science, numerical linear algebra, quantum mechanics, signal processing,
speech recognition, among many others. This brief discusses some important
subclasses of polynomial optimization models arising from various applications.
The focus is on optimizing a high-degree polynomial function over frequently
encountered constraint sets, such as the Euclidean ball, the Euclidean
sphere, the intersection of co-centered ellipsoids, the binary hypercube, a
general convex compact set, and possibly a combination of these constraints.
All the models under consideration are NP-hard in general. In particular, this
brief presents a study of the design and analysis of polynomial-time
approximation algorithms with guaranteed worst-case performance ratios. We aim
to derive worst-case performance/approximation ratios that depend solely on
the problem dimensions, independent of any other problem parameters or input
data. The new techniques can be applied to solve even broader
classes of polynomial/tensor optimization models. Given the wide applicability
of polynomial optimization models, the ability to solve such models, albeit
approximately, is clearly beneficial. To illustrate these benefits, we present
a variety of examples in this brief so as to showcase the potential
applications of polynomial optimization.
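To make the flavor of these models concrete, consider one of the NP-hard subclasses mentioned above: maximizing a homogeneous cubic form over the Euclidean sphere. The sketch below is our own minimal illustration, not an algorithm from this brief; it uses a simple power-method-like heuristic (contract the symmetric coefficient tensor with the current point and renormalize), so it only yields a locally improved feasible point, not a guaranteed approximation ratio.

```python
import numpy as np

def cubic_form(A, x):
    # Value of the homogeneous cubic form F(x) = sum_{i,j,k} A[i,j,k] x_i x_j x_k.
    return np.einsum('ijk,i,j,k->', A, x, x, x)

def sphere_ascent(A, iters=200, seed=0):
    # Heuristic ascent on the unit sphere for max F(x) s.t. ||x|| = 1:
    # repeatedly contract the (symmetric) tensor A with the current point
    # and renormalize, keeping the iterate only while the value improves.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    best = cubic_form(A, x)
    for _ in range(iters):
        g = np.einsum('ijk,j,k->i', A, x, x)  # contraction A(., x, x)
        nrm = np.linalg.norm(g)
        if nrm < 1e-12:
            break
        x_new = g / nrm
        val = cubic_form(A, x_new)
        if val > best:
            best, x = val, x_new
        else:
            break
    return best, x

# Toy instance: F(x) = x_1^3, whose maximum over the unit sphere is 1 at e_1.
A = np.zeros((3, 3, 3))
A[0, 0, 0] = 1.0
best, x = sphere_ascent(A)
```

For this toy instance the heuristic recovers the global optimum, but on general instances it can stall at a stationary point; obtaining dimension-dependent worst-case guarantees requires the more careful constructions studied in this brief.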