Feature vector regularization in machine learning

12/19/2012
by Yue Fan et al.

Problems in machine learning (ML) can involve noisy input data, and ML classification methods have reached limiting accuracies when trained on standard data sets consisting only of feature vectors and their class labels. Greater accuracy will require incorporating prior structural information about the data into learning. We study methods for regularizing feature vectors (unsupervised regularization), analogous to the supervised regularization used for estimating functions in ML. Specifically, we regularize (denoise) ML feature vectors using Tikhonov and other regularization methods for functions on R^n. A feature vector x = (x_1, ..., x_n) = {x_q}_{q=1}^n is viewed as a function of its index q and smoothed using prior information on its structure. This can involve a penalty functional on feature vectors analogous to those in statistical learning, or the use of a proximity (e.g. graph) structure on the set of indices. Such feature vector regularization inherits a property from function denoising on R^n: accuracy is non-monotonic in the denoising (regularization) parameter α. Under some assumptions about the noise level and the data structure, we show that the best reconstruction accuracy likewise occurs at a finite positive α in index spaces with graph structures. We adapt two standard function denoising methods used on R^n, local averaging and kernel regression. In general the index space can be any discrete set with a notion of proximity, e.g. a metric space, a subset of R^n, or a graph/network, with feature vectors as functions having some notion of continuity. We show that this improves feature vector recovery and thus the subsequent classification or regression performed on the recovered vectors. We give an example from gene expression analysis for cancer classification, with the genome as the index space and a network structure based on protein-protein interactions.
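As an illustration of the general idea (not the authors' code), the sketch below shows the local-averaging variant on a graph-structured index set: each component x_q of a noisy feature vector is blended with the mean over its graph neighborhood, with the regularization parameter α controlling the amount of smoothing. The function name, the adjacency-list representation, and the chain-graph example are illustrative assumptions.

```python
import numpy as np

def smooth_feature_vector(x, neighbors, alpha):
    """Regularize a feature vector x (indexed by q) by local averaging over a graph.

    x         : 1-D array of feature values, one per index q
    neighbors : dict mapping each index q to a list of neighboring indices
    alpha     : smoothing strength in [0, 1]; alpha = 0 returns x unchanged
    """
    x_smooth = np.empty_like(x, dtype=float)
    for q in range(len(x)):
        nbrs = neighbors.get(q, [])
        if nbrs:
            # Local mean over the index q and its graph neighbors
            local_mean = np.mean([x[j] for j in nbrs] + [x[q]])
        else:
            local_mean = x[q]  # isolated index: nothing to average over
        # Blend the noisy value with its local average; larger alpha = more smoothing
        x_smooth[q] = (1 - alpha) * x[q] + alpha * local_mean
    return x_smooth

# Example: a chain graph 0-1-2-3-4 (a toy index space) with a noisy feature vector
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
x_noisy = np.array([1.0, 0.2, 1.1, 0.9, 1.8])
print(smooth_feature_vector(x_noisy, neighbors, alpha=0.5))
```

In this toy setting, sweeping α and comparing the smoothed vector to a known clean signal would trace out the non-monotonic accuracy curve described in the abstract, with the best recovery at some finite positive α.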
