Introducing a unified PCA algorithm for model size reduction

Richard P. Good, Daniel Kost, Gregory A. Cherry

Research output: Contribution to journal › Article › peer-review

50 Scopus citations

Abstract

Principal component analysis (PCA) is a technique commonly used for fault detection and classification (FDC) in highly automated manufacturing. Because PCA model building and adaptation rely on the eigenvalue decomposition of parameter covariance matrices, the computational effort scales cubically with the number of input variables. As PCA-based FDC applications monitor systems with more variables, or trace data at faster sampling rates, the PCA problems can grow faster than the FDC system infrastructure can accommodate. This paper introduces an algorithm that greatly reduces the overall size of the PCA problem by breaking the analysis of a large number of variables into multiple analyses of smaller, uncorrelated blocks of variables. Summary statistics from these subanalyses are then combined into results comparable to those generated by a complete PCA of all variables together.
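
The core idea described in the abstract can be illustrated with a minimal sketch: fit a small PCA model to each block of variables, compute per-block Hotelling's T² and SPE statistics, and combine them into overall indices. This is not the authors' exact algorithm; it assumes the blocks are mutually uncorrelated (so per-block statistics simply add), and all function names (`fit_block_pca`, `block_statistics`, `monitor`) are hypothetical.

```python
# Sketch of block-wise PCA monitoring, assuming mutually uncorrelated
# variable blocks. Illustrative only; not the paper's unified algorithm.
import numpy as np

def fit_block_pca(X, n_components):
    """Fit a PCA model to one block of training data."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigendecomposition of the block covariance matrix: the cubic cost
    # applies to the block width, not the total number of variables.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # descending variance
    P = eigvecs[:, order[:n_components]]       # retained loadings
    lam = eigvals[order[:n_components]]        # retained eigenvalues
    return {"mu": mu, "P": P, "lam": lam}

def block_statistics(model, x):
    """Hotelling's T^2 and SPE (Q) for one new sample in one block."""
    xc = x - model["mu"]
    t = model["P"].T @ xc                      # scores
    t2 = np.sum(t**2 / model["lam"])           # Hotelling's T^2
    residual = xc - model["P"] @ t             # off-model residual
    spe = residual @ residual                  # squared prediction error
    return t2, spe

def monitor(models, blocks, x):
    """Combine per-block statistics into overall indices (sketch only:
    simple sums, valid when the blocks are uncorrelated)."""
    total_t2 = total_spe = 0.0
    for model, idx in zip(models, blocks):
        t2, spe = block_statistics(model, x[idx])
        total_t2 += t2
        total_spe += spe
    return total_t2, total_spe

# Example: 12 variables split into three 4-variable blocks.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12))
blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
models = [fit_block_pca(X[:, idx], n_components=2) for idx in blocks]
t2, spe = monitor(models, blocks, X[0])
print(f"T^2 = {t2:.3f}, SPE = {spe:.3f}")
```

The paper's contribution lies in how the per-block summary statistics are combined and adapted so the results match a full PCA of all variables; the simple sums above only convey the decomposition idea.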

Original language: English
Article number: 5458324
Pages (from-to): 201-209
Number of pages: 9
Journal: IEEE Transactions on Semiconductor Manufacturing
Volume: 23
Issue number: 2
DOIs
State: Published - 1 May 2010
Externally published: Yes

Keywords

  • Combined index
  • Computation time
  • Fault detection
  • Large scale systems
  • Multivariate statistical process control (MSPC)
  • Principal component analysis (PCA)
  • Recursive PCA

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Industrial and Manufacturing Engineering
  • Electrical and Electronic Engineering
