
Chemometrics without Equations (or Hardly Any)

November 16, 2025


Eigenvector Research, Inc. is pleased to bring you Chemometrics without Equations (or Hardly Any), at Eastern Analytical Symposium 2025. This easily accessible introductory course covers the basics of chemometrics: the application of modern multivariate and machine learning methods to chemical data. In this hands-on course you will be introduced to the most commonly used chemometric methods and learn how to interpret and apply them for best results.

Complete information about the course can be found on the EAS course page.

Target Audience

Chemometrics without Equations was developed for chemists and bio-scientists who need to analyze multivariate chemical data, and for those who manage these staff members. It covers the basics of pattern recognition, quantitative (regression) models, and qualitative (classification) models. Additional sections discuss how to improve models through advanced data preprocessing and variable selection. The course is relevant to a wide range of analytical applications, including

  • Process Analysis, e.g. pharmaceutical, food & beverage, and chemical process industries.
  • Analytical Chemistry, e.g. QC labs, forensic investigations, etc.
  • Sensory and Consumer Science
  • Medical Devices
  • Metabolomics, genomics, etc.

Course Description

Chemometrics Without Equations (or Hardly Any) is designed for those who wish to explore the problem solving power of machine learning tools but are discouraged by the high level of mathematics found in many software manuals and texts. Course emphasis is on proper development and interpretation of chemometric methods as applied to real-life problems. The objective is to teach in the simplest way possible so that participants will be better chemical data science practitioners and managers.

Chemometrics without Equations starts by introducing one of the most important methods in data science, Principal Components Analysis (PCA). PCA is used in a myriad of applications for exploratory data analysis/pattern recognition. The course continues with regression methods including Classical Least Squares (CLS) and the now ubiquitous Partial Least Squares regression (PLS). It is then shown how these methods are adapted for sample classification in SIMCA and PLS Discriminant Analysis (PLS-DA). The course concludes with sections on how models can be improved with advanced preprocessing methods and variable selection.
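For readers who want a feel for what these methods do before the course, the short sketch below is a generic illustration only (it is not PLS_Toolbox or Solo code, which is what the course itself uses): it builds synthetic two-constituent "spectra", runs PCA to obtain scores and loadings for exploratory analysis, and fits a PLS model to predict one constituent's concentration. The data, variable names, and scikit-learn calls are assumptions chosen for illustration.

```python
# Illustrative sketch only (not PLS_Toolbox/Solo code): PCA scores/loadings and
# a PLS regression on synthetic "spectra" built from two hypothetical profiles.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic data: 50 samples x 100 "wavelengths", mixtures of two pure profiles
wavelengths = np.linspace(0, 1, 100)
pure1 = np.exp(-((wavelengths - 0.3) ** 2) / 0.005)   # hypothetical constituent 1
pure2 = np.exp(-((wavelengths - 0.7) ** 2) / 0.005)   # hypothetical constituent 2
conc = rng.uniform(0, 1, size=(50, 2))                # "reference" concentrations
X = conc @ np.vstack([pure1, pure2]) + 0.01 * rng.standard_normal((50, 100))

# Exploratory analysis with PCA: mean-center, then inspect scores and loadings
Xc = X - X.mean(axis=0)
pca = PCA(n_components=2).fit(Xc)
scores = pca.transform(Xc)          # sample patterns (one row per sample)
loadings = pca.components_          # variable patterns (one row per PC)
print("variance captured:", pca.explained_variance_ratio_)

# Quantitative model with PLS: predict concentration of constituent 1 from X
pls = PLSRegression(n_components=2).fit(X, conc[:, 0])
y_hat = pls.predict(X).ravel()
print("PLS fit RMSE:", np.sqrt(np.mean((y_hat - conc[:, 0]) ** 2)))
```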

The course will include many follow-along examples and several homework problems. To take advantage of these, participants should equip their computers with current versions of our MATLAB-based software PLS_Toolbox or our stand-alone Solo software (available for Windows, macOS and Linux). Demo copies will work just fine for the course. Users with Eigenvector accounts can download free demos. If you don’t have an account, start by creating one.

About the Instructors

The course will be led by Eigenvector Research Vice-President Neal B. Gallagher. Eigenvector has delivered over 200 chemometrics courses at scientific conferences, on-site for companies, and at our popular Eigenvector University, held each year in Seattle.

How to Register, Deadlines and Cancellations

Registration will be done through the EAS website.

Schedule

Chemometrics without Equations will be taught in a single day, Sunday, November 16, from 7:30 am to 5:00 pm EST. The schedule is as follows.

Daily Schedule
08:00 – 08:30 Registration Check-in
08:30 – 10:00 Instruction
10:00 – 10:15 Coffee Break
10:15 – 12:00 Instruction
12:00 – 13:00 Lunch
13:00 – 14:30 Instruction
14:30 – 14:45 Coffee Break
14:45 – 16:30 Instruction
16:30 – 17:00 Wrap-up and questions

Course Outline

1. Introduction
1.1 what is chemometrics?
1.2 resources

2. Pattern Recognition Motivation
2.1 what is pattern recognition?
2.2 relevant measurements
2.3 some statistical definitions

3. Principal Components Analysis
3.1 what is PCA?
3.2 scores and loadings
3.3 interpretation
3.4 supervised and unsupervised pattern recognition
3.5 examples

4. Regression
4.1 what is regression?
4.2 classical least squares (CLS)
4.3 inverse least squares (ILS)
4.4 principal components regression (PCR)
4.5 partial least squares regression (PLS)
4.6 examples

5. Classification
5.1 what is classification?
5.2 classification based on PCA models: SIMCA
5.3 using regression for classification: PLS Discriminant Analysis (PLS-DA)

6. Advanced Preprocessing
6.1 what are the goals of preprocessing?
6.2 mean- and median-centering, autoscaling
6.3 normalization and standard normal variate
6.4 Savitzky-Golay and filtering
6.5 generalized least squares weighting (GLS)
6.6 multiplicative scatter correction (MSC)
6.7 extended multiplicative scatter correction (EMSC)

7. Variable Selection
7.1 why do variable selection?
7.2 knowledge based selection
7.3 model based, e.g. on loadings
7.4 interval PLS (iPLS)

8. Conclusions