Author Archives: Barry M. Wise

About Barry M. Wise

Co-founder, President and CEO of Eigenvector Research, Inc. Creator of PLS_Toolbox chemometrics software.

Coming in 6.0: Analysis Report Writer

Sep 4, 2010

Version 6.0 of PLS_Toolbox, Solo and Solo+MIA will be out this fall. There will be lots of new features, but right at the moment I’m having fun playing with one in particular: the Analysis Report Writer (ARW). The ARW makes documenting model development easy. Once you have a model developed, just make the plots you want, then select Tools/Report Writer in the Analysis window. From there you can choose HTML or, if you are on a Windows PC, MS Word or MS PowerPoint as the format for your report. The ARW then writes a document with all the details of your model, along with copies of all the figures you have open.

As an example, I’ve developed a (quick and dirty!) model using the tablet data from the IDRC shootout in 2002. The report shows the model details, including the preprocessing and wavelength selection, along with the plots I had up. This included the calibration curve, cross-validation plot, scores on first two PCs, and regression vector. But it could include any number and type of plots as desired by the user. Statistics on the prediction set are also included.

Look for Version 6.0 on our web site in October, or come visit us at FACSS, booth 48.

BMW

Advanced Features in Barcelona

Sep 1, 2010

I’m pleased to announce that I’ll be doing a one day short course, Using the Advanced Features of PLS_Toolbox, on the University of Barcelona campus, on October 1, 2010 (one month from today!). The course will show how to use many of the powerful, but often underutilized, tools in PLS_Toolbox. It will also feature some new tools from the upcoming PLS_Toolbox 6.0, to be released this fall.

Normally, our courses focus on developing an understanding of chemometric methods such as PCA and PLS. This course is something of a departure in that it focuses on getting the most out of the software. This will give us a chance to show how to access many of the advanced features and methods implemented in PLS_Toolbox.

You can get complete information, including registration info and a course outline, on the course description page. If you have additional questions or suggestions for demos you’d like to see, contact me.

See you in Barcelona!

BMW

Re-orthogonalization of PLS Algorithms

Aug 23, 2010

Thanks to everybody that responded to the last post on Accuracy of PLS Algorithms. As I expected, it sparked some nice discussion on the chemometrics listserv (ICS-L)!

As part of this discussion, I got a nice note from Klaas Faber reminding me of the short communication he wrote with Joan Ferré, “On the numerical stability of two widely used PLS algorithms.” [1] The article compares the accuracy of PLS via NIPALS and SIMPLS as measured by the degree to which the scores vectors are orthogonal. It finds that SIMPLS is not as accurate as NIPALS, and suggests adding a re-orthogonalization step to the algorithms.

I added re-orthogonalization to the code for SIMPLS, DSPLS, and Bidiag2 and ran the tests again. The results are shown below. Adding it to Bidiag2 produces the most dramatic effect (compare to the figure in the previous post), and improvement is seen with SIMPLS as well. Now all of the algorithms stay within ~1 part in 10^12 of each other through all 20 LVs of the example problem. Reconstruction of X from the scores and loadings (or weights, as the case may be!) was improved as well. NIPALS, SIMPLS and DSPLS all reconstruct X to within 3e-16, while Bidiag2 comes in at 6e-13.

Accuracy of PLS Algorithms with Re-orthogonalization

The simple fix of re-orthogonalization makes Bidiag2 behave acceptably. I certainly would not use this algorithm without it! Whether to add it to SIMPLS is a little more debatable. We ran some tests on typical regression problems and found that the difference on predictions with and without it was typically around 1 part in 10^8 for models with the correct number of LVs. In other words, if you round your predictions to 7 significant digits or less, you’d never see the difference!
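The re-orthogonalization step itself is simple. Here is a sketch of the idea in Python/NumPy terms (an illustration of the general technique, not our production code — in a real PLS implementation this would be applied inside the latent-variable loop as each new score vector is computed):

```python
import numpy as np

def reorthogonalize_scores(T):
    """Modified Gram-Schmidt re-orthogonalization: make each score
    vector orthogonal to all of the preceding score vectors."""
    T = T.astype(float).copy()
    for i in range(T.shape[1]):
        for j in range(i):
            tj = T[:, j]
            # subtract the projection of column i onto column j
            T[:, i] -= tj * (tj @ T[:, i]) / (tj @ tj)
    return T
```

One pass of this brings the off-diagonal elements of T'T down to near machine precision, which is exactly the orthogonality measure Faber and Ferré use to compare the algorithms.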

That said, we ran a few tests to determine the effect on computation time of adding the re-orthogonalization step to SIMPLS. For the most part the effect was negligible: less than 5% additional computation time for the vast majority of problem sizes, and at worst a ~20% increase. Based on this, we plan to include re-orthogonalization in our SIMPLS code in the next major release of PLS_Toolbox and Solo, Version 6.0, which will be out this fall. We’ll include an option to turn it off, if desired.

BMW

[1] N.M. Faber and J. Ferré, “On the numerical stability of two widely used PLS algorithms,” J. Chemometrics, 22, pp. 101-105, 2008.

Accuracy of PLS Algorithms

Aug 13, 2010

In 2009 Martin Andersson published “A comparison of nine PLS1 algorithms” in Journal of Chemometrics [1]. This was a very nice piece of work and of particular interest to me as I have worked on PLS algorithms myself [2,3] and we include two algorithms (NIPALS and SIMPLS) in PLS_Toolbox and Solo. Andersson compared regression vectors calculated via nine algorithms using 16 decimal digits (aka double precision) in MATLAB to “precise” regression vectors calculated using 1000 decimal digits. He found that several algorithms, including the popular Bidiag2 algorithm (developed by Golub and Kahan [4] and adapted to PLS by Manne [5]), deviated substantially from the vectors calculated in high precision. He also found that NIPALS was among the most stable of algorithms, and that while somewhat less accurate, SIMPLS was very fast.

Andersson also developed a new PLS algorithm, Direct Scores PLS (DSPLS), which was designed to be accurate and fast. I coded it up, adapted it to multivariate Y, and it is now an option in PLS_Toolbox and Solo. In the process of doing this I repeated some of Andersson’s experiments, and looked at how the regression vectors calculated by NIPALS, SIMPLS, DSPLS and Bidiag2 varied.

The figure below shows the difference between the various regression vectors as a function of Latent Variable (LV) number for the Melter data set, where X is 300 x 20. The plotted values are the norm of the difference divided by the norm of the first regression vector. The lowest line on the plot (green with pluses) is the difference between the NIPALS and DSPLS regression vectors. These are the two methods in the best agreement: DSPLS and NIPALS stay within 1 part in 10^12 out through the maximum number of LVs (20).
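For concreteness, the plotted quantity is just the following (a minimal sketch; the function and variable names are mine):

```python
import numpy as np

def relative_difference(b_a, b_b, b_ref):
    """Norm of the difference between two regression vectors, scaled
    by the norm of a reference regression vector (here, the first)."""
    return np.linalg.norm(b_a - b_b) / np.linalg.norm(b_ref)
```

Two regression vectors agreeing to 1 part in 10^12 give a value near 1e-12 on this scale.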

The next line up on the plot (red line and circles with blue stars) is actually two lines: the difference between SIMPLS and NIPALS, and between SIMPLS and DSPLS. These lie on top of each other because NIPALS and DSPLS are so similar to each other. SIMPLS stays within 1 part in 10^10 through 10 LVs (the maximum number of LVs of interest in this data) and degrades to 1 part in ~10^7.

The highest line on the plot (pink with stars) is the difference between NIPALS and Bidiag2. Note that by 9 LVs this difference has increased to 1 part in 10^0, which is to say that the regression vector calculated by Bidiag2 has no resemblance to the regression vectors calculated by the other methods!

I programmed my version of Bidiag2 following the development in Bro and Eldén [6]. Perhaps there exist more accurate implementations of Bidiag2, but my results resemble those of Andersson quite closely. You can download my bidiag.m file, along with the code that generates this figure, check_PLS_reg_accuracy.m. This would allow you to reproduce this work in MATLAB with a copy of PLS_Toolbox (a demo would work). I’d be happy to incorporate an improved version of Bidiag in this analysis, so if you have one, send it to me.

BMW

[1] Martin Andersson, “A comparison of nine PLS1 algorithms,” J. Chemometrics, 23(10), pp. 518-529, 2009.

[2] B.M. Wise and N.L. Ricker, “Identification of Finite Impulse Response Models with Continuum Regression,” J. Chemometrics, 7(1), pp. 1-14, 1993.

[3] S. de Jong, B.M. Wise and N.L. Ricker, “Canonical Partial Least Squares and Continuum Power Regression,” J. Chemometrics, 15(2), pp. 85-100, 2001.

[4] G.H. Golub and W. Kahan, “Calculating the singular values and pseudo-inverse of a matrix,” SIAM J. Numer. Anal., 2, pp. 205-224, 1965.

[5] R. Manne, “Analysis of two Partial-Least-Squares algorithms for multivariate calibration,” Chemom. Intell. Lab. Syst., 2, pp. 187-197, 1987.

[6] R. Bro and L. Eldén, “PLS Works,” J. Chemometrics, 23(1-2), pp. 69-71, 2009.

Clustering in Images

Aug 6, 2010

It is probably an understatement to say that there are many methods for cluster analysis. However, most clustering methods don’t work well for large data sets, because they require computation of the matrix that defines the distance between all pairs of samples. If you have n samples, then this matrix is n x n. That’s not a problem if n = 100, or even 1000. But in multivariate images, each pixel is a sample. So a 512 x 512 image would have a full distance matrix that is 262,144 x 262,144. This matrix would have 68 billion elements and take roughly 550GB of storage space in double precision. Obviously, that would be a problem on most computers!
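The storage arithmetic is easy to check:

```python
n = 512 * 512                 # pixels, one sample per pixel
elements = n * n              # entries in the full n x n distance matrix
bytes_needed = elements * 8   # 8 bytes per element in double precision

print(f"{elements:,} elements")        # 68,719,476,736
print(f"{bytes_needed / 1e9:.0f} GB")  # ~550 GB (exactly 512 GiB)
```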

In MIA_Toolbox and Solo+MIA there is a function, developed by our Jeremy Shaver, which works quite quickly on images (see cluster_img.m). The trick is that it chooses some unique seed points for the clusters by finding the points on the outside of the data set (see distslct_img.m), and then just projects the remaining data onto those points (normalized to unit length) to determine the distances. A robustness check is performed to eliminate outlier seed points that result in very small clusters. Seed points can then be replaced with the mean of the groups, and the process repeated. This generally converges quite quickly to a result very similar to knn clustering, which could not be done in a reasonable amount of time for large images.
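A toy sketch of the idea in Python/NumPy (my own simplified rendering — the function name and farthest-from-the-mean seed rule are illustrative stand-ins for distslct_img.m, and the robustness check is omitted):

```python
import numpy as np

def project_cluster(pixels, k, n_iter=5):
    """Sketch of distance-matrix-free clustering: choose extreme points
    as cluster seeds, assign every pixel to the seed direction onto
    which it projects most strongly, then replace the seeds with the
    cluster means and repeat. Only an n x k projection is ever formed,
    never an n x n distance matrix."""
    centered = pixels - pixels.mean(axis=0)
    # seed with the k points farthest from the data mean
    seeds = pixels[np.argsort(np.linalg.norm(centered, axis=1))[-k:]].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        # normalize seeds to unit length, project all pixels onto them
        directions = seeds / np.linalg.norm(seeds, axis=1, keepdims=True)
        labels = np.argmax(pixels @ directions.T, axis=1)
        for j in range(k):
            if np.any(labels == j):
                seeds[j] = pixels[labels == j].mean(axis=0)
    return labels
```

The cost per pass is O(nk) rather than O(n^2), which is why it stays fast even when n is a quarter of a million pixels.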

Mississippi River Landsat Image clustered into 3 groups
Mississippi River Landsat Image clustered into 6 groups

As an example, I’ve used the clustering function on a Landsat image of the Mississippi River. This 512 x 512 image has 7 channels. Working from the purely point-and-click Analysis interface on my 4-year-old MacBook Pro laptop, this image can be clustered into 3 groups in 6.8 seconds. The result is shown at the far left. Clustering into 6 groups takes just a bit longer, 13.8 seconds. Results for the 6-cluster analysis are shown at the immediate left. This is actually a pretty good rendering of the different surface types in the image.

SIMS of Drug Bead False Color Image with 3 PCs
SIMS of Drug Bead clustered into 5 groups

As another example, I’ve used the image clustering function on the SIMS image of a drug bead. This image is 256 x 256 and has 93 channels. For reference, the (contrast-enhanced) PCA score image is shown at the far left. The drug bead coating is the bright green strip to the right of the image, while the active ingredient is hot pink. The same data clustered into 5 groups is shown to the immediate right. Computation time was 14.7 seconds. The same features are visible in the cluster image as in the PCA image, although the colors are swapped: the coating is dark brown and the active is bright blue.

Thanks, Jeremy!

BMW

10,000 Commits

Jul 30, 2010

The EVRI software developers surpassed a landmark when our Donal O’Sullivan made the 10,000th “commit” to our software repository. Donal was working on some improvements to our Support Vector Machine (SVM) routines in PLS_Toolbox and submitted the changes yesterday afternoon.

As noted by our Chief of Technology Development, Jeremy Shaver, “This is a trivial landmark in some ways (it is just a number, like when your car rolls over 10,000 miles) but it also indicates just how active our development is. It all started with a revision by Scott in March, 2004. In six years’ time, we’ve committed thousands upon thousands of lines of code and many megabytes of files. That’s an average of 1,666 commits/year or 4.6 commits/day, although nearly 2,000 of those were in the last year.”

The level of activity in our software repository truly demonstrates how our product development continues to accelerate. Thanks go to our users for driving, and funding, the advancements. And of course, to our developers, thanks for all your efforts, guys!

BMW

Eigenvector Welcomes Randy Bishop

Jul 26, 2010

All of us at EVRI would like to issue a warm (albeit belated) welcome to Randy Bishop. Randy joined our staff in March, 2010.

We started running into Dr. Bishop about 10 years ago at FACSS meetings where he often taught experimental design. In those days he was with GE Plastics and heavily involved with Six Sigma. Since then he has worked in Process Analytical Technology (PAT) with GlaxoSmithKline and Wyeth (now Pfizer).

Randy has a wealth of experience with a broad variety of analytical methods, especially Raman spectroscopy, but also many other types of spectroscopy, spectrometry, and chromatography. He has used multivariate methods extensively and has been a leader in promoting their use among his colleagues. In fact, we’d worked with Randy to implement specific chemometric methods to make it easier for his co-workers to use them.

Beyond that, he’s fun to work with, a great guitarist, and we just love listening to his East Tennessee drawl. We look forward to working with Randy on consulting projects, our short courses (look for some new DOE offerings soon) and software development.

Welcome aboard, Randy!

BMW

New Website Up and Running

Jul 21, 2010

We’re pleased to announce that the new Eigenvector website is up! We’ve been working on it for several months. You may have noticed we haven’t updated the current site during that time, but it is all up-to-date now!

I developed Eigenvector’s first website in the fall of 1996. There were just over 2 million registered domain names in 1996 with .net, .org and .com extensions. Now there are well over 100 million. The original Eigenvector website ran off a server in our house in Manson, WA, which was connected to the internet via frame-relay. It even had a webcam that looked out my window.

Visitors should find the new site more streamlined, consistent and easier to navigate. We’ve also designed it to be easier for us to keep updated. Our old site, which had grown slowly since its last major revamp in about 1999, had become pretty fractured and unwieldy. We hope that we can serve you better with our new site!

BMW

Pseudoinverses, Rank and Significance

Jun 1, 2010

The first day of Eigenvector University 2010 started with a class on Linear Algebra, the “language of chemometrics.” The best question of the day occurred near the end of the course as we were talking about pseudoinverses. We were doing an example with a small data set, shown below, which demonstrates the problem of numerical instability in regression.

X and y for instability demonstration

If you regress Y on X you get regression coefficients of b = [2 0]. On the other hand, if you change the 3rd element of Y to 6.0001, you get b = [3.71 -0.86]. And if you change this element to 5.9999, then b = [0.29 0.86]. So a change of about 1 part in 60,000 in Y changes the answer for b completely.

The problem, of course, is that X is nearly rank deficient. If it weren’t for the 0.0001 added to the 8 in the (4,2) element X would be rank 1. If you use the Singular Value Decomposition (SVD) to do a rank 1 approximation of X and use that for the pseudoinverse, the problem is stabilized. In MATLAB-ese, this is [U,S,V] = svd(X), then Xinv = V(:,1)*inv(S(1,1))*U(:,1)’, and b = Xinv*Y = [0.40 0.80]. If you choose 5.9999, 6.0000 or 6.0001 for the 3rd element of Y, the answer for b stays the same to within 0.00001.

Then the question came: “Why don’t you get the stable solution when you use the pseudoinverse function, pinv, in MATLAB?” The answer is that, to MATLAB, X is not rank 1, it is rank 2. The singular values of X are 12.2 and 3.06e-5. MATLAB would consider a singular value zero if it was less than s = max(size(X))*norm(X)*eps, where eps is the machine precision. In this instance, s = 1.08e-14, and the smaller singular value of X is larger than this by about 9 orders of magnitude.

But just because it is significant with respect to machine precision doesn’t mean it is significant with respect to the precision of the measurements. If X consists of measured values that are known to be reliable only to 0.001, then clearly the rank 1 approximation of X should be used for this problem. In MATLAB, you can specify the tolerance of the pinv function. So if you use, for instance, Xinv = pinv(X,1e-4), then you get the same stable solution we did when we used the rank 1 approximation explicitly.
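The whole example can be reproduced in a few lines. The X and y below are reconstructed from the values quoted above, since the original figure showed the actual data; they reproduce the quoted singular values (12.2 and 3.06e-5) and all three b vectors. This is a Python/NumPy rendering of the MATLAB snippets; note that NumPy’s pinv takes a tolerance relative to the largest singular value, whereas MATLAB’s pinv tolerance is absolute:

```python
import numpy as np

# Nearly rank-deficient X: without the 0.0001, column 2 = 2 * column 1
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0],
              [4.0, 8.0001]])
y = np.array([2.0, 4.0, 6.0, 8.0])

# Ordinary least squares gives b = [2, 0]
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Perturb the 3rd element of y by 0.0001 and b changes completely
y2 = y.copy()
y2[2] = 6.0001
b2 = np.linalg.lstsq(X, y2, rcond=None)[0]   # ~[3.71, -0.86]

# Rank-1 pseudoinverse via the SVD stabilizes the solution
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # s ~ [12.2, 3.06e-5]
Xinv = Vt[0][:, None] * (1.0 / s[0]) * U[:, 0][None, :]
b_stable = Xinv @ y2   # ~[0.40, 0.80], insensitive to the perturbation

# Equivalently, give pinv a tolerance so the tiny singular value is
# dropped (rcond is relative here, so 1e-5 * 12.2 ~ 1e-4 absolute)
b_pinv = np.linalg.pinv(X, rcond=1e-5) @ y2
```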

Thanks for the good question!

BMW

Eigenvector University 2010 Best Posters

May 26, 2010

The EigenU 2010 Poster Session was held Tuesday evening, May 18. This year 7 users contributed posters for the contest, and the Eigenvectorians chipped in another 5, so there was plenty to discuss. We all enjoyed beer, wine and hors d’oeuvres as we discussed the finer points of aligning chromatograms and calculating PARAFAC models, among other things!

This year’s posters were judged by Bruce Kowalski, guest instructor at this year’s EigenU, and Brian Rohrback, President of Infometrix. Bruce and Brian carefully reviewed the submitted posters (not counting the ones from the Eigenvectorians, of course). Thanks to the judges, especially Brian, who stopped by just for this event!

Barry Wise, Jamin Hoggard, Bruce Kowalski, Cagri Ozcaglar and Brian Rohrback

The winners, pictured above, were Cagri Ozcaglar of Rensselaer Polytechnic Institute for Examining Sublineage Structure of Mycobacterium Tuberculosis Complex Strains with Multiway Modeling, and Jamin Hoggard of the University of Washington for Extended Nontarget PARAFAC Applications to GC×GC–TOF-MS Data. Jamin and Cagri accepted iPod nanos with the inscription Eigenvector University 2010 Best Poster for their efforts.

Congratulations, Cagri and Jamin!

BMW

Biggest Chemometrics Learning Event Ever?

May 26, 2010

Eigenvector University 2010 finished up on Friday afternoon, May 21. What a week! Six days, 17 courses, 10 instructors and 45 students. I’d venture a guess that this was the biggest chemometrics learning event ever. If more chemometrics students and instructors have been put together for more hours than in Seattle last week, I’m not aware of it.

Thanks to all who came for making it such a great event. It was a very accomplished audience, and the discussions were great, both in class and over beers. The group fielded lots of good questions and observations, and related much useful experience.

We’re already looking forward to doing it next year and have been busy this week incorporating student feedback into our courses and software. The Sixth Edition of EigenU is tentatively scheduled for May 15-20, 2011. See you there!

BMW

Ready for EigenU

May 14, 2010

Eigenvector University 2010 starts in just 2 days. We’re busy doing the final tune-ups on our course notes, making final catering arrangements, etc. I hope that all our guests are ready to get out of their offices for a while and spend some time “sharpening the saw.”

This year’s EigenU will be the biggest ever, with over 45 attendees (not everybody is there every day) plus 10 Eigenvectorians (Neal Gallagher, Jeremy Shaver, Bob Roginski, Scott Koch, Donal O’Sullivan, Randy Bishop, Willem Windig, Bruce Kowalski, Rasmus Bro and myself). So while things might be a bit crowded at times, we’ll have plenty of staff on hand to help with questions whether they’re on the practical aspects of the software or philosophical aspects of multivariate modeling.

Our first evening event is the MATLAB/PLS_Toolbox User Poster session on Tuesday, 5:30-7:30. I’m pleased to announce that the poster contest judges will be Bruce Kowalski, and Brian Rohrback, President of Infometrix, makers of the Pirouette chemometrics package. The two best posters win iPod nanos. We’ll let you know who won!

BMW

Robust Methods

May 12, 2010

This year we are presenting “Introduction to Robust Methods” at Eigenvector University. I’ve been working madly preparing a set of course notes. And I must say that it has been pretty interesting. I’ve had a chance to try the robust versions of PCA, PCR and PLS on many of the data sets we’ve used for teaching and demoing software, and I’ve been generally pleased with the results. Upon review of my course notes, our Donal O’Sullivan asked why we don’t use the robust versions of these methods all the time. I think that is a legitimate question!

In a nutshell, robust methods work by finding the subset of samples in the data that are most consistent. Typically this involves the Minimum Covariance Determinant (MCD) method, which finds the subset of samples whose covariance matrix has the smallest determinant, the determinant being a measure of the volume occupied by the data. The user specifies the fraction of samples, h, to include, and the algorithm searches out the optimal set. The parameter h is between 0.5 and 1, and a good general default is 0.75. With h = 0.75 the model can resist up to 25% arbitrarily bad samples without going completely astray. After finding the h subset, the methods then look to see which remaining samples fall within the statistical bounds of the model and re-include them. Any remaining samples are considered outliers.
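As an illustration of the MCD idea, here is a naive random-subset search in Python/NumPy, purely for intuition — the LIBRA routines our products rely on implement the far more efficient FAST-MCD algorithm, and the names below are mine:

```python
import numpy as np

def toy_mcd_subset(X, h=0.75, n_trials=200, seed=0):
    """Naive Minimum Covariance Determinant search: look for the
    fraction-h subset of samples whose covariance matrix has the
    smallest determinant, i.e. the subset occupying the least volume.
    Each random start is refined with two "concentration" steps that
    keep the h*n samples with the smallest Mahalanobis distances."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = int(np.ceil(h * n))
    best_det, best_idx = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(n, size=m, replace=False)
        for _ in range(2):  # concentration steps
            mu = X[idx].mean(axis=0)
            S_inv = np.linalg.inv(np.cov(X[idx], rowvar=False))
            d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)
            idx = np.argsort(d2)[:m]
        det = np.linalg.det(np.cov(X[idx], rowvar=False))
        if det < best_det:
            best_det, best_idx = det, idx
    return np.sort(best_idx)
```

Samples outside the final subset that also fall outside the model’s statistical bounds are the ones flagged as outliers.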

The main advantage of robust methods is that they automate the process of finding outliers. This is especially convenient when the data sets have many samples and a substantial fraction of bad data. How many times have you removed an obvious outlier from a data set only to find another outlier that wasn’t obvious until the first one is gone? This problem, known as masking, is virtually eliminated with robust methods. Swamping, when normal samples appear as outliers due to the confidence limits being stretched by the true outliers, is also mitigated.

So am I ready to set my default algorithm preferences to “robust?” Well, not quite. There is some chance that useful samples, sometimes required for building the model over a wide range of the data, will be thrown out. But I think I’ll at least review the robust results now each time I make a model on any medium or large data set, and consider why the robust method identifies them as outliers.

FYI, for those of you using PLS_Toolbox or Solo, you can access the robust option in PCA, PCR and PLS from the analysis window by choosing Edit/Options/Method Options.

Finally, I should note that the robust methods in our products are there due to a collaboration with Mia Hubert and her Robust Statistics Group at Katholieke Universiteit Leuven, and in particular, Sabine Verboven. They have been involved with the development of LIBRA, A MATLAB LIBrary for Robust Analysis. Our products rely on LIBRA for the robust “engines.” Sabine spent considerable time with us helping us integrate LIBRA into our software. Many thanks for that!

BMW

Welcome to the 64-bit party!

May 3, 2010

Unscrambler X is out, and CAMO is touting the fact that it is 64-bit. We say, “Welcome to the party!” MATLAB has had 64-bit versions out since April, 2006. That means that users of our PLS_Toolbox and MIA_Toolbox software have enjoyed the ability to work with data sets larger than 2GB for over 4 years now. Our stand-alone packages, Solo and Solo+MIA, have been 64-bit since June, 2009.

And how much is Unscrambler X? They don’t post their prices like we do. So go ahead and get a quote on Unscrambler, and then compare. We think you’ll find our chemometric software solutions to be a much better value!

BMW

Best Poster iPods Ordered

Apr 28, 2010

As in past years, this year’s Eigenvector University includes a poster session where users of MATLAB, PLS_Toolbox and Solo can showcase their work in the field of chemometrics. It is also a chance to discuss unsolved problems and future directions, over a beer, no less.

This year’s crop of posters will be judged by Bruce Kowalski, co-founder of the field of chemometrics. For their efforts, the top two poster presenters will receive Apple iPod nanos! This year I’ve ordered the 16GB models that record and display video and include an FM tuner with Live Pause. These are spiffy, for sure. We’ll have one in blue (just like the EVRI logo!) and one in orange (our new highlight color, new website coming soon!). Both are engraved “Eigenvector University 2010, Best Poster.”

There is still time to enter the poster contest. Just send your abstract, describing your chemometric achievements and how you used MATLAB and/or our products, to me, bmw@eigenvector.com. Then be ready to present your poster at the Washington Athletic Club in downtown Seattle, at 5:30pm on Tuesday, May 18. The poster session is free, no need to register for EigenU classes.

You could win!

BMW

EigenU 2010 on-track to be biggest ever

Apr 18, 2010

The Fifth Edition of Eigenvector University is set for May 16-21, 2010. Once again we’ll be at the beautiful Washington Athletic Club in Seattle. This year we’ll be joined by Bruce Kowalski, co-founder (with Svante Wold) of the field of Chemometrics. Rasmus Bro will also be there, along with the entire Eigenvector staff.

Registrations are on-track to make this the biggest EigenU ever. All of the 17 classes on the schedule are a go! We’re also looking forward to Tuesday evening’s Poster Session (with iPod nano prizes), Wednesday evening’s PowerUser Tips & Tricks session, and Thursday evening’s Workshop Dinner. It is going to be a busy week!

BMW

Another EAS Meeting

Nov 15, 2009

Hard to believe that a year has passed, but I’m once again back at the Eastern Analytical Symposium, in Somerset NJ. EAS 2009 started today (Sunday, Nov. 15) with short courses, and will continue through Thursday (Nov. 19).

I’m teaching today with Don Dahlberg. Once again we’re presenting “Chemometrics without Equations” to a new group of chemometric neophytes. It’s a good class this year, a dozen registrants, all with real-world data problems they’d like to solve.

The trade show part of EAS starts tomorrow. I’m happy to report that our booth arrived safely this year. I’m excited to show off the new versions of our software, including version 5.5 of PLS_Toolbox, Solo and Solo+MIA, and version 2.0 of MIA_Toolbox. If you are in the neighborhood, please drop by booth #329 for a demo. Mention this blog and I’ll give you a free 2G USB drive! (Limited to first 50 requests.)

I’m also looking forward to the session for the Achievements in Chemometrics Award, which Eigenvector sponsors. This year Romà Tauler is the award recipient. He is being recognized for his work with curve resolution methods, and it is certainly well-deserved!

I’ll check back with another report on EAS 2009 later this week.

BMW

Chemometric “how to” videos on-line

Nov 12, 2009

If a picture is worth a thousand words, what’s a video worth?

Here at EVRI we’ve started developing a series of short videos that show “how to” do various chemometric tasks with our software packages, including our new PLS_Toolbox 5.5 and Solo 5.5. Some of the presentations are pretty short and specific, but others are a little longer (10-15 minutes) and are a blend of teaching a bit about the method being used while showing how to do it in the software.

An example of the latter is “PCA on Wine Data” which shows how to build a Principal Components Analysis model on a small data set concerning the drinking habits, health and longevity of the population of 10 countries. Another movie, “PLS on Tablet Data” demonstrates building a Partial Least Squares calibration for a NIR spectrometer to predict assay values in pharmaceutical tablets.

While I was at it, I just couldn’t help producing a narrated version of our “Eigenvector Company Profile.”

We plan to have many more of these instructional videos that cover aspects of chemometrics and our software from basic to advanced. We hope you find each of them worth at least 1000 words!

BMW

The Gang’s all here: Chemometric software updates released

Nov 5, 2009

In this case, “the Gang” is all of our most popular software packages, and they’ve all gotten substantial improvements. This includes our flagship MATLAB toolbox for chemometrics, PLS_Toolbox, and its stand-alone version, Solo, plus our products for Multivariate Image Analysis, MIA_Toolbox and the stand-alone Solo+MIA.

As evidenced by the release notes, PLS_Toolbox and Solo received a host of additions and improvements. I’m particularly geeked about the performance improvements we’ve made to our Multivariate Curve Resolution (MCR) code, which has been sped up by a factor of 15-25, and the addition of a new interface for Correlation Spectroscopy. We’ve also added a lot of new file import/export options and further refined many of the plotting tools, making it easier than ever to get the information you want right in front of you.

Though considerable, the updates for PLS_Toolbox and Solo might be called evolutionary. But the MIA_Toolbox/Solo+MIA upgrade is revolutionary. The main interface for MIA is now the Image Manager, which provides a place to load, organize, explore, and manipulate images before further analysis with other Eigenvector tools. This, plus the new Trend Tool, makes it easy to explore and edit multivariate images. The whole workflow is streamlined. Add to this the host of analysis methods available, the improved computational tools for things like MCR, and the availability of our tools on 64-bit platforms, and you’ve got a very powerful set of tools for dealing with large multivariate images!

Existing users with current maintenance contracts can download the new tools from their accounts. New users can order or get free 30-day demos by creating an account.

Enjoy the updates!

BMW

Carl Duchesne Wins Best Poster at MIA Workshop

Oct 27, 2009

The International Workshop on Multivariate Image Analysis was held September 28-29 in Valencia, Spain. We weren’t able to make it, but we were happy to sponsor the Best Poster prize, which was won by Carl Duchesne. Carl is an Assistant Professor at Université Laval, in Sainte-Foy, Quebec, Canada, where he works with the Laboratoire d’observation et d’optimisation des procédés (LOOP).

With co-authors Ryan Gosselin and Denis Rodrigue, Prof. Duchesne presented “Hyperspectral Image Analysis and Applications in Polymer Processing.” The poster describes how a spectral imaging system combined with texture analysis can be used with multivariate models to predict the thermo-mechanical properties of polymers during their manufacture. The system can also be used to detect abnormal processing conditions, what we would call Multivariate Statistical Process Control (MSPC).

For his efforts Carl received a copy of our Solo+MIA software, which is our stand-alone version of PLS_Toolbox + MIA_Toolbox. We trust that Carl and his group at Laval will find it useful in their future MIA endeavors. Congratulations Carl!

BMW