Archive

Posts Tagged ‘mathematics’

Awesome seminars at UW

April 3, 2012 1 comment

There are some fascinating seminars sponsored by UW, and most of them are recorded:

CSE Colloquia:
Every Tuesday 3:30 pm
https://www.cs.washington.edu/htbin-post/mvis/mvis/Colloquia#current

Yahoo! Machine Learning Seminar
Every Tuesday, 12–1 pm
http://ml.cs.washington.edu/seminars

UWTV: Research/Technology/Discovery Channel
Broadcasts all the new findings, research, and technology for free!
http://www.uwtv.org/

 

 

 

Simple Classification Models: LDA, QDA and Linear Regression

July 28, 2011 Leave a comment

Finally, my website has been set free from the hacker, at least for now ^_^. In my backup directory, I found some notes I made for the Pattern Recognition class I taught in Spring 2010. The notes contain the detailed derivations of:

  • Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) [pdf]
  • Linear Regression for Classification [pdf]
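
For a quick refresher on where those derivations end up (standard results, written here in the usual notation of class means \mu_k, covariances \Sigma_k, and priors \pi_k): with Gaussian class-conditional densities, the discriminant for class k is

    \delta_k(x) = -\tfrac{1}{2}\ln|\Sigma_k| - \tfrac{1}{2}(x-\mu_k)^\top \Sigma_k^{-1} (x-\mu_k) + \ln\pi_k \quad \text{(QDA)}

With a shared covariance \Sigma_k = \Sigma, the quadratic term is identical across classes and cancels, leaving a discriminant linear in x:

    \delta_k(x) = x^\top \Sigma^{-1}\mu_k - \tfrac{1}{2}\mu_k^\top \Sigma^{-1}\mu_k + \ln\pi_k \quad \text{(LDA)}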

Hope this can be useful.

3D LIDAR Point-cloud Segmentation

February 8, 2011 4 comments

One of the big challenges in 3D LIDAR point-cloud segmentation is detailed ground extraction, especially in highly vegetated areas. Some applications require extracting the ground points from the LIDAR data while preserving as much detail as possible; however, most of the time the ground details and the noise are coupled, and it is difficult to remove one while preserving the other. Imagine you have a LIDAR point cloud over a creek covered by multilayer canopies, including ground flora, and you would like to extract the creek from the data set while preserving as much of the ground detail as you can. This would be a very labor-intensive task for a human, so a better choice might be to develop an automatic process so that a computer can complete the task for us. Even for a computer this is demanding, because the number of points in such an area is extremely high.
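
To make "detailed ground extraction" concrete, here is a deliberately naive single-scale baseline in Python (my own illustration, not the multiscale algorithm described below): keep only the points near the lowest return in each grid cell. Under dense canopy, a single tolerance either erodes real terrain detail or lets low vegetation through, which is exactly the coupling problem described above.

    import numpy as np

    def naive_ground_filter(points, cell=1.0, tol=0.3):
        # points: (N, 3) array of x, y, z returns; cell size and tolerance in meters.
        ix = np.floor(points[:, 0] / cell).astype(np.int64)
        iy = np.floor(points[:, 1] / cell).astype(np.int64)
        key = ix * (iy.max() - iy.min() + 1) + (iy - iy.min())  # one integer id per grid cell
        cells, inv = np.unique(key, return_inverse=True)
        zmin = np.full(len(cells), np.inf)
        np.minimum.at(zmin, inv, points[:, 2])                  # lowest elevation in each cell
        return points[points[:, 2] <= zmin[inv] + tol]          # keep near-lowest points as "ground"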

[Figure: the LIDAR point cloud before and after vegetation filtering]

In 2004, my former adviser, Dr. Kenneth C. Slatton, and I developed a multiscale, information-theoretic algorithm for ground segmentation. The method works well in real-world applications and has been used in several publications [2]. The MATLAB toolbox is available here. A brief manual can be found here.

I would like to thank my colleagues at the National Center for Airborne Laser Mapping (NCALM), the Adaptive Signal Processing Laboratory (ASPL), and the Geosensing group at the University of Florida, who have used the algorithm in their work and given tons of useful suggestions for improving it; Dr. Jhon Caceres for the very nice GUI; and Dr. Sowmya Selvarajan for the first-ever manual for this toolbox. Last but not least, I would like to thank Dr. Kenneth Clint Slatton for wonderful ideas and guidance; we still have an unpublished journal paper to finish [1].



[1] K. Kampa and K. C. Slatton, “Information-Theoretic Hierarchical Segmentation of Airborne Laser Swath Mapping Data,” IEEE Transactions on Geoscience and Remote Sensing, (in preparation).

[2] K. Kampa and K. C. Slatton, “An Adaptive Multiscale Filter for Segmenting Vegetation in ALSM Data,” Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), vol. 6, Sep. 2004, pp. 3837–3840.

Brief slides can be found here.

Bayesian Inference

January 27, 2011 1 comment

Recently I have been looking for a good, short resource on fundamental Bayesian inference. There are tons of good, relevant materials online; unfortunately, some are too long and some are too theoretical. Come on, I want something intuitive that makes sense and is readable within 30 minutes! After a couple of hours wandering the internet and reading some good, relevant books, I decided to pick the 3 books, in my opinion of course, that contributed the most to my understanding within half an hour.

  1. Bayesian Data Analysis by Andrew Gelman et al. — If you understand the concept of the Bayesian framework and just want to review Bayesian inference, then the first pages of Chapter 2 are sufficient. Good examples can be found in Section 2.6.
  2. Bayesian core: a practical approach to computational Bayesian statistics by Jean-Michel Marin, Christian P. Robert — The book emphasizes the important concepts and has a lot of examples necessary to understand the basic ideas.
  3. Bayesian Field Theory by Jörg C. Lemm — This book provides a great perspective connecting the Bayesian framework to learning, information theory, and physics. There are a lot of figures and graphical models, making the book easy to follow.
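
As a 30-second taste of what these books cover, take the standard Beta-Binomial example (my choice of illustration, not drawn from any one of the three): observing y successes in n trials with a Beta(\alpha, \beta) prior on the success probability \theta, Bayes' rule gives

    p(\theta \mid y) \propto \theta^{y}(1-\theta)^{n-y} \cdot \theta^{\alpha-1}(1-\theta)^{\beta-1}
    \quad\Longrightarrow\quad \theta \mid y \sim \mathrm{Beta}(\alpha + y,\ \beta + n - y).

The posterior mean (\alpha + y)/(\alpha + \beta + n) interpolates between the prior mean and the sample proportion, so the prior acts like \alpha + \beta pseudo-observations.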

Variational Bayesian Gaussian Mixture Model (VBGMM)

October 14, 2010 4 comments

The EM algorithm for the Gaussian mixture model (EMGMM) has been a very popular model in statistics and machine learning. In EMGMM the model parameters are estimated using the maximum likelihood (ML) method; however, recently there has been a need to put prior probabilities on the model parameters. The GMM then becomes a hierarchical Bayesian model whose layers, from root to leaf, are the parameters, the mixture proportions, and the observations, respectively. Traditionally, inference in this hierarchical model requires either challenging integration techniques or stochastic sampling techniques (e.g., MCMC); the latter take a lot of computational time to sample from the distribution.

Fortunately, there are approximation techniques, such as mean-field variational approximation, that make inference fast and give a good approximate solution. The variational approximation is very well explained in Chapter 10 of the classic machine learning textbook [1] by Bishop, which includes a very good example on the variational Bayesian Gaussian mixture model. In fact, Bishop did a great job explaining and deriving VBGMM; however, for a beginner, the algebra of the derivation can be challenging. Since the derivation contains a lot of interesting techniques that can be applied to other variational approximations, and the text skips some details, I decided to “fill in” the missing parts and turn them into a derivation tutorial, which is available here [pdf]. I also made available the detailed derivations of some examples that precede the VBGMM section in the text [1] [pdf]. VBGMM first appeared in an excellent paper [2]. Again, for an introduction and for more detail on the interpretation of the model, please refer to the original paper or Bishop’s textbook.
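
For readers who just want the punchline: mean-field variational inference assumes a factorized posterior q(Z) = \prod_j q_j(Z_j) over the latent variables and parameters, and the factor that maximizes the evidence lower bound satisfies the standard fixed-point equation (see Chapter 10 of [1]):

    \ln q_j^{*}(Z_j) = \mathbb{E}_{i \neq j}\left[ \ln p(X, Z) \right] + \mathrm{const},

where the expectation is over all factors except q_j. Cycling this update through the VBGMM factors (mixing weights, means, precisions, and responsibilities) yields the algorithm whose algebra the tutorial fills in.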

A MATLAB implementation of the VBGMM can be found at Prof. Kevin Murphy’s group at UBC [link] or [link]. The code requires the Netlab toolbox (a bunch of very good MATLAB code for machine learning).
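
If you would rather experiment in Python, a variational GMM is also available in scikit-learn (my suggestion, unrelated to the MATLAB code above); a minimal sketch:

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Toy data: two well-separated 2-D Gaussian blobs.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])

    # Deliberately over-specify the number of components; the variational
    # treatment shrinks the weights of unnecessary components toward zero.
    vb = BayesianGaussianMixture(n_components=6, max_iter=500, random_state=0).fit(X)
    print(np.round(vb.weights_, 3))  # most of the mass lands on ~2 components

This weight-pruning behavior is one practical payoff of the Bayesian treatment over plain EMGMM, which would happily use all 6 components.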

[1] “Pattern Recognition and Machine Learning” (2006) by Christopher Bishop [link]
[2] “A Variational Bayesian Framework for Graphical Models” (NIPS 1999) by Hagai Attias [link]

Notes on Variational EM algorithm

July 18, 2010 Leave a comment

This note is aimed at beginners to the variational EM algorithm. It covers the following:

  1. Motivation of variational EM
  2. Derivation in detail
  3. Geometric meaning of using KL divergence in variational EM (see the decomposition below)
  4. Traditional EM vs. variational EM
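
As a teaser for items 2 and 3, the identity at the heart of the derivation is the standard decomposition of the log-likelihood, valid for any distribution q(Z) over the latent variables:

    \ln p(X \mid \theta) = \mathcal{L}(q, \theta) + \mathrm{KL}\big( q(Z) \,\|\, p(Z \mid X, \theta) \big),
    \qquad \mathcal{L}(q, \theta) = \sum_{Z} q(Z) \ln \frac{p(X, Z \mid \theta)}{q(Z)}.

Because the KL term is nonnegative, \mathcal{L} is a lower bound on the log-likelihood; EM alternately tightens the bound in q (E-step) and in \theta (M-step), and variational EM restricts q to a tractable family when the exact posterior is unavailable.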

Please download the note here. Your comments and suggestions are greatly appreciated.

Derivation of Inference and Parameter Estimation Algorithm for Latent Dirichlet Allocation (LDA)

June 15, 2010 11 comments

As you may know, Latent Dirichlet Allocation (LDA) [1] has become something of a backbone of text/image annotation these days. As of today (June 16, 2010), 2045 papers have cited the LDA paper. That might be a good indicator of how important it is to understand this paper thoroughly. In my personal opinion, this paper contains a lot of interesting things, for example, modeling with graphical models, and inference and parameter learning via variational approximations such as the mean-field variational approximation, all of which can be very useful when reading other papers that extend the original LDA paper.

The original paper explains the concepts and how to set up the framework clearly, so I would suggest the reader get that part from the original paper. However, for a newcomer to this topic, it might be difficult to understand how the formulas in the paper are derived. Therefore my tutorial paper focuses solely on how to mathematically derive the algorithm in the paper. Hence, the best way to use this tutorial is alongside the original paper. I hope my paper can be useful and enjoyable.

You can download the pdf file here.
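
For orientation, the heart of what gets derived is the pair of coordinate-ascent updates for the variational parameters in [1]: \phi_{ni}, the variational multinomial over topics for the n-th word, and \gamma, the variational Dirichlet over a document's topic proportions:

    \phi_{ni} \propto \beta_{i w_n} \exp\Big( \Psi(\gamma_i) - \Psi\big( \textstyle\sum_{j=1}^{k} \gamma_j \big) \Big),
    \qquad \gamma_i = \alpha_i + \sum_{n=1}^{N} \phi_{ni},

where \Psi is the digamma function. These two updates are iterated to convergence for each document, and estimating \alpha and \beta wraps them in a variational EM outer loop; the tutorial walks through exactly where these expressions come from.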

[1] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, 2003.