Archive

Posts Tagged ‘bayesian networks’

A Good Introduction to MapReduce

MapReduce is a framework for efficiently processing tasks that can be parallelized over a cluster or grid. A good introduction can be found at the link below.

http://en.wikipedia.org/wiki/MapReduce#Example

In a sense, the MapReduce framework is very similar to the message-passing algorithm in graphical models, where Map and Reduce are comparable to building the (tree) structure and marginalizing the messages, respectively. So I think MapReduce could make inference feasible for large-scale graphical models.
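To make the Map and Reduce phases concrete, here is a minimal word-count sketch in plain MATLAB. This is not a distributed implementation, just an illustration of the two phases; the documents and variable names are my own.

    % toy corpus
    docs = {'the cat sat', 'the dog sat'};

    % Map phase: each document is processed independently and emits its
    % words (conceptually, a stream of (word, 1) key-value pairs)
    pairs = {};
    for d = 1:numel(docs)
        words = strsplit(docs{d});
        for w = 1:numel(words)
            pairs{end+1} = words{w}; %#ok<AGROW>
        end
    end

    % Reduce phase: the values for each key are aggregated (summed),
    % analogous to marginalizing messages per variable
    counts = containers.Map();
    for p = 1:numel(pairs)
        key = pairs{p};
        if isKey(counts, key)
            counts(key) = counts(key) + 1;
        else
            counts(key) = 1;
        end
    end
    disp(counts.keys); disp(counts.values);

In a real MapReduce deployment the Map loop runs in parallel across machines and the framework groups the pairs by key before the Reduce step.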

Irregular Tree Structure Bayesian Networks (ITSBNs)

February 24, 2011

This is my ongoing work on structured image segmentation. I am about to submit the algorithm for publication, so the details will be posted after submission. ^_^ Please wait.

<details will appear soon>

Here are some results:

original image

segmented using GMM

segmented using ITSBN

How to use the ITSBN toolbox

  1. Install the Bayes Net Toolbox (BNT) for MATLAB and VLFeat. Let's say we put them in the folders Z:\research\FullBNT-1.0.4 and Z:\research\vlfeat-0.9.9, respectively.
  2. Download and unpack the ITSBN toolbox. Let's say the folder location is "Z:\research\ITSBN". The folder contains some MATLAB files and two subdirectories: 1) Gaussian2Dplot and 2) QuickShift_ImageSegmentation.
  3. Put any color image to be segmented in the same folder. In this case, we use one from the Berkeley segmentation dataset (BSDS500), and the folder is 'Z:\research\BSR\BSDS500\data\images\test'.
  4. Open the file main_ITSBNImageSegm.m in MATLAB and make sure that all the paths point to their corresponding folders (see the sketch after this list):
    1. vlfeat_dir = 'Z:\research\vlfeat-0.9.9\toolbox/vl_setup';
    2. BNT_dir = 'Z:\research\FullBNT-1.0.4';
    3. image_dataset_dir = 'Z:\research\BSR\BSDS500\data\images\test';
  5. Run main_ITSBNImageSegm.m. When it finishes, you should see folders of segmented images in the folder 'Z:\research\ITSBN'.
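For reference, the path setup at the top of main_ITSBNImageSegm.m might look roughly like the sketch below. The exact lines in the actual script may differ; the run and addpath calls here are just one common way to initialize the two toolboxes.

    % path setup (a sketch; adjust to wherever you unpacked the toolboxes)
    vlfeat_dir = 'Z:\research\vlfeat-0.9.9\toolbox/vl_setup';
    BNT_dir = 'Z:\research\FullBNT-1.0.4';
    image_dataset_dir = 'Z:\research\BSR\BSDS500\data\images\test';

    run(vlfeat_dir);            % executes vl_setup to initialize VLFeat
    addpath(genpath(BNT_dir));  % adds BNT and all its subfolders to the path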

Balanced-Tree Structure Bayesian Networks (TSBNs) for Image Segmentation

February 15, 2011

In this work, I made a more general version of the Gaussian mixture model (GMM) by putting a prior balanced-tree structure over the class variables (mixture components), in the hope that the induced correlation among the hidden variables would suppress noise in the resulting segmentation. Unlike the supervised image classification of [1], this work focuses on totally unsupervised segmentation using a TSBN. It is interesting to see how the data "self-organize" according to the initial structure given by the TSBN.

The MATLAB code is available here. The code calls inference routines in the Bayes Net Toolbox (BNT), so you will want to install that toolbox before using my TSBN code.
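For comparison, a plain GMM baseline (the model that the TSBN generalizes) can be run in a few lines using the MATLAB Statistics Toolbox. This sketch is purely illustrative and is not part of the TSBN code; the feature matrix here is a random stand-in.

    % GMM-only baseline: cluster per-pixel feature vectors into K classes
    K = 3;              % predefined number of classes
    X = rand(256, 16);  % stand-in for the 16-D feature vectors, one row per pixel
    gm = fitgmdist(X, K, 'RegularizationValue', 1e-6);  % gmdistribution.fit in older MATLAB
    labels = cluster(gm, X);  % hard class assignment for each pixel

Unlike the TSBN, this baseline treats every pixel independently, which is exactly why its segmentations tend to be noisier.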

original image, 136x136

16x16 feature image; each pixel represents a 16-dimensional vector

Segmentation result using TSBN, with the predefined number of classes set to 3. It turns out that the classes are meaningful, as they are sky, skier and snow.

[1] X. Feng, C. K. I. Williams, and S. N. Felderhof, "Combining Belief Networks and Neural Networks for Scene Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 467–483, April 2002.

Variational Bayesian Gaussian Mixture Model (VBGMM)

October 14, 2010

The EM algorithm for the Gaussian mixture model (EMGMM) has been a very popular tool in statistics and machine learning. In EMGMM, the model parameters are estimated using the maximum likelihood (ML) method; recently, however, there has been a need to put prior probabilities on the model parameters. The GMM then becomes a hierarchical Bayesian model whose layers, from root to leaf, are the parameters, the mixture proportions, and the observations, respectively. Inference in this hierarchical model originally required challenging integration techniques or stochastic sampling (e.g., MCMC), and the latter takes a lot of computational time to sample from the distribution.

Fortunately, there are approximation techniques, such as the mean-field variational approximation, that allow fast inference and give a good approximate solution. The variational approximation is very well explained in Chapter 10 of Bishop's classic machine learning textbook [1], which includes a very good example on the variational Bayesian Gaussian mixture model. In fact, Bishop does a great job of explaining and deriving VBGMM; however, for a beginner, the algebra of the derivation can be challenging. Since the derivation contains a lot of interesting techniques that can be applied to other variational approximations, and the text skips some details, I decided to "fill in" the missing parts and make a derivation tutorial out of it, which is available here [pdf]. I also made available the detailed derivations of some examples preceding the VBGMM section in the text [1] [pdf]. VBGMM first appeared in an excellent paper [2]. Again, for the introduction and for more detail on the interpretation of the model, please refer to the original paper or Bishop's textbook.
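For readers who want the gist before the full derivation: in the mean-field approximation, the intractable posterior over the latent variables and parameters is replaced by a factorized distribution, and each factor is updated using the expected log joint under the remaining factors. In the notation of [1]:

\[
q(\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = q(\mathbf{Z})\, q(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}),
\qquad
\ln q_j^{\ast}(\mathbf{Z}_j) = \mathbb{E}_{i \neq j}\!\left[\ln p(\mathbf{X}, \mathbf{Z})\right] + \text{const}.
\]

Cycling these updates until convergence yields the VBGMM algorithm derived in the tutorial.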

A MATLAB implementation of VBGMM can be found from Prof. Kevin Murphy's group at UBC [link] or [link]. The code requires the Netlab toolbox (a collection of very good MATLAB code for machine learning).

[1] “Pattern Recognition and Machine Learning” (2006) by Christopher Bishop [link]
[2] "A Variational Bayesian Framework for Graphical Models" (NIPS 1999) by Hagai Attias [link]

Deep Belief Networks

September 2, 2010

Derivation of Inference and Parameter Estimation Algorithm for Latent Dirichlet Allocation (LDA)

June 15, 2010

As you may know, Latent Dirichlet Allocation (LDA) [1] has become a backbone of text/image annotation these days. As of today (June 16, 2010), 2,045 papers had cited the LDA paper, which might be a good indicator of how important it is to understand the paper thoroughly. In my personal opinion, the paper contains a lot of interesting things, for example, modeling with graphical models, and inference and parameter learning using variational approximations such as the mean-field variational approximation, which can be very useful when reading other papers that extend the original LDA paper.

The original paper explains the concept and how to set up the framework clearly, so I would suggest reading that part from the original paper. However, newcomers to the topic may find it difficult to understand how the formulas in the paper are derived. My tutorial paper therefore focuses solely on how to mathematically derive the algorithm in the paper, so the best way to use it is as a companion to the original paper. I hope you find it useful and enjoyable.
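To give a flavor of what is being derived: the variational inference in [1] approximates the per-document posterior by a factorized distribution

\[
q(\theta, \mathbf{z} \mid \gamma, \phi) = q(\theta \mid \gamma) \prod_{n=1}^{N} q(z_n \mid \phi_n),
\]

where the Dirichlet parameter \(\gamma\) and the multinomial parameters \(\phi_n\) are the free variational parameters. My tutorial walks through how their coordinate-ascent updates are obtained.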

You can download the pdf file here.

[1] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, 2003.

Speech Processing and Machine Recognition of Speech

April 19, 2010

Speech processing has been an active area of research for decades. By nature, the research is highly multidisciplinary, involving linguistics, psychology, anatomy, mathematics, and machine learning, so newcomers to the area might wonder what would be a good tutorial to read in order to catch up with the topic. Personally, without any background in linguistics or psychology, I started with the classic HMM paper [1] by Rabiner. However, that paper focuses on the HMM part of speech recognition rather than an overview of speech processing. Today I found a series of good video lectures, and I thought they might help summarize what people have been doing in the area. Hopefully you find them useful.

  • Hidden Markov Model (HMM) Toolbox for MATLAB by Kevin Murphy [link]: the web page links to many useful papers and provides an HMM toolbox for MATLAB.
  • JHU Summer School on Human Language Technology [link]: the web page contains a lot of video lectures hosted by videolectures.net.
  • Here is the lecture by Hynek Hermansky; you may want to watch this first:

"Introduction to Speech Processing" by Hynek Hermansky

Next, you may want to know more technical details:

"Machine Recognition of Speech" by Brian Kingsbury
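If you want to get your hands dirty with HMMs right away, here is a toy example using MATLAB's Statistics Toolbox. The two-state model and its numbers are made up for illustration; Rabiner's tutorial [1] explains the underlying algorithms.

    % toy discrete HMM: sample a sequence, then recover the states with Viterbi
    TRANS = [0.9 0.1;      % state-transition probabilities
             0.2 0.8];
    EMIS  = [0.7 0.2 0.1;  % per-state emission probabilities over 3 symbols
             0.1 0.3 0.6];

    [seq, states] = hmmgenerate(100, TRANS, EMIS);  % sample observations and true states
    decoded = hmmviterbi(seq, TRANS, EMIS);         % most likely state path
    fprintf('Viterbi matched %d%% of the true states\n', ...
        round(100*mean(decoded == states)));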

Recommended Reading:

[1] L. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, 77(2):257–286, 1989. [pdf]