Posts Tagged ‘machine learning’

BigML: a machine learning “sandbox”

Today I found an interesting website, BigML, which seems to offer a playground for people, especially ML researchers, to experiment with standard machine learning techniques on their own data sets, or even on their businesses.

The main website is here:

You can try BigML for free in development mode, though I think the 1 MB limit on the training data set is pretty restrictive.


A good Introduction on MapReduce

MapReduce is a framework for efficiently processing parallelizable tasks on a cluster or grid. A good introduction can be found in the link below.

In a sense, the MapReduce framework is very similar to the message-passing algorithm in graphical models, where Map and Reduce are comparable to building the (tree) structure and marginalizing the messages, respectively. So, I think MapReduce could make inference feasible for large-scale graphical models.
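As a toy illustration of the programming model (a hypothetical single-process word count of my own; a real cluster would shard the map and reduce phases across machines), the Map/shuffle/Reduce phases can be sketched in Python:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Map: emit a (key, value) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, ready for reduction.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: combine (here, sum) all values for one key.
    return key, sum(values)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["the"])  # 3
```

The parallelism comes from the fact that each `map_phase` call and each per-key `reduce_phase` call is independent, so they can run on different machines.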

Awesome seminars at UW

April 3, 2012

There are some fascinating seminars sponsored by UW, and most of them are recorded:

CSE Colloquia:
Every Tuesday 3:30 pm

Yahoo! Machine Learning Seminar
Every Tuesday from 12 – 1 pm

UWTV: Research/Technology/Discovery Channel
Broadcasts all the new findings, research and technology for free!!




Cluster Evaluation using Adjusted Rand Index (ARI)

August 17, 2011

Here are the 2 partitions mentioned in Example 1 of the tutorial paper “Details of the Adjusted Rand index and Clustering algorithms, Supplement to the paper ‘An empirical study on Principal Component Analysis for clustering gene expression data’ (to appear in Bioinformatics)” [pdf]

Partition U (ground truth) and V (predicted)

And I think what they did in the example is exactly the same as the following:

a = |(4,5), (7,8), (7,9), (7,10), (8,9), (8,10), (9,10)| = (2 choose 2) + (4 choose 2) = 1 + 6 = 7

b = |(1,2), (3,4), (3,5), (6,4), (6,5), (3,6)| = 6

c = |(1,3), (2,4), (2,5), (6,7), …, (6,10)| = 7

d = |(1,4), …, (1,10), (2,3), (2,6), …, (2,10), (3,7), …, (3,10), (4,7), …, (4,10), (5,7), …, (5,10)| = 25

where (i,j) denotes the pair (or edge) between node i and node j. They then use these values of a, b, c and d to evaluate the Rand index and the adjusted Rand index.
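Plugging the pair counts above into the index formulas, here is a quick Python check (a sketch of my own; the values of `a`, `b`, `c`, `d` are the counts from the example, and the ARI expression is the pair-count form of the contingency-table formula in the tutorial):

```python
# Pair counts from the example:
# a: same cluster in both U and V; b: same in U only;
# c: same in V only; d: different clusters in both.
a, b, c, d = 7, 6, 7, 25
n_pairs = a + b + c + d  # (10 choose 2) = 45 pairs for 10 points

# Rand index: fraction of pairs on which the two partitions agree.
rand_index = (a + d) / n_pairs

# Adjusted Rand index: Rand index corrected for chance agreement.
expected = (a + b) * (a + c) / n_pairs
max_index = 0.5 * ((a + b) + (a + c))
ari = (a - expected) / (max_index - expected)

print(round(rand_index, 4))  # 0.7111
print(round(ari, 4))         # 0.3126
```

Note how the chance correction pulls the score down from 0.71 to 0.31, which is why ARI is preferred for comparing clusterings with different numbers of clusters.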

Effects of adding loading factors to a covariance matrix

July 29, 2011

From my previous post, we know that the update equation for the covariance matrix might not be numerically stable because the matrix may not stay positive definite. An easy way to stabilize the algorithm is to add a relatively small positive number, a.k.a. a loading factor, to the diagonal entries of the covariance matrix. But does the loading factor affect the likelihood or the convergence of the EM algorithm?

Apparently, adding the loading factor to the covariance matrix does impact the log-likelihood value. I ran some experiments on the issue, and let me share the results with you, shown in the learning curve (log-likelihood curve) of ITSBN with the EM algorithm below. The factor is applied to the matrix only when the determinant of the covariance matrix is smaller than 10^{-6}. Five different factors were used in this experiment: 10^{-8}, 10^{-6}, 10^{-4}, 10^{-3} and 10^{-2}. The results show that the learning curves are still monotonically increasing* and level off near the end. Furthermore, we found that the level-off values are highly associated with the value of the factor: the bigger the factor, the smaller the level-off value. This suggests that we should pick the smallest factor possible in order to stay as close to the ideal learning curve as possible. Note that the loading factor is not added to the covariance matrix until the second iteration.

log-likelihood curve with different loading factors

* Though I don’t think this is always the case, because the factor is not added to the matrix consistently, and hence, when it is added, it might pull the log-likelihood down to a lower value. However, it is empirically shown that the log-likelihood is still monotonically increasing even when the factor is big.
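A minimal sketch of the stabilization step described above (the function name is my own, and the thresholds are just the ones used in the experiment):

```python
import numpy as np

def add_loading_factor(cov, factor=1e-6, det_threshold=1e-6):
    """Add a small loading factor to the diagonal of a covariance
    matrix, but only when the matrix is close to singular."""
    if np.linalg.det(cov) < det_threshold:
        cov = cov + factor * np.eye(cov.shape[0])
    return cov

# A nearly singular 2x2 covariance matrix (determinant ~ 1e-9):
cov = np.array([[1.0, 1.0],
                [1.0, 1.0 + 1e-9]])
stabilized = add_loading_factor(cov, factor=1e-4)
print(np.linalg.det(stabilized) > 0)  # True: now safely invertible
```

Because the factor is added only below the determinant threshold, a well-conditioned covariance matrix passes through unchanged, which is why the learning curves above are affected only near convergence.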

What makes a covariance matrix NOT positive definite in the EM algorithm?

July 29, 2011

There are many plausible reasons. One common reason is that at least one Gaussian component has no cluster members in close affinity. This situation occurs when the data clusters are very narrow relative to the distance between clusters; in other words, when the intra-cluster distance is much smaller than the inter-cluster distance.

Let’s assume we have 3 data clusters A, B and C, with A and B almost merged together and both very far away from C, and we want to cluster the data into 3 components using the EM algorithm. Suppose the initial locations of the 3 centroids are in the middle of the space among the three clusters, and it happens that one centroid has no “nearest” members. This also means that 2 components would be quite sufficient to model the whole data set rather than 3. Let’s label the deserted centroid with the ID ‘2’. In this case, the posterior marginal distribution of each data sample will have a large value for label 1 or 3, but no sample gives a large value for label 2. In fact, to be more precise, the posterior marginal for label 2 will be virtually zero for all the data samples. Unfortunately, the update equation for a covariance matrix weights each atom (i.e., (x_i-\mu_2)(x_i-\mu_2)^{\top}) of the updated covariance matrix by its corresponding class posterior marginal p(x_i=c_2|evidence), and hence gives a zero matrix for the covariance matrix of class label 2. So, as you can see, it is not always easy to use EM to cluster widely separated data.
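To see the collapse numerically, here is a hypothetical 1-D sketch (the data, centroids and variance are made up for illustration): the deserted component’s responsibilities underflow to zero, so its weighted covariance update degenerates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Clusters A and B nearly merged near 0, cluster C far away at 100.
x = np.concatenate([rng.normal(0.0, 0.1, 50),    # cluster A
                    rng.normal(0.3, 0.1, 50),    # cluster B (close to A)
                    rng.normal(100.0, 0.1, 50)]) # cluster C (far away)

# Centroid 2 is "deserted": it sits between the groups, far from all samples.
mus = np.array([0.15, 50.0, 100.0])
sigma = 1.0

# E-step: Gaussian responsibilities (equal priors, shared variance).
logp = -0.5 * ((x[:, None] - mus[None, :]) / sigma) ** 2
resp = np.exp(logp - logp.max(axis=1, keepdims=True))
resp /= resp.sum(axis=1, keepdims=True)

# Effective number of members per component: the middle component gets
# none, so its weighted covariance update is a degenerate 0/0.
nk = resp.sum(axis=0)
print(nk[1])  # 0.0 (exp(-1250) underflows to exactly zero)
```

With the deserted centroid roughly 50 units from every sample, its unnormalized responsibility is about exp(-1250), far below the smallest representable double, which is exactly the zero-covariance failure described above.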

Simple Classification Models: LDA, QDA and Linear Regression

July 28, 2011

Finally, my website was set free from the hacker–at least for now ^_^. In my backup directory, I found some notes I made for the Pattern Recognition class I taught in Spring 2010. The notes contain the details of the derivation of

  • Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) [pdf]
  • Linear Regression for Classification [pdf]

Hope this can be useful.