MapReduce is a framework for efficiently processing a task that can be parallelized over a cluster or grid. A good introduction can be found in the link below.
In a sense, the MapReduce framework is very similar to message-passing algorithms in graphical models, where Map and Reduce are comparable to building the (tree) structure and marginalizing the messages, respectively. So I think MapReduce could make inference feasible for large-scale graphical models.
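To make the analogy concrete, here is a minimal sketch of the Map/Reduce pattern in plain Python on a toy word-count task (the function names and data are my own illustration, not any particular MapReduce API). The reduce step, which aggregates every value sharing a key, plays roughly the role of marginalizing the messages collected at a node:

from collections import defaultdict

# Map phase: emit (key, value) pairs from each input document.
def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

# Shuffle: group all emitted values by key, like collecting the
# messages destined for the same node.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: aggregate each group, loosely analogous to
# marginalizing (summing out) the collected messages.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cat sat", "the dog sat"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}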
Information Theory, Pattern Recognition, and Neural Networks
Draft videos (not yet edited):
There are some fascinating seminars sponsored by UW, and most of them are recorded:
Every Tuesday 3:30 pm
Yahoo! Machine Learning Seminar
Every Tuesday from 12 – 1 pm
UWTV: Research/Technology/Discovery Channel
Broadcasts all the new findings, research, and technology for free!!
I really love this blog.
Here are the 2 partitions mentioned in Example 1 of the tutorial paper “Details of the Adjusted Rand index and Clustering algorithms, Supplement to the paper ‘An empirical study on Principal Component Analysis for clustering gene expression data’ (to appear in Bioinformatics)” (pdf).
And I think what they did in the example is exactly the following:
a = |(4,5); (7,8) (7,9) (7,10) (8,9) (8,10) (9,10)| = (2 choose 2) + (4 choose 2) = 1 + 6 = 7
b = |(1,2) (3,4) (3,5) (6,4) (6,5) (3,6)| = 6
c = |(1,3) (2,4) (2,5) (6,7) … (6,10)| = 7
d = |(1,4) … (1,10) (2,3) (2,6) … (2,10) (3,7) … (3,10) (4,7) … (4,10) (5,7) … (5,10)| = 25
where (i,j) denotes the pair (or edge) between node i and node j. They then use these a, b, c, and d values to evaluate the Rand index and the adjusted Rand index.
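As a sanity check, here is a short Python sketch that recomputes a, b, c, and d directly from the two partitions, which I reconstructed from the pairs listed above as U = {1,2}, {3,4,5,6}, {7,8,9,10} and V = {1,3}, {2,4,5}, {6,7,8,9,10}. The Rand index is (a+d)/(a+b+c+d), and the adjusted Rand index is computed with the equivalent pair-count form 2(ad - bc)/[(a+b)(b+d) + (a+c)(c+d)]:

from itertools import combinations

def pair_counts(part_u, part_v):
    # a = same cluster in both partitions, b = same in U only,
    # c = same in V only, d = different in both.
    label_u = {n: i for i, cluster in enumerate(part_u) for n in cluster}
    label_v = {n: i for i, cluster in enumerate(part_v) for n in cluster}
    a = b = c = d = 0
    for i, j in combinations(sorted(label_u), 2):
        same_u = label_u[i] == label_u[j]
        same_v = label_v[i] == label_v[j]
        if same_u and same_v:
            a += 1
        elif same_u:
            b += 1
        elif same_v:
            c += 1
        else:
            d += 1
    return a, b, c, d

U = [{1, 2}, {3, 4, 5, 6}, {7, 8, 9, 10}]
V = [{1, 3}, {2, 4, 5}, {6, 7, 8, 9, 10}]

a, b, c, d = pair_counts(U, V)
print(a, b, c, d)                 # 7 6 7 25, matching the counts above
rand = (a + d) / (a + b + c + d)  # Rand index = 32/45, about 0.711
ari = 2 * (a * d - b * c) / ((a + b) * (b + d) + (a + c) * (c + d))
print(rand, ari)                  # adjusted Rand index is about 0.313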
My paper did not pass the pdfExpress verification due to the error:
Error Font Helvetica is not embedded (111x)
Here we follow the process from the submission homepage:
If you are using the conference template in its Winword version and still get errors (or no answers) from pdfExpress, try this procedure:
1) Please check the top margin of the first page. It MUST be 2.5 cm; for OTHER pages it must be 1.9 cm.
2) Generate the pdf file
3) Then you have to embed the fonts. You can do that either
3.1) by using Acrobat Professional, or
3.2) by using the command line
gs -dSAFER -dNOPLATFONTS -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sPAPERSIZE=letter -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer -dMaxSubsetPct=100 -dSubsetFonts=true -dEmbedAllFonts=true -sOutputFile=OUTPUT.pdf -f INPUT.pdf
where INPUT.pdf is the pdf source file, OUTPUT.pdf is the output file to be submitted with the fonts embedded, and gs is the Ghostscript executable.
If this does not work, or you have other pdfExpress problems, please contact Dr. Romolo Camplani at firstname.lastname@example.org
However, even after trying this method, the locally generated pdf is rejected by the submission website. So we still have to upload the output file to pdfExpress to verify it again, and this time you can expect no errors. Then you can proceed to submit the pdfExpress-verified pdf to IJCNN; in my case the file is PID1839743.pdf, whose content is identical to ClosedFormDcsGMM_IJCNN_final_added2.tex.
From my previous post, we know that the update equation for the covariance matrix might not be numerically stable because the matrix may fail to be positive definite. An easy way to stabilize the algorithm is to add a relatively small positive number, a.k.a. a loading factor, to the diagonal entries of the covariance matrix (a code sketch of this trick is given at the end of this post). But does the loading factor affect the likelihood or the convergence of the EM algorithm?
Apparently, adding the loading factor to the covariance matrix does impact the log-likelihood value. I ran some experiments on this issue, and let me share the results with you, as seen in the learning curve (log-likelihood curve) of ITSBN with the EM algorithm below. The factor is applied to the matrix only when the determinant of the covariance matrix falls below a small threshold. Five different factors were used in this experiment. The results show that the learning curves are still monotonically increasing* and level off near the end. Furthermore, we found that the level-off values are highly associated with the value of the factor: the bigger the factor, the smaller the level-off value. This suggests that we should pick the smallest factor possible in order to stay as close to the ideal learning curve as possible. Note that the loading factor is not added to the covariance matrix until the second iteration.
* Though I don’t think this is always the case: because the factor is not added consistently at every iteration, the log-likelihood might drop to a lower value at the iterations where it is added. However, it is empirically shown that the log-likelihood is still monotonically increasing even when the factor is big.
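For reference, here is a minimal Python/NumPy sketch of the stabilization trick described in this post. The determinant threshold and loading factor below are placeholder assumptions of my own, not the values used in the experiment:

import numpy as np

def stabilize_covariance(cov, loading_factor=1e-6, det_threshold=1e-12):
    # Add a small loading factor to the diagonal only when the
    # covariance matrix is close to singular. Both default values
    # are placeholders, not the values from the experiment.
    if np.linalg.det(cov) < det_threshold:
        cov = cov + loading_factor * np.eye(cov.shape[0])
    return cov

# Example: a rank-deficient covariance estimate from an M-step.
cov = np.array([[1.0, 1.0],
                [1.0, 1.0]])  # det = 0, not positive definite
cov = stabilize_covariance(cov)
print(np.linalg.det(cov) > 0)  # True: the matrix is now invertible

In a full EM run you would apply this right after each M-step covariance update, which is exactly where the instability can appear.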