Archive for February, 2008


February 29, 2008

Awesome! It can even write Pali: กฺตญฺ

Categories: Uncategorized

Note on ONR meeting in Orlando – Feb 13, 2008

February 13, 2008

ONR meeting Feb 13, 2008

Morning session

Curvelet decomposition:

Quite a new topic, around since 2001

Pretty much like wavelets, but the kernel function is designed so that directionality can be specified.

Unlike wavelets, the high-frequency subbands of the curvelet transform contain most of the information, while little information is contained in the low-frequency subband.
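For contrast with the curvelet behavior described above, here is the usual wavelet case: a minimal numpy sketch of a one-level 1-D Haar split (not a curvelet implementation; signal and parameters are arbitrary toy choices) showing how a smooth signal's energy lands in the low-pass band.

```python
import numpy as np

def haar_split_1d(x):
    """One-level 1-D Haar transform: returns (low-pass, high-pass) subbands."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return lo, hi

# A smooth signal concentrates its energy in the low-pass band -- the
# standard wavelet behavior; the note says curvelets behave the opposite way.
t = np.linspace(0, 2 * np.pi, 256)
signal = np.sin(t)
lo, hi = haar_split_1d(signal)
print(np.sum(lo**2) / np.sum(signal**2))  # close to 1: nearly all energy is low-freq
```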

Look at the papers:

o The contourlet transform: an efficient directional multiresolution image representation
by Do, M.N.; Vetterli, M.

o Pyramidal directional filter banks and curvelets
by Do, M.N.; Vetterli, M.

Robert McDonald (Panama City): slide a box over the image snippet → for each box, decompose the sonar image into subbands → apply curvelet-coefficient thresholds to denoise each subband → reconstruct → the reconstructed version has high SNR → for each box, compute A = (w1)·SNR_original_image + (w2)·SNR_recons → pick a threshold; if A > the threshold, the box contains a target. I don’t know how to pick w1 and w2.
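The per-box scoring step can be sketched as follows. This is only an illustration of the weighted-SNR rule from the note: the weights w1, w2 and the decision threshold are exactly the unknowns flagged above, so the values here are arbitrary placeholders.

```python
def box_score(snr_original, snr_reconstructed, w1=0.5, w2=0.5):
    """Weighted score A = w1*SNR_original + w2*SNR_reconstructed for one box.
    w1 and w2 are the unknown weights from the note (placeholder values here)."""
    return w1 * snr_original + w2 * snr_reconstructed

def contains_target(snr_original, snr_reconstructed, threshold=10.0):
    """Flag a box as target-bearing when A exceeds a (hypothetical) threshold."""
    return box_score(snr_original, snr_reconstructed) > threshold

print(contains_target(8.0, 15.0))  # A = 0.5*8 + 0.5*15 = 11.5 > 10.0 -> True
```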

Some keywords: Gabor filter, feature matrix, C


Dr. Larry Carin (Duke):

Session#1 (unofficial):

Intra-scene context: semi-supervised* (assume the model but not its parameters)

What’s the difference between boosting and semi-sup?

Inter-scene context: transfer learning = multi-task learning

See papers (Stack et al.): supervised STL, supervised MTL

What’s the difference between MI and MTL? Dr. Carin said they are different, but I need to read his paper.

Questions: data manifold rep., Markov Random Walk

Session#2 (semi-supervised multi-task learning @ 11:30 AM):

non-parametric clustering method:

o stick-breaking construction of G

o pi_4 = v_4 (1 - v_3)(1 - v_2)(1 - v_1)

o DP(alpha·G0): is that a probability measure on the parameter space?
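The standard answer is yes: a draw G ~ DP(alpha·G0) is a random probability measure on the parameter space, and the stick-breaking construction builds its weights exactly as in the pi_4 formula above. A minimal truncated numpy sketch (the truncation level K and alpha here are arbitrary):

```python
import numpy as np

def stick_breaking(alpha, K, rng=np.random.default_rng(0)):
    """Truncated stick-breaking weights: pi_k = v_k * prod_{j<k} (1 - v_j),
    with v_k ~ Beta(1, alpha) -- matches the pi_4 formula in the note."""
    v = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))  # prod_{j<k}(1-v_j)
    return v * remaining

pi = stick_breaking(alpha=2.0, K=100)
print(pi.sum())  # close to 1 for a large truncation level
```

Pairing each weight pi_k with an atom theta_k ~ G0 gives the discrete random measure G = sum_k pi_k * delta_{theta_k}.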

Session#3 (Signal processing for the LFBB system @ 3:15 PM):

Lifelong-learning = semi-supervised?


Sparse Bayesian Classifier

Constructive RVM algorithm → fast? → I have to read up in more detail on Bayes classifiers + NN
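As a reference point for the sparse Bayesian classifier above, here is a minimal sketch of the basic (non-constructive) RVM regression updates in the style of Tipping's relevance vector machine; the kernel width, iteration count, and toy data are arbitrary choices, and the "constructive"/fast variant mentioned in the talk instead adds basis functions incrementally.

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=50, alpha_cap=1e12):
    """Basic sparse-Bayes (RVM-style) regression hyperparameter updates."""
    N, M = Phi.shape
    alpha = np.ones(M)          # per-weight precision hyperparameters
    beta = 1.0 / np.var(t)      # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        gamma = 1.0 - alpha * np.diag(Sigma)   # "well-determinedness" of each weight
        alpha = np.minimum(gamma / np.maximum(mu**2, 1e-30), alpha_cap)
        beta = (N - gamma.sum()) / np.sum((t - Phi @ mu) ** 2)
    return mu, alpha

# Toy 1-D problem with a Gaussian (RBF) design matrix.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
Phi = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.5**2))
t = np.sin(x) + 0.05 * rng.standard_normal(60)
mu, alpha = rvm_regression(Phi, t)
print(np.mean((Phi @ mu - t) ** 2))  # small training error; large alphas mark pruned bases
```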

Session#4 (Exploiting context in adaptive underwater sensing): like session1

manifold information?

He mentioned that his students take a lot of statistics courses, but not courses like real analysis.


Dr. Tien-Hsin Chao (JPL – real-time mine detection & classification):

They exploit NN because they are the experts in NN; they may actually use other methods later on. NN is fast for classification but not for training: real-time testing, but not real-time training.

Shadow is a very important feature for sonar image classification

Questions to discuss with Tory:


byte? C-track


Best way to go for incorporating other features to the DT network; should it be in terms of joint pdf at the leaf nodes or some other special DT architecture?

Categorize the environment according to the classifier?

UV: is U a real sonar background image?

What’s the DT architecture to anticipate hour-to-hour bottom change (temporal)? → Not our focus yet, but we should have a good answer for that.

In our DT, is likelihood really a good cost function? → If NOT, I think ITL is a good thing to look at. MI between input and output + likelihood is something we should look at. Dr. Larry Carin’s work is also a good one; I like the concept of using one data set to explain another, “inter-scene context”. I think that may help us too. → See Dr. Carin’s session.
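On the MI-between-input-and-output idea: a crude histogram-based MI estimate can be sketched in a few lines (binning choice is arbitrary, and ITL work typically uses kernel-based estimators instead of histograms):

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram-based estimate of mutual information (in nats) between
    two 1-D samples: MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over y
    py = pxy.sum(axis=0, keepdims=True)   # marginal over x
    nz = pxy > 0                          # skip empty cells (0 * log 0 = 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
print(mutual_info(x, x))                          # high: y is fully determined by x
print(mutual_info(x, rng.standard_normal(5000)))  # near zero: independent samples
```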

According to comments from Bob (I think his name is Bob), he mentioned a robust framework for data fusion using something like x-map, which incorporates the dynamics, etc. of the data. What is that framework?

Categories: Uncategorized