Discussion Forums

Karhunen–Loève

20 replies [Last post]
Anders Feder
Offline
Joined: 2010-04-22
Posts: 618

Claudio Maccone and others have been promoting the Karhunen–Loève Transform as an alternative to FFT for SETI for many years. Is this algorithm being run on ATA data? If not, how come? Would it be worthwhile to implement?

Supposedly, the KLT is wicked good at detecting broadband and weaker narrowband signals. Is there any merit to this?

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
There is a detailed paper by

There is a detailed paper by Maccone online about this subject: Innovative SETI by the KLT.

An earlier (1993) paper referenced by the paper above is also online: On the Detection of Unknown Signals

The book "SETI 2020" has this to say:

Currently (2002) only the Karhunen Loeve (KL) transform shows potential for recognizing the difference between the incidental radiation technology and white noise. The KL transform is too computationally intensive for present generation of systems. The capability for using the KL transform should be added to future systems when the computational requirements become affordable.

This is very interesting. The way this transform is described it sounds almost like magic.

I want to understand what "too computationally intensive" means. Why can this not be implemented on something like BOINC?

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
The plot thickens. Dr.

The plot thickens. Dr. Maccone did not dismiss the idea outright when I contacted him. In fact he wants to help me implement it.

I'm left wondering what the trade-off is here. Supposedly, the KLT can detect signals a few hundred times noisier than the FFT can. Why would something like this not be running already?

gerryharp
Offline
Joined: 2010-05-15
Posts: 365
KLT

Hi

I agree with all of you that the KLT is an interesting transform. As an astrophysicist (not a comm expert!) I understand that the KLT performs essentially the same operation as eigenvalue decomposition of a square matrix (or can be modified to do SVD (singular value decomposition, http://en.wikipedia.org/wiki/Singular_value_decomposition) of a rectangular matrix). The KLT allows you to determine the largest eigenvalue and eigenvector of the matrix -- that's any matrix. A matrix of noise will always have a largest eigenvalue...

It is easy to see one way that KLT could be used. If you compute all the eigenvalues(n) and plot them on a graph with n on the x-axis, then noise will give you a slowly decreasing curve where the (second through last eigenvalue) < (first eigenvalue).

OTOH, if the data have a strong signal (a significant fraction of the noise power in the first eigenvalue) then you will see a noticeable difference between the first and second eigenvalues. It is only the "difference" that represents the signal. It doesn't matter what the signal is (sine wave or pulse, for example) or how it is encoded. This is the beauty of the KLT. The KLT will always return a non-noiselike eigenvalue spectrum, which can help to identify data that contains signals.
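To make that concrete, here is a minimal Octave sketch of the eigenvalue-spectrum test, using the autocorrelation (Toeplitz) form of the KLT; the sample count, matrix size and SNR below are invented for illustration, not taken from any real data:

N = 2^16;  L = 64;
noise  = randn(1, N);                                % pure Gaussian noise
signal = noise + 0.25*sin(2*pi*0.05*(1:N));          % same noise plus a weak sinusoid (SNR ~ 1/30)
data = {noise, signal};
lam = zeros(L, 2);
for j = 1:2
  x = data{j};
  for k = 0:L-1
    ac(k+1) = mean(x(1:end-k) .* x(1+k:end));        % autocorrelation at lags 0..L-1
  end
  lam(:,j) = sort(eig(toeplitz(ac)), 'descend');     % eigenvalues of the L x L Toeplitz matrix
end
plot(1:L, lam(:,1), 1:L, lam(:,2));
legend('noise only', 'noise + weak sine');
% Noise alone gives a slowly decaying curve; the buried sinusoid pushes the
% first couple of eigenvalues well above the rest of the spectrum.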

However, it is important to notice that there are many algorithms for eigenvalue decomposition. Besides SVD, the Lanczos method allows you to compute only the first, second, and perhaps a few more eigenvalue/eigenvector pairs. This algorithm scales with the number of eigenvalues computed and can be extremely efficient compared to KLT for large sample sizes. This would give you the same information as KLT at a fraction of the cost.
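For example, Octave's built-in eigs() (which wraps ARPACK's Lanczos/Arnoldi iteration) returns just the leading eigenpairs; a rough sketch with a made-up stand-in matrix:

L = 1024;
R = toeplitz([1, 0.01*randn(1, L-1)]);     % stand-in for a large L x L autocorrelation matrix
tic; [v_top, d_top] = eigs(R, 3); toc      % only the three largest eigenpairs (Lanczos/Arnoldi)
tic; [v_all, d_all] = eig(R);     toc      % full decomposition, for comparison -- much slower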

Additionally, Maccone's descriptions of how to use the KLT have problems. For N samples = L*L, Maccone suggests using an LxL matrix where each row is composed of L contiguous samples from the data, each row taken as the samples immediately following the previous row. This imposes a boundary condition on the eigenvectors: they must have a repeat length that can be represented as (L/M), where M is an integer number of samples. If M>L then you're hosed (for the Lanczos method, this limit does not exist). Since you don't know M, this particular implementation of the KLT is sub-optimal. But I bet there are other, perhaps related, KLT implementations that show promise. In my opinion... (^_^)

Regarding sine wave signals: the KLT is no better (and much less efficient) than the FFT. Early results by Maccone et al. indicating a _slight_ improvement in SNR of the KLT over the FFT are not meaningful, IMO. This apparent enhancement of the first eigenvalue is caused by noise in the data. The KLT is not a sine wave detector, and the first eigenvalue will _always_ contain a noise power contribution. This makes the first eigenvalue larger, but actually increases the "false alarm" rate for signal detection. I don't recommend the KLT for finding sinusoids. If you don't like the FFT, look up the "polyphase filter bank", which performs a Fourier transform but has much more stable properties.
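For readers who want a feel for it, below is a rough Octave sketch of a critically sampled polyphase filter bank channelizer; the prototype filter, channel count and data are illustrative only, not anything used at the ATA:

nchan = 256;                                         % number of output channels
ntaps = 4;                                           % filter taps per polyphase branch
N = nchan * ntaps;                                   % prototype filter length
n = 0:N-1;
proto = sinc((n - (N-1)/2)/nchan) .* hanning(N)';    % windowed-sinc prototype low-pass
x = randn(1, nchan * 100);                           % stand-in for sampled telescope voltages

nblocks = floor((length(x) - N) / nchan) + 1;
spectra = zeros(nblocks, nchan);
for b = 1:nblocks
  seg = x((b-1)*nchan + (1:N)) .* proto;             % weight N samples by the prototype
  seg = sum(reshape(seg, nchan, ntaps), 2);          % fold the taps of each branch together
  spectra(b, :) = (abs(fft(seg)).^2).';              % then one ordinary FFT per block
end
% Each output channel now has a much flatter passband and far less leakage into
% its neighbours than a plain FFT of nchan samples would give.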

As always, don't believe me. Figure out your own answers!

Gerry

af6kd
Offline
Joined: 2010-04-25
Posts: 23
Hi all, I have been running

Hi all,

I have been running KLTs (using Claudio's suggested BAM) against the ATA data released here. The results are interesting: I am getting what I would expect, and it compares well with what the FFTs provide. But no alien signals yet...

I too am not sure if the KLT is the "magic bullet", but I think it is worth examining. There is a computational burden compared to the FFT. (I am using AWS, so that helps a bit.) The FFT is nice because it provides processing gain and is FAST. Hopefully a SETI signal would be long-lived enough to appear within FFTs, given enough time and averaging.

I hope to have something to share soon... But it will take a much better mathematician and algorithm expert than I am to do something interesting with KLTs though! :)

Dave

Dave Robinson
Dave Robinson's picture
Offline
Joined: 2010-04-29
Posts: 196
First of all let me thank you

First of all let me thank you for highlighting this paper, I was unaware of its existence. I have made extensive use of the KL transform in several guises, from applications such as transforming three-plane colour images to extract hidden detail, to enhancing the dialogue in stereo soundtracks by de-emphasising background sound effects. However, in all the applications I have met previously you always had data from multiple sensors or channels, and until you highlighted this, I was under the impression that this powerful technique couldn't be applied to a single channel, such as we have in the SETI data.

I am busy studying the Maccone paper to see if I can understand the technique. I have been trying to identify Pulsar signals based upon their frequency-domain characteristics: the spectrum of a repetitive signal will only have frequency components at the fundamental frequency corresponding to the Pulsar period and at harmonics thereof (remember the high-school exercises in Fourier analysis, where any repetitive waveform consists only of sines and cosines at the base repeat frequency of the signal and at integer multiples of that frequency). My preliminary experiments at isolating the signal using a comb filter tuned to the known repeat period of the Pulsar have yet to yield any results; the impressive plots in the Maccone paper might be exactly what I am looking for to find the spiked frequency characteristics I am searching for.
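To make the idea concrete, here is a toy Octave sketch (period, pulse amplitude and data are all invented) showing the harmonic comb in the spectrum of a repetitive signal and, for comparison with the synchronous integration discussed in the later posts, simple folding at a known period:

P = 100;                                   % assumed repeat period, in samples
x = randn(1, 400*P);                       % stand-in for de-dispersed telescope data
x(1:P:end) += 5;                           % a periodic "pulse" added to the noise
plot(abs(fft(x))(1:2000));                 % comb of spikes at 1/P and its harmonics
folded = mean(reshape(x, P, []), 2);       % synchronous folding at the known period
plot(folded);                              % the pulse profile stands out in one phase bin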

Thanks once again, and keep up the good work.
Regards

Dave Robinson

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
If you are interested in the

If you are interested in the latest developments, this paper (not free, unfortunately), also by Maccone, was released just days ago.

As far as I understand, Maccone has developed a method (BAM-KLT) which is faster for a certain class of signals. But it may not be as good for Doppler shifted signals as 'pure' KLT.

From your experience with KLT, can you say anything about whether or not it lends itself to being performed in a distributed way (like SETI@home)? Can the computational load be split into smaller (but not too small) work units which can be computed independently and in parallel?

Dave Robinson
Dave Robinson's picture
Offline
Joined: 2010-04-29
Posts: 196
I will need to come back to

I will need to come back to you on this one - it is not a question that I would like to answer straight off the top of my head - good if it were, though.

Regards

Dave Robinson

Dave Robinson
Dave Robinson's picture
Offline
Joined: 2010-04-29
Posts: 196
KLT and Pulsar Profile extraction

I have made an attempt at applying the KLT to my resynchronized data array in order to extract the Pulsar Pulse Profile. I argued that the 163 rows in my resampled matrix could be considered as 163 independent measurements of the Pulsar Pulse, and that the Profile could therefore be extracted using a PCA.

I have to report that the result was relatively successful, in as much as the Profile information was in the results, but you had to look for it. The top 3 planes contained what at first sight appeared to be sinusoids, probably from the mystery lines described in my Wiki Paper on Frequency Domain Pulsar rate extraction. The first evidence of the Pulsar came in the 4th plane, but unfortunately the Pulsar data was further decomposed, with the 5th plane carrying a substantial quantity of the pulse energy. The graph attached below shows the relationship between the 4th and 5th planes, with the figures rescaled to be consistent with the Synchronous Integration graph shown in my last posting.
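For readers unfamiliar with the procedure, a bare-bones Octave sketch of that PCA step might look like the following. The stand-in array is random placeholder data (so nothing meaningful will appear in its planes); only the 163-row structure is taken from the post:

rows = randn(163, 512);                    % placeholder for the 163-row resynchronized array
rows = rows - mean(rows);                  % remove the mean of each column
[vec, val] = eig(cov(rows));               % principal components of the 163 "measurements"
[val, idx] = sort(diag(val), 'descend');   % order the planes by captured energy
plot([vec(:, idx(4)), vec(:, idx(5))]);    % e.g. inspect the 4th and 5th planes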

The preliminary conclusion must be that the simpler and faster Synchronous Integration algorithm is by far the superior method to select, and that the KLT appears to offer no advantage whatsoever in extracting the Pulsar Profile.

Regards

Dave Robinson

gerryharp
Offline
Joined: 2010-05-15
Posts: 365
kurtosis

I don't have SETI 2020 memorized, but if this is what it says, then it is wrong.
 
There are some very simple algorithms to detect non-gaussianity in, say, time series data. Here is one:
 
1. Compute the mean of the data
2. Subtract mean value from all points
3. Square each point, sum, and divide by the number of points. This is the variance, closely related to the standard deviation. But let's call it the "2nd moment" of the data.
4. Starting with the squared array, square each point again, sum, and divide by the number of points. This is the 4th moment of the data.
5. Compute the quantity (4th moment)/( (2nd moment)^2 ). This is called the kurtosis and is a dimensionless quantity. As we all remember from statistics (ha, that's a joke), for a pure Gaussian random sequence the kurtosis == 3 (or 0 if you follow the excess-kurtosis definition in Numerical Recipes in C) as the number of samples approaches infinity.
Because of this, people often talk of "excess kurtosis," meaning the difference of the kurtosis from its Gaussian value. http://dictionary.reference.com/browse/kurtosis gives a colloquial feel. Wikipedia http://en.wikipedia.org/wiki/Kurtosis#Kurtosis_of_well-known_distributions gives some very interesting curves for well known distributions.
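In Octave, the whole recipe above is only a few lines (the data block here is made-up Gaussian noise):

x = randn(1, 2^20);            % stand-in for a block of telescope samples
x = x - mean(x);               % steps 1-2: remove the mean
m2 = mean(x.^2);               % step 3: second moment (the variance)
m4 = mean(x.^4);               % step 4: fourth moment
k = m4 / m2^2;                 % step 5: kurtosis, ~3 for pure Gaussian noise
excess = k - 3;                % "excess kurtosis", ~0 for pure Gaussian noise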
 
The point is, it is very easy and computationally efficient to determine how "similar" a distribution is to a Gaussian (Normal) distribution. This can be used to identify parts of the telescope data that are not noise-like, and is therefore a good way to separate the wheat from the chaff. Presently we are only beginning to experiment with algorithms like kurtosis which, in my view, give cheaper and more reliable determinations of non-Gaussianity and may be favored over KLT in the short term. Like the KLT, kurtosis does not depend on any properties of the signal apart from its statistics.
 
Gerry

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
Can it be proven that

Can it be proven that kurtosis always gives more reliable determinations of the presence of a signal than the KLT, even when the SNR is very low? There has to be some limit at which one can't tell whether a slight deviation from Gaussianity is due to the presence of some kind of signal or mere chance.

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
This plot is an attempt to

This plot is an attempt to compare Rob's KLT code above with Gerry's kurtosis algorithm.

The plot shows KLT of five instances of pseudo-random noise on the left-hand side, and KLT of the same noise with an embedded sinusoidal signal (with SNR=1/30) on the right-hand side. You should be able to agree that in all cases the "signal plus noise" graphs look markedly different from any of the "pure noise" graphs. That is: in five out of five cases, we were able to unambiguously determine the presence of a signal using KLT.

Then look at the kurtosis inscribed above each graph. Note that for the raw noise, the kurtosis ranges from 2.995 to 3.009. To stand out from the noise, a signal would have to result in a kurtosis outside of this range. And only one does (fourth from above, kurtosis=3.011).

So, with these parameters at least, KLT seems to give about 5 times more reliable detections than kurtosis. Of course kurtosis is much faster than KLT, so perhaps they could be used in unison: if excess kurtosis is significant, but still not unambiguous, KLT can be run as an additional test on the data in question to either verify or falsify the presence of a signal.

(All this provided that I have implemented both methods correctly in the test. You can check the code for yourself here.)

robackrman
Offline
Joined: 2010-04-15
Posts: 235
Very interesting

Very interesting experiment!

However, the conclusions may be affected by incorrect calculation of kurtosis. I suggest substituting

r = ((sum(b)/N)/((sum(a)/N)^2)) - 3;

for the last statement in your kurtosis function.

Or, use the kurtosis function from the Octave library which you had overridden with your own:

rackrman@dev:~/temp/anders$ octave -q
octave:1> # kurtosis of normal dist. should ~= 0
octave:1> g = normrnd(0, 1, 1, 10000);
octave:2> disp(kurtosis(g))
-0.0089964
octave:3> # kurtosis of Laplacian dist. should ~= 3
octave:3> l = laplace_rnd(1, 10000);
octave:4> disp(kurtosis(l))
3.0069

Would you consider redoing your experiment?

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
Here is a plot using the

Here is a plot using the built-in kurtosis function. The results seem unaffected: noise kurtosis ranges from -0.0035 to 0.0008, and only one signal kurtosis (0.001) falls outside this range. (Code for this run.)

Here is a plot using my kurtosis function with Rob's fix. Here none of the signal kurtosis values fall outside the noise range of -0.0048 to 0.0035. (Code for this run.)

I've also tried to vary the sample count between each of the three runs so far, from 2^20 samples to 2^22 samples. For increasing sample counts, the KLT graphs seem to become more sharply defined, while the kurtosis values, if anything, tend to blend into the noise.

robackrman
Offline
Joined: 2010-04-15
Posts: 235
Outstanding

Outstanding contribution!

Furthermore, note that the y-axis range of the left column (noise) is significantly smaller than the y-axis range of the right column (signal + noise). If scaled the same, the noise only eigenvalues would appear flatter in comparison.

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
A correction is in order: it

A correction is in order: it turns out that the pseudorandom 'noise' generator is a major source of error in this experiment. I redid the experiment with entropy supplied by QRBG, and here kurtosis actually performs just as well as KLT under the same SNR and sample count as in the original experiment. (Code)

I am resurrecting my unused 3 GHz machine and doing some more detailed and longer-running experiments to see if the two methods diverge at more extreme parameter settings. Any suggestions on how to quantify the output from the KLT so I can plot it as a function of SNR or sample count?

robackrman
Offline
Joined: 2010-04-15
Posts: 235
There is a problem with this

There is a problem with this latest experiment.  Kurtosis values of ~ -1.2 indicate that your "noise generator" has a uniform distribution (no value is more-or-less likely than another).  To model SETI data, and for the kurtosis test to be relevant, the noise should have a Gaussian distribution.
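As a quick sanity check of that -1.2 figure (with made-up data):

u = rand(1, 1e6) - 0.5;  u = u - mean(u);   % uniform "noise"
g = randn(1, 1e6);       g = g - mean(g);   % Gaussian noise, for comparison
disp(mean(u.^4) / mean(u.^2)^2 - 3)         % excess kurtosis of uniform noise: ~ -1.2
disp(mean(g.^4) / mean(g.^2)^2 - 3)         % excess kurtosis of Gaussian noise: ~ 0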

robackrman
Offline
Joined: 2010-04-15
Posts: 235
The KLT was not considered

The KLT was not considered tractable for real-time SETI until recently due to its polynomial time complexity. The FFT, in contrast, has a much more efficient O(N log N) time complexity.

I am intrigued by the efficiency the BAM KLT gains by predicting, for data with qualifying statistics, the largest eigenvalue from its value and derivative computed from a reduced-size autocorrelation matrix.

The referenced papers, and other papers, compare the KLT and FFT using a single sinusoid buried in noise, but an actual implementation of the KLT for SETI will require effort to properly establish an experiment that takes into account a more complex signal environment containing known contaminating signals.

For example, let us look at a simple use of the KLT to search for narrow-band signals:

We generate a signal containing three sinusoids imagining one to be a constant from the IF chain, the second to be terrestrial interference, and the third to be an ET candidate.

octave:1> x = sinewave(1024,73) + sinewave(1024, 109) + sinewave(1024, 193);
octave:2> plot(x)

We add noise to the signal.

octave:3> x = (x / 3) + randn(1, 1024);
octave:4> plot(x)

The signals are separated in frequency space such that known contaminating signals can be ignored (will not cause false alarms).

octave:5> plot(abs(fft(x))(1:100))

A plot of the eigenvalues from a partial KLT shows most of the energy captured in the first six eigenvectors.

octave:6> y = autocor(x, 511);
octave:7> z = toeplitz(y);
octave:8> [vctr,val] = eig(z);
octave:9> plot(val(find(val))(512:-1:480))

A plot of the first six eigenvectors demonstrates that the sinusoids are not completely separated. This is to be expected given that the basis vectors are chosen from the data. If detection were based on thresholds set for the largest eigenvalues, known contaminating signals, that could be ignored in frequency space, might cause false alarms when analyzed with the KLT.

octave:10> subplot(6,1,1)
octave:11> plot(vctr(:,512))
octave:12> subplot(6,1,2)
octave:13> plot(vctr(:,511))
octave:14> subplot(6,1,3)
octave:15> plot(vctr(:,510))
octave:16> subplot(6,1,4)
octave:17> plot(vctr(:,509))
octave:18> subplot(6,1,5)
octave:19> plot(vctr(:,508))
octave:20> subplot(6,1,6)
octave:21> plot(vctr(:,507))

Perhaps the KLT would be usefully employed to search channels in which no signals have been detected otherwise?

Anders Feder
Offline
Joined: 2010-04-22
Posts: 618
Thanks for explaining the

Thanks for explaining the practical issues with this algorithm.

I note that you say that your plot is only a 'partial' KLT. Without an in-depth understanding of the KLT, I am wondering if a 'full' (?) polynomial-time KLT could be used to 'post-process' results from the BAM KLT. That is, if the BAM KLT (or another method for that matter) finds the three signals as one 'lump' over a relatively narrow bandwidth, could a full KLT be used to discriminate between them? I imagine such a full KLT would be a lot more feasible over a narrow band than over the whole spectrum.

robackrman
Offline
Joined: 2010-04-15
Posts: 235
I should not have used the

I should not have used the description "partial" KLT. Perhaps "incomplete" instead? I did not do the final transform (matrix multiply) which would have collapsed most of the energy, for that example, into six coefficients. Instead, I stopped after the autocorrelation and principal component analysis steps, where the eigenvalues and associated eigenvectors are identified, which were the steps necessary to make my point.

By the way, I am excited about the potential use of the BAM KLT for SETI research, so my post should not be considered negative. I am struggling a bit with the BAM KLT algorithm (having read the paper multiple times) and experimenting with it to understand the constraints and to build up my own belief that the largest eigenvalue can be predicted.

I think I understand your suggestion: perhaps the sinusoids could be individually determined by, for example, taking the FFT of the eigenvectors? I did that in my analysis to verify that the sinusoids were combined to some degree in the eigenvectors but did not show those results.
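For anyone following along, a sketch of that check, reusing vctr from the earlier Octave session (these are not Rob's actual commands): the FFT of each leading eigenvector shows which of the three sinusoids it carries, and how much they are mixed.

for k = 1:3
  subplot(3, 1, k)
  plot(abs(fft(vctr(:, 513-k)))(1:100))    % spectra of eigenvectors 512, 511, 510
end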

sv1etk
Offline
Joined: 2010-04-20
Posts: 32
It's nice to see the

It's nice to see the practical application of eigenvectors in signal detection!
I have two considerations, though. The first is that, besides the detection itself, the FFT also gives you the actual signal frequency and magnitude (its basic characteristics). The second is that, as we have already seen, there are always some unwanted Earth-based signals in the ATA data sets. Wouldn't that mean that the difference in the eigenvectors will always indicate the presence of signals?
Furthermore, I cannot find any good reason why a potential signal would be much stronger (in power) than the unwanted ones....