wavelets

I am trying to learn a bit about wavelets, but never studied the subject academically.

A semester-length textbook would be overkill. I am looking for maybe a chapter of a book, or a tutorial paper in one of the academic journals.

Suggestions?

-- Mark

Reply to
Mark-T

Hi, Mark:

Wavelets have all sorts of applications, perhaps most prominently in image processing but in other areas as well.

Choosing the best short introduction to this topic would be easier if you told us something about how you'd connect to it, i.e. what background in math or applied math you'd approach the subject from, and for what application you'd be apt to consider wavelet basis functions.

regards, chip

Reply to
Chip Eastham


You may want to take a look at

"An Overview of Wavelet Based Multiresolutional Analysis" by Bjorn Jawerth and Wim Sweldens SIAM Review, 36 September 1994, 377=96412

If you don't have access to SIAM, you may want to try these links:

formatting link

formatting link

Dan

Reply to
Daniel J. Greenhoe

Google is your friend. Google "wavelet tutorial". Near the top are two such tutorials.

Reply to
David L. Wilson

Wavelets are like any other functional approximation scheme, except that they work on a "local" basis.

Instead of approximating functions with other functions that have infinite support, they use functions that have finite support.

One starts with a class of scaling functions that are orthogonal to each other. Generally the Haar scaling function is introduced at this point because it is the easiest.

The Haar scaling function is simply the rectangular function. It has a certain width and height. Note that any other Haar scaling function that doesn't overlap it will be orthogonal to it (since their product will be 0).

You have actually used "Haar" scaling functions many times before, such as in Riemann integration (the rectangles used to approximate the area) or in Lebesgue measure theory when talking about "simple functions".

In this case we have two parameters to deal with. One is the "shifting" of the function and the other is the "scaling".

Let

f(x) = 1 if 0 <= x < 1, and f(x) = 0 otherwise.
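To make the shifting and scaling concrete, here is a small C sketch (the grid resolution and the particular shifts are illustrative, not from the post): it samples the Haar scaling function above, forms shifted and scaled copies phi(2^j x - k), and checks numerically that non-overlapping shifts have inner product 0.

#include <stdio.h>

/* Haar scaling function: 1 on [0,1), 0 elsewhere. */
static double phi(double x)
{
    return (x >= 0.0 && x < 1.0) ? 1.0 : 0.0;
}

/* Shifted and scaled version: phi(2^j * x - k). */
static double phi_jk(double x, int j, int k)
{
    double s = 1.0;
    for (int i = 0; i < j; i++) s *= 2.0;
    return phi(s * x - k);
}

int main(void)
{
    const int    steps = 1000;        /* grid resolution on [0, 4) */
    const double dx    = 4.0 / steps;

    /* inner products <phi(x), phi(x - k)> for k = 0, 1, 2 */
    for (int k = 0; k < 3; k++) {
        double ip = 0.0;
        for (int n = 0; n < steps; n++) {
            double x = n * dx;
            ip += phi_jk(x, 0, 0) * phi_jk(x, 0, k) * dx;
        }
        printf("<phi(x), phi(x - %d)> ~= %g\n", k, ip);
    }
    return 0;
}

The k = 0 product comes out as 1 (the box has unit area) and the non-overlapping shifts come out as 0, which is the orthogonality being described.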

Reply to
Jon Slaughter

I'll tell you something about wavelets that books won't tell you. Wavelets, particularly decimated wavelets, are relatively useless for everything except compression. Otherwise, the relatively useful stuff involves over-complete representations such as Laplace pyramids.

A typical scenario is

  1. Some researcher publishes a paper about noise reduction using wavelets.

  2. Then the same or another researcher publishes a paper that improves the result of 1 by cycle spinning.

  3. Then the same or another researcher publishes a paper that improves the results of 1 and 2 by using undecimated wavelets.

  4. Then the same or another researcher publishes a paper that improves the results of 1, 2 and 3 by using a Laplace pyramid.

Since the invention of the Laplace pyramid predates the invention of wavelets, such literature constitutes a big jerk around.

Reply to
aruzinsky

Is light a wavelet?

Reply to
BURT

I've studied Fourier analysis and linear operators, so I understand decomposition.

I don't expect to become an expert; I might never have any professional use for wavelets. But I've attended seminars in information theory and image processing where the speakers referenced them, so I'm curious. From what I've seen, wavelets look 'elegant', as the mathematicians say...

I'd like to know the common applications, strengths and weaknesses of the techniques.

-- Mark

Reply to
Mark-T

Thanks, I have access to an engineering library; I'll look it up when I get a chance.

Reply to
Mark-T

Compression is pretty useful...

cycle spinning?

What is a Laplace pyramid, where is it used?

-- Mark

Reply to
Mark-T

Good luck.

There are very useful theorems on Fourier components (the inversion theorem and Parseval's theorem) that ensure that the Fourier decomposition is unique, and that no information is lost (signal/noise ratio is unchanged by the transformation).

You lose both those when you go to wavelets.
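For reference, this is the discrete (DFT) form of Parseval's theorem being invoked above, stating that the transform preserves total energy:

$$
X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-2\pi i k n / N},
\qquad
\sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^2 .
$$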

Reply to
whit3rd

Googling ' "cycle spinning" wavelet ' gives 876 hits. It means averaging the results from all possible invariant shifts of the wavelet basis with respect to the data.

Googling "Laplace pyramid" gives 660 hits. Typically, data is downsampled by factors of 0.5X using a Gaussian kernel into stages comprising what is called a "Gaussian pyramid." Then every level except the first is upsized 2X, using a Gaussian kernel, and subtracted from the level preceding it, forming the detail coefficients of the Laplace pyramid. The inverse transform is performed, starting from the smallest level, by upsizing each level 2X and adding to the next level. I can't think of an application of the any wavelet transform that can't be done with a Laplace pyramid and, except for compression, better. Whereas both decimated wavelet transforms and the Laplace pyramid are shift variant, unlike decimated wavelets, the Laplace pyramid is practically (almost) shift invariant which means cycle spinning is never needed.

Reply to
aruzinsky

Correction:

"It means averaging the results from all possible invariant shifts of the wavelet basis with respect to the data."

should be

"It means averaging the results from all possible variant shifts of the wavelet basis with respect to the data."

Reply to
aruzinsky

Huh? Common DFT and decimated wavelet transforms are equivalent to multiplication by invertible matrices; therefore no information is lost.

Reply to
aruzinsky

Not true: invertibility does not ensure information isn't lost; it only ensures that the exact inversion will recover (from covariances) the original. Exact inversion, undoing the forward matrix from the result, requires maybe 10-digit precision for result elements that only have two-digit accuracy.

Is that clear? The signal (nonrandom) and the noise (random) both get transformed by the matrix; you can lose signal/noise ratio easily if the eigenvalues of the matrix aren't all the same. The inversion requires higher precision than the data, and restores the signal/noise ratio to the original ONLY because it undoes the second-order effects (the covariances in the output numbers, which are created by that matrix).

In a bad case, a matrix has eigenvalues (1, 1, 10**6), and the data spans only the first two eigenvectors. So, (signal + noise)(matrix) = (signal)(matrix) + (noise)(matrix), and if you start with signal/noise of 100/1, you end up with signal/noise of sqrt(100**2 + 100**2)/sqrt(1 + 1 + 10**12), i.e. signal/noise of 141.4/10**6.

When signal is bigger than noise, you have information. When it's less than noise, you don't. The difference is a loss of information.

It can be argued that the 'loss' is only apparent, not real, because inversion is possible. That is false, though, because the inversion REQUIRES LOTS MORE INFORMATION: the inverse matrix must be exact to a high degree in order to reproduce the input numbers. All of the correlations of the result numbers have to be intact or the inversion fails (so the result can have lots of elements which have 1-figure accuracy, but only a representation of the result that has 10-figure accuracy can successfully be inverted).
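As a numerical check of the bad case above (reading the signal as 100 along each of the first two eigenvectors and unit noise along all three, which is one illustrative interpretation of the post, not something it spells out), a few lines of C:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* diagonal matrix with eigenvalues 1, 1, 10^6 (the "bad case") */
    double eig[3] = {1.0, 1.0, 1e6};
    /* signal lies in the span of the first two eigenvectors        */
    double sig[3] = {100.0, 100.0, 0.0};
    /* unit noise in every component                                 */
    double noi[3] = {1.0, 1.0, 1.0};

    double s_in = 0, n_in = 0, s_out = 0, n_out = 0;
    for (int i = 0; i < 3; i++) {
        s_in  += sig[i] * sig[i];
        n_in  += noi[i] * noi[i];
        s_out += (eig[i] * sig[i]) * (eig[i] * sig[i]);
        n_out += (eig[i] * noi[i]) * (eig[i] * noi[i]);
    }
    printf("SNR before: %.4g\n", sqrt(s_in / n_in));
    printf("SNR after : %.4g\n", sqrt(s_out / n_out));
    return 0;
}

The ratio collapses from roughly 80 before the transform to roughly 1.4e-4 after it, which is the signal/noise of 141.4/10**6 quoted above.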

Reply to
whit3rd

The condition numbers for matrices representing orthonormal wavelet transformations are all 1, same as for DFT.

Reply to
aruzinsky

On Mar 13, 11:12 am, "Jon Slaughter" wrote:

I highly recommend this post by Jon Slaughter as an introduction to wavelet theory. Who says you need a book when such a small amount of information will do? At its kernel, wavelet theory must be simple, or else it isn't a theory at all.

I studied wavelets as an independent study while getting my undergrad degree. At the time (the early nineties) this stuff was all the buzz at UNH in Durham NH USA. A man from MIT named Gilbert Strang came to UNH for a lunchtime presentation and the room was packed with physicists, EEs, and mathematicians. This is how important the theory was thought to be at that time, and I have not followed it closely since. But I did write a little bit of C code that would perform the wavelet transform on a one-dimensional signal of finite length. What I feel comfortable sharing here is just the simple computation on a discrete sampled signal, and it need only be ten units long to get the idea. Again, the simplicity is easily embodied, yet the extensions seem to run astray into the modern challenges of the many branches of science. I suppose this comes about because the wavelet can be posed as a new basis. By altering the basis that we work from, a new view of the subject matter, an altered context, comes about.

So, for the Haar coding on a ten unit integer stream, let's maintain the signal within the computation by building the signal as a ramp:

s = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9

and let's not do any compression while performing the mathematics of the wavelet transform, so that we may always traverse bidirectionally, forward through the math and back out of it any place along the way.

h = + 0 + 1 + 2 + 3 - 4 - 5 - 6 - 7 + 0 + 1 - 2 - 3, + 4 - 5 + 6 - 7 + 0 - 1, + 2 - 3, + 4 - 5, + 6 - 7
  = - 16, - 4, - 2, -1, -1, -1, -1

Above is the result, but I do wish that Jon would confirm that I got this correct, since it has been so long since I've played with this.

This is a fine practical instance in that, if we were to construct binary hardware, we could detect this particular ramp pulse via a simplistic counter on the value -1 coming down a pipe, resetting otherwise, thus flagging a potential operation down the pipe based on the signal feed. I suppose to keep things simple we'd have to packetize the signal source, but that is fine for physicists, and these days signal processors too. The mathematician can attempt to keep some general features going, but the wavelet was regarded as a practical thing. Upon seeing the simplest transform painted in a clean binary hardware implementation, when we go back to the generality it seems easy to break a lot of the construction. I had a hard time with the orthogonality requirement and maybe still do.

What cannot be overlooked is that we see structured data on the exit of the Haar transform, whereas there was nothing but a nondescript stream flowing in until we packetized. This structured form I am now taking interest in, and it may not be so cleanly information theoretic as was the input stream of a Fourier analysis. This is its beauty as well, since we should allow for the idea of the local packet if we accept atomic theory, photons, and so forth. Here is even a simple argument on how an atom might absorb a given photon packet or not, just from receiving the packet out of synch so that its own detection mechanism fails its own desire to gain energy. Then too there could be a false detection which could lead to atomic loss of energy by misdetecting a nonreceivable packet, thus allowing stimulated emission of stowed energy. Oy, there's a double negative in there.

It does hold the tatrix structure (triangular matrix) so I guess I'll have to play even more with it sometime. Sorry, got to go. - Tim

Reply to
Tim BandTech.com

Somebody put this on a chip (IC).

Regards, Jay Bala.

Reply to
Jay Bala

Sorry, this was supposed to be s = 0, 1, 2, 3, 4, 5, 6, 7 .

h = + 0 + 1 + 2 + 3 - 4 - 5 - 6 - 7, (forgot a comma)
+ 0 + 1 - 2 - 3, + 4 + 5 - 6 - 7, (and a boo boo)
+ 0 - 1, + 2 - 3, + 4 - 5, + 6 - 7 . (that's the end, but you can see it would grow out like a tree if it kept going)
= - 16, - 4, - 4, -1, -1, -1, -1

Unboobooed, this is a structured form, but in this stream equivalent no structure has been graphically presented. I didn't do the reverse transform here, but that should follow.

sadder.

This description is not an exact ramp detector.

Close to the tatrix structure but not quite exact, so the question arises whether a tatrix style equivalent can be formed. - Tim
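For anyone who wants to check the corrected numbers, here is a small C sketch of the same unnormalized Haar transform on s = 0..7; the in-place sum/difference layout is my own choice, and it also prints the overall sum (28) that the hand computation above omits.

#include <stdio.h>

#define N 8   /* length of the corrected signal s = 0..7 */

int main(void)
{
    int work[N] = {0, 1, 2, 3, 4, 5, 6, 7};   /* the ramp from the post */

    /* Unnormalized Haar transform, done in place: at each pass the
       first half becomes pairwise sums and the second half pairwise
       differences, matching the hand computation above.              */
    for (int len = N; len > 1; len /= 2) {
        int tmp[N];
        for (int k = 0; k < len / 2; k++) {
            tmp[k]           = work[2 * k] + work[2 * k + 1];
            tmp[len / 2 + k] = work[2 * k] - work[2 * k + 1];
        }
        for (int k = 0; k < len; k++)
            work[k] = tmp[k];
    }

    /* work[0] is the overall sum; the rest are the details, coarsest
       to finest: -16, -4, -4, -1, -1, -1, -1 for this ramp.           */
    printf("sum: %d  details:", work[0]);
    for (int k = 1; k < N; k++)
        printf(" %d", work[k]);
    printf("\n");
    return 0;
}

Keeping the sum alongside the details is what makes the transform reversible, which is the bidirectional traversal mentioned earlier in the thread.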

Reply to
Tim BandTech.com
