This sounds interesting. Am I understanding this right: by splitting the region (say, an image) along straight lines, it’s possible to find wavelets that encode the sub-regions with high fidelity. This technique can be applied successively to the resulting regions until you get a good enough encoding.
Sounds a little like simple mixture-of-experts setups with neural nets; a simple linear classifier on top of more complex systems allows them to specialize and produce better results.
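Something like this hypothetical sketch, where a linear gate weights the outputs of more complex experts (the names and setup are mine, not taken from any particular paper):

```python
import numpy as np

def moe_predict(x, gate_w, experts):
    # A linear gating network scores each expert, softmax turns the
    # scores into mixture weights, and the output is the weighted sum.
    scores = gate_w @ x
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sum(w * expert(x) for w, expert in zip(weights, experts))

# Toy usage: two stand-in "experts" mixed by a 2x2 linear gate.
experts = [lambda x: x.sum(), lambda x: x.max()]
gate_w = np.eye(2)
print(moe_predict(np.array([0.2, 0.8]), gate_w, experts))
```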
I also saw a paper on face compression that used a technique of breaking the image up into rectangular regions and applying more sophisticated techniques to the subregions. I think a similar thing is done with some forms of SVMs.
Seems like a general technique: break the problem up along simple linear boundaries, then solve the sub-problems.
Yes, though “possible to find wavelets” should read “possible to find polynomials”. The idea is to divide the image into smaller and smaller regions and to apply polynomial fitting over each subregion. The compression kicks in when you discard some of these small regions, because fitting a polynomial over a subregion doesn’t improve the (local) accuracy much.
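Here is a minimal sketch of the idea, assuming a quadtree-style split and a fixed error tolerance; the names (fit_block, encode) and the stopping rule are my own simplifications, since a real codec would weigh the accuracy gain from subdividing against the cost of storing more coefficients:

```python
import numpy as np

def fit_block(block):
    # Least-squares fit of a degree-1 polynomial a + b*x + c*y over the
    # block; returns the coefficients and the worst absolute residual.
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    err = np.abs(A @ coeffs - block.ravel()).max()
    return coeffs, err

def encode(image, y=0, x=0, h=None, w=None, tol=8.0, min_size=4):
    # Recursively split into quadrants until each block is well
    # approximated by a single cheap polynomial (or is too small).
    if h is None:
        h, w = image.shape
    coeffs, err = fit_block(image[y:y+h, x:x+w])
    if err <= tol or min(h, w) <= min_size:
        return [(y, x, h, w, coeffs)]
    hh, hw = h // 2, w // 2
    return (encode(image, y, x, hh, hw, tol, min_size)
          + encode(image, y, x + hw, hh, w - hw, tol, min_size)
          + encode(image, y + hh, x, h - hh, hw, tol, min_size)
          + encode(image, y + hh, x + hw, h - hh, w - hw, tol, min_size))

# A smooth gradient image compresses to a single block.
img = np.add.outer(np.arange(64.0), np.arange(64.0))
print(len(encode(img)))  # -> 1
```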
Yes, it is a very general paradigm, one I work a lot with these days since I do time series segmentation.
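For time series, the same recursion works in one dimension. A sketch of top-down segmentation, assuming a midpoint split for simplicity (implementations typically split at the point of worst fit instead):

```python
import numpy as np

def segment(series, start=0, end=None, tol=1.0):
    # Fit one line over series[start:end]; if the worst residual is too
    # large, split at the midpoint and recurse on each half.
    if end is None:
        end = len(series)
    xs = np.arange(start, end)
    slope, intercept = np.polyfit(xs, series[start:end], 1)
    err = np.abs(slope * xs + intercept - series[start:end]).max()
    if err <= tol or end - start <= 4:
        return [(start, end)]
    mid = (start + end) // 2
    return segment(series, start, mid, tol) + segment(series, mid, end, tol)

# A tent-shaped series splits cleanly into its two linear pieces.
t = np.concatenate([np.linspace(0, 5, 50), np.linspace(5, 0, 50)])
print(segment(t))  # -> [(0, 50), (50, 100)]
```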
Saravanan says:
Wavelets are very powerful and give better results than other approximations. Wavelets will be the future: processors are very powerful, so we don’t need to worry about the number of cycles, and memory is also not a big issue nowadays.