ZIP Algorithm on PIC 16F/18F?

I'm working on a datalogger app. The design uses a 64 Kb ferroelectric RAM (FRAM) for storage (great parts btw - forget about cell fatigue and post-write delays). This is semi-adequate, but I'd really like to be able to stuff more data into the space allotted.

Has anyone come across a ZIP algorithm that will fit comfortably onto common PIC processors (we're using an 18F2620 in this instance)? It doesn't have to achieve the absolute ultimate in compression, just gain us maybe a third more space.

The project is being written in MikroPascal, but that language easily handles ASM inclusions, and anything in 'C' ought to translate pretty easily.

I suppose we could add another fRAM chip, but, well, hardware complexity costs dontchaknow ...

Any info helpful.

Reply to
B1ackwater

A lot depends on what the data looks like. Run-length encoding is pretty easy to encode/decode on a small micro if the data patterns fit. Maybe a sigma-delta approach if successive values are different, but only by small amounts.
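To make the run-length idea concrete, here's a minimal sketch in C. The (count, value) pair format, names, and buffer handling are my own illustration, not from any particular library:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal run-length encoder: emits (count, value) byte pairs.
   Illustrative sketch only; the pair format is an assumption.
   Returns bytes written to out, or 0 if out is too small. */
size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out, size_t cap)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t v = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == v && run < 255)
            run++;
        if (o + 2 > cap)
            return 0;               /* output buffer exhausted */
        out[o++] = (uint8_t)run;    /* run length, 1..255 */
        out[o++] = v;               /* repeated value */
        i += run;
    }
    return o;
}
```

Note this only wins when runs are common; on data with no repeats it doubles the size, which is why the nature of the data matters so much.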

You'll get better recommendations if you can describe the data layout but some brute-force GPL code is up at

formatting link
Some assembly required...

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

Did you check out zlib?

formatting link

--
Regards,
Richard.

+ http://www.FreeRTOS.org & http://www.FreeRTOS.org/shop
17 official architecture ports, more than 5000 downloads per month.

+ http://www.SafeRTOS.com
Certified by TÜV as meeting the requirements for safety related systems.
Reply to
FreeRTOS.org

Rich makes a good point here. General-purpose compression algorithms like zip depend on patterns in the data to be compressed. They work great with text, but won't necessarily be able to tease out the redundancies in measured 'data logger' type data. You could check this by taking a raw binary image of your data and zipping it -- in particular, you could try forcing each of the individual algorithms that zip supports, and see which one seems to be the most efficient.

You may do better by using some form of DPCM (Rich's sigma-delta approach, with different terminology). I don't know what's out there for lossless compression using sigma-delta techniques, but if your data has a lot of DC content you may be able to compress it to a considerable degree.
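A DPCM-style delta transform is only a few lines of C. This is a sketch of the general idea (the function names are mine); it is lossless on its own, and the payoff comes from a second stage that packs the small deltas into fewer bits:

```c
#include <stdint.h>
#include <stddef.h>

/* Lossless delta (DPCM) transform sketch: replace each sample with the
   difference from its predecessor. If successive readings change slowly,
   the deltas cluster near zero and can be packed into fewer bits
   (e.g. 4-bit nibbles with an escape code for outliers). */
void dpcm_encode(const int16_t *in, int16_t *out, size_t n)
{
    int16_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        out[i] = (int16_t)(in[i] - prev);
        prev = in[i];
    }
}

/* Exact inverse: running sum of the deltas recovers the samples. */
void dpcm_decode(const int16_t *in, int16_t *out, size_t n)
{
    int16_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        prev = (int16_t)(prev + in[i]);
        out[i] = prev;
    }
}
```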

There are some easy lossy compression techniques that you could use, too. As one example, you could oversample compared to what goes into memory, then for each storage interval record a few meaningful statistics of the data from that interval, such as the average, the maximum, and the minimum. This is what a digital scope does when you put it into "envelope" mode, and it is often all the information one needs.
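The envelope idea might look like this in C. The window size and the particular statistics kept (min/max/mean) are design choices for illustration, not a fixed scheme:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { int16_t min, max, avg; } envelope_t;

/* Lossy "envelope" decimation sketch: for each window of `win` raw
   samples, store only min, max, and mean -- like a scope's envelope
   mode. Returns the number of envelope records produced; any trailing
   partial window is dropped in this simple version. */
size_t envelope_decimate(const int16_t *in, size_t n, size_t win,
                         envelope_t *out)
{
    size_t blocks = 0;
    for (size_t i = 0; i + win <= n; i += win, blocks++) {
        int32_t sum = 0;
        int16_t lo = in[i], hi = in[i];
        for (size_t j = 0; j < win; j++) {
            int16_t v = in[i + j];
            if (v < lo) lo = v;
            if (v > hi) hi = v;
            sum += v;
        }
        out[blocks].min = lo;
        out[blocks].max = hi;
        out[blocks].avg = (int16_t)(sum / (int32_t)win);
    }
    return blocks;
}
```

With, say, 16x oversampling this stores 6 bytes per window instead of 32, while still capturing spikes that plain decimation would miss.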

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" gives you just what it says.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply to
Tim Wescott

A simple and fast algorithm for general-purpose compression is LZSS. You can easily tailor it to the available RAM size and CPU speed.
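A toy byte-aligned LZSS variant, to show how small it can be. The stream format here (flag byte, one-byte offset, one-byte length) is made up for illustration and sized so the working state is trivial; real LZSS implementations usually pack the flags and offsets more tightly:

```c
#include <stdint.h>
#include <stddef.h>

#define MIN_MATCH 3
#define MAX_MATCH 255
#define WINDOW    255

/* Each group of up to 8 items is preceded by a flag byte: a set bit
   means a literal byte follows, a clear bit means a 2-byte
   (offset, length) back-reference follows. Greedy matching, brute-force
   search -- slow but tiny. */
size_t lzss_encode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0, i = 0;
    while (i < n) {
        size_t flag_pos = o++;          /* reserve the flag byte */
        uint8_t flags = 0;
        for (int bit = 0; bit < 8 && i < n; bit++) {
            size_t best_len = 0, best_off = 0;
            size_t start = (i > WINDOW) ? i - WINDOW : 0;
            for (size_t j = start; j < i; j++) {
                size_t len = 0;
                while (len < MAX_MATCH && i + len < n &&
                       in[j + len] == in[i + len])
                    len++;
                if (len > best_len) { best_len = len; best_off = i - j; }
            }
            if (best_len >= MIN_MATCH) {
                out[o++] = (uint8_t)best_off;
                out[o++] = (uint8_t)best_len;
                i += best_len;
            } else {
                flags |= (uint8_t)(1u << bit);
                out[o++] = in[i++];
            }
        }
        out[flag_pos] = flags;
    }
    return o;
}

size_t lzss_decode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0, i = 0;
    while (i < n) {
        uint8_t flags = in[i++];
        for (int bit = 0; bit < 8 && i < n; bit++) {
            if (flags & (1u << bit)) {
                out[o++] = in[i++];
            } else {
                uint8_t off = in[i++], len = in[i++];
                for (uint8_t k = 0; k < len; k++, o++)
                    out[o] = out[o - off];  /* handles overlapping runs */
            }
        }
    }
    return o;
}
```

The decoder needs no history buffer beyond the output itself, which is the attraction on a RAM-starved part.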

The most important question is the structure of your data, i.e. what kind of redundancy can be exploited to get the compression.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

Ah HA ... might be able to do something with that. Worth trying anyhow.

As others have mentioned, the nature of the data determines how well it can be compressed. I'll just have to try it and see. If this doesn't work, well, I suppose I could pack my bits in a *little* bit tighter ....

Reply to
B1ackwater

I have a project where I'm planning on using the

LZMA SDK (C, C++, C#, Java)

formatting link

This is the first time I've heard of zlib, so I might spend some time comparing both.

Reply to
leblancmeneses

The problem with general lossless compression is the amount of memory required. The various ZIP (LZ) schemes need a minimum of 4 to 8 KB of working storage, and the better the compression, the more working storage is needed.

For use on low-memory devices, such as the PIC, consider Huffman compression. You can use either adaptive Huffman coding, for single-pass operation, or static Huffman coding with two-pass input (the first pass counts the occurrences of each byte).
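The first pass, plus a table-based code-length builder, can be sketched as below. This is illustrative only (the O(k²) smallest-pair scan avoids a heap and dynamic memory at the cost of speed); assigning the actual canonical code bits from the lengths is left out:

```c
#include <stdint.h>
#include <stddef.h>

#define NSYM 256

/* Pass 1 of the two-pass scheme: count byte occurrences. */
void count_freqs(const uint8_t *in, size_t n, uint32_t freq[NSYM])
{
    for (int s = 0; s < NSYM; s++) freq[s] = 0;
    for (size_t i = 0; i < n; i++) freq[in[i]]++;
}

/* Build Huffman code lengths by repeatedly merging the two
   least-frequent live nodes. Array-based, no dynamic memory.
   len[s] receives the code bit-length for symbol s (0 if unused). */
void huff_lengths(const uint32_t freq[NSYM], uint8_t len[NSYM])
{
    uint32_t w[2 * NSYM];           /* node weights */
    int parent[2 * NSYM];
    int live[2 * NSYM], nlive = 0, nodes = 0;

    for (int s = 0; s < NSYM; s++) {
        w[nodes] = freq[s];
        parent[nodes] = -1;
        if (freq[s]) live[nlive++] = nodes;
        nodes++;
    }
    while (nlive > 1) {
        /* find the two smallest live weights (slots a < b by weight) */
        int a = 0, b = 1;
        if (w[live[b]] < w[live[a]]) { int t = a; a = b; b = t; }
        for (int k = 2; k < nlive; k++) {
            if (w[live[k]] < w[live[a]]) { b = a; a = k; }
            else if (w[live[k]] < w[live[b]]) b = k;
        }
        int na = live[a], nb = live[b];
        w[nodes] = w[na] + w[nb];
        parent[nodes] = -1;
        parent[na] = parent[nb] = nodes;
        live[a] = nodes++;          /* new internal node replaces a */
        live[b] = live[--nlive];    /* drop b's slot */
    }
    for (int s = 0; s < NSYM; s++) {
        int d = 0;
        for (int p = parent[s]; p != -1; p = parent[p]) d++;
        len[s] = (uint8_t)d;
    }
}
```

The 2*NSYM node arrays are the main cost; trimming the alphabet to the symbols that actually occur shrinks them for a small micro.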

You get less compression, but faster and easier, with such things as repeated-character compression. This can function with only a dozen or so bytes of compressor storage.

--
 [mail]: Chuck F (cbfalconer at maineline dot net) 
 [page]: 
            Try the download section.


Reply to
CBFalconer

That won't even come close to fitting; it needs far more RAM than the maximum of 4KB available in PIC18 series microcontrollers.

Depending on the nature of the data to be compressed, Huffman encoding with a fixed dictionary might be reasonable, and is fairly easy to implement.
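With a fixed dictionary, the codes are built offline from representative data and burned into program memory, so the PIC only needs a small bit-packer at run time. A sketch of that packer, with a made-up 4-symbol table purely for illustration:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t bits; uint8_t nbits; } hcode_t;

/* Hypothetical fixed Huffman table for symbols 0..3:
   0 -> "0", 1 -> "10", 2 -> "110", 3 -> "111". A real table would be
   derived from byte statistics of the actual logged data. */
static const hcode_t table[4] = {
    {0x0, 1}, {0x2, 2}, {0x6, 3}, {0x7, 3}
};

/* Pack a symbol stream MSB-first into bytes; returns bytes written.
   Unused bits in the final byte are left zero. */
size_t huff_pack(const uint8_t *sym, size_t n, uint8_t *out)
{
    size_t o = 0;
    int used = 0;
    out[0] = 0;
    for (size_t i = 0; i < n; i++) {
        hcode_t c = table[sym[i]];
        for (int b = c.nbits - 1; b >= 0; b--) {   /* MSB first */
            if ((c.bits >> b) & 1)
                out[o] |= (uint8_t)(0x80 >> used);
            if (++used == 8) { used = 0; out[++o] = 0; }
        }
    }
    return o + (used ? 1 : 0);
}
```

Because codes 0..3 here use a prefix-free "0/10/110/111" pattern, decoding needs only a bit-at-a-time table walk, which also fits easily on a PIC.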

Reply to
Eric Smith

Or just move to a larger one? Ramtron have bigger devices than 64 Kb. They do I2C parts up to 512 Kb, and I also see a 2 Mb SPI model with a 40 MHz spec.

-jg

Reply to
Jim Granville
