ECG Signal Compression/Decompression

Hi,

I want to ask you if it's feasible to create an ECG compression/decompression algorithm, using decomposition of the ECG signal into an orthogonal polynomial basis, on an FPGA.

And if it is possible, do you think this solution is more efficient on an FPGA, compared to, say, a PSoC or a DSP?


Reply to
Weiss

A good rule of thumb is that if you can do it in software, that's probably easier and cheaper.

--
These are my opinions.  I hate spam.
Reply to
Hal Murray

The choice of algorithm for your compression is another field of study than FPGA design. There is a newsgroup for that, comp.compression.

Once you have found a reasonable compression algorithm for your signal, then you can choose an implementation based on the various requirements.

--

Rick
Reply to
rickman

In most cases, the sampling rate is low enough that even general-purpose micros ought to be able to handle the task.

Jon

Reply to
Jon Elson

Is comp.compression active? I haven't heard of it.

comp.dsp may be a good resource, too. ECG signals (assuming you mean electrocardiogram) have some unique features that should make efficient compression both a joy and a terror, given that (a) the signal has a lot of "quiet time" with low information content, and (b) it's medical, which means that lives will depend on the fidelity of the reconstruction.

Given the life-critical nature of the thing, I'd certainly want to start by finding a lossless compression method, and see if that's good enough.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

(snip)

It has been pretty quiet lately, as has comp.dsp.

OK, who remembers watching "Emergency!" many years ago? They have a portable machine which, if I remember right, sends the signal through a phone line. That is, pretty much an analog modem. (Actually, one direction, so modulator on one end, demodulator on the other.) Seems like that sets a limit on the needed bandwidth.

But I presume the needed bandwidth is a lot less than 4kHz.

If you know exactly the features that are needed, you can compress down to just those features. But the bandwidth seems low enough to me, that I don't see why you need to work that hard.

-- glen

Reply to
glen herrmannsfeldt

The OP did not say what he wanted to compress down to, true -- but your small may be his huge.

My gut feel is that you can accurately reproduce an ECG signal with less than 100Hz of bandwidth, but without knowing the OP's needs, who can say if that's small enough?

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

I've never seen a time when comp.compression had much useful activity, but from the postings I see there the people frequenting the group seem to understand the topic pretty well.

I'm not sure the OP needs that particular group since he seems to have picked his compression algorithm. I don't know how that particular algorithm works, so I can't comment on how easy it would be to implement in an FPGA. Perhaps comp.compression can help him understand how to implement it.

I would bet it is like SONAR work. They aren't likely to be willing to give up much fidelity because of fear of not conveying some information. Military SONAR doesn't compress the signal even when sending over expensive satellite links because there is no way to compress noise and most of the signal is noise. Compress that and you lose the opportunity to pull weak signals from it.

If you could isolate the important features for compression you are just one step away from reading the EKG and eliminating the radiologist or whoever interprets those things. I expect they aren't interested in letting that happen.

--

Rick
Reply to
rickman

(snip, I wrote, regarding EKG signals)

(I always heard it as EKG, though I don't know why)

(snip)

OK, so 100 Hz of bandwidth means 200 samples/s at the Nyquist rate; say 8-bit samples are enough, and 10 s for total length, so 200 x 8 x 10 = 16000 bits. So, it won't fit in a tweet, but it is small enough for just about everything else.

-- glen

Reply to
glen herrmannsfeldt

Many people would be interested in that. But development and, especially, certification of such a device would cost far more than that of a 'simple' ECG monitor. Such a diagnostic device would have to be proven in probably years of clinical studies. For an ECG monitor it is enough to comply with existing standards, as it is proven technology.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail) 

When the going gets tough, the tough go shopping.
Reply to
Stef

What bandwidth is required depends on the application. IIRC, the standard bandwidth for 'normal' ECG is 150 Hz. For that you would need at least 300 SPS, and I believe 500 SPS is commonly used. But in some conditions higher rates are required.

For some applications, 10 seconds may be enough. But in others you would want to see an entire 24h period, looking for rate variations and other abnormalities.

And resolution? 8-bit may be enough in some applications to send the filtered result. Acquisition is often done at 24-bit, but this usually includes a large DC offset.
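To put numbers on why the 24 h case needs compression, here is a quick back-of-the-envelope calculation in Python, assuming a single channel at the 500 SPS / 24-bit figures above (a sketch only; real systems may record multiple leads):

```python
# Rough data volume for 24 h of single-channel ECG at the rates
# mentioned above: 500 samples/s, 24 bits (3 bytes) per sample.
sps = 500
bytes_per_sample = 24 // 8
seconds_per_day = 24 * 3600
total_bytes = sps * bytes_per_sample * seconds_per_day
print(total_bytes)   # 129_600_000 bytes, i.e. roughly 130 MB per channel
```

Multiply by the number of leads and that is a painful upload over a slow link, which is presumably the OP's motivation.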

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail) 

The meek shall inherit the earth -- they are too weak to refuse.
Reply to
Stef

Exactly, this algorithm is needed for those cases where you need to record 24 h worth of the signal and then upload it, which would take too much time with no compression or with lossless compression.

I just want to be clear about this, so basically, this algorithm has to:

1. Sample the (analog) ECG signal and pick out every cardiac cycle.
2. Decompose every cardiac-cycle signal using an orthogonal polynomial basis (Legendre polynomials, for example).
3. Save the most relevant coefficients of this decomposition; this is the compression part. (These coefficients will be used to recreate the signal.)

And all that must be done on an FPGA. And about comp.compression, I don't know which one you are referring to, because I only found one Google group.
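For what it's worth, steps 2 and 3 can be sketched in a few lines of Python using NumPy's Legendre helpers, which is handy for prototyping before any HDL work. The function names, coefficient count, and toy waveform here are all illustrative assumptions, not part of any spec:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def compress_cycle(cycle, n_coeffs=50):
    """Least-squares fit of one cardiac cycle onto the first n_coeffs
    Legendre polynomials; the coefficients are the 'compressed' form."""
    # Map the cycle's sample positions onto [-1, 1], the natural
    # domain of the Legendre basis.
    x = np.linspace(-1.0, 1.0, len(cycle))
    return leg.legfit(x, cycle, deg=n_coeffs - 1)

def decompress_cycle(coeffs, n_samples):
    """Evaluate the Legendre series to reconstruct the cycle."""
    x = np.linspace(-1.0, 1.0, n_samples)
    return leg.legval(x, coeffs)

# Toy example: a narrow smooth pulse standing in for one cardiac cycle.
t = np.linspace(0.0, 1.0, 200)
cycle = np.exp(-((t - 0.4) ** 2) / 0.005)     # an "R-wave-like" bump
coeffs = compress_cycle(cycle, n_coeffs=50)   # 200 samples -> 50 numbers
approx = decompress_cycle(coeffs, len(cycle))
err = float(np.max(np.abs(cycle - approx)))   # reconstruction error
```

Dropping trailing (smallest) coefficients trades compression ratio against reconstruction error; how many are "relevant" is exactly the clinical question the thread raises.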


Reply to
Weiss
[ECG compression]

Why? That seems like a poor approach. It will be much simpler and cheaper to do it in software.

--
These are my opinions.  I hate spam.
Reply to
Hal Murray

OK, so where is the question?

I'm not sure what to say. comp.compression is a newsgroup just like comp.arch.fpga. I only suggested that group in case you were not familiar with the math behind the compression method. If you understand the algorithm then implementation is the next step. I take it you are not familiar with FPGAs? What exactly do you need help with?

--

Rick
Reply to
rickman

I am pretty sure that 24 bits is too much. While I do record audio in 24 bits, I am already pretty sure that many of the bits are noise, such as from the amplifiers. There has to be a lot more noise in the EKG signal.

OK, but besides the time to upload, it will take too long for anyone to look at. The compression system will naturally have to find the similarities between cycles and factor them out.

You also have to remove most of what is below the thermal noise level, as that won't compress at all.

So, what should be left is the difference between cycles, as a function of time, which should be exactly what one wants to know.

Different cycles will have different lengths. Can that be factored out? (Resampled so that all have the same length, while also storing the actual length?)

Also, normalize the amplitude (vertical scale), again saving the actual values. (I don't know if either the period or amplitude are important in actual analysis.)

Comp.dsp people tend to like sinusoids, but other transforms are fine, too.

I would first, after resampling and normalizing, compute the mean and subtract that from all of them.

Seems to me that at this point, you need to know exactly the features that are actually important. The compression needs to explicitly extract those features on each cycle. Once that is done, it should be easy to show exactly how those vary over time, which is the only reason I can see for wanting 24h of data.
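The resample/normalize/mean-subtract preprocessing described above can be sketched in Python. This is purely illustrative; the function name, fixed grid size, and linear-interpolation resampling are assumptions, not from any real ECG toolchain:

```python
import numpy as np

def normalize_cycles(cycles, n_samples=128):
    """Resample each detected cycle to a common length, normalize its
    amplitude, then subtract the across-cycle mean so that only the
    cycle-to-cycle differences remain to be compressed."""
    resampled, lengths, scales = [], [], []
    for c in cycles:
        c = np.asarray(c, dtype=float)
        lengths.append(len(c))                  # keep the true period
        # Linear-interpolation resample onto a fixed grid.
        x_old = np.linspace(0.0, 1.0, len(c))
        x_new = np.linspace(0.0, 1.0, n_samples)
        r = np.interp(x_new, x_old, c)
        scale = float(np.max(np.abs(r))) or 1.0  # guard a flat cycle
        scales.append(scale)                    # keep the true amplitude
        resampled.append(r / scale)
    resampled = np.array(resampled)
    mean = resampled.mean(axis=0)               # the "template" beat
    residuals = resampled - mean                # what still needs coding
    return mean, residuals, lengths, scales
```

The stored lengths and scales let the decoder undo the normalization, which matters if period or amplitude turn out to be diagnostically important.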

There is a comp.compression usenet group, but comp.dsp might be a better choice.

Why does it have to be on an FPGA?

Not that it is a bad idea, but the exact reason can affect the best way to do the design.

-- glen

Reply to
glen herrmannsfeldt

Yes, but it could be done in a soft processor on the FPGA.

I think by now a small (relatively) FPGA is cheaper than many other processors, along with the support circuitry needed.

Especially if the hardware design needs to be done before the rest of the logic is spec'ed. (Or needs to be able to be easily changed for updates.)

But I suspect it is a project for an FPGA class.

I don't believe that should disallow soft processors.

-- glen

Reply to
glen herrmannsfeldt

I would not say it is cheaper for all cases. The devil is in the details. Yes, many things can be done in a soft core in an FPGA, but whether it is the best way depends on many factors. Before making any sort of judgement on this I'd like to know why the OP thinks the FPGA is needed.

Again, that depends. Let's hear the reasons and then discuss the validity or other options.

--

Rick
Reply to
rickman

Yes, it's a project, so I must use a compression algorithm based on orthogonal polynomials, and it must be done on an FPGA.

In my opinion, it would have been better if I were able to use Matlab and do a software solution.

Also, I'm not familiar with the mathematical portion of the project, and my knowledge of FPGAs is basic (from VHDL courses), which is why I'm trying to get some insight from more experienced people in the electrical engineering field.


Reply to
Weiss

(snip, I wrote)

I like FPGAs for these problems, as you can get them to run very fast. You might be able to filter/compress input at 100 MHz or so. But that speed is completely wasted on a 300 Hz input rate.

You should still debug the algorithms in matlab before implementing them.

Unless there are rules against it, I would do it with a soft processor inside the FPGA, along with the other logic needed for I/O. You need to interface with the A/D converter and whatever the data is going out through.

Well, as I said debug with matlab until you understand the math.

You should have one sample data stream to work with.

If you can't use a soft processor, my favorite way of doing these problems in FPGAs is with systolic arrays. There should be some literature on them.

Otherwise, see the suggestions from the previous post.

-- glen

Reply to
glen herrmannsfeldt

FPGAs also run slow designs pretty well too, lol.

YES! No point at all in trying to implement an algorithm before you completely understand it and have it working in something like Matlab. At least that is the way we would do it on a work project rather than a school project.

I don't agree with this. A soft processor is a level of abstraction that is likely not needed or useful unless there are problems fitting the logic into the FPGA. It also will be slow to simulate, since you are in essence running a CPU emulation in the HDL simulator.

Yes, your Matlab run will process the sample stream and produce identical results to the HDL simulation of your VHDL code. Then I suggest you design a way to run the same data through the FPGA to make sure it is working like the simulation. Finally connect your ADC(s) and process real data if that is part of your project.

A systolic array will in essence create a logic function for each operator in the algorithm and process one data sample through the pipeline on each clock. That may be overkill here, but I don't know what is involved in running this algorithm. Depending on the math needed, this may end up being a daunting project.
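Rick's one-operator-per-cell, one-sample-per-clock description can be modeled behaviorally before writing any HDL. Below is a toy clock-by-clock Python model of a transposed-form systolic FIR, purely illustrative (a real implementation would be VHDL with one MAC cell and one pipeline register per tap):

```python
def systolic_fir(samples, taps):
    """Transposed-form systolic FIR model: one multiply-accumulate cell
    per tap. Every 'clock' each cell adds tap * current_sample to the
    partial sum arriving from its upstream neighbor's register."""
    n = len(taps)
    partial = [0.0] * n              # one pipeline register per cell
    out = []
    for x in samples:                # one clock tick per input sample
        new = [0.0] * n
        for i, h in enumerate(reversed(taps)):
            upstream = partial[i - 1] if i > 0 else 0.0
            new[i] = upstream + h * x
        partial = new
        out.append(partial[-1])      # the last cell emits y[k]
    return out

# Impulse through a 2-tap filter reproduces the taps, as expected:
y = systolic_fir([1, 0, 0], [2, 3])   # -> [2.0, 3.0, 0.0]
```

Because every cell does one fixed operation per clock, the structure maps one-to-one onto FPGA DSP slices; that is the appeal, and also why it may be overkill at ECG sample rates.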

--

Rick
Reply to
rickman
