ELF Monitoring Station with Real Time Streaming

I am looking for a viable method to detect geomagnetic micropulsations in the 3-20 Hz range and to provide a real-time, continuous data stream via the internet.

This link provides an overview of the type of signals involved.

formatting link

There are monitoring stations at various locations globally, but the data transfer is periodic, i.e., once every 30 minutes, at least for anything I could find.

We already have the XY coils and amplifier operational, but would like to know the following.

  1. A type of low-pass filter (40 Hz cutoff) that does not impose phase distortion.

  2. The most straightforward off-the-shelf technology for streaming the acquired data in real time so it can be monitored on an internet-connected PC.

Any thoughts on this would be much appreciated.

Mark Harris

Reply to
mharris

What's the problem with that frequency? If data were batched in smaller bundles and sent more frequently (1 minute of data in a 1-second "pulse" once each minute), is that better? Or do you need *this* 1/20th of a second of data *now* -- followed by the next 1/20th of a second of data immediately thereafter?

Is the latency (30 minutes) the problem? Or, the fact that you have to "unbundle" the packed data to recreate a continuous time representation?

"Real time" has lots of different meanings to different people. At the very least, you have transport delays depending on where, on the globe, the recipient is located. At the other end of the issue, you have the fact that The Internet employs packet-switching technology so any "streamed" data is actually "chunked" -- some of the observations obviously being older than others in a particular "chunk".

Do you have the data in digitized form? Or, is it just an analog "audio" (VLF) signal that needs to be digitized, packetized and "broadcast"?

What sorts of resolution and sample rates are of interest? And, any extraordinary measures to protect against data loss (in a stream and/or "archive")?

Do you expect "recipients" to monitor the stream "live"? And, if they happen not to be "listening" at a particular time, they *miss* the data that was sent in that interval?

[Think of "internet radio" in which if you aren't listening when a particular piece is being played, you don't hear it. At all. Until some future date when the station may elect to rebroadcast it. Contrast this with viewing a web page or downloading a file where the process may pause or be interrupted -- but can be resumed or restarted at a later point in time]
Reply to
Don Y

You might want to provide a definition of phase distortion. Do you mean linear phase? Personally, though I like designing active filters, I would do the filtering digitally. Lots of linear phase FIR filters on the internet, or fire up remez and make one of your own.
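
For instance, a minimal sketch of the remez route in Python/scipy (the 500 Hz sample rate, band edges, and tap count are illustrative assumptions, and 'coil_data' is just a stand-in for your digitized samples):

  import numpy as np
  from scipy import signal

  fs = 500.0       # assumed sample rate, Hz
  numtaps = 101    # odd length, symmetric taps -> exactly linear phase

  # Equiripple linear-phase low-pass: passband 0-40 Hz, stopband 60 Hz up.
  taps = signal.remez(numtaps, [0, 40, 60, fs/2], [1, 0], fs=fs)

  coil_data = np.zeros(1000)   # stand-in for the digitized sensor stream
  filtered = signal.lfilter(taps, 1.0, coil_data)

Keep in mind that "linear phase" means a constant group delay of (numtaps-1)/2 samples (here 50 samples, i.e. 100 ms at 500 Sa/s), identical at every frequency -- no phase *distortion*, but not zero delay either.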

Read up on netcat. It is a very generic way to stream data around the internet. You can just pipe in/out of it.
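
As a toy illustration of that (the read_adc() stub, rate, and port are placeholders, and netcat flags vary by flavor):

  # sender.py -- pipe raw samples through netcat:
  #   python sender.py | nc receiver.example.org 5000
  # and on the receiving end:
  #   nc -l 5000 | your_monitor_program
  import struct
  import sys
  import time

  RATE = 500  # assumed sample rate, Hz

  def read_adc():
      # Hypothetical stand-in for your digitizer; returns one
      # signed 16-bit sample of the coil signal.
      return 0

  while True:
      sys.stdout.buffer.write(struct.pack('<h', read_adc()))
      time.sleep(1.0 / RATE)

Netcat neither knows nor cares what the bytes mean; in practice you'd batch samples rather than write them one at a time.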

Reply to
miso

Intervals would not give the desired results. The data stream needs to be continuous so as to interact in real time with equipment on the receiving end.

Yes, but we are talking subaudio frequencies here. You can stream live music with reasonable fidelity. I would think ELF, requiring fewer sample points, should be easier, not harder.

The signal, as acquired from the sensors, is a complex, non-repetitive analog waveform with a bandwidth of 2-30 Hz.

That could come later. We need to get something up and running first for proof of concept.

This is for live monitoring or recording. If no one listens, the signal can be recorded if necessary.

I hope that clarifies what we are after.

Mark Harris

Reply to
mharris

You haven't defined what you mean by "real time". Any signal sent over the Internet will have an undefined latency. How much delay you can stand will tell if this is a workable plan.

There is also the format you choose to send the data. Normally, packets are 1500 bytes long. If you just take one sample and push it out, you are wasting lots of bandwidth.

There is also the question of point-to-point or multicast to consider. Will one sensor talk to just one receiver, or to many receivers?

Reply to
tm

What would happen if it was continuous but delayed by 200ms with occasional missing chunks?

Would 15 seconds delayed, but continuous, be better?

It's not really live. It's just extremely recent.

--
100% natural 

--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
Reply to
Jasen Betts

Understood. Of course, the point of my "packet switch" comment was to draw attention to the fact that *all* Internet traffic has some underlying "interval" -- it's not a "continuous signal" in the same sense that the signal from your sense coils is.

You are hoping for a COTS technology that recreates the *illusion* of a continuous signal -- shifted in time (and space). E.g., "internet radio".

You haven't *said* what resolution and sampling rate you require for your data. Would *one* bit at 40 Hz suffice?

Also, while your problem superficially resembles "streaming audio", you may find that most COTS "solutions" treat your signal as "DC".

I.e., were you expecting to feed your signal into a "sound card" ("line input" on a PC) and hope some software would sample, packetize and "broadcast"/multicast that over the network? IIRC, most audio processing is AC coupled (to be fair, I haven't looked in detail at the front-end of a "sound card" in many years, so no idea how things are done today). If that's the case, then your 3 Hz is going to be invisible to the hardware/software in that COTS solution.

[use your signal to modulate a carrier up into the middle of the audio band? or, some DC-coupled front end??]

Yes, I saw that in your post. My point was, do you *process* that signal in any way? I.e., if you've already digitized it, then how it is treated thereafter is a lot different than if it was still in the analog domain.

I admit to only a cursory examination of the URL you provided. But, didn't that URL *prove* this to be possible?

Sorry, I don't mean to pick nits... consider the "recorder" as a "listener". If that listener sits on the internet *anywhere*, then this is very different than if the listener sits at the output of your *sensor*.

I.e., once you're "on the wire", you have to deal with how that "broadcast" can be interrupted, corrupted, etc. With no means for retransmission, if something interrupts the broadcast, then *all* listeners lose that signal -- even the "recorder".

Closer. But, I think you still have a lot of blanks you will have to fill in. :<

As The Internet is a packet switched technology, it is inherently discontinuous. So, when "live audio" is "streamed", what is really happening (lots of hand-waving follows) is:

- the analog signal is digitized (iff live) at some resolution/rate

- these samples are assembled into "packets"

- the packets are "broadcast" down the network AS BANDWIDTH PERMITS

- receiver(s) pull packets off the network AS THEY ARRIVE

- SOME NUMBER OF PACKETS ARE ENQUEUED IN THE RECEIVER

- packets are dequeued in the receiver and presented to the audio out at some resolution/rate (related to original sample resolution/rate)

Each of these steps takes time. This translates to a temporal skew between when the original "audio" was presented to the transmitter and when it is reproduced at the receiver.

[Two different receivers may opt to reproduce the signal at different times relative to the input source! I.e., one device may opt to enqueue 10,000 samples while another enqueues 20,000 samples. As a result, the *latency*/skew manifested in the output of the first is *potentially* half that of the second. The queue sizes can be different to reflect how well a given receiver can "guarantee" that it can get around to processing the data in a timely manner]

Each step can have lots of "jitter". I.e., you might *assume* that if 100 samples fit into a single packet, these packets would be transferred onto the network at 1/100 the sample rate. In reality, that's only the AVERAGE rate that they will be injected. There might be a lapse corresponding to 500 sample times with NO packets sent. This may then be followed by five 100-sample packets sent in rapid succession.

Similarly, at the receiving end, a receiver might encounter these same five packets as two arriving in rapid succession... followed by an idle period corresponding to *10* nominal inter-packet times... followed by one packet... then another... then the fifth... then a group of four... etc.

The receiver enqueues these so that there is "always" data ready for the "audio out" that MUST happen at a rate related to the original sampling (otherwise there would be a "dropout" in the signal in those periods where there was a longer than average delay between packets).

If your transmitter is in the next room (on the same physical network) from your receiver, you can probably keep the buffers (queues) very short -- assuming you can keep other traffic off the network. Short buffer means short overall latency.

OTOH, if you're trying to stream signal from the other side of the globe and this has to pass through many gateways and intermediaries to get from "there" to "here", then each introduces uncertainty in packet delivery time (instantaneous interpacket delay). So, you want a longer buffer to "carry you over" in these potentially long periods of "no packets".

Long buffer means longer skew between input event and output reproduction. (Queuing Theory 101)

At the same time, the transmitter has to be able to deal with the case where there isn't enough *instantaneous* bandwidth to push the next packet onto the wire. I.e., if the network is "busy" when it tries to push out a packet, it needs to be able to store the *new* input samples in another "empty" packet while it waits to get rid of the previously filled packet. So, you want a queue in the transmitter, as well!

[There are things you can do to keep this short]

Finally, the receiver needs to be able to recognize that a packet got *dropped*. So, when it comes "time" for the contents of that packet to be routed to the audio output, it knows to insert "silence" for one packet-time (100 samples in this example). Does this mean "maintain the value of the last sample" throughout this period? Or "drive the output to '0'"? What happens when the signal "returns" after such a silence? Will your ancillary equipment see this as a giant "step function" and think there's been an earthquake?? (gee, the signal went from a nice, steady '0' to a '27' instantaneously!)
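
To make the sequence-number and silence-insertion bookkeeping concrete, here is a deliberately naive receiver sketch (the packet layout, port, and play() routine are all invented for illustration):

  import socket
  import struct

  SAMPLES_PER_PKT = 100
  PKT_FMT = '<I%dh' % SAMPLES_PER_PKT   # 32-bit sequence no. + 100 int16 samples

  def play(block):
      # Hypothetical stand-in for the audio/DAC output stage.
      pass

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(('', 5000))

  expected = None
  while True:
      data, _ = sock.recvfrom(4096)
      seq, *samples = struct.unpack(PKT_FMT, data)
      if expected is not None and seq < expected:
          continue   # late or duplicate packet; its slot was already filled
      if expected is not None and seq > expected:
          # Packet(s) lost: emit one packet-time of "silence" per gap so
          # the output timebase stays continuous (hold-last-value is the
          # other common policy, per the step-function caveat above).
          for _ in range(seq - expected):
              play([0] * SAMPLES_PER_PKT)
      play(list(samples))
      expected = seq + 1

A real receiver would also enqueue several packets before starting playback -- the jitter buffer discussed above -- rather than playing each packet as it lands.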

I.e., it's not like running a really long *wire*...

HTH,

--don

Reply to
Don Y

I mean zero group delay. Here is a reference (see page 9), but it looks a bit "exotic".

formatting link

Here is a bit more.

formatting link

FIR filters are a pretty specialized area, in terms of design, and I note some FIR filters are not suitable for low-pass use.

I would need to see something that had been already implemented and work backward from there. Where would this methodology be in existing use?

This looks interesting. The object is to have a near-instantaneous replica of the sensed signal at the receiving end.

Mark Harris

Reply to
mharris

On a sunny day (Tue, 29 Oct 2013 16:08:59 +1100) it happened snipped-for-privacy@comprodex.com wrote in :

FM modulate a few kHz carrier with the 40 Hz signal.
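
A sketch of that, with arbitrary carrier and deviation numbers (the receiver would FM-demodulate to recover the DC-40 Hz waveform, which is how the sub-audio content survives an AC-coupled audio path):

  import numpy as np

  fs = 8000.0    # audio sample rate (assumed)
  fc = 2000.0    # carrier, comfortably inside the audio band
  kf = 500.0     # frequency deviation per unit of input (assumed)

  def fm_modulate(x):
      # x: baseband coil signal (DC-40 Hz), scaled to +/-1.
      # Instantaneous frequency is fc + kf*x; integrate for phase.
      phase = 2 * np.pi * np.cumsum(fc + kf * x) / fs
      return np.sin(phase)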

One could use, for example, a Raspberry Pi with an ADC, or some USB soundcard, to connect to the net. Send it as a wave stream. There are cleverer ways, actually: compress the data...

Still, there will be network delays etc., but buffering at the receiving end should give you a good, steady output. The buffering delay will stay.

Reply to
Jan Panteltje

I think you need to rethink your plan or decide how much lag is OK. Even fastpath ping latency across the net is 10 ms, so you can't expect to send packets in real time without buffering at the receiving end.

It would be a lot more efficient to send a standard-sized packet of about 1500 bytes at a time than to send each sample individually. Path MTU will affect what gets through reliably without fragmentation.
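
For illustration, one way to fill MTU-sized datagrams (the layout, host, and port are made up; at 2 bytes per sample, 700 samples plus a small header stays comfortably under 1500 bytes):

  import socket
  import struct

  SAMPLES_PER_PKT = 700   # 700*2 + 4 header bytes + IP/UDP overhead < 1500
  DEST = ('receiver.example.org', 5000)

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  seq = 0
  buf = []

  def submit(sample):
      # Call once per acquired sample; ships a datagram when full.
      global seq
      buf.append(sample)
      if len(buf) == SAMPLES_PER_PKT:
          sock.sendto(struct.pack('<I%dh' % SAMPLES_PER_PKT, seq, *buf), DEST)
          seq += 1
          buf.clear()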

A modified sound card, DC-coupled to your sensor, might be one way to proceed. The streamed music is buffered and played with a delay.

DC to 30 Hz would be more practical. A simple tweak to the input circuitry, provided that you have a steady hand and design the signal source right.

I think you need to define what you mean by "live" here. ISTM that you can't hope to get better than within 1 s or so by the time the stuff has been digitised, packetted up, routed across the net to a remote station, buffered, and decoded. You might manage 0.5 s lag, but then you risk buffer underrun if a single packet gets dropped in transit.

The slightest network congestion and you are toast!

It would be trivial to do on a *local* Ethernet or USB connection.

--
Regards, 
Martin Brown
Reply to
Martin Brown

How instantaneous? And how do the receivers know or care? (I suppose you want that zero-group-delay filter to have zero time delay as well?)

What is this, some sort of distributed ELF-radar-over-net? Overly elaborate lightning tracker?

In almost all cases, you're fooling yourself if you think something must be done "near instantaneously", especially if you're already asking about signals that don't vary on timescales faster than 32 milliseconds!

Tell us more, don't be shy. When asking questions, you can never tell us too much...

I say, do what the radio observatories do: record an observation period, mark it with accurate timestamps, and play it back as needed. Recordings from different stations can be played back and compared, not in real time, but lined up after the fact (within seconds or years).

Easiest way to implement that over the net would be a proprietary protocol (unless one already exists..) where a data series is sent, consisting of the samples, plus the current system time at the start and end of that series. This works ONLY if ALL systems are actively synchronized to the same reference (e.g. NIST) via internet time: hopefully, within a few milliseconds.
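
One possible shape for such a record (the layout is invented for illustration; the two timestamps are the important part, and they are only as good as the NTP discipline of the sending host's clock):

  import struct
  import time

  RATE = 500.0  # assumed sample rate, Hz

  def make_record(samples):
      # samples: list of int16 values just acquired, oldest first.
      t_end = time.time()                    # NTP-disciplined system clock
      t_start = t_end - len(samples) / RATE
      return struct.pack('<ddI%dh' % len(samples),
                         t_start, t_end, len(samples), *samples)

The receiver unpacks the two doubles and can place every sample on an absolute timeline, regardless of when -- or in what order -- the records arrive.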

Tim

--
Seven Transistor Labs 
Electrical Engineering Consultation 
Website: http://seventransistorlabs.com
Reply to
Tim Williams

You absolutely have to do this for internet transmission since there is no guarantee that the packets will arrive at their destination in the same order as they were sent. Any monotonically increasing counter will do for ordering provided that you have standardised sampling intervals.

Depending on how precisely it is needed, the timestamps for radio-astronomy VLBI are typically derived from a local H-maser reference synched with national time standards. This makes the search for the white-light fringe a tractable problem for the offline correlators.

A toy version of long-baseline VHF interferometry at 151 MHz was done using portable rubidium clocks and MSF Rugby as the reference sync timebase in the late 70's. They also found that the Rugby 60 kHz signals ran slow in the morning when there was heavy dew on the ground!

GPS time would get you a much tighter and cheaper timestamp today.

--
Regards, 
Martin Brown
Reply to
Martin Brown

On 2013-10-29 08:39, snipped-for-privacy@comprodex.com wrote: [...]

You haven't yet explained why it's necessary to have instantaneous transmission. What does it matter if the signal arrives a bit delayed?

Jeroen Belleman

Reply to
Jeroen Belleman

This is what I was going to suggest.

In order to avoid mains (50/60 Hz) pickup, you might be interested in using some multiple of 300 Hz. Avoiding an exact relationship with the mains frequency (such as using 321.123 Hz instead) might be useful in some cases.

Anyway, if you have a stereo audio recording system, you could put the actual upconverted signal on one channel and some time-coded signal on the other channel. Thus you are able to reconstruct the events to within a few milliseconds.
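
A rough sketch of that two-channel trick (the carrier choice and tick shape are arbitrary; the signal channel would be recovered by synchronous detection against the known carrier):

  import numpy as np

  fs = 44100.0   # sound-card rate
  fc = 321.123   # carrier away from any mains harmonic, per the above

  def make_stereo(x):
      # x: baseband coil signal sampled at fs, scaled to +/-1.
      t = np.arange(len(x)) / fs
      left = x * np.sin(2 * np.pi * fc * t)   # DSB upconversion
      # Right channel: a crude 10 ms tick at each whole second
      # (relative to the start of this buffer) as a time code.
      right = ((t % 1.0) < 0.010).astype(float)
      return np.column_stack([left, right])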

Reply to
upsidedown

Thank you for all the responses so far, some quite detailed. They have clarified a number of issues for me.

I recognize there would necessarily be a variable lag in data transmission. 500 ms or less would be acceptable. Occasional drop-outs could be tolerated to speed things up.

An FIR filter (to be designed) at the receiving end would clean up the signal without introducing phase distortion. Probably better there than at the sending station.

I have a DC-modified sound card for data acquisition, so modulation is not required. The sampling rate would then be the standard 44.1 kHz.

I have also considered using a USB-connected PC oscilloscope as the input interface. This has the advantage of providing a monitor for the acquired signal, prior to digitization.

There are also a number of cheap digitizers such as "LabJack".

Additionally, because the signal is complex, its composite spectrum extends above the stated 2-30 Hz range.

The signal would be accessed by logging onto a dedicated website. There would be no inherent limit in the number of viewers.

At the initial stages of development, it would be fine to accept a few compromises so as to implement the concept with as much off-the-shelf equipment and software as possible.

There are a number of online sources of pre-recorded geomagnetic micropulsation data. The idea is to improve upon these with a near continuous, near instantaneous signal.

Any further solutions along these lines would be most appreciated.

Mark Harris

Reply to
mharris

Hey, apples and oranges here! "...live music with reasonable fidelity" is ABSOLUTELY a subjective experience.

You suddenly compare a 'listening' experience to a 'machine' control experience. Luckily, a machine responds so slowly that the delay won't destabilize a control loop too much. Take a look at the specs for 'streaming live music': usually a 20 ms latency. A pole at 20 ms will make it difficult to close/stabilize most control loops. What is 20 ms to a listening experience? Sound travels at approx. 1100 ft/sec, so that is around an 18-foot delay, the delay across a reasonably sized room. No biggie for listening, but it can be a pure catastrophe for machine control.

That said, when I read your first question I thought you were trying to monitor ELF variations in the earth's field, from various locations, while preserving absolute timing information [phase] in order to make some kind of very large phased-array receiver. Not a bad goal. It turns out that Cherenkov radiation, as measured inside those sunken mines filled with water, is being used for cosmic telescopes, so why not replace them with a 'simpler' system and track a particle while still in the upper atmosphere -- using a phased-array ELF antenna. Doing so, and 'straightening out' the data to show path [vector] and energy, could possibly create the largest telescope ever made and allow man to see further into the universe than he has ever been able to see before. Anyway, that's what I thought you wanted to do.

Control a machine? Better to first simulate some of the effects you'll get just to be sure it's all worth it.

Are you monitoring the magnetic fields emanating from a motor? Does the motor sit in one spot? If so, why the 'net connection? If you need any help making the ELF magnetic-field measuring equipment, contact me. I've designed/built portable metering that has FLAT 1 V/uT sensitivity over the range of 5 Hz to 2 MHz with something like 5 nTpp noise; and/or FLAT 0.1 V/uT sensitivity from 0.001 Hz to 2 kHz. The key here is FLAT, for processing through an ADC system. If you stood next to the street with that first meter, it used to deflect off scale whenever a vehicle went by. It only had a 2 V FS capability, and that means 2 uT, and the earth's field is around 50 uT, so you see the potential.

not to me, but I'm dense.

Reply to
RobertMacy

TIME STAMP the data and you won't be bothered by packets, missing chunks, latency etc etc.

Reply to
RobertMacy

pre-recorded?! Talk about latency, eh?

OK, so you're monitoring ELF of the earth's field. That IS DC upwards into the multi-kHz, but you knew that.

What is that resonance called, at around 7.3 Hz?

Oh, well as you monitor the earth's field, it is my understanding that it has dropped to around half what it was at the time of Christ, and continues to drop. Keep in mind that EVERY ice age had a collapse and subsequent zero crossing of the field [however, not every zero crossing has coincided with an ice age]

I think you can get live monitoring from several sites around the world, some five are specifically listed in the literature. Have you contacted geophysical institute facility at Boulder, CO for info?

May be of interest, take a look at

There are a lot of measurements of man-made noise, too.

research here:

  1. E.L. Maxwell and D.L. Stone, "Natural Noise Fields 1 cps to 100 kc", IEEE Transactions on Antennas and Propagation, Volume AP-11, Number 3, pp 339-43, May 1963.

  2. D.A. Chrissan and A.C. Fraser-Smith, "Seasonal Variations of Globally Measured ELF/VLF Radio Noise", Technical Report D177-1, Stanford University Dept. of Electrical Engineering, STAR Lab, December 1996.

  3. R. Barr, D. Llanwyn Jones, and C.J. Rodger, "ELF and VLF radio waves", Journal of Atmospheric and Solar-Terrestrial Physics, Volume 62, Issue 18, pp 1689-1718, November 2000.

  4. Kenneth Davies, "Propagation of Low and Very Low Frequency Waves", Chapter 9 of Ionospheric Radio Propagation, New York, N.Y., Dover Publications, Inc., 1966.

  5. M. Balser and C.A. Wagner, "Observations of Earth-ionosphere cavity resonances", Nature, Volume 188, pp 638-41, 1960.
Reply to
RobertMacy

Yep.

I don't know anything about VLF receivers. However, it appears to me to be the same problem as distributing seismographic data, which is also low frequency. The same methods should work. Upconvert or modulate the VLF analog signal to a higher frequency, add 1 second timer ticks to the data for sync, and transmit over whatever.

Something like these: possibly using the same methods and technology.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Realize that this is going to chew up a lot of network bandwidth, most of which is carrying frequencies that are far outside of your measurement bandwidth. You'd be transmitting (and receiving) over half a megabit per second per sensor, assuming one channel of 16-bit data at 44.1 kilosamples per second.

I'd recommend pre-filtering the data at the sending end, and then downsampling to a lower data rate. For example, if you were to use a low-pass filter with a knee somewhere between 1 kHz and 2 kHz, you could down-sample the resulting data by a factor of 10:1 and cut your network traffic by the same factor. Since you're interested only in the signal components below 40 Hz, you could use a relatively simple low-pass filter for this step - since its "knee" frequency would be more than a decade above your 40 Hz limit, even a simple filter would have relatively little effect on the phase of the signal components below 40 Hz.
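
A sketch of that pre-filter-and-downsample step in Python/scipy ('samples' stands in for the digitized sound-card stream; the numbers follow the example above):

  import numpy as np
  from scipy import signal

  fs_in = 44100   # sound-card sample rate
  q = 10          # decimation factor

  samples = np.zeros(fs_in)   # stand-in for one second of captured data

  # decimate() applies an anti-alias low-pass, then keeps every q-th
  # sample. ftype='fir' selects a linear-phase filter, so components
  # below 40 Hz see at most a constant delay -- no phase distortion.
  reduced = signal.decimate(samples, q, ftype='fir')

  fs_out = fs_in / q   # 4410 Sa/s: ~70 kbit/s at 16 bits, versus ~700

Note that decimate() works on a block at a time; for a continuous stream you'd run the anti-alias FIR statefully (e.g. scipy's lfilter with carried-over filter state) and keep every tenth output.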

Reply to
David Platt
