GPS satellites

Another project to get around to someday. I'm currently using an RTL-SDR dongle to pick up ADS-B information.

Reply to
rbowman

I wasn't doing averaging, but the GPS was. I connected it to a PC program that showed the readings on a map. The position mostly stayed in a small area, but once in a while it would take a trip, up to some 100 feet away from the location. An excursion like that would last a significant part of the 8 minutes and would cause noticeable error in data collection with a short-term average. There was no reason to suspect satellite positioning, as the excursion was too short; sats take hours to move across the sky. I think their orbit is half a day, no?

I also found a location that simply would not register a solid reading; it seemed to wander around by hundreds of feet. This was on a ridge, potentially with line of sight, although at some 10 miles' distance, to Camp David. Word is they use GPS spoofing in the area at times. I guess that can't be confirmed very easily.

GPS modules could be bought for $25 the last time I looked. One could be connected to an inexpensive data logger and left for a day or more. Plot the data on a map and you will see a drunkard's walk. From my observations, the excursions will be minimal.
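
For anyone who wants to try it, a minimal logging sketch in Python, assuming a serial NMEA module and the third-party pyserial and pynmea2 packages; the port name and baud rate are placeholders:

    import serial    # pyserial
    import pynmea2

    # Log 1 Hz GPS fixes to a CSV file for later plotting on a map.
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port, \
            open("fixes.csv", "w") as log:
        log.write("time,lat,lon\n")
        while True:
            line = port.readline().decode("ascii", errors="replace").strip()
            if not line.startswith("$GPGGA"):
                continue                  # the GGA sentence carries the fix
            msg = pynmea2.parse(line)
            if not msg.gps_qual:
                continue                  # no fix yet
            log.write(f"{msg.timestamp},{msg.latitude},{msg.longitude}\n")
            log.flush()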

Reply to
Ricky

Roughly that, yes. It's not a geostationary or geosynchronous orbit.

Position "wander" (GPS drift) can be caused by a whole bunch of phenomena. The signal path from any given satellite to the receiver is going to be affected by multipath - e.g. signal reflections from buildings, trees, airplanes, the ground, and so forth. As the satellite moves, the multipath behavior will change. This creates an effect similar to audible "picket fencing" in a VHF-FM signal.

If I recall correctly, ionospheric disturbances can also perturb the signal. The ionosphere is far from static, and if there's significant solar activity the signal propagation can change on a minute-to-minute basis.

I find it quite amazing that the GPS receiver's front end and signal processing logic can pick out a whole bunch of extremely weak signals all transmitting on the same frequency, and measure their arrival times (phases) to such a high resolution.

It would be interesting to use a single high-quality GPS antenna, with an active signal booster/splitter, and feed the split signal to a group of GPS receivers of different make/model/design, and then compare and contrast the position reports and the "which satellites were in view and which ones were used for this position report" data. For best accuracy, all of the receivers would need to be programmed correctly with the cable length (i.e. propagation delay) between antenna and receiver.

That sort of comparison might help separate site-specific issues (e.g. patterns of multipath) from device-specific ones (e.g. differences in the algorithms used by different receivers' firmware).

Reply to
Dave Platt

Chris "working reliably" has numerous definitions. The objective here is to reduce the number of readings required, ie the number of days required. At this point, I'm guessing that a lack of overhead sattelites is causing large altitude errors. Might work. One way to find out.

Hul

Chris J wrote:
> > John, I am looking at positions generated each second by one of Ublox's ...

Reply to
Hul Tytus

Thanks Lasse. I'll take a look.

Hul

Lasse Langwadt Christensen wrote:
> On Sunday, 12 June 2022 at 15.40.57 UTC+2, Chris J wrote:
> > John, I am looking at positions generated each second by one of Ublox's ...

Reply to
Hul Tytus

A GPS receiver needs four sats to get a 3D lock. A few more improve the accuracy, and they also provide redundancy. If one sat's signal is arriving by multipath, the calculations using that sat will result in significant deviations. I don't know if receivers do this, but such a sat can be removed from the calculations, improving the accuracy.

Variations due to atmospheric disruptions are minimized by WAAS and similar correction schemes. I find that without WAAS the error was typically 30 feet with larger excursions, while with the WAAS correction turned on, normal accuracy is typically better than 10 feet. Apparently the corrections are done by brute force: a stationary... station measures its location and periodically reports the error. This is broadcast by the sats as a correction based on your area.

I assume you know this is done by correlating PRN codes, which give a lot of gain in the signal. Each sat has a different PRN which looks like noise to all the other codes, so very little interference. I believe the code is 1023 chips long, so lots of gain.
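
For the curious, a sketch of where that gain comes from, using the standard C/A Gold-code generator (two 10-stage shift registers; the pair of G2 output taps selects the satellite, e.g. stages 2 and 6 for PRN 1):

    import numpy as np

    def ca_code(tap_a, tap_b):
        # One 1023-chip GPS C/A Gold code. G1 feedback taps 3,10;
        # G2 feedback taps 2,3,6,8,9,10; the (tap_a, tap_b) output
        # pair of G2 selects the satellite.
        g1, g2, chips = [1] * 10, [1] * 10, []
        for _ in range(1023):
            bit = g1[9] ^ g2[tap_a - 1] ^ g2[tap_b - 1]
            chips.append(1 - 2 * bit)          # map {0,1} -> {+1,-1}
            g1 = [g1[2] ^ g1[9]] + g1[:9]
            g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
        return np.array(chips)

    prn1, prn2 = ca_code(2, 6), ca_code(3, 7)
    print(np.dot(prn1, prn1))   # 1023: the full ~30 dB correlation gain
    print(np.dot(prn1, prn2))   # small: another sat's code looks like noise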

As long as they all see the same delay, it won't make a difference.

We evaluated small, GPS circuit boards for use in a product once and came down to two candidates, so we lab tested them using an external antenna. One worked just fine with some seconds to 1st lock (20 sec maybe). The other brand never got a lock. The vendor didn't care enough to find out why their units weren't working.

Reply to
Ricky

Did you also record the HDOP and VDOP figures from the GPS?

DOP captures how the satellite geometry propagates the signal timing errors into positioning errors, depending on the specific locations of the satellites the fix relies on. If you aren't using DOP, you should be.
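
For concreteness, a sketch of how DOP falls out of the geometry, in a local east-north-up frame (the azimuth/elevation numbers here are invented):

    import numpy as np

    def dops(az_el_deg):
        # Each row of G is a unit line-of-sight vector plus a 1 for the
        # receiver clock term; Q = (G^T G)^-1 scales ranging error into
        # position error.
        az, el = np.radians(np.array(az_el_deg)).T
        G = np.column_stack([np.cos(el) * np.sin(az),   # east
                             np.cos(el) * np.cos(az),   # north
                             np.sin(el),                # up
                             np.ones(len(az))])         # clock
        Q = np.linalg.inv(G.T @ G)
        hdop = np.sqrt(Q[0, 0] + Q[1, 1])
        vdop = np.sqrt(Q[2, 2])
        return hdop, vdop

    # All sats high overhead: the up and clock columns are nearly
    # identical, so VDOP blows up, i.e. large altitude errors.
    print(dops([(0, 80), (90, 75), (180, 85), (270, 70)]))
    # Sats spread down toward the horizon: both figures improve.
    print(dops([(0, 15), (90, 20), (180, 15), (270, 60)]))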

Reply to
Clifford Heath

It's usually the other way around: several overhead sats and not so many close to the horizon, since low-elevation sats are often blocked by structures or terrain. Elevation accuracy is inherently poorer than the map coordinates. Having low-elevation sats helps that issue; the sats overhead alone are not enough.

GPS measures the timing reported by the sats in a relative manner only. This creates a 3D hyperboloid for each pair of sats. The intersections show the location. The best accuracy comes from sats spread around, so the axes of the hyperboloids are not so close to one another. Only once you calculate your position can the actual time be determined, from the reported times and the now-known path delays.
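
A sketch of the standard solution, for anyone curious: rather than intersecting hyperboloids directly, receivers linearize the pseudorange equations and solve for x, y, z and the clock bias together. The satellite positions and pseudoranges below are toy values:

    import numpy as np

    C = 299_792_458.0   # speed of light, m/s

    def solve_fix(sat_pos, pr, iters=8):
        # Gauss-Newton on pr_i = |sat_i - p| + b. No iono/tropo/weighting,
        # just the geometry. Returns ECEF position (m) and clock bias (s).
        p, b = np.zeros(3), 0.0
        for _ in range(iters):
            vecs = sat_pos - p
            rng = np.linalg.norm(vecs, axis=1)
            G = np.column_stack([-vecs / rng[:, None], np.ones(len(rng))])
            dx, *_ = np.linalg.lstsq(G, pr - (rng + b), rcond=None)
            p, b = p + dx[:3], b + dx[3]
        return p, b / C

    # Toy check: 6 made-up sats, a known point, and a 1 ms clock offset.
    rng0 = np.random.default_rng(0)
    sats = rng0.normal(size=(6, 3))
    sats = 26_600_000.0 * sats / np.linalg.norm(sats, axis=1)[:, None]
    truth = np.array([1113194.0, -4842853.0, 3985893.0])
    pr = np.linalg.norm(sats - truth, axis=1) + C * 1e-3
    print(solve_fix(sats, pr))   # recovers truth and ~1e-3 s

Note that the clock bias falls out of the same solve, which is the "only once you calculate your position can the actual time be determined" point.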

Reply to
Ricky

VSOP is really for very accurate solar system object positions. The trouble with multibody gravitational effects is that they cause the orbital elements of each component to evolve with time.

Historically when computer controlled scopes first became available the planets were easy enough but the moon was well beyond what they could fit in the firmware - too many perturbations and the scope would almost never point at the largest object in the night sky!

You should be able to get away with something much cruder for taking satellite orbital elements to actual positions. The tedious bit will be obtaining the relevant orbital elements every couple of weeks.

You are, after all, only interested in the number of birds within a given zenith angle at your location.
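
If Python is on the table, the third-party skyfield package wraps SGP4 and does the TLE-to-topocentric chore directly; a sketch (the Celestrak URL and observer coordinates are my placeholders):

    from skyfield.api import load, wgs84

    ts = load.timescale()
    sats = load.tle_file(
        "https://celestrak.org/NORAD/elements/gp.php?GROUP=gps-ops&FORMAT=tle")
    observer = wgs84.latlon(39.0, -77.0)    # hypothetical site

    t = ts.now()
    count = 0
    for sat in sats:
        alt, az, dist = (sat - observer).at(t).altaz()
        if alt.degrees > 30.0:              # zenith angle under 60 degrees
            count += 1
    print(count, "GPS birds above 30 degrees elevation")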

I have a spreadsheet that can take classical orbital elements for solar system objects and turn them into x,y,z and thence to RA & Dec. It is more intended for amateur astronomers doing comet hunting though.

Reply to
Martin Brown

Do you have two examples of the raw data good and bad as CSV files?

I'd be interested to take a quick look (although it will be July before I have any slack time for interesting look see type things).

My instinct is that switching from mean to median would go a long way to solving your immediate problem by weighting down the sporadic outliers.

Averaging is only helpful against Gaussian distributed noise and my instinct is that your noise is decidedly not that friendly.
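
A quick illustration of that point, with made-up numbers: a short excursion of the kind described upthread drags the mean but barely touches the median.

    import numpy as np

    rng = np.random.default_rng(1)
    fixes = rng.normal(0.0, 3.0, 480)   # 8 minutes of 1 Hz fixes, ~3 m noise
    fixes[100:110] += 30.0              # one brief 30 m excursion

    print("mean:  ", fixes.mean())      # pulled toward the excursion
    print("median:", np.median(fixes))  # barely moves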

Reply to
Martin Brown

Martin, while seeking "Astronomical Algorithms" I bumped into "Sun Position" by someone named Craig, a British fellow apparently. The claim was code for accurately finding the sun's position, in 6 languages! I had hopes Craig was using the current VSOP procedure which you mentioned. The interest in the sun's position came from a different project and is not part of the GPS efforts I've described. What's needed now is a description of the procedure for deriving a satellite's position at a given time, or an example of such code, such as "Predict" on qsl.net. Any suggestions?

Hul

Martin Brown wrote:
> > Martin, the VSOP methods sound like what I was after, thanks for mentioning ...

Reply to
Hul Tytus

I already responded with a pointer to this:

formatting link

Contact the author of that package and ask if they're interested in helping implement that?

CH

Reply to
Clifford Heath

Martin by "raw data" you're refering to data recieved each second of the 8 minute sampling period each is not recorded but included in the average. CSV is a term unknown to me, but that's probably due to memory and/or lack off access to the source code or attending documentation. However, the position of the satellites & some other data is recorded upto, I think, 64 entries. Therafter the oldest is replaced by the newest. That is held in RAM and is consequently lost when power goes. The results, ie the positions, are recorded by hand. Along the lines you've mentioned regarding methods of averaging, especially those avoiding the "outliers" here is a possible scheme: record 1000 positions on each new position calculate the average remove the most distant entry and place the new in it's place

Hul

Martin Brown wrote:
> > Martin, the readings I've recently taken are formed by waiting about 2 minutes ...

Reply to
Hul Tytus

CSV = comma separated variables. I was hoping that maybe the device could output an ASCII raw data dump.

time, x, y, z (or lat, long, height)

One thing to bear in mind is that the unit is mostly concerned with obtaining a latitude & longitude for the observer, so in marginal signal or constellation situations it will tend to put the observer onto the default oblate spheroid of the Earth's surface (or some other, perhaps more detailed, internal topographic map).

Keeping a running mean and variance for the length of buffer that you have, and then ignoring any values more than 3 sigma away from the mean, is one sort of quick and dirty heuristic I have seen used in realtime data acquisition on crude, not very powerful hardware with (very) noisy data.

Basically it computes mean and variance of the original buffer and then mean and variance of the modified dataset. Keeping track of number of samples actually used. Rinse and repeat.
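
That heuristic, as I read it, in a few lines of Python:

    import numpy as np

    def three_sigma_mean(buf):
        # Mean and sigma of the raw buffer, discard anything more than
        # 3 sigma out, recompute on what survives. Returns the cleaned
        # mean and the number of samples actually used.
        mu, sigma = buf.mean(), buf.std()
        kept = buf[np.abs(buf - mu) <= 3.0 * sigma]
        return kept.mean(), len(kept)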

If you have the entire dataset at once then after the raw data are all acquired you can do it better in post processing.

Reply to
Martin Brown

Martin, in common terms: the "terminal" used here doesn't store the data but just adds it to the average. I found a source for Meeus' Astronomical Algorithms. Thanks for mentioning it.

Hul

Martin Brown wrote:
> > Martin, by "raw data" you're referring to data received each second of ...

Reply to
Hul Tytus

Values. Comma separated values. Variables would leave you guessing :)

You need to keep the sum of squares if you want variance. But yes, a good technique.
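
For instance, a minimal running accumulator along those lines (population variance; the names are mine):

    def make_accumulator():
        # Mean and variance from n, sum(x) and sum(x^2) alone; no need
        # to store the samples themselves.
        n, s, s2 = 0, 0.0, 0.0
        def push(x):
            nonlocal n, s, s2
            n += 1
            s += x
            s2 += x * x
            mean = s / n
            return mean, s2 / n - mean * mean
        return push

One caution: the sum-of-squares form loses precision when the mean is large compared with the spread (raw latitudes, say); Welford's running update is the usual cure if that bites.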

CH

Reply to
Clifford Heath

The simple version of this is the trimmed mean.

The more powerful approach is a Jackknife Estimator.


formatting link

Joe Gwinn

Reply to
Joe Gwinn

In rough and ready engineering terms, at my place of work it was frequently referred to as the mean whose parents were unmarried.

Doesn't really lend itself to real time computation with limited resources though. Summing a few extra terms is more easily done.

Reply to
Martin Brown

Heh.

Well, it is often used in realtime, for radar, but the computer is generally pretty capable.

For a jackknife mean, it usually means subtracting ni/N from the mean of all N samples, for each sample ni, dropping the sample with the largest effect. This is pretty fast. Keep doing this until the successive means are reasonably close to one another, where "reasonably" is domain dependent.
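
As a sketch, that procedure in Python (the convergence tolerance is domain dependent, as noted, so the value here is arbitrary):

    import numpy as np

    def jackknife_mean(samples, tol=1e-3):
        # Drop the sample whose removal shifts the leave-one-out mean the
        # most (i.e. the one farthest from the current mean); repeat until
        # successive means agree to within tol.
        x = list(samples)
        mean = np.mean(x)
        while len(x) > 2:
            worst = int(np.argmax(np.abs(np.asarray(x) - mean)))
            trial = x[:worst] + x[worst + 1:]
            new_mean = np.mean(trial)
            if abs(new_mean - mean) < tol:
                break
            x, mean = trial, new_mean
        return mean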

Joe Gwinn

Reply to
Joe Gwinn

There are failures of most schemes; the old map-and-compass days got interesting around Mt. St. Helens in May 1980. I took sightings off local peaks to locate a seismometer or two, but the big mountain was... unavailable at that time.

Reply to
whit3rd
