We see accelerometers everywhere today. I've never designed with one, but a question nags at me -
There must be tolerance and temperature errors, like in any other component. The problem isn't the acceleration (or force) reading itself, but that the velocity and position errors will integrate over time.
So am I justified in being suspicious of many of the applications?
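The questioner's worry is easy to make concrete with a quick numeric sketch (my own illustration, not from the thread) — a small constant accelerometer bias, integrated twice, turns into a position error that grows quadratically. The 0.001 g bias figure below is a made-up but plausible value, just for illustration:

```python
g = 9.81               # m/s^2
bias = 0.001 * g       # assumed constant accelerometer bias, m/s^2
dt = 0.01              # integration step, s

velocity_error = 0.0
position_error = 0.0
for step in range(int(60 / dt)):           # integrate the bias for 60 seconds
    velocity_error += bias * dt            # velocity error grows linearly
    position_error += velocity_error * dt  # position error grows quadratically

# After only a minute: velocity off by ~0.6 m/s, position off by ~18 m.
```

That's a milli-g of bias producing tens of metres of error in a minute — which is why a bare accelerometer is rarely used for position on its own.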
Accelerometers are quite noisy and have other sources of error. When they are used to build an Inertial Measurement Unit (IMU) they are often combined with other sensors (gyros, magnetometers, air pressure...). Drones do this reasonably well (outdoors, often thanks to GPS), and I have seen videos of guys attaching an IMU to shoes where you could see a 3D trace of the walk overlaid on the starting room... The required data fusion (Kalman filtering in the optimal case) is not straightforward, but it can be done.
What Pere said. Basically, if you have some redundant data then you can use Kalman filtering or other techniques to get a better measurement.
An easy example, if you don't think about orientation, is GPS merged with acceleration. The GPS gives you position with more or less zero error at DC and lots of noise at higher frequencies, while the accelerometers give good information at high frequencies with infinite error at low frequencies. If you put those together (i.e. "sensor fusion") you get a much better overall answer.
It'd be interesting to see how they stack up. "Kalman" does not equal "magic", so having a Kalman filter in your system doesn't necessarily mean that magic will happen.
A small error in the computed position of the vertical gives rise to very large, quadratically growing position errors.
If you're off by 0.1 degree, it looks like a lateral acceleration of
1 g * sin(0.1 degree) ≈ 1.7 cm/s**2. That adds up fast--in under six minutes you're off by a kilometre, and going at almost 6 m/s.
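Checking that arithmetic: a 0.1 degree error in the vertical misattributes a slice of gravity as lateral acceleration, and uncorrected, that bias integrates as x(t) = a·t²/2, v(t) = a·t:

```python
import math

g = 9.81                               # m/s^2
a = g * math.sin(math.radians(0.1))    # ~0.0171 m/s^2, i.e. ~1.7 cm/s^2

t = math.sqrt(2 * 1000 / a)            # time to drift 1 km: ~342 s (~5.7 min)
v = a * t                              # speed at that point: ~5.9 m/s
```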
Figuring out the vertical to sufficient accuracy was one of the main challenges in building accurate ICBMs, for instance. That's most of the reason for all that work on satellite geodesy back in the '50s to '70s.
George Gamow wrote a very amusing piece entitled "Vertical, vertical, who's got the vertical?" on that subject at the time. (I haven't seen it myself, but I've talked to people who have.)
I think that "Kalman filter" in this context means "phrase that will make people want to buy".
Not that I'm, like, cynical or anything.
In general a Kalman filter is effective when you (a) know what your sensors are saying, and (b) can either get redundant sensor information or can restrict the expected state in some way.
The second part is missing in this case, unless they have some specific task in mind ("gesture recognition", for instance).
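A minimal sketch of case (b), the restricted-state situation (my own illustration): estimating a constant value from noisy measurements. With no process noise and a constant state, the scalar Kalman filter reduces to a recursive average, and the estimate variance shrinks with every measurement:

```python
import random

random.seed(1)
true_value = 3.0
meas_sigma = 0.5
meas_var = meas_sigma ** 2   # measurement noise variance, assumed known

x = 0.0                      # state estimate
p = 1e6                      # estimate variance (start very uncertain)
for _ in range(200):
    z = true_value + random.gauss(0.0, meas_sigma)
    k = p / (p + meas_var)   # Kalman gain: how much to trust this measurement
    x += k * (z - x)         # move the estimate toward the measurement
    p *= (1 - k)             # uncertainty shrinks after each update
```

With a real motion model and process noise the same predict/update structure applies; this degenerate case just makes the mechanics visible.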
Yup. It's amazing, when you're merging GPS and IMU data, how quickly a Kalman filter that tracks orientation as a state gets itself pointed in the right direction. It does take some non-zero acceleration in a sufficient number of directions (all three if you're also keeping accelerometer bias as a set of states, just one sideways direction if your accelerometer is perfect).
LORAN would have worked perfectly well, except for the difficulty of getting the Ruskies to install it around all of their major military targets.
Now that you find accelerometers and gyros inside of phones, Kalman filters have gained a certain cachet. No understanding, mind you, but certainly cachet. I suspect the term is going to be abused so badly that in a while it'll be useless.
Kalman filtering is nice with noisy measurements as long as the target's motion matches the filter's model--for example, uniform speed and direction. Equally important is knowing when to reset the Kalman filter when there is a true change in direction (detected by some other means).
Judging by other posts in this thread, is Kalman filtering really a "new" thing? I used it for target acquisition more than 30 years ago.
They've been around for a while. I described the digital filtering on the Cambridge Instruments electron beam tester as "Kalman filtering" back around 1990 - by which I meant that early measurements had more weight on the number presented to the customer than later measurements - not because we thought that they were more reliable, but because what we presented was effectively the sum over all the measurements since we'd last changed the operating conditions.
It wasn't done perfectly - fast digital multipliers were a bit too expensive and rather too slow for what we wanted to do - but we decreased the weight of the latest updates by a factor of two from time to time in a tolerably rational way.
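The halve-the-weight scheme described above can be sketched like this (my own reconstruction of the idea, not the original instrument's code) - later samples get progressively smaller weight, so the presented number is dominated by the accumulated early measurements:

```python
def faded_average(samples, halve_every=16):
    """Weighted average where later samples count for progressively less.

    halve_every is an assumed illustration value for "from time to time".
    """
    weighted_sum = 0.0
    total_weight = 0.0
    w = 1.0
    for i, sample in enumerate(samples):
        weighted_sum += w * sample
        total_weight += w
        if (i + 1) % halve_every == 0:
            w *= 0.5          # later updates count for half as much
    return weighted_sum / total_weight
```

The factor-of-two halving keeps the arithmetic to shifts and adds, which fits the remark about fast multipliers being too expensive at the time.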
A bit like fuzzy logic was in the 1990's. We had a fuzzy logic washing machine in Japan - once in a while it would do something random, like not adding the soap powder, or adding it in the final rinse stage.
They were certainly known in the late 1970's in radio astronomy circles as one way to handle turbulent atmospheres - Okatan & Basart of Iowa State had a paper using Kalman filtering to eliminate phase errors, in Image Formation from Coherence Functions in Astronomy, Vol. 76, Reidel, 1979.
Modelling closure phases became the definitive way to do it but their paper was an alternative approach that some groups used for a while.