I want to write an algorithm to determine sunset/sunrise from an ambient light sensor. I have a process scheduled to convert the analog value to digital at a specific rate (which needs defining). The values will then be tossed into a FIFO of some length (which also needs defining). The algorithm needs to filter out large transient changes that might come from some other source (the headlights of a car, an object temporarily passing in front of the sensor) and catch only the gradual slope (which I assume is non-linear) of a natural sunset/sunrise.
The first thing I need to do is choose a sampling rate and buffer size. I'd like to keep the buffer size around 256 bytes, and each A/D conversion will consume 2 bytes (10-bit result). Sunrise and sunset are slow processes, so I think I should analyze over a 30-minute period. Now, if I just sample once a minute for 30 minutes, I am likely to get a few anomalies which will throw the process off. So I was thinking it might be wise to perform the conversion at, say, 1 Hz and then average/filter 60 of those samples into a single value that gets tossed into the 30-minute buffer.

This 30-minute buffer is what will be analyzed for the sunset/sunrise condition. I was thinking the best approach is to continually calculate the slope of the data and then evaluate the slope along with the current value. For example (using arbitrary numbers): if the slope is approximately -35 and the current light reading is approximately 147, then you could say it's dark enough and the slope resembles that of a sunset... so let's call it a sunset.
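To make that concrete, here's a rough sketch of the bookkeeping I'm picturing. Untested, and adc_read() is just a placeholder for my real 10-bit conversion routine; the slope comes from a simple least-squares fit over the buffer:

#include <stdint.h>

#define MINUTE_SAMPLES 60   /* 1 Hz conversions averaged per buffer entry */
#define BUF_LEN        30   /* 30 one-minute values = a 30-minute window  */

static uint16_t buf[BUF_LEN];   /* FIFO of one-minute averages */
static uint8_t  head  = 0;      /* next write position         */
static uint8_t  count = 0;      /* entries filled so far       */

extern uint16_t adc_read(void); /* placeholder for the real ADC driver */

/* Called once per second: accumulate, and once a minute push the
   averaged value into the FIFO (overwriting the oldest when full). */
void on_second_tick(void)
{
    static uint32_t acc = 0;
    static uint8_t  n   = 0;

    acc += adc_read();
    if (++n == MINUTE_SAMPLES) {
        buf[head] = (uint16_t)(acc / MINUTE_SAMPLES);
        head = (head + 1) % BUF_LEN;
        if (count < BUF_LEN)
            count++;
        acc = 0;
        n   = 0;
    }
}

/* Least-squares slope of the buffered values (oldest to newest) in
   ADC counts per minute, truncated to an integer. For x = 0..N-1 the
   closed form is (N*sum(xy) - sum(x)*sum(y)) / (N*sum(x^2) - sum(x)^2). */
int32_t window_slope(void)
{
    int32_t n = count, sx = 0, sy = 0, sxy = 0, sxx = 0;
    uint8_t i, oldest;

    if (n < 2)
        return 0;
    oldest = (head + BUF_LEN - count) % BUF_LEN;
    for (i = 0; i < n; i++) {
        int32_t y = buf[(oldest + i) % BUF_LEN];
        sx  += i;
        sy  += y;
        sxy += (int32_t)i * y;
        sxx += (int32_t)i * i;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

/* Newest buffered value, for the "dark enough" threshold test. */
uint16_t window_latest(void)
{
    return buf[(head + BUF_LEN - 1) % BUF_LEN];
}

The detection test would then be roughly: if (window_slope() <= -SLOPE_THRESH && window_latest() < DARK_THRESH), call it a sunset, with both thresholds tuned from logged data (the -35 and 147 above are just placeholders).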
So, those are my initial thoughts. I am not a DSP guru (and I am sure you can tell), which is why I am here. I'd like to know:
1) Based on what I described, is this a reasonable approach? If not, how do you recommend I go about it?
2) Is a FIR filter a good approach to applying a low-pass to the one-minute values? (There's a rough sketch of what I mean below.)
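For question 2, here's the kind of thing I was picturing: a short boxcar FIR (all taps equal) run over the one-minute values before they hit the slope calculation. Untested, and TAPS is arbitrary; shaped coefficients could replace the plain sum later:

#include <stdint.h>

#define TAPS 4  /* arbitrary; longer = smoother but more lag */

static uint16_t hist[TAPS];
static uint8_t  filled = 0;     /* entries seen so far, capped at TAPS */
static uint8_t  pos    = 0;     /* next write position                 */

/* Feed each new one-minute average through this before pushing the
   result into the 30-minute buffer. Equal taps make it a plain moving
   average; a general FIR would multiply each hist[] entry by its own
   coefficient instead of just summing. */
uint16_t fir_smooth(uint16_t x)
{
    uint32_t acc = 0;
    uint8_t  i;

    hist[pos] = x;
    pos = (pos + 1) % TAPS;
    if (filled < TAPS)
        filled++;

    for (i = 0; i < filled; i++)
        acc += hist[i];
    return (uint16_t)(acc / filled);
}

I suppose the same structure applied to the 1 Hz samples would also knock down short transients (headlights and the like) before they ever reach the minute average, but that's part of what I'm asking.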
Thanks!