abecedarian

Oversampling, averaging and getting confused.


I see suggestions like 'sample the sensor 16 times, sum the readings, then divide by 16' to reduce errors and the influence of noise.

 

Is that any better than continuously running 'sample the sensor, add it to the previous result, and divide by two'?


To complicate the matter further, you can alter the weighting in a weighted average.

BattVolt = (BattVolt + BattADCValue*3)/4;

This responds more quickly to changes in the current battery voltage, which is useful if the sample rate is low (on the order of seconds or minutes).
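A minimal sketch of that kind of weighted running average in C, with the weighting pulled out into parameters (the function name is just illustrative, not from any library):

#include <stdint.h>

/* Exponentially weighted running average:
 *   filtered = (filtered*(n - weight) + sample*weight) / n
 * With weight = 3 and n = 4 this reproduces the BattVolt example above
 * (75% of the new reading, 25% of the old filtered value). */
static uint32_t ewma_update(uint32_t filtered, uint32_t sample,
                            uint32_t weight, uint32_t n)
{
    return (filtered * (n - weight) + sample * weight) / n;
}

/* e.g.  BattVolt = ewma_update(BattVolt, BattADCValue, 3, 4); */

Keeping everything in integers avoids floating point on a small MCU; just make sure the intermediate products can't overflow the type you use.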

 

 

Technically, these are just low-pass filters.

 

If you can isolate where your noise is actually coming from (e.g. sleep the CPU while the ADC is converting, isolate AVCC from the DVCC power supply), you may be able to reduce or remove the need for filtering.
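For the 'sleep the CPU while converting' idea, here's a minimal sketch along the lines of TI's ADC10 examples (assuming a G2-series part with the ADC10 module; the channel and sample-and-hold settings are just illustrative):

#include <msp430.h>

volatile unsigned int adc_result;

unsigned int read_adc_quiet(void)
{
    ADC10CTL0 &= ~ENC;                        // allow reconfiguration
    ADC10CTL1 = INCH_1;                       // input channel A1 (illustrative)
    ADC10CTL0 = ADC10SHT_2 + ADC10ON + ADC10IE;
    ADC10CTL0 |= ENC + ADC10SC;               // start the conversion
    __bis_SR_register(CPUOFF + GIE);          // LPM0: CPU sleeps until the ADC ISR wakes it
    return adc_result;
}

#pragma vector = ADC10_VECTOR
__interrupt void ADC10_ISR(void)
{
    adc_result = ADC10MEM;                    // grab the result
    __bic_SR_register_on_exit(CPUOFF);        // wake the main loop
}

With the CPU asleep during the conversion there is less digital switching noise coupled into the ADC in the first place.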


Think of 'averaging' as a low-pass filter in this case.

 

If you use the 16-samples-then-average approach (window size 16), then one noise spike in that batch of samples gets strongly reduced.

 

In the other case, where the latest sample is folded into the running average (window size 2, sort of), if the last sample is a noise spike, it will come through distinctly.

 

 

 

(EDIT 1 - in fact, it's a convolution - a time-domain rendition of a low-pass filter - which can lead to some very interesting filtering on a '430 with the hardware multiplier. A FIR filter, if you're a DSP guy.

Block averaging is a 'rectangular' window. Other shapes are handy, like sin(x)/x or triangle, and they correspond to the filter's frequency response expressed as a time-domain impulse response.)
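As a concrete illustration, a minimal sketch of the block average as a rectangular-window FIR over a circular buffer (window size and names are just illustrative):

#include <stdint.h>

#define WINDOW 16

static uint16_t buf[WINDOW];   /* last WINDOW samples */
static uint32_t sum;           /* running sum of the buffer */
static uint8_t  idx;

/* Rectangular-window FIR (moving average): drop the oldest sample,
 * add the newest, return the average of the last WINDOW samples. */
uint16_t moving_average(uint16_t sample)
{
    sum -= buf[idx];
    buf[idx] = sample;
    sum += sample;
    idx = (idx + 1) % WINDOW;
    return (uint16_t)(sum / WINDOW);
}

Replacing the equal weights with a triangle or sin(x)/x shaped set of coefficients gives the other window shapes mentioned above, at the cost of a multiply per tap - which is where the hardware multiplier earns its keep.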

 

 

(EDIT 2 - and to your question on 'oversampling': this refers to the sample rate (the ADC rate) compared to the highest signal frequency of interest (e.g. the rate of change of a room's temperature).

In many microcontroller apps, our sample rate is much higher than the signal's rate of change - e.g. ksamples/sec versus degrees C per second - and this is an example of oversampling. You can throw away samples (decimation) if you want to reduce computation, memory array size, etc. without significantly reducing fidelity. Note that FIR filters introduce a delay.

Much of this is equally - or better - handled in the frequency domain.)


I'd typically use the first method when measuring something that changes slowly compared to the sample rate. I'd use the second method when the thing being measured is likely to change rapidly and the most recent result is the most important. For example, if I were sampling barometric pressure, I'd use the first method for a weather station, but the second method for an altimeter.

 

 


I just whipped up this quick example in Excel.

 

[Attached image: post-274-0-21835600-1405413009_thumb.png - filtered vs. raw data]

 

In this case, ±10% noise was added to a constant value of 100.

 

We have 5x oversampling:

50% weighted, A = (A+B)/2;

10% weighted, A = (A*9 + B)/10;

 

I would recommend doing something like this to evaluate which method works best, preferably on real data.

 

 

For example, if your input signal changes value suddenly:

[Attached image: post-274-0-04076500-1405413537_thumb.png - filter response to a sudden change in the input]

 

The same filter might not be appropriate.
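To run the same kind of experiment without Excel, a small C test harness along these lines prints CSV for the raw signal and both weighted filters, with ±10% noise and a step partway through (the noise level and step value just mimic the plots above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double a50 = 100.0, a10 = 100.0;   /* 50%- and 10%-weighted filters */
    int i;

    printf("i,raw,50%%,10%%\n");
    for (i = 0; i < 200; i++) {
        double level = (i < 100) ? 100.0 : 150.0;         /* step at i = 100 */
        double noise = ((rand() % 201) - 100) / 1000.0;   /* +/-10% */
        double raw   = level * (1.0 + noise);

        a50 = (a50 + raw) / 2.0;          /* A = (A + B) / 2    */
        a10 = (a10 * 9.0 + raw) / 10.0;   /* A = (A*9 + B) / 10 */

        printf("%d,%.1f,%.1f,%.1f\n", i, raw, a50, a10);
    }
    return 0;
}

Paste the output into a spreadsheet and you can see how much longer the 10%-weighted filter takes to track the step.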


Sorry for the long-winded response with probably more detail than really needed. Your replies have been helpful to my thought process.

 

The things being sampled are:

 

- manifold air pressure; voltage output; varies with crankshaft position and engine RPM. I will probably implement two sampling 'windows' tied to crankshaft position, each 180 degrees of crankshaft duration and starting at a cylinder's top dead center with the intake valve open, holding the highest vacuum off boost or the highest pressure on boost, and changing the net sample rate inversely with RPM - at 10000 RPM one full engine cycle is around 22ms, so there's only enough time to sample the sensor 5-10 times, maybe.

 

- manifold air temperature; resistive sensor; will fluctuate but is relatively slow, probably sample 10-50 times per second

 

- coolant temperature; resistive sensor; relatively slow and stable, oscillating between around 170F-220F

 

- throttle position; linear resistive sensor / potentiometer with voltage output; typically stable but can change rapidly, i.e. closed to open throttle in under 0.1 second; probably sample at 100-1000 SPS and do the simple (previous reading + new reading) / 2 averaging

 

- oxygen sensor; voltage output; relatively slow, around 10 updates per second realistically, oscillating from ~0.2 to ~0.8v (narrow band) or 0.2-5v (wide band). Will probably sample and average with the (old + new) / 2 method at 100 SPS to catch excursions before they go too far.

 

- battery voltage; typically stable but can possibly range from ~7v to 15v under certain circumstances. Cranking a weak battery could drop it to 7v easily, and recovery once the engine is running will initially be quick; but considering it's a motorcycle with a headlight, tail light and dashboard drawing power, it could probably hit 12.5v at idle and 14v at 4-5K RPM.

 

I'm still thinking and planning, and not really seeing any particularly strong reason to do significant oversampling. +/- one degree on the temperatures won't significantly influence the fuel calculations. Seeing the TPS open quickly can influence things a bit: if it's not accommodated, the fuel mixture can go lean (not enough fuel) quickly. Battery voltage... well, a 0.05 volt flux won't significantly influence how much fuel the injectors squirt. And the O2 sensors are slow: a narrow-band sensor is really only useful for squeezing out the MPGs when cruising, and a wide-band is useful for tuning to specific fuel ratios but is slow and not well suited for anything but wide-open-throttle acceleration, or that tuning for MPGs when cruising.
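In case it helps to pin that plan down, here's one way the per-channel choices could be captured as a config table in C. The names are placeholders, and the rates for coolant temperature and battery voltage aren't spelled out above, so those numbers are only illustrative:

#include <stdint.h>

enum filter_type { FILTER_NONE, FILTER_OLD_NEW_AVG };

struct channel_cfg {
    const char      *name;
    uint16_t         sample_rate_hz;   /* 0 = windowed on crank position, no fixed rate */
    enum filter_type filter;
};

static const struct channel_cfg channels[] = {
    { "MAP",  0,    FILTER_NONE        },  /* sampled in crank-position windows    */
    { "MAT",  50,   FILTER_OLD_NEW_AVG },  /* 10-50 SPS, relatively slow           */
    { "CLT",  10,   FILTER_OLD_NEW_AVG },  /* slow and stable (rate illustrative)  */
    { "TPS",  1000, FILTER_OLD_NEW_AVG },  /* 100-1000 SPS, (old + new) / 2        */
    { "O2",   100,  FILTER_OLD_NEW_AVG },  /* ~10 real updates/sec from the sensor */
    { "BATT", 10,   FILTER_OLD_NEW_AVG },  /* slow-moving (rate illustrative)      */
};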


You can also use a center-weighted method such as taking the running median of a sample set.  This eliminates outliers, high and low.  The response to change is slowish due to the need to sort the list.  However, the sample size can be adjusted for the best accuracy/speed balance of what you are measuring.
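For what it's worth, a minimal sketch of that running-median idea in C, keeping a small history and sorting a copy of it for each new sample (window size and names are just illustrative):

#include <stdint.h>
#include <string.h>

#define MED_WINDOW 5            /* odd size so there is a single middle element */

static uint16_t med_buf[MED_WINDOW];
static uint8_t  med_idx;

/* Push a new sample into the history, then return the median of the
 * last MED_WINDOW samples.  Single outliers, high or low, never reach
 * the output. */
uint16_t median_filter(uint16_t sample)
{
    uint16_t sorted[MED_WINDOW];
    uint8_t i, j;

    med_buf[med_idx] = sample;
    med_idx = (med_idx + 1) % MED_WINDOW;

    memcpy(sorted, med_buf, sizeof(sorted));
    for (i = 1; i < MED_WINDOW; i++) {        /* insertion sort: fine for a tiny window */
        uint16_t v = sorted[i];
        for (j = i; j > 0 && sorted[j - 1] > v; j--)
            sorted[j] = sorted[j - 1];
        sorted[j] = v;
    }
    return sorted[MED_WINDOW / 2];
}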


In radio modulation, which is a course of study with a mountain of research behind it, there is this thing called SNR (signal-to-noise ratio). If I decrease the data rate, not only do I increase the amount of energy per bit (the bit is longer, hence it is a bigger pulse), but I can also reduce the amount of noise because the signal is narrower. So I get a higher SNR.

 

This is like increasing the duration of the sample.  For the most part, it works great.

 

In certain cases, however, you will find that there is *frequency dependent interference*. In other words, spurious readings you can't avoid, no matter how hard you try. In RF there is this thing called DSSS (direct sequence spread spectrum), which is really just a way to do controlled sampling and averaging. However, DSSS has been studied deeply. Amazingly, it seems to work better than decreasing the data rate, despite the free-space advantage of the latter. This is simply an artifact of the *real world.* Granted, DSSS is not simply averaging, it is correlation, but the idea is the same.

 

Anyway, two cents.


The biggest differences between the two styles mentioned in your first post, based on my experimentation with both of them:

 

1) The (oldRead+newRead)/2 method uses far less RAM than holding 16 samples in RAM.

2) The 16-samples-then-divide-by-16 method gives you the average of the last 16 samples.

3) The (oldRead+newRead)/2 method gives you a running average from the beginning of time till now; it will never completely catch up to now.

 

I'd use (old+new)/2 for the slow-moving sensors: battery voltage, temperatures, and the O2 sensor.

For MAP and TPS I think I would go with the first method, as you want to add more fuel now on a throttle that is snapped open, and you don't really want to be dumping a bunch of fuel into the cylinder / turbo if the throttle is snapped closed.

 

On all of them (but especially TPS and MAP) I would try to sample fairly often over a complete engine revolution, and average the data from that revolution.

What you don't want is to end up doing most of your samples right as a spark plug fires, or have your MAP sample time drift slowly from mostly sampling at BDC on the exhaust stroke (maximum intake manifold pressure) to just before BDC on the intake stroke (minimum manifold pressure).

Sampling often through one cycle of everything gives you a nice average that isn't skewed. With normal ADC stuff I like to sample for ~17ms, one cycle of the 60Hz field we typically live in.

On a bike, where the cycle time varies, things are more complicated, but given that you have a crank sensor you can get the cycle time and adjust the sampling times accordingly - if, of course, the MCU you're using is fast enough.

At 10k RPM you've still got 6ms/cycle; that should be enough, at least on TI's more powerful MCUs.
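A rough sketch of that 'spread the samples evenly across one revolution' idea in C; read_adc() and delay_us() are assumed to exist elsewhere in the project and are not from any particular library:

#include <stdint.h>

#define SAMPLES_PER_REV 16

extern uint16_t read_adc(uint8_t channel);   /* assumed project functions */
extern void     delay_us(uint32_t us);

/* Take SAMPLES_PER_REV evenly spaced samples over one crank revolution
 * and return their average.  rev_period_us comes from the crank sensor,
 * e.g. ~6000 us per revolution at 10,000 RPM. */
uint16_t sample_over_revolution(uint8_t channel, uint32_t rev_period_us)
{
    uint32_t sum  = 0;
    uint32_t step = rev_period_us / SAMPLES_PER_REV;
    uint8_t  i;

    for (i = 0; i < SAMPLES_PER_REV; i++) {
        sum += read_adc(channel);
        delay_us(step);
    }
    return (uint16_t)(sum / SAMPLES_PER_REV);
}

In practice you'd pace this from a timer rather than a blocking delay, but the point is that the sample spacing tracks engine speed, so the average isn't skewed toward one part of the cycle.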

 

Mostly my point is that it's a good idea to consider the source of the noise you're trying to get rid of. There's mechanical noise (position in the engine cycle for MAP), external electrical noise (coil firing), and internal electrical noise (ADC jitter, etc.).

Sampling like mad and dividing gets rid of the internal noise easily.

The others require some thought as to when you're sampling.

 

Your project sounds interesting - do you have a blog or a status thread for it?


Bit late to the party on this post, but I found the graphing tool in Code Composer Studio very useful for seeing the direct effects of oversampling. It's not the fastest refresh rate, as in debug mode you can only get a 100ms maximum refresh rate, but it's really good for seeing trends in data.

 

This video shows it in action: https://www.youtube.com/watch?v=AV4J9uGyiKo&list=UUUIN5H4aVjwwqXUS39_CEBQ

 

I also did some tests with a digital filter using the graph tool, comparing the unfiltered data with the filtered data. Both images below are of the same filter; if you refresh the data after the initial start condition, you get a finer resolution, effectively zooming in.

 

[Attached image: Zoomed-out-Digital-Filter.jpg]

 

[Attached image: Zoomed-In-Digital-Filter.jpg]

 

Cheers,

Ant


That's very cool. I was having to visualize the data in Processing or on a little LCD screen, neither of which is ideal.

I'll definitely have to check that section of CCS out.


@bobnova it's very easy to use:

 

  1. Go into the debug mode
  2. Add a watch expression for the variable you want to graph
  3. Right click on the variable in the expressions window, then scroll down to graph
  4. Then there are various parameters you can change to display the data    

 

If you have any issues let me know and I can walk you through it... maybe it's worth a basic tutorial?

 

Not sure if IAR has this feature, as I would like to switch over to IAR at some point.

