Optimising math

:blush: sorry for not paying attention. Would it be feasible to approximate the curve with a look-up table and some linear or polynomial interpolation?

Potentially. However, would that be smaller than the math library? We are talking 30000m max altitude with a resolution of 0.2-0.5m (I forget which term to use lol). Accuracy is important, so I wouldn't want it to be too linear.

 

I understand the basic concept of lookup tables but I have never really looked into them (haha) so I don't know the finer details. I'm guessing this is where interpolation comes in? :)

 

Sent from my GT-I9300 using Tapatalk

 

 


With lookup tables you basically trade static memory (i.e. Flash) for performance, code size and potentially memory used by the algorithm you replace. From what I understand from your post up-thread, you're currently running out of RAM.

 

Accuracy of this approach is determined by

- how much memory you can allocate to the lookup table (more entries = better accuracy)

- what range of values needs to be covered (a smaller range means smaller steps between entries = better accuracy)

- how close the curve is to linear (or whatever interpolation you use) within a step, i.e. the maximum deviation

 

You will also have to decide what part of your formula to convert into a lookup table. Ideally it would have just one variable; two variables will blow up the table size or require two-dimensional interpolation.
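To make that concrete, here is a minimal sketch in C of a one-variable table with linear interpolation. The size, the Q16.16 input scaling and the (empty) table contents are all assumptions for illustration, not code from this thread:

#include <stdint.h>

#define TABLE_SIZE 256   /* placeholder size; pick it to fit your Flash budget */

/* Hypothetical table: output values (e.g. altitude in decimetres) at evenly
   spaced input steps. Contents would be precomputed on a PC; zeros here only
   so the sketch compiles. Being const, it lives in Flash/FRAM, not RAM. */
static const int32_t alt_table[TABLE_SIZE] = {0};

/* x_q16: input position as a Q16.16 fraction of full scale,
   i.e. 0..65535 maps onto the whole table. */
int32_t table_lookup(uint16_t x_q16)
{
    uint32_t pos  = (uint32_t)x_q16 * (TABLE_SIZE - 1);  /* Q16.16 index    */
    uint16_t idx  = pos >> 16;                           /* integer part    */
    uint16_t frac = pos & 0xFFFF;                        /* Q0.16 remainder */

    if (idx >= TABLE_SIZE - 1)          /* defensive clamp at the table end */
        return alt_table[TABLE_SIZE - 1];

    /* Linear interpolation between the two neighbouring entries. The 64-bit
       intermediate is the lazy option; 32 bits work if (b - a) stays small. */
    int32_t a = alt_table[idx];
    int32_t b = alt_table[idx + 1];
    return a + (int32_t)(((int64_t)(b - a) * frac) >> 16);
}

The two neighbouring entries bracket the input, and the fractional part of the scaled index weights between them. This is where the list above bites: more entries shrink the step, and the residual error is whatever the curve does between two entries that a straight line can't follow.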

 

Personally I would start with a spreadsheet, plugging expected ranges of parameters into the formula that you want to replace. Then pick evenly spaced points and calculate how big the deviation will be.
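If the spreadsheet gets tedious, the same deviation check is easy to script. A rough host-side C sketch; the constants are standard-atmosphere values for the 0-11km layer, assumed here rather than taken from this thread:

#include <stdio.h>
#include <math.h>

#define STEPS 2048             /* candidate table size */
#define T0    288.15           /* sea level temperature, K */
#define LAPSE 0.0065           /* lapse rate, K/m */
#define EXPON 0.190263         /* the 'power of' exponent, R*L/(g*M) */

static double alt(double ratio)   /* ratio = P/P0 -> altitude in metres */
{
    return (T0 / LAPSE) * (1.0 - pow(ratio, EXPON));
}

int main(void)
{
    double worst = 0.0, at = 0.0;
    double step = (1.0 - 0.223) / STEPS;   /* P/P0 is ~0.223 at 11 km */

    for (int i = 0; i < STEPS; i++) {
        double r0 = 1.0 - i * step;        /* endpoints of this table step */
        double r1 = r0 - step;
        double mid = 0.5 * (r0 + r1);
        /* linear interpolation at the midpoint vs. the exact value; for a
           smooth curve the midpoint is close to the worst case in the step */
        double err = fabs(0.5 * (alt(r0) + alt(r1)) - alt(mid));
        if (err > worst) { worst = err; at = mid; }
    }
    printf("worst midpoint error: %.3f m at P/P0 = %.4f\n", worst, at);
    return 0;
}

Vary STEPS until the worst-case error drops below your accuracy target; that directly tells you how big the table has to be.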

 

Last but not least: Do you really need .2-.5m resolution and/or accuracy? And is that goal achievable given other variability in the system?


0.2m may be unnecessary, but 0.5m would be nice.

It's actually a bit inaccurate stating my accuracy/resolution as a distance, because the rate of change of pressure vs altitude changes as we go up, so it should really be expressed in mbar, which is after all what we are measuring...

 

The sensor has a resolution of 0.012mbar and an accuracy of +/-1mbar.
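For a sense of scale (back-of-the-envelope, standard atmosphere, so treat these as approximations): near sea level pressure falls roughly 0.12mbar per metre, so 0.5m is about 0.06mbar, comfortably above the 0.012mbar resolution. At 30km ambient pressure is only around 12mbar and falls roughly 0.002mbar per metre, so 0.5m is about 0.001mbar, an order of magnitude below the sensor's resolution, and the +/-1mbar accuracy alone corresponds to several hundred metres up there.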


Yup, been suggested lol

My problem now is not how long it takes, but how much RAM the floating point math requires.

During run time I can get away with comparing pressure with a pre-set pressure target. However, when the altimeter has detected landing, I want it to 'beep out' the max altitude, so I will need to calculate altitude at least once. This means the math function will still be there and still hogging all my space :(
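For what it's worth, the run-time half of that can stay entirely in integers. A sketch of the split; the names and the threshold value are made up, and raw_to_altitude stands in for the existing float/pow() conversion:

#include <stdint.h>

/* Raw pressure reading corresponding to the pre-set altitude target,
   precomputed on the PC. The value is a made-up placeholder. */
#define TARGET_RAW_PRESSURE 87234L

/* Lowest pressure seen so far = highest altitude reached. */
static int32_t min_raw_pressure = INT32_MAX;

void on_sample(int32_t raw_pressure)
{
    if (raw_pressure < min_raw_pressure)
        min_raw_pressure = raw_pressure;    /* track apogee, integers only */

    if (raw_pressure < TARGET_RAW_PRESSURE) {
        /* passed the target altitude: pure integer compare, no float math */
    }
}

/* Stand-in for the existing float/pow() based conversion. It is still
   linked in, but only ever called once, after landing. */
extern float raw_to_altitude(int32_t raw_pressure);

float max_altitude_m(void)
{
    return raw_to_altitude(min_raw_pressure);
}

That doesn't shrink the math library, of course; it only confirms the expensive conversion runs exactly once, which is what makes replacing it with a table attractive.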


The BMP085 is accurate, per its data sheet, to just below 2 meters. Part of that is that even at low-resolution settings, you have over 4ms for the sensor to settle and transmit data over I2C.

 

A better solution might be the MPX4115, which you can sample directly at 1ms intervals, but you would have some voltage level issues to deal with since it is a 5V in / 0-5V out sensor.


OK, I've been thinking about the lookup table. If I make a table and look up the result of P0/P to find the result for the 'power of' bit, that should work. Let's say I make 2K entries in the table and they are each a signed long; that's 8K of FRAM, right? Is 2K samples ridiculous?
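Quick arithmetic check on that: 2048 entries x 4 bytes per signed long = 8192 bytes, so yes, 8K. Declared const (e.g. static const int32_t pow_table[2048];) it should sit in FRAM alongside the code rather than eating RAM, although where const data ends up ultimately depends on the linker configuration.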

 

Sent from my GT-I9300 using Tapatalk

 

 


2K sounds like plenty, if not overkill.

 

Looking at this table, P/P0 will go from 1 to roughly 0.01.

 

In the lowest 1000m with 2048 samples you get roughly 200 steps, i.e. one every 5 m. At 5km it's one every 7.5m, at 10km one every 10m, at 20km one every 60m, at 30km one every 250m. So accuracy probably drops at very high altitudes.

 

One countermeasure could be to have a second P/P0 indexed lookup table for anything above 10km or so (P*4 < P0). But I'd first calculate the error of interpolated vs. calculated values before optimizing prematurely.
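To put numbers on that: a single 2048-entry table spanning P/P0 = 0.01..1 has a ratio step of about 0.00048, while a second 2048-entry table covering only 0.01..0.25 shrinks it to about 0.00012. That is roughly a 4x finer step in the upper range, which would bring the ~250m step at 30km down to around 60m.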

 


Hi Chicken,

 

Above certain altitudes, the lapse rate changes (which is part of the exponent of the POW), so I was considering making 3 lookup tables, 1 for each lapse rate.

This would achieve somewhat the same as what you are suggesting. Maybe I can put fewer samples in the low-altitude table, and more in the higher ones.

 

Altitude (m)       Lapse rate (K/m)    Exponent (after some calculations)
0 -> 11000         0.0065              0.190267
11000 -> 20000     0                   0
20000 -> 32000     0.001               -0.02927
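If it helps, those three tables can be generated on a PC from exactly those three layers. A host-side C sketch (not MSP430 code); the layer constants and boundary pressures are standard-atmosphere values, and the table size and names are placeholders:

#include <stdio.h>
#include <math.h>

#define N   512                /* entries per table, a placeholder */
#define P0  1013.25            /* sea level pressure, mbar */
#define P1  226.32             /* pressure at 11 km */
#define P2  54.75              /* pressure at 20 km */

static double alt(double p)    /* exact altitude in metres from pressure */
{
    if (p >= P1)               /* 0-11 km, lapse 0.0065 K/m */
        return (288.15 / 0.0065) * (1.0 - pow(p / P0, 0.190263));
    if (p >= P2)               /* 11-20 km, isothermal at 216.65 K */
        return 11000.0 + 6341.6 * log(P1 / p);
    /* 20-32 km, temperature rising 1 K/km, hence the negative exponent */
    return 20000.0 + (216.65 / 0.001) * (pow(p / P2, -0.02927) - 1.0);
}

static void dump(const char *name, double p_hi, double p_lo)
{
    printf("static const int32_t %s[%d] = {", name, N + 1);
    for (int i = 0; i <= N; i++) {
        double p = p_hi - (p_hi - p_lo) * i / N;      /* evenly spaced in P */
        printf("%s%ld", i ? ", " : "", lround(alt(p) * 10.0)); /* decimetres */
    }
    printf("};\n");
}

int main(void)
{
    dump("alt_0_11km",  P0, P1);
    dump("alt_11_20km", P1, P2);
    dump("alt_20_32km", P2, 8.68);    /* ~8.68 mbar at 32 km */
    return 0;
}

The output gets pasted into the firmware as const arrays; at run time you pick the table by comparing the pressure against the two boundary values, then interpolate within it.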

 

 

What's the story with polynomial interpolation? Linear sounds easy, but which would be more accurate? I'm not too concerned about execution time as I'm sure it will still be better than a POW (that is, if I could get it into the RAM!)
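On linear vs. polynomial: the error of linear interpolation within a step grows with the curvature of the function, so halving the step size quarters the error. Quadratic (three-point) interpolation cancels that curvature term, so it usually gets noticeably better accuracy from the same table at the cost of a few extra multiplies. A sketch of the three-point version; floating point is used for clarity only, a fixed-point version has the same shape, and the names are hypothetical:

/* Quadratic (Lagrange) interpolation through t[i-1], t[i] and t[i+1],
   evaluated at fractional position x in [0,1] between t[i] and t[i+1].
   Requires i >= 1. */
float interp3(const float *t, int i, float x)
{
    float b = 0.5f * (t[i + 1] - t[i - 1]);         /* slope term     */
    float a = 0.5f * (t[i + 1] + t[i - 1]) - t[i];  /* curvature term */
    return t[i] + (b + a * x) * x;                  /* t[i] + b*x + a*x*x */
}

Whether it's worth it here is exactly the spreadsheet question from earlier in the thread: compute both errors against the real formula and see which one meets the target with the smaller table.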

