enl

Members
  • Content Count

    341
  • Joined

  • Last visited

  • Days Won

    22

Reputation Activity

  1. Like
    enl got a reaction from tripwire in Reversing the LED? i.e. sinking it to MCU?   
See SLAS735 (MSP430G2x53 datasheet, or other device datasheets) for the graphs. SLAS735G, fig. 6-9, "Typical Characteristics, Outputs", shows current vs. output potential vs. supply potential.

The MSP430 devices are pretty much symmetric with respect to sourcing and sinking. Driving an LED without a limiting resistor (or a current-limiting driver) will likely either smoke the LED or eventually damage the I/O pin on the microcontroller. The damage may not be immediate, but the drive transistor dissipates power due to the potential drop across it, and will fail sooner or later from the cumulative heat. A good rule is no more load than keeps you within the safe spec for logic signals, which is about 12mA at 3V for the MSP430G series. Half that is better. Many LEDs will fail very quickly even with fairly small overcurrent.

Remember, the I-V curve for an LED, like any diode, is substantially exponential: a small increase in potential means a large increase in current.

Driving individual LEDs isn't a big deal these days. At 4mA (the drive current for the red LED on the LaunchPad), modern high efficiency devices are as bright as or brighter than the 20mA devices of 20 years ago. Not lighting bright, but display and indicator bright, even outdoors during the day. Blinding in a poorly lit room.

Any more current than this, and it is best to use an external driver (an NPN transistor to ground with a limit resistor; a current-limiting LED driver IC; etc.). When driving multiple LEDs from an output (multiplexed display, etc.) you pretty much need to use external drivers, since even when PWM'd to a low *average* current, the instantaneous current will be an issue.

Also note that overloading the GPIO pins, for example with an LED, may affect the UART by dropping the internal power bus or raising the internal ground bus: the UART output won't swing the full supply range. If all you are doing is lighting LEDs, it's not a big deal until the core can't run anymore (at a 3V supply, that limit can be as low as 35 to 40mA total, depending on the temperature).
  2. Like
    enl got a reaction from abc in Reversing the LED? i.e. sinking it to MCU?   
  3. Like
    enl got a reaction from spirilis in Reversing the LED? i.e. sinking it to MCU?   
  4. Like
    enl got a reaction from pabigot in Reversing the LED? i.e. sinking it to MCU?   
  5. Like
    enl got a reaction from abc in Turning MSP430 into an amperemeter - possible?   
Measuring currents in the range used by these processors is not easy. Several things make it tough: the low end is very, very low when sleeping, and can be the dominant usage in systems where the processor sleeps most of the time; the active current can be orders of magnitude larger than the sleep current; and many applications have very short active bursts.

If you are trying to tune code by power usage, you will have an..... interesting time. If you are running on a LaunchPad rather than a bare processor, the power used by the processor will likely be buried under all of the other draws on the LP board and, for practical purposes, unmeasurable without much more expensive gear than you likely have. Just the power LED on the LP uses 2000 or more times the standby power (1mA vs 0.5uA). The active power for the core is 230uA, or 500 times the standby, not counting peripheral usage and I/O (at the datasheet spec of 2.2V and 1MHz... the difference in core usage is a larger factor at higher speed and supply voltage). The INA219 doesn't really have the resolution for this.

If it were me, and it has been on occasion, I would just use a small shunt resistor to provide a signal that can be used to detect active time, and try to minimize that. Next would be to measure charge consumption, using a cap to power the device and measuring the voltage drop, which directly reflects charge loss, which is integrated current. By monitoring the cap at no load, and then in your application, you can use the difference to get a pretty accurate figure for the charge used by your circuit.
  6. Like
    enl got a reaction from greeeg in floating point - Math - and Trigonometry with msp430g2553   
If the inputs are 10 bit (or 12 bit) ints, and they represent sine and cosine with fixed scaling, so the same angle always produces the same input values, I would do the following:

Identify the octant the angle is in (0-45 deg, 45-90, etc.)

Use the smaller value (which will be the higher resolution value) as an index, and look up a reference angle between 0 and 45 deg.

Then correct the result for the appropriate quadrant.

This requires a lookup table with about 720 values (for 10 bit input).

If the table will be too big, cut the number of entries by a factor of 4 or 8 (a simple shift) and interpolate between values. Linear is likely to be OK (meets the inherent error) for angles less than 45 deg, with a table having 1/4 as many entries as the transformed input range.

This avoids division, trig, and, unless needed, floating and fixed point.

If the inputs may vary in scale, divide the smaller by the larger, multiply by the table size, and do the same thing, interpolating if needed.

You may find that even a fairly large table will take less space than the trig function in the build. Additionally, you can directly get the angle in degrees, or other units, if desired, rather than needing to convert from radians to degrees.

If you are going to use inverse tan, I would still identify the octant, divide the smaller by the larger, and correct from there. Angles closer to 0 will have the highest accuracy, and this avoids the risk of div by 0.
  7. Like
    enl got a reaction from tripwire in MSP 430 Timer Logic?   
For longer periods, where the divider still won't get you enough time, you can also use a counter in the interrupt routine. Initialize it to the number of interrupts you need in a cycle, decrement each interrupt, and, if not zero, return. If it is zero, do your task and reset the counter.

// counting interrupts example
#pragma vector=TIMER0_A0_VECTOR
__interrupt void TIMER0_A0_ISR(void)
{
    const unsigned cycle = 24;      // presuming 24 interrupts is a cycle
    static unsigned icount = cycle; // never reinitialized after program initialization
    if (--icount)
        return;
    P1OUT ^= 0x01;                  // the periodic task: toggle P1.0
    icount = cycle;
    return;
}

Your total period needs to be factored into two values: the timer cycle and the number of interrupts per operation cycle. With care, the overhead can be minimized.
  8. Like
    enl got a reaction from abc in UART and alternatives   
Not really. Without a method of synching the clocks (a common clock for asynch communication, a synchronous or self-clocking comm method, or a clock resynch method), you will have a tough time.
     
    If you MUST use the standard UART and may have excessive drift, you may want to use one of the supported synchronous methods. You will need an extra line for the clock.
     
    If you are willing to write some code, there are a wide variety of self-synchronizing methods. You could use the 1-wire protocol (Dallas Semi--- used on such things as the DS18B20 temp sensor), for which there is some support out in the wild. You could use Manchester coding (basically an FM scheme used in wired LANs), or MFM (designed back in the day for magnetic storage, but still a nice self clocking scheme with a good tolerance for clock rate drift), or a wide variety of other schemes.
     
     
The standard RS232-like serial connection, while simple and robust, is not very tolerant of clock mismatch. Adding extra stop bits won't help. The only resynch is at the start bit, and if the drift is more than half a bit time over the next 9 bits, there will be an error <--- oversimplified, but substantially correct.
     
     
     
    -----replacing section  as MSP430 doesn't support ----
     
     
There is some support on some processors (I have never used it with the MSP430, but I think the MSP430 is a family this applies to; not sure, and I don't have the time or will to go through the docs right now) for resynch at each detected edge. This makes for much better tolerance for clock drift IF there are frequent enough transitions in each datum. You can ensure this by using only a portion of the available code words. This is called run-length-limited (RLL) coding, and is still a fairly common scheme. If there is no hardware support for this, then you code it up yourself. Straightforward to do using edge-triggered interrupts.
     
    ---- with ---
     
     
See SLAU144J, section 15.3.4, page 416, for automatic baud-rate synchronization.
     
If this is done periodically, before the rates drift too far, you can maintain good synch.
  9. Like
    enl got a reaction from spirilis in Energy use for interrupts   
    Debounce time depends on a bunch of things including button design, capacitance and pullup resistance, how it is pressed, etc.
     
    You can debounce either in hardware or software.
     
For slow inputs, software is easier. Just disable the interrupt for the input for some period (a couple hundred ms is usually safe) before reenabling it.
     
For better response, at the expense of a little processing power, once the (active) input is detected, keep the interrupt disabled until a stable inactive is detected. Periodic monitoring of the input status is needed, and stable is usually determined as several readings in a row at the inactive state. With a latching input, reset the input, but not the interrupt. Periodically check it, and if active, reset it again. After some number of reads without it being active, call it inactive and reenable the interrupt. The key thing is: DO NOT reenable the interrupt until the input is stable. Similar concepts work if you want to detect both press and release, but changing the interrupt edge is needed before reenabling.
     
Hardware debounce takes extra components, but can give better response if properly tuned. The simplest is a capacitor across the button. It needs to be small to control the risk of fusing the contacts, with, if possible, an additional resistor to limit the current on press. This works best when the input has hysteresis. There are also debounce ICs available that do this (with internal software) for very low power cost. Many of them will do several channels. A little more power, and an additional IC, but likely less power than the gold standard solution.
     
The gold standard is to use SPDT buttons and a hardware RS-type flip-flop. More power and more hardware, but totally bounce free.
     
If you are willing to use two inputs per button to avoid the external flip-flop, and spend the same power on pullups, an SPDT button can be used with one leg to each pin and the common to ground. One pin reads active, the other inactive. The platinum standard is the same concept, but avoiding the power waste in a pullup resistor: use a DPDT, where one pin is pulled up, the other pulled down, and one pole goes to each pin. This is overkill for darn near any application where it is not an absolute need. I could imagine maybe certain aerospace applications.
     
There are a lot of references available, from Horowitz and Hill's _The_Art_of_Electronics_, which goes into a fair bit of detail on options, to Jack Ganssle's http://www.ganssle.com/debouncing.htm , to Maxim's http://www.maximintegrated.com/glossary/definitions.mvp/term/Debounce/gpk/82 , and so on.
     
The Ganssle ref is one of the more straightforward, without being tied to a particular manufacturer.
     
For reference, my solution is generally software (wait for a stable input) or an external RC, depending on circumstances. I have, when time was critical, used periodic sampling and waiting for a stable input, such as when doing sub-microsecond timing with optical sensors, or an SPDT button with an external flip-flop.
  10. Like
    enl got a reaction from abc in UART and alternatives   
    If all components are at the same temp, drift shouldn't be too big an issue. If the components can be at radically different temps, there might be a concern.
     
    Other methods you can use:
     
    Generate a common clock to sync the devices for asynch UART communication. Uses an I/O (the Xin pin is good)
     
    Use a synchronous comm scheme clocked from one master device (or external clock)
     
Determine the max clock drift analytically (or by measuring samples) and, if the worst case relative drift is less than about 5% (depending on the asynch data format.... RS232-format serial can tolerate about 5% maximum; less than 3% is better), just let it go.
     
     
    -----
    I would probably just go with a synchronous scheme. Allows highest speeds with low error.
  11. Like
    enl got a reaction from abecedarian in sampling frequency   
    As low as you want.
     
    General idea is: set up timer to provide periodic interrupt. If not low enough freq, count the interrupts and do what is needed every n interrupts. Put processor to sleep.
     
    On interrupt, or after n interrupts, start ADC conversion. Set for interrupt on completion. Let processor go back to sleep.
     
    When conversion is done, interrupt wakes it back up so you can read value and do what you will with it.
     
    I have a device that has been going for over a year on a set of AA batteries using a 32KHz crystal for moderately precise timing, waking up every 8sec (max that the process will allow... could go longer) to do its thing. Most of the time, it is just counting and going back to sleep, for a period of roughly 5 min, but the period does vary, so the count is varied as needed.
     
    If major processing is needed from the ADC value, leave that in your main loop, and force full wake up when ADC is complete, so that the processing can be done. Then go back to sleep.
  12. Like
    enl got a reaction from abecedarian in Probably dumb question regarding timers   
Still quite doable. You probably want a linear response on the pot, so I would go with a table for the divisors. Either of the above methods will work, but matching the pulse width for the short pulse will be easier with the interrupts. As the cam turns at half crank speed, if it has one tooth for the sensor, that is a divisor of 64, not 32. Does it have 2 teeth?
     
This range is great enough that I would handle the low rate pulse with an interrupt, even if the high rate is handled with hardware. Interrupt on each rising edge of the high speed output, and keep a counter in the ISR. At 0x1f in the low bits (an easy test with AND), output high; otherwise output low.
     
Your base freq at 11000 RPM with 32 teeth on the crank is 5867 Hz, for a divisor at 1MHz of 85 if using up/down counting. The low end of 200 RPM is 106.7 Hz, for a divisor of 4716 at 1MHz. Using a 4MHz clock for the timer quadruples these divisor values and gives higher resolution for a smoother transition. A 16MHz clock with a clock divisor of 2 gives the best precision, at 8 times these divisors.
     
Given the new info, and @grahamf72 pointing out the easier way, I would likely go hardware for the high rate, with interrupt and counter for the low. Update the count limit each time the low rate output is set low by reading the ADC input, triggering the next ADC cycle, and storing the new limit value in the CCR. The ADC reads are one cycle behind the response. Still better than the response of any throttle.
  13. Like
    enl got a reaction from abecedarian in Probably dumb question regarding timers   
    yes
     
    How precise do you want the rate and what characteristic do you want for the adjustment? Linear with resistance? More precision at one end or the other? How much resolution?
     
     
    Without the answers to these questions, I'll give a basic summary of one way to approach it.
     
Connect the pot between Vcc and Gnd, and the wiper to an ADC input. This gives 1024 steps for frequency selection. For (fairly) precise timing, you will need a crystal, but the standard clock system is pretty good, and will give greater resolution. At 5.5Hz, the higher speed output will be 176Hz. At the low end, you are looking at 32Hz. Consider only the higher frequency, and every 32 transitions of that (every 16 full cycles) toggle the lower frequency output. For a symmetric square wave, your maximum basic timing rate is 362Hz.
     
If you aren't concerned about exact frequency, or having perfectly uniform steps, it is easier. Assuming a main clock of 1MHz (the lowest G-series DCO clock that is factory calibrated), you can build a table of 1024 divider values for TimerA, with the highest-frequency value being 2762 and the lowest being 31250, to get double your higher freq. Write an interrupt handler that toggles your high freq output on each interrupt and your low freq output every 32nd interrupt (using a counter in the interrupt handler to keep track). Periodically read the ADC for the pot position, and use this to set your TimerA period.
     
There are a lot of ways to do these things, so I am not being really specific. You could just scale the ADC value by 28 and add 2760 to it for your period, if less linear behaviour is OK. A table will allow you to get very close to linear. The timer can be set up several ways. One easy way to handle it is to let it free run, with a period of about 1/160th of a second: each interrupt, add the count increment to the trigger value for the interrupt. Then have it interrupt on each period and use that to trigger the ADC read for pot position.
  14. Like
    enl reacted to grahamf72 in Probably dumb question regarding timers   
    Outputting frequencies on a pin can be done by the timer modules without needing to use interrupts & manually toggling pin outputs. The MSP430G2553 for example has 2 timers so can produce 2 different frequencies.
     
The following code is based on the MSP430G2553 and will produce 1 frequency on P1.1, and 1/32 of that frequency on P2.0. As you will note, there are no interrupts, and no code is used to toggle pin outputs - they are purely controlled by the timer. In fact, the CPU isn't doing anything in this example - in my loop I put it into Low-Power-Mode 3, to show that the timer is doing all the work. The code is designed to run at a slow speed so the effect can be observed on the LaunchPad's LEDs. Note that the standard LaunchPad has the LEDs connected to P1.0 & P1.6, but you can remove the jumpers and use a F-F cable to connect P1.1 & P2.0 to the LED side of the pins where the jumper attaches.
     
    I know this example isn't exactly what you were asking, but you should find it fairly trivial to make the necessary changes.
     
    All you need to do to change the frequencies at run-time is alter the TA0CCR0 and TA1CCR0 registers. 
void setup()
{
  //This is designed to run on the MSP430G2553, which has 2 Timer A modules.
  //The output of TimerA0 Capture/Compare Register 0 is attached to P1.1.
  //The output of TimerA1 Capture/Compare Register 0 is attached to P2.0.
  //This will output a square wave on P1.1 & a square wave of 1/32 the
  //frequency on P2.0.
  //P1.1 & P2.0 can be jumpered to the Launchpad's LEDs for a visual
  //indication of the operation.
  P1DIR |= BIT1;   //P1.1 set to output
  P1SEL |= BIT1;   //P1.1 set SEL = select timer output
  P1SEL2 &= ~BIT1; //P1.1 clear SEL2 = select timer output
  P2DIR |= BIT0;   //P2.0 set to output
  P2SEL |= BIT0;
  P2SEL2 &= ~BIT0;

  //The desired frequency of the fast clock in Hz
  #define FREQUENCY 2

  //The value that we count up to. Shown like this to show how it is derived:
  //clock frequency (nominally 12kHz) divided by the amount we divide the timer
  //by with our TAxCTL command, divided by the desired output frequency, divided by 2.
  #define COUNT (12000 / 8 / FREQUENCY / 2)
  //Note that the exact frequency depends on the accuracy of the VLO clock, which is
  //not very accurate. For more precise timing we could use the crystal as the
  //timebase, or the system clock. Note that the maximum the timer can count to is
  //65536, so if using the 16MHz system clock, speeds low enough to observe will be
  //impossible to achieve.

  TA1CCR0 = 32 * (TA0CCR0 = COUNT); //set TimerA0 to our count, and TimerA1 to count*32.
                                    //This will make TimerA1 run 32 times slower than TimerA0.
  TA1CCTL0 = TA0CCTL0 = OUTMOD_4;   //Set to toggle the output every time the count hits CCR0.
  TA1CTL = TA0CTL = TASSEL_1        // Source timer's clock from ACLK
                  | ID_3            // Divide by 8
                  | MC_1;           // Count up to TA0CCR0 mode
}

void loop()
{
  LPM3; //stop in LPM3 mode.
}
  15. Like
    enl got a reaction from energia in Probably dumb question regarding timers   
  16. Like
    enl got a reaction from abecedarian in Optimising math   
If float is used, there is moderate memory overhead. Most (all?) operations are implemented as function calls, so the functions used must be included. The difference between one add and 20 adds is minimal, though. Once the function is in the build, calling it isn't a lot of space.

The big thing is time. Software implementations of FP can be slow. A device with hardware (integer) multiply can do many FP operations a lot faster than those without; hardware (integer) divide makes things better yet. A few operations are not going to be a big issue, timewise. The functions that use a lot of operations are the killer, like exponentiation and logs. These can be worth optimizing in many cases. If the previous thread hadn't given pretty loose timing for the altitude computation, I would call this a prime candidate for a specialty function for the exponentiation. You might still need it, but my guess is not.
     
Greeeg's methods apply to integer math (or fixed point), and can be used to avoid FP in cases where the final result needed is integer (or, again, fixed point), but intermediate computations may need FP or fractions.
     
     
A question I still have is: must the altitude be computed in-flight? Or can the sensor data be stored and converted on the ground? Is the altitude itself needed? Or only some property, such as detecting when max altitude is reached? The answers can make a big difference in what math need be done on the MSP430, and in how to do it.
  17. Like
    enl got a reaction from abecedarian in Optimising math   
    @Fred: I agree with you 100%. As I said: modern compilers tend to be smart.
     
@basil4j: Based on the operations you listed, I would guess close to an order of magnitude of 200 cycles, NOT INCLUDING the logs/antilogs for the exponentials (basing this on about 20 cycles for an FP mult or div with the hardware integer multiplier). At 25MHz, this is 8 microseconds. The only real time taker will be the exponentials. I doubt that they will be more than an order of magnitude longer, giving an estimate of roughly 100 microseconds for the math, and almost certainly less than a millisecond.
     
Going back to your needed time scale, I don't think efficiency is likely your main concern. Efficient enough does the job. Within broad limits, the guideline is: make it right, then make it fast. If you need to worry about fast to make it work, only make it as fast as you must, then put the effort elsewhere. Ditto for size: if it fits, don't worry about saving a few bytes. If it doesn't fit, you need to worry, and it is often more than a few bytes that are the concern.
  18. Like
    enl got a reaction from basil4j in Optimising math   
    (please excuse crummy typing. Cat on lap takes one hand as he has paws wrapped around it)
     
    Last thing first: It is the same. If n is a const, then it really doesn't matter... the compiler will precompute what it can. In a case like this, I use #define for the constant, but a const variable should generally work the same: the compiler precomputes where it can, and only makes a var if it needs to, such as if you make a pointer to it. Note that to do exponentiation, you need to call a (slow) function. The ^ is exclusive-or.  The underlying function uses log and antilog. If you have a fixed power, you can speed it up by either expanding as a Taylor series or a continued fraction, since that is how the log and the antilog are done. This would go faster as a Taylor series. If time isn't a major issue (from your previous posts, it looks like it isn't at 25MHz), use the lib function from math.h.
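     
    As a related special case: if the fixed power happens to be a small integer, exponentiation by squaring avoids the log/antilog machinery of pow() entirely (fractional powers still need the series or library route). A minimal sketch, with powi as a made-up helper name:
     
    ```c
    #include <assert.h>
    #include <math.h>

    /* Exponentiation by squaring for a fixed non-negative integer power.
       Avoids the log/antilog path behind pow() for this special case. */
    static float powi(float x, unsigned n)
    {
        float r = 1.0f;
        while (n) {
            if (n & 1u)
                r *= x;      /* fold in this bit's contribution */
            x *= x;          /* square for the next bit */
            n >>= 1;
        }
        return r;
    }

    int main(void)
    {
        assert(powi(2.0f, 10) == 1024.0f);
        assert(fabsf(powi(1.5f, 3) - 3.375f) < 1e-6f);
        return 0;
    }
    ```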
     
    Trade off: speed vs. range. 32-bit takes 4 times as long, but has much more range (signed: roughly 32,000 vs 2,000,000,000). For floating point, you have 32 bit no matter what (for float... in general, you don't use double unless you must on an embedded device)
     
    The first questions: What are the types? Both signed and unsigned long are integer types. The div by 2 will be done as a shift if the types are all long and unsigned long. Modern compilers are smart.
     
    If one of them is a float, the arithmetic will be done as float when needed, and from then on. What can be done as integer will be done so. Div by two will be optimized by most compilers as a decrement of the binary exponent, so no worry about floating point divide there, either. Modern compilers are real smart. Don't break it up. The compiler will make it better than you can, unless there is something you haven't said. If you need the result to be float, and ALL of the vars are integer, you MUST use a cast to force conversion to floating point where you want the conversion done. Use parentheses to control exactly when the conversion happens, so it isn't done early. If P is float, and all else is integer, the conversion will be done when the truncated result is stored.
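     
    A minimal sketch of the cast placement (the variable names are made up for illustration):
     
    ```c
    #include <assert.h>

    /* Without a cast, the integer divide happens first and only the
       truncated result is converted on store. */
    static float div_trunc(long a, long b)
    {
        return a / b;           /* 7/2 -> 3, then 3 -> 3.0f */
    }

    /* The cast promotes the operand, so the divide itself is done in
       floating point. */
    static float div_exact(long a, long b)
    {
        return (float)a / b;    /* 7.0f/2 -> 3.5f */
    }

    int main(void)
    {
        assert(div_trunc(7, 2) == 3.0f);
        assert(div_exact(7, 2) == 3.5f);
        return 0;
    }
    ```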
  19. Like
    enl got a reaction from basil4j in Optimising math   
    If float is used, moderate memory overhead. Most (all?) operations are implemented as function calls, so the functions used must be included. The difference between one add and 20 adds is minimal, though. Once the function is in the build, calling it isn't a lot of space.
     
    Big thing is time. Software implementations of FP can be slow. A device with hardware mult (integer) can do many FP operations a lot faster than those without hardware mult. Hardware div (integer) makes things better yet. A few operations are not going to be a big issue, timewise. The functions that use a lot of operations are the killers, like exponentiation and logs. These can be worth optimizing in many cases. If the previous thread hadn't given pretty loose timing for the altitude comp, I would call this a prime candidate for a specialty function for the exponentiation. Might still need it, but my guess is not.
     
    Greeg's methods apply to the integer math (or fixed point), and can be used to avoid FP in cases where the final result needed is integer (or, again, fixed point), but intermediate comps may need FP or fractions.
     
     
    A question I still have is: Must the altitude be computed in-flight? Or can the sensor data be stored and converted on the ground? Is the altitude needed? Or only some property, such as detecting when max altitude is reached? The answers can make a big difference in what math need be done on the MSP430, and in how to do it.
  20. Like
    enl got a reaction from abecedarian in Optimising math   
    @abecedarian: The first one is good. The second can lose precision, as the LSB is thrown away before the mult. Makes a difference if SENSE is odd. The compiler should do it the first way.
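     
    The precision loss is easy to see with small numbers. A minimal sketch, using made-up values for SENSE and the other factor:
     
    ```c
    #include <assert.h>

    /* Multiply first: nothing is lost until the final divide. */
    static long mul_then_div(long sense, long x)
    {
        return (sense * x) / 2;   /* 5*3 = 15, 15/2 -> 7 */
    }

    /* Divide first: the LSB of sense is thrown away before the
       multiply, so odd values lose precision. */
    static long div_then_mul(long sense, long x)
    {
        return (sense / 2) * x;   /* 5/2 -> 2, 2*3 -> 6 */
    }

    int main(void)
    {
        assert(mul_then_div(5, 3) == 7);
        assert(div_then_mul(5, 3) == 6);   /* off by one vs. above */
        return 0;
    }
    ```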
  21. Like
    enl got a reaction from tripwire in What are you doing right now..?   
    Grading papers and having a beer (or, more accurately, distracting myself from grading papers. Students are doing pretty good, but decoding vector calculus proofs takes a mental toll...)
     
    Oh, in general....
     
    Working on an articulated tail, neck, and head. Too many servos! If I ever get anywhere with it, I will post. Right now, still mocking up the test system. I can't wait for next Halloween......
  22. Like
    enl got a reaction from tripwire in Compile questions - Routines within ISR   
    This warning comes about because it is generally (but not always) bad practice to call a function from within an interrupt handler. Interrupt handlers should generally be as short and as fast as possible. The function call overhead can slow things down, increasing response time to other interrupts, and add to the stack burden, which is significant when you have limited memory or time-critical response requirements.
     
     
    I would ask if you are sure that inlining it (or replacing the function with a #define macro) would increase code size. Incrementing a pointer isn't a big deal. Function calls take a bit of code space for setup and stack space for parameters. I presume that there is more than just incrementing a simple pointer, so.....
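     
    For something as small as bumping a pointer, either of the two alternatives mentioned avoids the call overhead. A minimal sketch (the names are illustrative, not from the original code):
     
    ```c
    #include <assert.h>

    /* Macro version: expands in place, no call overhead, no type check. */
    #define BUMP(p) ((p)++)

    /* Inline function version: same effect, but type-checked by the
       compiler. */
    static inline void bump(int **p)
    {
        (*p)++;
    }

    int main(void)
    {
        int buf[4] = {0};
        int *wr = buf;

        BUMP(wr);       /* advances wr to buf + 1 */
        bump(&wr);      /* advances wr to buf + 2 */

        assert(wr == buf + 2);
        return 0;
    }
    ```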
     
     
    That said, there are times when it is perfectly acceptable to call a function within an interrupt handler. For example, if your interrupt is the only one active, and there is no chance of missing the next one because the interrupt rate is known to be low enough the routine will finish before the next interrupt, it is ok. Not best practice, but ok.
     
    If other interrupts are active, and they can wait for this one to finish, and there is no chance of missing one, it is, again, ok, but not best practice.
     
    Things to consider: A good compiler can determine the needed stack depth by tracking the call chain. This is not as easy if there are function calls in an interrupt, and may, in fact, be impossible if interrupts are re-enabled within the routine (not recommended on MSP430, IMHO). I don't know off hand if the compiler you are using does this-- I use CCS and have no idea if it does, as it has never been an issue for me. This is important in many cases, as it allows the compiler to manage RAM usage appropriately based on the context.
     
     
    A better way to structure things, if you can, is have the interrupt routine do as little as possible, and handle everything else in your general code. The model that is commonly used is to have a main loop to do the work, and goes to sleep when the work is done. The interrupt does what it must, and resets the sleep on return (resets the low power mode bits on the MSP430), signalling what must be done for the main loop if needed. If this is not practical, and you can be sure that you won't lose due to memory or timing in the interrupt, go with it. Nothing says that the 'best' way is always the right way. I have shoved entirely too much into interrupt handlers at times, when it was the most practical solution for one reason or another.
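     
    The flag-and-sleep model described above can be sketched on the host like this. On a real MSP430 the handler would also clear the low-power bits on exit (e.g. with __bic_SR_register_on_exit()); here the interrupt is simulated by calling the handler directly, and all names are made up:
     
    ```c
    #include <assert.h>
    #include <stdbool.h>

    static volatile bool data_ready;   /* set by ISR, cleared by main loop */
    static volatile int  sample;
    static int           processed;

    /* Stand-in for the real ISR: do the minimum (capture + flag) and
       let the main loop do the rest. */
    static void timer_isr(void)
    {
        sample = 42;
        data_ready = true;
    }

    /* One pass of the main loop: wake, check the flag, do the work. */
    static void main_loop_step(void)
    {
        if (data_ready) {
            data_ready = false;
            processed = sample * 2;
        }
    }

    int main(void)
    {
        timer_isr();        /* "interrupt" fires */
        main_loop_step();   /* main loop wakes and processes */
        assert(processed == 84);
        return 0;
    }
    ```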
     
     
    The pragma referenced is likely the FUNC_CANNOT_INLINE pragma. This will tell the compiler that the function is not inlinable, and it will stop yelling at you about it. Alternatively, use DIAG_SUPPRESS for the given message (before your function) paired with DIAG_DEFAULT (after your function).
  23. Like
    enl got a reaction from basil4j in Compile questions - Routines within ISR   
    This warning comes about because it is generally (but not always) bad practice to call a function from within an interrupt handler. Interrupt handlers should generally be as short and as fast as possible. The function call overhead can slow things down, increasing response time to other interrupts, and add to the stack burden, which is significant when you have limited memory or time-critical response requirements.
     
     
    I would ask if you are sure that inlining it (or replacing the function with a #define macro) would increase code size. Incrementing a pointer isn't a big deal. Function calls take a bit of code space for setup and stack space for parameters. I presume that there is more than just incrementing a simple pointer, so.....
     
     
    That said, there are times when it is perfectly acceptable to call a function within an interrupt handler. For example, if your interrupt is the only one active, and there is no chance of missing the next one because the interrupt rate is known to be low enough the routine will finish before the next interrupt, it is ok. Not best practice, but ok.
     
    If other interrupts are active, and they can wait for this one to finish, and there is no chance of missing one, it is, again, ok, but not best practice.
     
    Things to consider: A good compiler can determine the needed stack depth by tracking the call chain. This is not as easy if there are function calls in an interrupt, and may, in fact, be impossible if interrupts are re-enabled within the routine (not recommended on MSP430, IMHO). I don't know off hand if the compiler you are using does this-- I use CCS and have no idea if it does, as it has never been an issue for me. This is important in many cases, as it allows the compiler to manage RAM usage appropriately based on the context.
     
     
    A better way to structure things, if you can, is have the interrupt routine do as little as possible, and handle everything else in your general code. The model that is commonly used is to have a main loop to do the work, and goes to sleep when the work is done. The interrupt does what it must, and resets the sleep on return (resets the low power mode bits on the MSP430), signalling what must be done for the main loop if needed. If this is not practical, and you can be sure that you won't lose due to memory or timing in the interrupt, go with it. Nothing says that the 'best' way is always the right way. I have shoved entirely too much into interrupt handlers at times, when it was the most practical solution for one reason or another.
     
     
    The pragma referenced is likely the FUNC_CANNOT_INLINE pragma. This will tell the compiler that the function is not inlinable, and it will stop yelling at you about it. Alternatively, use DIAG_SUPPRESS for the given message (before your function) paired with DIAG_DEFAULT (after your function).
  24. Like
    enl got a reaction from bluehash in Mailbag   
    Banner day:
     
    Noritake 114X16 display and Rigol DS2072 'scope. Scope is gonna take some getting used to.
     
    Now, to figure out what to do with the old HP 130C scope. Still works great and checks to spec (actually, better than spec: nominal is 500 kHz, but quite acceptable for sub-microsecond pulses), but at 50 years old and all tubes, it is getting a little long in the tooth, and space being what space is...
  25. Like
    enl got a reaction from samurai440 in How to use hall sensor with a brushed dc motor   
    What type of hall sensor?
     
     
    ---
    If it is a switch type (output is active if there is a field of the appropriate orientation): tie the output of the sensor to an input that can trigger an interrupt. Have a counter (global) that the interrupt routine increments or decrements (as appropriate based on direction) each time the interrupt is triggered. If the sensor needs to have a response, respond in the interrupt handler. The counter will need to be declared as volatile, and can be read anywhere in the code as a position.
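     
    A host-side sketch of that counter, assuming a switch-type sensor on an interrupt pin; the port ISR is simulated by calling the handler directly, and the direction is assumed to come from the motor-drive code:
     
    ```c
    #include <assert.h>
    #include <stdint.h>

    static volatile int32_t position;        /* read anywhere as position */
    static volatile int8_t  direction = 1;   /* +1 forward, -1 reverse */

    /* Stand-in for the pin-interrupt handler: one pulse = one count,
       incremented or decremented based on drive direction. */
    static void hall_isr(void)
    {
        position += direction;
    }

    int main(void)
    {
        hall_isr();          /* two pulses forward... */
        hall_isr();
        direction = -1;
        hall_isr();          /* ...one pulse back */
        assert(position == 1);
        return 0;
    }
    ```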
     
    ----
    If it is analog output, tie it to a comparator input and use the comparator to trigger the interrupt.