43oh

pabigot

Members
  • Content Count: 577
  • Joined
  • Last visited
  • Days Won: 30

Reputation Activity

  1. Like
    pabigot got a reaction from GeekDoc in Newbie general C code question   
    I see that happening more and more; I blame it on injudicious application of MISRA rule 6.3.
     
    It's absolutely the case that types with specific sizes should be used when the range is potentially beyond the native word size of the processor or data is exchanged over a communication link, including via a file.
     
    I have yet to see a cogent argument why specific-size types should be used in preference to "int" and "unsigned int" for indexes and offsets where the native word size is appropriate, or when you're manipulating a processor peripheral that is known to be that native word size.
     
    On one hand, you might select a type that's too large for a target processor, impacting portability. TI did this recently to the CC3000 host interface, changing it so every parameter is a uint32_t even if its value range is less than one octet. This unnecessarily bloats code and hampers performance on the MSP430. Sure, when it goes out over SPI it needs to be 4 octets: but that's an encoding issue and should not propagate up to the library API.
     
    Or you might go too small, and select uint8_t. Two problems: (1) it's a pain if you wrote code to index over an array, and somebody increases the array size above 255. (2) A uint8_t value will promote to the signed type int when used in expression calculations. I've got some discussion of this on my blog; the non-C++ stuff related to uint8_t is at the end. tl;dr: -x doesn't mean what you probably think it does.
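    To make the promotion pitfall concrete, here's a minimal sketch (plain hosted C, nothing MSP430-specific; the printed values are what the standard requires):

    #include <stdint.h>
    #include <stdio.h>

    int main (void)
    {
      uint8_t x = 1;
      /* x promotes to (signed) int before negation, so -x is the int value -1,
       * not the 8-bit wrap-around value 0xFF. */
      printf("-x = %d\n", -x);                              /* prints -1 */
      printf("(uint8_t)-x = %u\n", (unsigned)(uint8_t)-x);  /* prints 255 */
      /* Comparing against an unsigned constant converts that -1 to UINT_MAX,
       * so this prints 0 (false), not 1: */
      printf("-x == 0xFFu? %d\n", -x == 0xFFu);
      return 0;
    }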
     
    The advice @roadrunner84 gave is better, though if the "appropriate type" argument is applied consistently one should use size_t for all variables used as indexes as well as sizes. Which will hurt you badly on the MSP430 if you're using a memory model that supports objects larger than 64 kiBy, so I prefer unsigned int there too.
     
    Note: If you are going to use the size-specific types, get in the habit of explicitly including either <inttypes.h> or its lesser cousin <stdint.h>. Yes, for MSP430 and ARM the vendor headers provide it for you, but if you don't remember that it comes from the C library, not the C language, you'll be confused when you graduate to host platforms where it's not part of the null context.
  2. Like
    pabigot got a reaction from roadrunner84 in Newbie general C code question   
  3. Like
    pabigot got a reaction from igor in __delay_cycles   
    I don't get why every time I get into this sort of discussion I'm the one who gets to spend time doing the experiment to find the real answer, but what the heck. 
    In the interests of best scientific practices, before I get started I'll define the experiment:
      • Timing will be performed by reading the cycle count register, executing an instruction sequence, then reading the cycle counter. The observation will be the difference between the two counter reads.
      • The sequence will consist of zero or one context instructions followed by zero or more (max 7) delay instructions.
      • The only context instruction tested will be a bit-band write of 1 to SYSCTL->RCGCGPIO enabling a GPIO module that had not been enabled prior to the sequence.
      • The two candidate delay instructions will be NOP and MOV R8, R8.
      • Evaluation will be performed on an EK-TM4C123GXL experimenter board using gcc-arm-none-eabi-4_8-2013q4 with the following flags: -Wall -Wno-main -Werror -std=c99 -ggdb -Os -ffunction-sections -fdata-sections -mthumb -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=softfp
      • The implementation will be in C using BSPACM, with the generated assembly code inspected to ensure the sequences as defined above are what has been tested.

    The predictions:
      • Null hypothesis (my bet): There will be no measurable cycle count difference in any test cases that vary only in the selected delay instruction. I.e., there is no pipeline difference on the Cortex-M4.
      • "Learn something" result (consistent with my previous claims but not my expectations): For cases where N>0, one cycle fewer will be measured in sequences using NOP than in sequences using MOV R8,R8. I have no prediction whether the context instruction will impact this behavior. I.e., on the Cortex-M4 only one NOP instruction may be absorbed.
      • "Surprise me" result (still consistent with my previous claims but demonstrating a much higher level of technology in Cortex-M4 than I would predict): A difference of more than one cycle will be observed between any two cases that vary only in the selected delay instruction, but the difference has an upper bound less than the sequence length. I.e., the pipeline is so deep multiple decoded instructions can be dropped without impacting execution time.
      • "The universe is borked" result (can't happen): The duration of a sequence involving NOP is constant regardless of sequence length while the duration of the sequence involving MOV R8,R8 is (in the limit) linear in sequence length. I.e., the CPU is able to decode and discard an arbitrary number of NOP instructions in constant time.

    @Lyon, @spirilis, @igor, @bluehash, and anybody else: please post your predictions (or state you have none) while I'm gone. I expect the execution and analysis of the experiment to take less than the 40 minutes it took to design the experiment and document the plan, but because I'm curious at a meta level and I'm donating time to this I'm not going to comment further or post my results until other people put some skin in the game.
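    For concreteness, the harness amounts to something like this (a sketch only; CMSIS names rather than the BSPACM wrappers, with DELAY_SEQUENCE standing in for the macro-expanded context+delay sequence under test):

    uint32_t t0, t1, observation;
    t0 = DWT->CYCCNT;
    DELAY_SEQUENCE();          /* e.g. __asm volatile ("nop\n\tnop\n\tnop") */
    t1 = DWT->CYCCNT;
    observation = t1 - t0;     /* includes the fixed counter-read overhead */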
    To the lab!
  4. Like
    pabigot got a reaction from igor in __delay_cycles   
    It gets interesting if you dig into it.
     
    The ARMv6-M Architecture Reference Manual in section A6.7.47 specifies that NOP is an "architected NOP" that is a hint instruction as defined in section A5.2.5. Hint instructions are what implement sleep/wake features (SEV, WFE, WFI, YIELD). The assembly code "NOP" expands to the 16-bit instruction 0xBF00. This specific instruction was introduced in ARMv6T2. (The phrase "architected" appears to mean that every ARM implementation must behave within the defined limits, as opposed to non-architected behaviors where a vendor may change the behavior, e.g. to reduce power consumption.)
     
    A6.7.47 does say that the timing effects are not guaranteed (it could even reduce execution time), and notes they are not suitable for timing loops. Other resources also note that it may be removed from the pipeline before it reaches the execution stage.
     
    Section D.2 says that before the Unified Assembly Language, NOP was a pseudo-instruction that was replaced by MOV r0, r0 (ARM) or MOV r8, r8 (Thumb).
     
    Based on that detail, I'm going to conclude that the architected behavior of NOP is indeed limited to instruction alignment, and that it is incidental and non-architected that the pseudo-instruction implementation had the effect of a delay.
     
    I'm still reluctant to roll my own __DELAY_ONE_CYCLE() function that expands to MOV r8, r8. I still say that if the instruction has to be decoded, put enough of them in there and you'll get a delay. But I'm not as confident in that decision as I was before doing the research.
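    For reference, a sketch of what such a macro might look like with GCC inline assembly (an illustration, not an endorsement):

    /* Force the legacy Thumb encoding that has to be decoded like any other
     * data-processing instruction, rather than the architected NOP hint. */
    #define __DELAY_ONE_CYCLE() __asm volatile ("mov r8, r8")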
  5. Like
    pabigot got a reaction from enl in Newbie general C code question   
  6. Like
    pabigot got a reaction from timotet in Newbie general C code question   
  7. Like
    pabigot got a reaction from igor in __delay_cycles   
    Apparently not. From the ARM CMSIS core_cmInstr.h header (ARM GCC flavor):
    /** \brief  No Operation

        No Operation does nothing. This instruction can be used for code alignment purposes.
     */
    __attribute__( ( always_inline ) ) __STATIC_INLINE void __NOP(void)
    {
      __ASM volatile ("nop");
    }

    So this is the first "nop" I've ever encountered that explicitly notes its use might not produce a delay. Good to know, and for pipelined architectures obvious (at least once it's pointed out).
     
    I could imagine that an unadorned asm("nop") as described by @Lyon in that thread might not work if the compiler optimizes it away, but the volatile qualifier should ensure it gets into the instruction stream. In practice, if it's present in the instruction stream it simply has to impact execution, even if the net effect is it gets absorbed into an unused pipeline stage. If you're concerned about that happening, insert another one, and repeat until the desired effect is visible.
     
    The situation where I've used __NOP() is to enforce the 3-cycle delay after enabling a GPIO module before accessing its registers, and __NOP() works fine there (in the sense that I get a hard fault if I don't put any in, and don't when I put three in).
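    For illustration, that idiom looks roughly like this (TM4C CMSIS-style device header names assumed; the port bit is arbitrary):

    SYSCTL->RCGCGPIO |= (1U << 0);   /* enable the clock to a GPIO module */
    __NOP();                         /* wait out the required 3 cycles ... */
    __NOP();
    __NOP();                         /* ... before touching the module's registers */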
  8. Like
    pabigot got a reaction from spirilis in __delay_cycles   
    Apparently not. From the ARM CMSIS core_cmInstr.h header (ARM GCC flavor):
    /** \brief No Operation No Operation does nothing. This instruction can be used for code alignment purposes. */ __attribute__( ( always_inline ) ) __STATIC_INLINE void __NOP(void) { __ASM volatile ("nop"); } So this is the first "nop" I've ever encountered that explicitly notes its use might not produce a delay. Good to know, and for pipelined architectures obvious (at least once it's pointed out).
     
    I could imagine that an unadorned asm("nop") as described by @@Lyon in that thread might not work if the compiler optimizes it away, but the volatile qualifier should ensure it gets into the instruction stream.  In practice, if it's present in the instruction stream it simply has to impact execution, even if the net effect is it gets absorbed into an unused pipeline stage.  If you're concerned about that happening, insert another one, and repeat until the desired effect is visible.
     
    The situation where I've used __NOP() is to enforce the 3-cycle delay after enabling a GPIO module before accessing its registers, and __NOP() works fine there (in the sense that I get a hard fault if I don't put any in, and don't when I put three in).
  9. Like
    pabigot reacted to igor in __delay_cycles   
    As a follow-up to the original question: how does one go about adding a very small delay to a C program?
    By small I mean maybe one or two clock cycles (as compared to SysCtlDelay, which involves the overhead of a procedure call plus 3 cycles per loop).
    From this thread http://forum.stellarisiti.com/topic/1577-very-simple-question-using-noop/ it appears that a NOP doesn't necessarily fill the bill.
  10. Like
    pabigot got a reaction from oPossum in Heap / Stack pointer   
    Egads. Yes, I suppose it may work.
     
    The standard "intrinsic" in CCS is _get_SP_register() and msp430-elf-gcc should provide the same. mspgcc provides __read_stack_register() (it might not have the alias, in which case energia should probably add it).
     
    Any toolchain should have something that does this for you without having to introduce undefined behavior.
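    A minimal sketch of the intrinsic route (CCS spelling; substitute your toolchain's name per the above):

    unsigned int sp = _get_SP_register();
    /* compare sp against the top of the heap / a high-water mark here */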
  11. Like
    pabigot reacted to jon1426459908 in On Bit-Banding   
    Have you tried using exclusive load/stores?  Something like:
    while (__strex((__ldrex(&events) | EVENTS), &events));

    using intrinsics supplied by CCS. Not sure what the equivalent would be for GCC, other than simply inlining the assembly.
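    For GCC the same thing can be written with inline assembly; a hedged, untested sketch (Cortex-M3/M4, with events and EVENTS as above):

    static inline void events_set_exclusive (volatile unsigned int *p, unsigned int mask)
    {
      unsigned int tmp, failed;
      do {
        __asm volatile ("ldrex %0, [%2]\n\t"
                        "orr   %0, %0, %3\n\t"
                        "strex %1, %0, [%2]"
                        : "=&r" (tmp), "=&r" (failed)
                        : "r" (p), "r" (mask)
                        : "memory");
      } while (failed);
    }

    /* usage: events_set_exclusive(&events, EVENTS); */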
  12. Like
    pabigot got a reaction from jon1426459908 in On Bit-Banding   
    As noted in the previous post, a primary value of bit-band for user memory is to record an event in a flag variable without risking a race condition. Finding myself using this idiom inside an interrupt handler where it wasn't necessary I wanted to see whether I was decreasing code size or improving performance by doing so.
     
    Compiler: gcc version 4.8.3 20131129 (release) [ARM/embedded-4_8-branch revision 205641] (GNU Tools for ARM Embedded Processors)
     
    Optimization-related flags: -ggdb -Os -mthumb -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=softfp
     
    A Read-Modify-Write update of a single bit in a SRAM variable produces this code:

      34:main.c **** events |= EVENT;
      36 0002 054A       ldr r2, .L2+4
      41 0006 1068       ldr r0, [r2]       /* BEGIN RACE CONDITION */
      42 0008 40F01000   orr r0, r0, #16
      43 000c 1060       str r0, [r2]       /* END RACE CONDITION */

    which executes in 7 cycles (including 1 cycle overhead reading the cycle counter) on a TM4C123GH6PM. (NB: I removed from the listing the instruction at offset zero that read the cycle counter.)
    The bitband update produces this code:

      45:main.c **** BSPACM_CORE_BITBAND_SRAM32(events, EVENT_S) = 1;
      73 0000 0549       ldr r1, .L5
      77 0004 4901       lsls r1, r1, #5
      78 0006 01F10851   add r1, r1, #570425344 /* 0x22000000 */
      79 000a 0120       movs r0, #1

    which executes in 6 cycles. (Cycle counter read at offset 2 removed from listing.)
    So: No difference in code size, one cycle timing difference. No clear reason to pick one over the other for performance reasons.
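    For reference, the alias address being constructed in that listing follows the standard Cortex-M SRAM bit-band mapping; something like this is presumably what BSPACM_CORE_BITBAND_SRAM32() expands to:

    /* Each bit of the 0x20000000 SRAM region aliases a 32-bit word at
     * 0x22000000 + 32*(byte offset) + 4*(bit number). */
    #define BITBAND_SRAM32(object_, bit_)                                    \
      (*(volatile uint32_t *)(0x22000000U                                    \
                              + 32U * ((uintptr_t)&(object_) - 0x20000000U)  \
                              + 4U * (bit_)))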
     
    Full code (which will also eventually show up in BSPACM). There is no performance difference between inline and outline code; both were included to ensure previous use of the address of events within the function didn't affect the timing.

    /* BSPACM - misc/bitband demonstration application
     *
     * Written in 2014 by Peter A. Bigot <http://pabigot.github.io/bspacm/>
     *
     * To the extent possible under law, the author(s) have dedicated all
     * copyright and related and neighboring rights to this software to
     * the public domain worldwide. This software is distributed without
     * any warranty.
     *
     * You should have received a copy of the CC0 Public Domain Dedication
     * along with this software. If not, see
     * <http://creativecommons.org/publicdomain/zero/1.0/>. */

    /* Evaluate performance of a read-modify-write sequence to set a
     * single bit in an event mask versus a bitband assignment. */

    #include <bspacm/core.h>
    #include <stdio.h>

    #define EVENT_S 4
    #define EVENT (1U << EVENT_S)

    volatile unsigned int events;

    unsigned int rmw_set ()
    {
      unsigned int t0;
      unsigned int t1;
      t0 = BSPACM_CORE_CYCCNT();
      events |= EVENT;
      t1 = BSPACM_CORE_CYCCNT();
      return t1 - t0;
    }

    unsigned int bitband_set ()
    {
      unsigned int t0;
      unsigned int t1;
      t0 = BSPACM_CORE_CYCCNT();
      BSPACM_CORE_BITBAND_SRAM32(events, EVENT_S) = 1;
      t1 = BSPACM_CORE_CYCCNT();
      return t1 - t0;
    }

    void main ()
    {
      unsigned int t0;
      unsigned int t1;
      unsigned int cycles;

      BSPACM_CORE_ENABLE_INTERRUPT();
      printf("\n" __DATE__ " " __TIME__ "\n");
      printf("System clock %lu Hz\n", SystemCoreClock);
      BSPACM_CORE_ENABLE_CYCCNT();

      events = 0;
      t0 = BSPACM_CORE_CYCCNT();
      events |= EVENT;
      t1 = BSPACM_CORE_CYCCNT();
      printf("Inline RMW %x took %u cycles including overhead\n", events, t1 - t0);

      events = 0;
      t0 = BSPACM_CORE_CYCCNT();
      BSPACM_CORE_BITBAND_SRAM32(events, EVENT_S) = 1;
      t1 = BSPACM_CORE_CYCCNT();
      printf("Inline BITBAND %x took %u cycles including overhead\n", events, t1 - t0);

      events = 0;
      cycles = rmw_set();
      printf("Outline RMW %x took %u cycles including overhead\n", events, cycles);

      events = 0;
      cycles = bitband_set();
      printf("Outline BITBAND %x took %u cycles including overhead\n", events, cycles);

      t0 = BSPACM_CORE_CYCCNT();
      t1 = BSPACM_CORE_CYCCNT();
      printf("Timing overhead %u cycles\n", t1 - t0);
    }
  13. Like
    pabigot got a reaction from oPossum in MSP430 GCC - does it support 20 bit addressing?   
    The 4.7 dev version of mspgcc, which supports 20-bit addresses, tried to support mixed memory models so code could be small with data large, or vice versa, but GCC wasn't architected for that sort of thing, so there can be issues.
     
    AFAIK the Red Hat port supports only everything-16-bit and everything-32-bit, with -mlarge having a pretty big impact on code and data size (all pointers and size_t become 32-bit). I don't have a lot of experience with it, though, since a feature I need that's in the CCS6 version still hasn't been made available in the upstream source repositories.
  14. Like
    pabigot got a reaction from bluehash in declare ISR to be weak   
    You don't really need a separate declaration in a header; it should be enough to put __attribute__((__weak__)) on the definition of the function. I do this sort of thing in my Cortex-M infrastructure. If you do have both, it may be necessary that they be consistent.
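    A minimal sketch of that pattern (the handler name is illustrative):

    /* Library-provided default: an application overrides it simply by
     * defining a non-weak function with the same name. */
    void __attribute__((__weak__)) TIMER0A_IRQHandler (void)
    {
      while (1) {
        /* unexpected interrupt: spin here so a debugger can catch it */
      }
    }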
     
    For mspgcc, the compiler detects that the function has an __interrupt__ attribute and adds a second name to the assembly source which corresponds to the name crt0 is looking for, with the value being the address of the handler. crt0 in turn has a weak definition for that symbol that resolves to the default handler, so you do need to be careful about ordering your object files so the right definition is found first.
     
    You can also do something like the following, so you could make the PWM interrupt implementation available in a function named (e.g.) PWM_handler, then select which timer interrupt invokes it in an application-specific implementation file. You might need to use the actual symbol name __irq_# for this to work.

    /* Provide a weak alias that will resolve to the unlimited stack
     * implementation in the case where _sbrk() is requested. This
     * matches the nosys behavior of newlib. */
    void * _sbrk (ptrdiff_t increment) __attribute__((__weak__,__alias__("_bspacm_sbrk_unlimitedstack")));
  15. Like
    pabigot got a reaction from fairysontk92 in How to calc the clock   
    An anomaly with this: if you're using driverlib TivaWare_C_Series-2.1.0.12573 on a TM4C123GH6PM (i.e. an EK-TM4C123GXL launchpad) you can use

    SysCtlClockSet(SYSCTL_SYSDIV_2_5 | SYSCTL_USE_PLL | SYSCTL_OSC_MAIN | SYSCTL_XTAL_16MHZ); /* 2*PLL/5 = 80 MHz */

    to set the clocks to 80 MHz, but if you then call SysCtlClockGet() it'll tell you you're running at 66 MHz. This is because SYSCTL->DC1.MINSYSDIV, which driverlib consults, insists that the minimum divider is 3, so you can't possibly be running faster than 66 MHz.
    At least TI gives us the source code so we can figure these things out, and per this e2e post with a patch it'll be fixed some day.
  16. Like
    pabigot got a reaction from bluehash in Wolverine Launchpad   
    I got this toy supported under BSP430 today. I don't do pictures, and it's a slapdash writeup short on detail, but I wrote up some notes on my blog, for those who are interested. The short version is it works, I now understand why it uses the 20-pin interface (there aren't enough pins to warrant adding the other 20), and it still has a long way to go before it's no longer experimental.
  17. Like
    pabigot got a reaction from bluehash in How to calc the clock   
  18. Like
    pabigot got a reaction from timotet in How to calc the clock   
  19. Like
    pabigot got a reaction from igor in How to calc the clock   
  20. Like
    pabigot got a reaction from Mr.Cruz in Execution time - the easy way   
    FWIW, DWT is a compile-time constant pointer, not a variable, so there's no double-dereference (the CYCCNT value is read directly from its register). This is true of all CMSIS pointers to mapped memory.
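    The usual idiom, for reference (CMSIS names; CYCCNT has to be enabled once before it counts):

    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT block */
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;              /* start the cycle counter */

    uint32_t t0 = DWT->CYCCNT;
    /* ... code being timed ... */
    uint32_t elapsed = DWT->CYCCNT - t0;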
  21. Like
    pabigot got a reaction from chicken in SHARP Memory Display Booster Pack   
    Without looking at the source, there's a startup procedure for these displays that, if not followed, might explain that behavior. My implementation of it can be seen here.
  22. Like
    pabigot got a reaction from xpg in The Datasheet "Book Club"   
    Today's research is on Cortex-M sleep modes and interrupt handling. I'm still digesting the bulk of it, but the short form is there are three common operational modes (RUN, SLEEP, DEEP SLEEP) that are implemented through Cortex-M instructions WFI/WFE and bits in the SCR. (Anything lower than DEEP SLEEP, like "hibernate" or EM3/EM4, is probably vendor-specific). Based on a sample size of two (TM4C and EFM32), RUN and SLEEP are simple, but DEEP SLEEP can reconfigure your processor and peripheral clocks back to their power-up defaults. PRIMASK plays an important role in this too, if you want to be sure all interrupts are handled before you commit to sleeping. (Which you probably do, if what your interrupts do includes assumptions that the clocks are configured in a particular way.)
     
    The "book-club" relevant part is that section 5.2.6 "System Control" of the TM4C123GH6PM datasheet clearly specifies that software must confirm the EEPROM is not busy prior to entering either SLEEP or DEEP SLEEP. TivaWare's ROM_SysCtl*Sleep and CPUwfi() don't do this explicitly, though many of the EEPROM_* commands do check EEPROM.EEDONE before returning. Keep this in mind if you happen to be mucking with the EEPROM just before going to sleep, say, maybe to save some state in non-volatile memory.
  23. Like
    pabigot got a reaction from bluehash in Turning MSP430 into an amperemeter - possible?   
    Very interesting. From the MSP430_EnergyTrace header in the source release (which is BSD-3-Clause):
     

    Record format
    -------------
    The default record format for operation mode 1 for the MSP430FR5859 is 22 bytes for each record consisting of:
      [8byte header][8byte device State][4byte current I in nA][2byte voltage V in mV][4byte energy E in uWsec=uJ]
    Where the header consists of:
      [1byte eventID][7byte timestamp in usec]
    The eventID defines the number of arguments following the header.
      eventID = 1 : I value, 32 bits current
      eventID = 2 : V value, 16 bits voltage
      eventID = 3 : I & V values, 32 bits current, 16 bits voltage
      eventID = 4 : S value, 64 bits state (default type for ET_PROFILING_DSTATE)
      eventID = 5 : S & I values, 64 bits state, 32 bits current
      eventID = 6 : S & V values, 64 bits state, 16 bits voltage
      eventID = 7 : S & I & V & E values, 64 bits state, 32 bits current, 16 bits voltage, 32 bits energy (default type for ET_PROFILING_ANALOG_DSTATE)
      eventID = 8 : I & V & E values, 32 bits current, 16 bits voltage, 32 bits energy (default type for ET_PROFILING_ANALOG)
      eventID = 9 : S & I & V values, 64 bits state, 32 bits current, 16 bits voltage
    Sampling frequencies from 100 Hz to 100 kHz.
    It's probably the same general solution as Energy Micro uses for the energyAware Profiler.
     
    Might be a capability that could be added to mspdebug when using the tilib driver.
     
    Tempting project, but I really need to focus on BSPACM for now.
     
    *EDIT* See http://forum.43oh.com/topic/5364-msp-debug-stack-no-longer-open-source/ for licensing restrictions related to this feature.
  24. Like
    pabigot got a reaction from xpg in MSP debug stack no longer open source   
    Just a heads up for those concerned with software licensing: although TI has open-sourced the MSP Debug Stack, in fact it now contains material that is not open-source, related to an Energy Trace facility. That facility and the change in licensing appeared in June 2013 in slac460f, prior to which this package was entirely BSD.
     
    I've confirmed that objects derived from the sources with the TI TSPA license (displayed below) are present in libmsp430.so, which appears to bring the licensing below into effect. There is a single API header which is BSD-3-Clause, but to use it would explicitly invoke material that is restricted.
     
    Anybody using the MSP debug stack who has not reviewed the license on each new release should be made aware of the change and consult qualified legal counsel as appropriate.
     

    /*
     * EnergyTraceProcessor.h
     *
     * Copyright (c) 2007 - 2013 Texas Instruments Incorporated - http://www.ti.com/
     *
     * All rights reserved not granted herein.
     * Limited License.
     *
     * Texas Instruments Incorporated grants a world-wide, royalty-free,
     * non-exclusive license under copyrights and patents it now or hereafter
     * owns or controls to make, have made, use, import, offer to sell and sell
     * ("Utilize") this software subject to the terms herein. With respect to the
     * foregoing patent license, such license is granted solely to the extent that
     * any such patent is necessary to Utilize the software alone. The patent
     * license shall not apply to any combinations which include this software,
     * other than combinations with devices manufactured by or for TI ("TI Devices").
     * No hardware patent is licensed hereunder.
     *
     * Redistributions must preserve existing copyright notices and reproduce this
     * license (including the above copyright notice and the disclaimer and (if
     * applicable) source code license limitations below) in the documentation
     * and/or other materials provided with the distribution
     *
     * Redistribution and use in binary form, without modification, are permitted
     * provided that the following conditions are met:
     *
     *   * No reverse engineering, decompilation, or disassembly of this software
     *     is permitted with respect to any software provided in binary form.
     *   * any redistribution and use are licensed by TI for use only with TI Devices.
     *   * Nothing shall obligate TI to provide you with source code for the
     *     software licensed and provided to you in object code.
     *
     * If software source code is provided to you, modification and redistribution
     * of the source code are permitted provided that the following conditions are met:
     *
     *   * any redistribution and use of the source code, including any resulting
     *     derivative works, are licensed by TI for use only with TI Devices.
     *   * any redistribution and use of any object code compiled from the source
     *     code and any resulting derivative works, are licensed by TI for use
     *     only with TI Devices.
     *
     * Neither the name of Texas Instruments Incorporated nor the names of its
     * suppliers may be used to endorse or promote products derived from this
     * software without specific prior written permission.
     *
     * DISCLAIMER.
     *
     * THIS SOFTWARE IS PROVIDED BY TI AND TI'S LICENSORS "AS IS" AND ANY EXPRESS
     * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
     * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
     * IN NO EVENT SHALL TI AND TI'S LICENSORS BE LIABLE FOR ANY DIRECT, INDIRECT,
     * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
     * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
     * OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
     * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
     * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
     * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     */
  25. Like
    pabigot got a reaction from spirilis in MEMORY ORGANIZATION IN MSP340FR5969 - WOLVERINE   
    The "clean" way to solve this is to define an array in your code that will hold the data. Because the size exceeds the RAM capacity of the device, things get a bit complicated. @@spirilis' approach works (and I've used it before) but you do have to finesse const validation.
     
    The "clean" way is to define a new linker section and tell the compiler to put the array into that section. Here's part of a patch that does this for a large display buffer, assuming you're using Code Composer Studio:
     

    diff --git a/430BOOST-SHARP96_GrlibDisplay/LcdDriver/Sharp96x96.c b/430BOOST-SHARP96_GrlibDisplay/LcdDriver/Sharp96x96.c
    index eddcf43..b2f1845 100755
    --- a/430BOOST-SHARP96_GrlibDisplay/LcdDriver/Sharp96x96.c
    +++ b/430BOOST-SHARP96_GrlibDisplay/LcdDriver/Sharp96x96.c
    @@ -46,6 +46,7 @@
     #include "../driverlibHeaders.h"
     #include "inc/hw_memmap.h"
    +#pragma DATA_SECTION(DisplayBuffer, ".fram_data")
     unsigned char DisplayBuffer[LCD_VERTICAL_MAX][LCD_HORIZONTAL_MAX/8];
     unsigned char VCOMbit= 0x40;
     unsigned char flagSendToggleVCOMCommand = 0;
    diff --git a/430BOOST-SHARP96_GrlibDisplay/lnk_msp430fr5969.cmd b/430BOOST-SHARP96_GrlibDisplay/lnk_msp430fr5969.cmd
    index 1c19b99..1b47735 100644
    --- a/430BOOST-SHARP96_GrlibDisplay/lnk_msp430fr5969.cmd
    +++ b/430BOOST-SHARP96_GrlibDisplay/lnk_msp430fr5969.cmd
    @@ -136,6 +136,7 @@ SECTIONS
     {
         .cio       : {}   /* C I/O BUFFER                   */
         .sysmem    : {}   /* DYNAMIC MEMORY ALLOCATION AREA */
    +    .fram_data : {}   /* Stuff that won't fit in RAM    */
     } ALIGN(0x0400), RUN_START(fram_rw_start)
     GROUP(READ_ONLY_MEMORY)

    Other toolchains support the same concept but may use a different pragma or linker-script edit.
    You can put multiple objects in the same linker section, if you need to add more data in the future.
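    For comparison, a GCC-based toolchain (mspgcc or msp430-elf-gcc) would presumably express the same placement with a section attribute, plus an equivalent output section added to its linker script:

    unsigned char DisplayBuffer[LCD_VERTICAL_MAX][LCD_HORIZONTAL_MAX/8]
      __attribute__((__section__(".fram_data")));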