43oh

pabigot

Members
  • Content Count: 577
  • Joined
  • Last visited
  • Days Won: 30

Reputation Activity

  1. Like
    pabigot got a reaction from greeeg in New BSP430 release available   
    BSP430 has been updated to the 20141115 release, which includes full support for msp430-elf and support for the new FR4xx/2xx chip families. Details are available on the web site.
     
    Of particular interest to 43oh folks might be the updated script to build an msp430-elf toolchain, and the newlib sys integration.
     
    If you use BSP430 and encounter any problems, please file issue reports on github.
     
    This is the last version of BSP430 that will default to mspgcc. Henceforth it's going to be msp430-elf, though mspgcc will still be supported (at about the level that CCS is supported, i.e. not very much). I highly recommend everybody else start transitioning too. The new toolchain has its problems, but it's still the way forward.
  2. Like
    pabigot got a reaction from bluehash in New BSP430 release available   
  3. Like
    pabigot got a reaction from Rickta59 in Mac Homebrew formulas for GCC 4.9   
    The current version of my msp430-elf build script is available in BSP430's next branch.  It might be helpful.
  4. Like
    pabigot got a reaction from igor in __delay_cycles   
    I think it's probably right: C defines the behavior of arithmetic on unsigned int very carefully. (Beware, though, that unsigned char does not behave as nicely, since it can promote to a signed type, and once something becomes signed all bets are off.)
     
    http://www.thetaeng.com/TimerWrap.htm is my go-to site whenever I start having doubts about whether I'm handling counter comparisons correctly.
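     
    For what it's worth, here's a minimal sketch of the wrap-safe idiom that page describes, leaning only on well-defined unsigned wraparound. now() is a hypothetical stand-in for reading whatever free-running 16-bit counter you're actually using:

    #include <stdint.h>

    extern uint16_t now (void);            /* hypothetical: read the free-running counter */

    /* True once the counter has passed deadline, even if it wrapped in between. */
    static int deadline_passed (uint16_t deadline)
    {
      /* Unsigned subtraction is reduced mod 2^16, so the difference is small
       * (< 0x8000) exactly when deadline is in the recent past. */
      return (uint16_t)(now () - deadline) < 0x8000u;
    }

    /* Usage: uint16_t deadline = now () + delay_ticks;
     *        while (! deadline_passed (deadline)) { }  */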
  5. Like
    pabigot got a reaction from igor in [Tiva] delayMicroseconds unreliable   
    I'm not paying a lot of attention to this because it's Energia and Tiva, neither of which I use right now, but for low-overhead delays of reasonable duration you might want to look at how BSPACM does it. For microsecond counts you should just be able to convert the count to cycles based on the CPU frequency. Wouldn't be dependent on SysTick then.
     
    There's some discussion of this on the Stellarisiti forums somewhere.
     
    For what that's worth.
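     
    To illustrate the conversion (a sketch only, not BSPACM's actual code; CPU_HZ and spin_cycles() are hypothetical placeholders, with spin_cycles() implementable on Cortex-M via the DWT cycle counter or a counted loop with a known cost per iteration):

    #include <stdint.h>

    #define CPU_HZ 80000000UL                     /* assumed 80 MHz Tiva system clock */

    extern void spin_cycles (uint32_t cycles);    /* hypothetical busy-wait primitive */

    void delay_us (uint32_t us)
    {
      /* Widen before multiplying so large microsecond counts don't overflow. */
      uint32_t cycles = (uint32_t)(((uint64_t)us * CPU_HZ) / 1000000UL);
      spin_cycles (cycles);
    }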
  6. Like
    pabigot got a reaction from spirilis in [Tiva] delayMicroseconds unreliable   
  7. Like
    pabigot got a reaction from SvdSinner in I2C basic functions on f5529   
    You can take a look at how BSP430 does it to see if that's helpful, though you'd have to refactor the I/O to do single bytes in interrupts rather than polled I/O if you're doing enough I2C activity to make it worthwhile.
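     
    Roughly, that refactoring means feeding one byte per UCTXIFG interrupt instead of spinning on the flag. A sketch (not BSP430's actual API; the register names are the usual F5529 USCI_B0 ones, and you'd hook i2c_tx_isr() up as the USCI_B0 handler using your toolchain's ISR syntax):

    #include <msp430.h>

    static volatile const unsigned char *tx_ptr;
    static volatile unsigned int tx_len;

    void i2c_write (const unsigned char *data, unsigned int len)
    {
      tx_ptr = data;
      tx_len = len;
      UCB0IE |= UCTXIE;               /* interrupt per byte instead of polling */
      UCB0CTL1 |= UCTR | UCTXSTT;     /* transmitter mode, generate START */
    }

    void i2c_tx_isr (void)            /* call from the USCI_B0 ISR on UCTXIFG */
    {
      if (tx_len) {
        UCB0TXBUF = *tx_ptr++;        /* queuing the next byte clears UCTXIFG */
        --tx_len;
      } else {
        UCB0CTL1 |= UCTXSTP;          /* all bytes sent: generate STOP */
        UCB0IE &= ~UCTXIE;
      }
    }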
  8. Like
    pabigot reacted to enl in Convert char to integer issue   
    This thread is now required reading for my intro programming students. And now I also understand (or at least think I do) more about how Python strings work, as well as the character and string encodings in modern practice.
  9. Like
    pabigot got a reaction from enl in Convert char to integer issue   
    That depends on the level you're working at. Yes, at the bottom it's all bytes (or bits, or transistor states, or whatever). In C, you're nominally limited to text that can be expressed as ASCII characters, and the side comment that led us down this rabbit hole was prompted by the OP's confusion among text strings, sequences of characters, and NUL-terminated sequences of characters: three distinct concepts.
     
    Even if it's not necessary to completely understand a specific subtlety at a particular stage of development, I do think it's worth a hic sunt dracones (i.e., "by the way, you're making an assumption here that won't always work out for you").
     
    Back to the lecture.
     
    Unlike in C, Unicode strings in Python are a first-class data type. You can operate on them (calculate length, extract substrings, sort, catenate) with complete disregard for how the text is represented as a sequence of characters, and how each character is represented. Similarly you can do this in C++11 with wide character support. In practice, these systems generally use UTF-16 or UCS-4 underneath, but to the developer it's just text of some arbitrary language.
     
    In these environments, you can certainly manipulate and display энергия without caring how that string is encoded in memory or by the I/O subsystem (which might translate for you from the internal representation to the encoding specified by the environment, e.g. the LANG variable or a previous invocation of setlocale(3)). In an embedded environment, you are more likely to need to know that the display you're writing to requires a specific byte to represent a specific character (extended ASCII), or that you must do the translation from characters to glyph bitmaps yourself.
     
    Here's an example Python program to play with. It works in both Python 2 (2.6+?) and Python 3.
    # -*- coding: utf-8 -*-
    from __future__ import unicode_literals
    import binascii

    te = 'text'
    de = te.encode('utf-8')

    print(type(te))
    print(type(de))
    print(te)
    print(binascii.hexlify(te))
    print(de)
    print(binascii.hexlify(de))
    print(te == de)

    tr = 'энергия'
    dr = tr.encode('utf-8')

    print(tr)
    print(binascii.hexlify(tr))
    print(dr)
    print(binascii.hexlify(dr))
    print(tr == dr)

    In Python 2, the t* values have type unicode and the d* values have type str. (Without the "from __future__" line the t* values would also have type str because Python 2 failed to distinguish sequences of (ASCII) characters from sequences of bytes.)
    In Python 3, the t* values have type str and the d* values have type bytes.
     
    The output from this under Python 2 is:

    llc[49]$ python /tmp/x.py
    <type 'unicode'>
    <type 'str'>
    text
    74657874
    text
    74657874
    True
    энергия
    Traceback (most recent call last):
      File "/tmp/x.py", line 20, in <module>
        print(binascii.hexlify(tr))
    UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-6: ordinal not in range(128)

    In the first block, you see that the text and encoded versions have the same bit representation and compare equal, even though they have different types.
    The second block blows chunks, because binascii.hexlify() can't operate on strings that have non-ASCII characters. If you comment out that line, you get a warning and the two strings are not equal.
     
    Under Python 3.4.2 you have to comment out the hex conversion of the text versions because it doesn't know how to convert Unicode to bytes, but doing that you get:

    llc[50]$ /usr/local/python-3.4.2/bin/python /tmp/x.py
    <class 'str'>
    <class 'bytes'>
    text
    b'text'
    b'74657874'
    False
    энергия
    b'\xd1\x8d\xd0\xbd\xd0\xb5\xd1\x80\xd0\xb3\xd0\xb8\xd1\x8f'
    b'd18dd0bdd0b5d180d0b3d0b8d18f'
    False

    Even for the strings that are entirely ASCII, the text and the UTF-8 encoded text are not equal, because Python 3 distinguishes them by type.
    This is why, in unit tests where I was checking whether the XML was right, it was necessary to know whether the XML was text or had been encoded in some way, e.g. for storage on disk. This arose in part because Python's standard library for converting Document Object Model (DOM) representations of XML into "XML" produces encoded text ready to be transferred to another system, not Unicode strings suitable for use in the application.
  10. Like
    pabigot got a reaction from roadrunner84 in Convert char to integer issue   
  11. Like
    pabigot got a reaction from spirilis in Convert char to integer issue   
  12. Like
    pabigot got a reaction from enl in Convert char to integer issue   
    I'm gonna have to object, mostly because your statements on XML and programming for Unicode are unclear in a way that obscures my main point. So I'm going to expand on that main point and try to clarify XML along the way.
     
    There is a strict conceptual difference between text and representations of text by encoding schemes such as ASCII and Unicode, just as there's a difference between integers and representations of integers as two's complement, one's complement, or sign-magnitude. I'm trying to express two points: first, be aware of that difference; and second, take into account the possibility of alternative representations.
     
    UTF-8 only uses single-byte encodings for characters that are in the ASCII character set (U+0000 through U+007F). The representation of character U+0080 (integer value 0x80 = 128) in UTF-8 is a two-byte sequence hex "C2 80".
     
    The only time ASCIIZ and UTF-8 representations are equivalent is when the encoded text is ASCII and the UTF-8 indicates text length by a terminating null. While any standard ASCII C string is bitwise equivalent to its null-terminated UTF-8 encoded representation, many UTF-8 encoded strings cannot be expressed as ASCII strings.
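     
    A small C illustration of that asymmetry (just a sketch): the byte count a NUL-terminated view gives you and the character count only agree while you stay within ASCII.

    #include <stdio.h>
    #include <string.h>

    int main (void)
    {
      const char ascii[] = "energy";            /* ASCII text == its UTF-8 encoding */
      const char utf8[] = "\xC3\xA9nergie";     /* UTF-8 encoding of "énergie": 7 characters */

      printf("%zu bytes before the NUL in \"%s\"\n", strlen(ascii), ascii);  /* 6 */
      printf("%zu bytes before the NUL in \"%s\"\n", strlen(utf8), utf8);    /* 8 */
      return 0;
    }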
     
    This means that "the content of a is ASCIIZ" tells you something very different about how a can be used than what you're told by "the content of a is null-terminated UTF-8". My original point: clearly document your data objects so the reader knows what they contain.
     
    XML (which I selected only as an example) by definition uses characters from Unicode. If you're using a language that has a Unicode text data type (unicode in Python 2, str in Python 3, std::wstring in C++11) then you will operate on XML text as Unicode characters, using length, copy, catenation, and other functions that manipulate Unicode data and that are distinct from the corresponding narrow-character C functions. You would not operate on them as text in their encoded form with those narrow-character C functions.
     
    Encoding comes into play when you need to transfer the text to another system (via storing in a file or sending it over a network). Then you can encode it as UTF-8, UTF-16, UTF-32, shift_jis, or whatever. This representation is not text: it is a sequence of integral values representing code points. If the values are not 8-bit then you also need to know byte ordering before you can treat it as a sequence of octets. For XML, absence of an encoding declaration requires that the content be UTF-8 or UTF-16. (This may be what you meant by "disagree on storing xml text being different from storing xml UTF-8 encoded...the default text encoding is UTF-8".)
     
    My whole point was: In early PyXB I mistakenly assumed text and data were identical, because by chance it happened that everything I encountered was in the ASCII subset of Unicode. This was an error, demonstrated when somebody used PyXB for (non-romaji) Japanese; I certainly never intended PyXB to only support English. After a fair amount of gratuitous rework, PyXB is very robust for languages where text can't be represented in ASCII. The lesson: Unless you know absolutely that you're dealing only with ASCII text in C, keep in mind from the beginning that text and the representation of text as data are two distinct things.
     
    I don't know what Windows does, but in modern POSIX systems like Linux the default text encoding is specified by the "C" or "POSIX" locale which uses the POSIX portable character set which is a subset of ASCII. UTF-8 is what POSIX calls a "state-dependent encoding", and is not the default. On my Linux systems I have to specifically override the environment variable LANG to en_US.UTF-8 to enable UTF-8 encoding to see non-ASCII content.
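     
    For example (a sketch): a C program starts in the "C" locale, and only picks up whatever the environment asks for when you call setlocale(3) with an empty name.

    #include <locale.h>
    #include <stdio.h>

    int main (void)
    {
      printf("startup locale: %s\n", setlocale(LC_ALL, NULL));    /* "C" */
      printf("environment locale: %s\n", setlocale(LC_ALL, ""));  /* e.g. "en_US.UTF-8" if LANG says so */
      return 0;
    }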
  13. Like
    pabigot got a reaction from Fred in Convert char to integer issue   
    Being pedantic (which I am...):
     
    Ignoring the past-end-of-array error: In C, the macro NULL as defined in various headers denotes a null pointer constant, nominally compatible with type void * but not necessarily of pointer type. In C++ NULL is an integral constant expression that evaluates to zero; it cannot be a pointer type. In C++11 the literal nullptr replaces NULL.
     
    The end-of-string terminating character in C and C++ is the ASCII code NUL, which is character value '\0' or equivalently an integral value zero.
     
    One is a pointer, the other is a character. You shouldn't mix those concepts: you'll just confuse yourself and whoever has to maintain your code. (If you use a compiler where NULL is equivalent to ((void*)0) and you enable all warnings (which you should do) you'd get complaints about assigning a pointer to a non-pointer object.)
     
    In situations like the above, assign either '\0' or simply 0 to a char that marks the end of the string.
     
    (When documenting an array like a that holds a NUL-terminated array of characters, you might refer to its content as "ASCIIZ", denoting a zero-terminated ASCII encoded string, as opposed to the length-plus-characters encoding used in some other languages. There are cases where the object would intentionally exclude the terminating zero, but that's a topic for another day.)
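     
    For concreteness, a tiny sketch of the right and wrong way to terminate:

    #include <stddef.h>

    static void truncate_at (char *a, size_t i)
    {
      a[i] = '\0';       /* fine: the NUL character (equivalently, plain 0) */
    #if 0
      a[i] = NULL;       /* wrong concept: null pointer constant; draws a warning
                            where NULL is ((void*)0) */
    #endif
    }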
  14. Like
    pabigot got a reaction from cde in Convert char to integer issue   
  15. Like
    pabigot got a reaction from spirilis in Convert char to integer issue   
  16. Like
    pabigot got a reaction from abecedarian in CC3200 WiFi Channels Limitation Info   
    Note that channels 12-14 are not legal for full-power use in North America, so depending on your location the fault may be in the configuration of your AP. (I haven't unboxed my CC3200, but I'd be pretty surprised if it couldn't handle at least channels 12 and 13, though I wouldn't be surprised if its default configuration excludes them.)
  17. Like
    pabigot got a reaction from spirilis in CC3200 WiFi Channels Limitation Info   
  18. Like
    pabigot got a reaction from spirilis in New Launchpad just dropped   
    You're right. I misread "formally called CPUXV2" as "formerly called CPUXV2". (BTW: If you reference a SLAU, please specify which version. SLAU208M page 183 documents the UCSCTL8 register, and doesn't have the "formally" phrase added in SLAU208N.)
     
    What I was afraid of was that the preprocessor symbols were going to change, which would be bad. (Still reeling from TI changing the version register values in the CC110x radio line because they went to a different manufacturing process but kept the same functionality.)
     
    TI should check the headers though. The msp430fr4131.h header from msp430-elf has #define __MSP430FR5XX_6XX_FAMILY__ which can't be right.
     
    AFAICT there are three sub-families in FRxx now: the original FR57xx (slau272c), the Wolverine FR58xx/FR59xx/FR68xx/FR69xx (slau367e), and the new FR4xx/FR2xx (slau445). The FR57xx isn't mentioned on the FRxx overview page any more, but apparently still exists.
  19. Like
    pabigot reacted to spirilis in New Launchpad just dropped   
    Are you sure that's what's happening?  I thought CPUXV2 was well established from the F5xxx/6xxx, FR5xxx/6xxx lines...
    That exact same wording is employed in the F5xxx/6xxx user's guide as well (SLAU208) page 183.
  20. Like
    pabigot reacted to greeeg in Wolverine Launchpad update for early purchasers   
    Works fine if you use an older FET firmware. Easy to do with mspdebug, harder/not feasible with CCS. 
    And you can't forget the all-important JTAG shroud color change between versions: the new one is black, the old one was grey.
  21. Like
    pabigot got a reaction from timotet in Exercise: Robust Digital Edge Detection   
    Alright, the problem's been identified and we're now down in the weeds. Rather than point out issues in solutions-in-progress, below is my explanation of the flaw and how I'm solving it. Please point out anything that seems wrong or doesn't satisfy the requirements as stated. Those who want to keep trying should not read this post yet.
     
    The key phrase, as is so often the case, is: "race condition"
     
    The MSP430 features that impact it are (1) the PxIFG register records only edge transitions, not signal level, and (2) disabling interrupts only delays notification of a detected event; it does not delay recording the event.
     
    What's wrong here is: At any point after (1) the pin state may change. If it changes an odd number of times before (3), we'll be configured to detect the wrong edge, and the state will always be opposite the real value.
     
    This has two problems.
     
    First, if the transition occurred during a long period with interrupts disabled, the state may have changed back prior to the interrupt handler being invoked, so again we get out of sync. One might argue that such a change should be characterized as "transient" even if it exceeded the arbitrary 20-cycle limit. From the perspective that leaving interrupts disabled that long is poor design I'd probably concede the point, so this quibble is weak.
     
    But second, as with configure, the state may change between (1) and (4), again desynchronizing the state and the edge being detected.
     
    The following approach solves both problems. First, we need to be absolutely sure that the edge we're looking for will detect a change from the state we think we're in. Use the following function:
     

    static __inline__ void configure_state_detection ()
    {
      do {
        state = PxIN & BITn;              // 1 : record state
        if (state) {
          PxIES |= BITn;                  // 2a : High, detect falling edge
        } else {
          PxIES &= ~BITn;                 // 2b : Low, detect rising edge
        }
        PxIFG &= ~BITn;                   // 3 : Reset interrupts from past changes
      } while ((PxIN & BITn) != state);   // 4 : loop if state changed
    }

    This is much like the original naive configuration, but skips the PxIE setting and makes sure that the state after configuring the edge matches the state for which the edge was configured. So we know we're looking for the right edge, even if it changed while we were setting things up.
    Then use that in these contexts:
     
    Configure:

    __disable_interrupt();
    configure_state_detection();   // 1 : safe set state and edge detection
    PxIE |= BITn;                  // 2 : Enable interrupts on state change, may fire right away
    __enable_interrupt();

    Nothing special for configure. Monitor changes a lot:
    Monitor:

    int in_state = state;
    configure_state_detection();   // 1 : safe set state and edge detection
    if (in_state != state) {
      if (state) {
        // 2a : emit rising edge event
      } else {
        // 2b : emit falling edge event
      }
    }

    This makes sure that state has actually changed within the resolution of our ability to detect it. (As with the rejected quibble above, we'll assume that any even number of changes between the original edge and the execution of the handler are ignorable within the bounds of "transient". If they had to be reported, interrupts were disabled too long and there is no viable solution.)
    Eagle-eyed folks will notice that there is a race condition in configure_state_detection: If the state changes a positive even number of times between (3) and (4) we're going to get a spurious interrupt as soon as Configure re-enables interrupts. However, this doesn't matter because the Monitor implementation will detect that no effective change occurred and will not generate an event.
     
    Because this safely resynchronizes the state and the PxIES configurations each time the state is inspected, and we only generate events when the state actually changed, it's also safe against the subtlety that setting PxIES can produce spurious interrupts.
     
    Addendum 2014-09-29T11:50: I should make clear that the state variable must be marked volatile as it's a global that is mutated within an ISR and read outside the ISR. Technically some hyper-optimizing compiler might otherwise allow the application to process an event while looking at an out-of-date value of state.
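     
    Concretely, the shared flag would be declared along these lines (the exact type is your choice; unsigned int is shown here only as an assumption):

    static volatile unsigned int state;   /* written inside the ISR, read by the application */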
  22. Like
    pabigot got a reaction from zeke in Exercise: Robust Digital Edge Detection   
  23. Like
    pabigot reacted to spirilis in Exercise: Robust Digital Edge Detection   
    Nice!  This one's getting bookmarked.  That's some thorough checking.
  24. Like
    pabigot got a reaction from spirilis in Exercise: Robust Digital Edge Detection   
  25. Like
    pabigot got a reaction from spirilis in Exercise: Robust Digital Edge Detection   
    Hah. Full points, and the clarifications in the first post updated to disallow this solution.
     
    Remember, I'm a software guy: I don't get to change the hardware so the radio pins show up in multiple places.