Posts posted by tripwire

  1. On 10/28/2018 at 5:56 PM, enl said:

    The few times I have needed to do this, I used interrupt handlers that did nothing but set or reset a bit to reflect the state.

    Yes, that's the method described in the blog post I linked. I just posted this as an alternative because I've not seen it before. Indeed, I've seen it claimed that the interrupt handler is the only possible way to track the reset button state.

    12 hours ago, Rei Vilo said:

    Wouldn't be easier to buy an external switch and connect it to any GPIO available?

    You could do that if you have a spare GPIO and don't mind having a loose switch. Even then, it's only four lines of code to poll the reset button...

  2. I've been working on a project for MSP-EXP430G2 which requires two buttons and hit the problem that S1 is set up as a reset button. Unlike S2 it's not connected to a regular GPIO pin; instead it's connected to the nRST/NMI pin.

    I looked for information on using the reset button as a user input and found some examples, but they all used the NMI ISR to track rising and falling edges. For example, see this post from the 43oh blog: Using the MSP430G2xx1 nRST NMI Pin as an Input.

    I really wanted some way to sample the current state of the button directly, which is tricky because there's no PxIN equivalent for the NMI input. Here's a snippet demonstrating the method I ended up using:

    #include <msp430.h> 
    
    int main(void)
    {
        WDTCTL = WDTPW | WDTHOLD;
    
        P1DIR = BIT0;
        P1OUT = 0;
    
        while(1)
        {
            // Poll S1
            WDTCTL = WDTPW | WDTHOLD | WDTNMI;                      // Detect NMI rising edges
            IFG1 &= ~NMIIFG;                                        // Clear latched value
            WDTCTL = WDTPW | WDTHOLD | WDTNMI | WDTNMIES;           // Detect NMI falling edges
            unsigned char buttonPressed = (IFG1 & NMIIFG) ? 1 : 0;  // Read latched value
    
            if(buttonPressed)
            {
                P1OUT |= BIT0;
            }
            else
            {
                P1OUT &= ~BIT0;
            }
    
            __delay_cycles(20000);
        }
    }

    The nice thing about this is that the polling code is self-contained and can be dropped in wherever you like.

    How it works

    The schematic for the reset/NMI pin in the G2 family looks like this:

    [Image: RST_schematic.png]

    The output of a rising-edge detector is latched to give the value of NMIIFG. If the WDTNMIES bit is set, an inverter is switched in ahead of the edge detector so it detects falling edges instead.

    To poll the state of the nRST/NMI pin, WDTNMIES is cleared, followed by NMIIFG. Next WDTNMIES is set high, and finally NMIIFG is read.

    When read, the NMIIFG bit will be set if:

    • S1 was pressed between when WDTNMIES went high and NMIIFG was read
    • S1 was released between when NMIIFG was cleared and WDTNMIES went high (i.e. it was pressed and has only just been released)

    The internal switch controlled by WDTNMIES is used to force an edge transition. This is what lets the edge detector output high even if the button was held down for a long time. Note that NMIIE is not set, so the NMI interrupt is never requested.

    Warnings

    These warnings apply to any use of the nRST/NMI pin for user input:

    • S1 cannot be used while SBW debugging is active. You need to disable or "free run" the debugger so the two functions don't interfere with each other.
    • If S1 is held down and the MCU crashes or resets it will be held in reset until you release the button.
  3. Another thing that might make this code unreliable is that you aren't setting P1OUT bit 3 high. Setting a bit in PxREN connects the pull resistor to the corresponding pin, but the pull direction is given by the value of the relevant PxOUT bit.

    The value of PxOUT is unspecified on startup and power-on clear, so you might be getting a pulldown resistor instead of pullup.

  4. Hi,
     
    I'm hoping some of you might have had experience of waterproofing sensor nodes or individual sensors, and can make suggestions.

     

    I have a project using the SensorTag to measure altitude, and need to seal it up enough that it doesn't fail after the first rain shower. It doesn't need to be completely watertight as it should never be submerged, but it needs to resist heavy and persistent rain.

     

    It can't go in a hard-shelled airtight container as that would interfere with the pressure measurements. It would probably be a bad idea to get conformal coating in the sensor port too. I'm not too sure whether conformal coating would protect the coin cell and holder adequately either.

     

    To make it even more difficult I have a wire running to an external reed switch that counts wheel revolutions, so that entry point needs sealing too.

     

    My current "plan" is to mummify it in plumber's PTFE tape and hope for the best, unless there are any better ideas out there :)

  5. This is down to the subtleties of integral promotions in C. The standard says this for the bitwise shift operators:

     

    "The integral promotions are performed on each of the operands.  The type of the result is that of the promoted left operand.  If the value of the right operand is negative or is greater than or equal to the width in bits of the promoted left operand, the behavior is undefined."

     

    Integral promotion can "widen" a type as far as unsigned int, which is 16 bits on MSP430. The RHS of the shift is 16, so that means undefined behaviour; in this case that seems to mean (100 << 16) == 100.

  6. I'm pretty sure that the first const goes before char.

     

    Either way works (char const * const is equivalent to const char * const). That cdecl site only accepts the latter, however.

     

    PS: As a string literal ("xyz") is already a const char* (i.e. stored in ROM), I don't think the second const is needed. But I'd have to test that to be sure.

     

    The const after the * is the one that makes the pointers in the array const. You could get away with omitting the first const in this case, but I tend to just use both.

  7. package main
     
     import "fmt"
     
     func main() {
         myMap := make(map[string]int)
         myMap["zero"] = 0
         myMap["one"] = 1
         myMap["two"] = 2
         myMap["three"] = 3
         myMap["four"] = 4
         myMap["five"] = 5
     
         // The key is unused here, so use the blank identifier
         // ("k declared and not used" is a compile error in Go).
         for _, v := range myMap {
             fmt.Printf("Value: %d\n", v)
         }
     }
    
    This code will not produce a neat order of 0, 1, 2, 3 .... It will be shuffled/randomized.  The Go designers did this intentionally to break programmers' code when they try to rely on assumptions based on consistent undefined behaviors.  As a result you are forced to convert those sorts of things into a list and sort them or whatever ... something better-defined anyhow.

     

     

    If anything I'd have expected: 5, 4, 1, 3, 2, 0 (sorted by key rather than value or insertion order). That's what you'd get out of a std::map in C++. The C++ standard library designers originally avoided the issue of unspecified map iteration order by only permitting ordered maps, and hash-based maps were not officially included until C++11.

     

    In terms of performance, it looks like the randomisation in Go is a random start offset rather than a complete shuffle of the iteration order. That's just enough to stop anyone relying on the order being stable from one iteration to the next.

     

    They might also have some random input into the hash function which would jumble the ordering between program runs. Apparently that can also be implemented to defeat hash collision DoS attacks when user-provided values are used as map keys.

  8. It's really easy for anyone to not know everything they need to in order to design a secure device. I was in the security sector for years as a private consultant, and it was not easy keeping up. Just on security flaws. Not to mention everything else an embedded designer would need to know . . .

     

    Agreed. That's why I'm not delighted by the prospect of IoT-mania encouraging a proliferation of cheap internet-connected devices.

    Well, am I the only one who does not really see this purported threat? One has an internal network, zigbee, low power RF, whatever, to a Linux, BSD, or something else NOT Windows server. Which deals with all the security details as we already know them. By this of course, I mean as we already know how to deal with them the best we can. Because nothing is ever perfect . . . if someone wants into a remote system badly enough, chances are pretty good they'll find a way in. *If* they're smart.

     

    That works against remote attacks, but the Linux/BSD/non-Windows server protecting the wireless device can be bypassed if the attacker is in the vicinity. Then the insecure wireless device can be exploited to leak your wireless key (for example).

     

    The scale of that approach is greatly limited by the need to be near the target, but it means you can't assume a secure router will protect you if the devices behind it are insecure.

  10. So, yeah: "Designing an internet-facing server with security protection can be a very challenging task" is fair enough. It's not so much that the original title implies the world will end if you don't attend, more that it implies the content is somehow specific to IoT when it's not.

     

    What would be nice is if someone came up with a way to deal with the security holes left in the many internet "things" abandoned by their manufacturers without ongoing firmware updates...

  11. Good news! This issue is fixed in the TI Emulators 6.0.228.0 package, which contains the version 2.3.0.1 firmware for XDS110.

     

    TI have added support for 2-wire cJTAG debugging, which only uses the TMS and TCK lines. In the 2.3.0.1 firmware they also stopped the emulator from driving TDO, which was blocking access to the SPI. It looks like 2-wire cJTAG is the default mode now too, so debugging SPI flash code should just work like you'd expect.

  12. I've started this week using CCS to debug and analyse power consumption of my battery powered device using the Energia framework. CCS/EnergyTrace is a powerful tool when it works, however, I have one really really annoying problem with it: debugging does not work most of the time.

     

    [...]

     

    Nothing seems to help consistently, but detaching/attaching during debug and then performing a soft reset is most consistent to resolve the breakpoint problem for one debug session, but then EnergyTrace still does not work.

     

    This is one of my pet hates too. I find energy profiling to be an incredibly powerful debugging tool, it's like an ECG for your MCU! Unfortunately the UI in CCS has a lot of bugs and missing features that make it inconvenient to use and a lot less reliable than I'd like.

     

    I used to have a long list of requests for features that would make EnergyTrace more informative; now I'd just be happy with CSV export of the recorded data for offline analysis.

     

    This week I looked into using DSS scripting which does support CSV output, but the EnergyTrace DVT API is completely undocumented. I think my next step may be to forget about the EnergyTrace support in CCS and try this instead: http://forum.43oh.com/topic/9674-casio-watch-rebuild-w-msp430/#entry72969

  13. I've done this little mod on a MSP432 launchpad so I can program the CC2650 SensorTag with it (and use energytrace too).
     
    The 1x7 0.05" headers aren't the easiest to get hold of, so I just took a standard 10-pin cortex debug cable, cut it in half and soldered it directly to J103. The connections needed are (LP -> Cortex debug connector):
     
    GND -> GNDDetect (pin 9)
    RST -> nRESET (pin 10)
    SWCLK -> SWDCLK/TCK (pin 4)
    SWDIO -> SWDIO/TMS (pin 2)
    3V3 -> VTref (pin 1)
     
    Pin 1 is marked by the red stripe on the cable linked above. Apart from making sure to read the pin numbers the right way round, the only fiddly bit is crossing over the GND and reset wires in limited space.

     

    The ribbon enters the connector opposite the key at one end and next to it at the other. It's worth checking both halves of the cable to see which gives the best cable routing for your target board.

     

    To test you can remove the jumpers from the isolation block and set the JTAG switch to external, then connect the cable to the Ext Debug header on the launchpad and try to program the MSP432 target.

  14. I like the idea of making it compatible with the kentec touchscreen boosterpack, but stacking one of those above your board would block access to the buttons. If instead used with the onboard OLED display or headless you'd have pushbuttons immediately next to a set of upward pointing male boosterpack headers.

     

    Have you thought about using right-angle tactile switches pointing off the edges of the board? TI has just started to use them on their launchpads: http://www.ti.com/ww/en/launchpad/launchpads-connected-launchxl-cc2650.html#tabs

  15. Recently I took a trip to the US, which offered a good opportunity to test my altitude logger by recording a profile of the whole journey there. The trace revealed some interesting details about the flights I took, and airline operations in general.

    Here's the profile for the entire trip:
     
    [Image: altitude profile for the entire trip]
     
    The x-axis shows elapsed time in minutes. The altitude is shown in metres, measured relative to the start of the trace (not too far above sea level). Despite that I'll be using feet as the unit of altitude here, since that's the standard used in aviation. Because the logger calculates altitude based on air pressure, it is affected by cabin pressurisation. Instead of recording the true altitude of the aircraft it gives a trace of the effective altitude inside the cabin.

    The first big peak at the blue cursor is a flight from Edinburgh to London Heathrow. Comparing the cabin altitude trace against real altitude data makes it easier to pick out the main features, so here's a chart showing this flight's altitude as broadcast over ADS-B:
     
    [Image: ADS-B altitude trace for the Edinburgh-Heathrow flight]
     
    And this is a closeup showing what my altitude logger recorded for the same flight:
     
    [Image: altitude logger trace for the same flight]
     
    The cursors mark where I think the flight started and finished, based on the fact that the plane was in the air for 70 minutes. From takeoff the pressure falls steadily until the effective altitude in the cabin is about 7000ft, at which point the aircraft is actually at 37000ft. After cruising there for 12 minutes the plane descends and cabin pressure steadily increases.

    The cabin pressure reaches ground level before the plane actually lands, so the trace stays flat for the next 12 minutes. In fact, this section of the trace is effectively below ground level while the plane approaches landing. The plane's environmental control system has deliberately overshot and pressurised the cabin to higher than ambient pressure at the destination. At the orange cursor marking the end of the flight you can see a slight increase in altitude. This is when the flight is over and the controller opens the pressurisation valve to equalise with the external air pressure.

    It seems this extra pressurisation is done before takeoff and landing to help the system maintain a steady pressure. There's a detailed explanation of the reasons for this here: http://aviation.stackexchange.com/questions/16796/why-is-cabin-pressure-increased-above-ambient-pressure-on-the-ground

    Now on to the second flight, which was from Heathrow to Dallas Fort Worth. First the ADS-B trace:
     
    [Image: ADS-B altitude trace for the Heathrow-Dallas Fort Worth flight]
     
    And the altitude logger's version of events:
     
    [Image: altitude logger trace for the Heathrow-Dallas Fort Worth flight]
     
    Again, the cursors mark the start and end of the flight and line up with the reported duration. The "steps" along the top of the trace match up with changes in cruise altitude from 32000 -> 34000 -> 36000 ft. Maximum effective cabin altitude is about 5500ft, lower than the first flight even when the lower cruise altitude is taken into account. I think that's down to the use of a newer 777 on the international flight compared to the A319 on the domestic route. Modern planes are increasingly designed to offer lower effective cabin altitudes for passenger comfort.

    The stepped flight profile is used to maximise fuel efficiency. Flying higher reduces losses to air resistance, but early in the flight the aircraft is heavy with fuel and climbing is expensive. As the fuel is burned off the optimal cruise altitude increases, so ideally the plane would climb to match. In fact the plane can't climb gradually because modern air traffic control regulations restrict aircraft to set flight levels. The best option under these restrictions is to perform a "step climb" up to a higher level when it's more fuel-efficient than the current one. The flight levels are multiples of 2000ft for flights from the UK to the US, which is why the steps are 32000 -> 34000 -> 36000 ft.

    Wrapping up, one of the things I hoped to test by recording this journey was high rates of altitude change. The altitude logger can currently handle rates of change up to

  16. That information on the Energia site looks a bit confusing to me. The MSP-EXP430FR5969 board has two different version numbers: One for the eZ-FET and another for the entire board.

     

    The version 2.0 board is the one that features EnergyTrace, but the version of the eZ-FET on that board is 1.2. I think the Energia site is saying that if you have the board that says "eZ-FET Rev 1.2 with Energy Trace" then it's probably a V2.0 board. When that documentation page was written Energia didn't support this new board, only the old one without EnergyTrace.

     

    This means there isn't an upgrade path as such, because the board with the V1.2 eZ-FET is the latest version.

     

    I don't know how long ago that documentation was written, so I'm not sure if it's still correct. Have you tried using Energia on the board you have?

  17. The package is BGA, but only one row around the edge with 0.45mm pitch. Should be doable with regular PCBs from OSHPark et al.

     

    Then comes the really tricky bit, optical alignment... the lens assembly needs to be centred over the sensor array (which is not centred within the package).

     

    Also I just noticed in the datasheet that the mounting isn't just at the BGA pads. You're expected to use a non-conductive underfill to thermally bond the bulk of the package to a copper pour for heatsinking and stabilisation.

  18. Let me ask, what is the difference between these two statements

     

    1. String aStr = String("abc");

    2. String *aStr = new String("abc");

     

    Is it this: In stmt 1, when aStr goes out of scope, ~String is called and the object is deleted/freed. In stmt 2, when aStr goes out of scope, the object remains allocated and there's no destructor call?

     

    Pretty much, yes. For statement 1 I wouldn't say the object is deleted/freed because it was never new'd or malloc'd, but it does get destroyed by the implicit call to ~String().

     

    For information, statement 1 could be written as just "String aStr("abc");", which has the same end result but avoids constructing a temporary string and then copying it to aStr. I wasn't too sure whether the compiler would perform this optimisation itself, so I checked wikipedia: https://en.wikipedia.org/wiki/Copy_elision. It looks like this case is commonly optimised, but writing String aStr = String("abc"); does mean that String needs an accessible copy constructor to compile successfully.

  19. The 2nd generation Kinect that shipped with the Xbox One uses time of flight to calculate depth. There's a nice summary of the method here: http://www.gamasutra.com/blogs/DanielLau/20131127/205820/The_Science_Behind_Kinects_or_Kinect_10_versus_20.php. It also explains the structured light system used in the original Kinect and compares the pros and cons of each system.
     
    Regarding the sensor and controller being separate chips, I think that may be unlikely to change. The sensor is made using a "chip on glass" process which I suspect isn't ideal for the controller. Also, keeping the controller off the sensor die avoids any problems with it heating the sensor (which would increase the noise level).
     
    EDIT: About 30 seconds after posting this I found the OPT8320, which does integrate the controller and framebuffer memory into a single CoG package. That one's only 80x60 pixels, however.
