Posts posted by tripwire

  1. Ah, I guess the debugger's using the RST line for JTAG comms, so that stops S1 from working until you disconnect...


    What happens if you pull the USB cable during debugging, then exit the debugger and plug the USB back in? Additionally, try another test run, but pull the cable after disconnecting the debugger.


    To be honest I'm just clutching at straws here, I can't think of anything that would cause the symptoms you're getting. If you don't mind sharing your code, I'd be happy to try it out and see if I can get the same issue to occur on my kit.


    (BTW, it's probably best to attach the files as the forum's source formatter likes to eat for( ; ; ) loops!)

  2. This is an interesting puzzle! :) (though probably infuriating for the OP...)


    MRose, what happens if you reset the launchpad by pressing S1? Try this with the debugger attached while it's running at the right speed, and also just after disconnecting. In both cases, does it run with the 1-second delay like it's supposed to after resetting?

  3. The JSF's pretty heavy on software as far as I know. I recently stumbled on this interesting bit of trivia about it - not only is the JSF's software written in C++, but they got Bjarne Stroustrup to work on the coding standards document! :smile:


    Here's the document in all its sleep-inducing glory ;-):



    And here's a presentation giving the overview of why they chose C++ and how they went about trying to make it safe:



    EDIT: As for the "10,000 lines of code", I think he simply meant to say 10 million (onboard, with another 10 million offboard). A quick google suggests that's the correct number.

  4. Hi,


    Since I started programming for MSP430 I've been looking for the MSP equivalent of "__asm int 3" (aka DebugBreak). I've come up with this fragment, tested on CCS 5.3.0:

    #ifndef NDEBUG
        // If debugger is attached and software breakpoints are enabled, DEBUG_BREAK will pause execution
        #define DEBUG_BREAK _op_code(0x4343)
    #else
        // Strip out DEBUG_BREAKs in final/release builds
        #define DEBUG_BREAK
    #endif


    Put this in a header file and you can then embed breakpoints in your code with a DEBUG_BREAK; statement. To reiterate what it says in the comment above: DEBUG_BREAK only halts if the debugger is attached and software breakpoints are enabled!



    For an example of where it's useful, consider the DCO calibration constants check used in a lot of example programs:

    if(CALBC1_1MHZ == 0xFF || CALDCO_1MHZ == 0xFF) // Check calibration constants weren't erased
    {
        DEBUG_BREAK; // Halt program (if debugger is connected)
        while(1);    // Loop forever (as a backup) to prevent use of erased calibration data
    }


    As it normally appears (without the DEBUG_BREAK) the CPU just enters an infinite loop if the calibration data is missing. You have to notice that the program isn't progressing and manually pause to see what the problem is. With the addition of a DEBUG_BREAK the program pauses immediately so you can see what went wrong!


    They're handy for use in trap ISRs, to pause when an unexpected interrupt fires so you can find out what caused it. In that case you may want to leave out the infinite loop so the program can be resumed afterwards. They're useful for general debugging too, since embedding breakpoints like this has some advantages over breakpoints set in the debugger. More on that later...



    To understand how it works you need to know some details about how hardware and software breakpoints work. Time for a crash-course in breakpoints :smile:


    Placing a hardware breakpoint tells the MSP's Embedded Emulation Module (EEM) to halt the CPU whenever an instruction is fetched from the breakpoint's address. The EEM contains trigger modules that spy on the address and data buses inside the chip, and fire when they see a specified value. Once the trigger is set up the MSP runs code as normal until the trigger fires, which makes the EEM halt the CPU and stop all the clocks. This is great because it means hardware breakpoints don't mess up the internal state of the MSP. Unfortunately the EEM has a limited number of triggers (just 2 on the value-line chips), so you can only have two breakpoints!


    Software breakpoints work around that limitation by using one of the EEM triggers in a different way. Instead of halting when an instruction is fetched from a specific address, it halts when a specific instruction is fetched from any address. That means you can have as many breakpoints as you like, as long as they're all on the same instruction opcode. The debugger uses 0x4343 (a type of NOP) as its breakpoint opcode. When you place a software breakpoint the debugger first makes a record of the original opcode at that address. Then it overwrites it with 0x4343 in the MSP's flash memory. As before, the MSP runs code as normal until the trigger module detects an instruction fetch of the 0x4343 opcode. The EEM halts the CPU and stops the clocks. Now the debugger has to restore the opcode that it overwrote earlier and make sure it's the next instruction to execute. After you step or resume it needs to put the breakpoint opcode back again, assuming you didn't disable it in the meantime.


    All this writing to flash has an overhead, and the bad news is that the clocks need to be restarted during the process. That means that setting and hitting software breakpoints can make the on-chip timer peripherals go haywire.


    DEBUG_BREAK makes use of the debugger's trigger setup for software breakpoints, but it doesn't require any runtime flash memory overwrites. The breakpoint is entirely handled by the MSP's EEM like a hardware breakpoint, but you can have as many as you like. The only disadvantage is that you need to rebuild to move them ;-)

  5. Hi,


    Although I don't program 430's in assembly (yet ;-)) I was particularly interested in this post about naken430asm in the tips section. Naken430util's disassembly listing with per-instruction cycle counts looked like a useful optimisation tool, and a good way to keep tabs on the compiler. I tried it out and liked it so much that I wanted to integrate it into my future CCS projects as standard.


    After poking around in the CCS install folder and reading TI's instructions I managed to make a plugin that adds a new project template. It's based on the standard "Empty project with main.c" template, but adds a post build step that generates a .hex file and naken430util cycle listing.


    To set up the plugin:

    1) Download naken430asm and install wherever you like

    2) Download ProjectTemplate.zip

    3) Copy the com.ti.ccstudio.custom.project.templates_1.0.3.201302072330 folder from the .zip into the ccsv5\eclipse\plugins folder of your CCS install

    4) Run CCS. You need to be signed in to a user account that can write to the CCS install folder! (CCS writes there when it finds a new plugin)

    5) Select the Window>Preferences menu option and navigate to C/C++>Build>Build Variables in the Preferences window

    6) Add a new build variable called CG_TOOL_NKA_UTIL, select "File" type and enter the path to naken430util.exe


    Once you've done all that you should have a new template in the "Empty Projects" template group when you try to make a new CCS project. Projects based on this template will save <ProjectName>.hex and <ProjectName>_cyc.txt to the output folder after every build.


    Hope that's of use/interest to someone!

  6. Hi,


    It took me a while to figure this out myself. What's supposed to happen is that pressing the button reads the current temperature and that is then used as a reference. If the chip gets hotter the red led comes on and brightens as the difference between the reference and current temperature increases. If the chip gets cooler the green led behaves likewise. You can press the button again to recapture the reference temperature.


    Just putting your finger on top of the chip may not be enough to make a big difference to the temperature. Rubbing your finger on something to heat it up a bit first can help. If you've got canned air then that can be used for cooling.


    There's also a PC application which listens to the serial comms from the launchpad and displays a numeric temperature readout. You can download it by going to TI's launchpad page and clicking the "Get Software" button. That downloads a zip containing the source for the example on the launchpad and the GUI executable (bin\LaunchPad_Temp_GUI\LaunchPad_Temp_GUI.exe).


    The GUI is not very user friendly: the first screen lists all the COM ports on your PC, and you need to know which one the launchpad is connected to. To select that COM port you need to type its row number (shown in square brackets) and then press enter. All being well, you should then get the temperature readout.


    Once you've got bored of that (shouldn't take long!) you need to get set up with the development tools of your choice (CCS, IAR or gcc) and start coding :smile:


    I found the launchpad workshop PDF helpful when I was getting started. If you search on the TI site they also have a ZIP file with all the workshop code.



    EDIT: Here's a video from one of the engineers at TI demonstrating all the demo program features:

  7. Hi,


    Thanks to the tips on 43oh.com I just managed to use my launchpad to program a 2553 on a breadboard.


    My code deployed fine and ran without issues, but I noticed something slightly different when I was finished testing. The chip stopped running the code as soon as I terminated the debug session.


    Normally, when the debugger is connected to a chip on the launchpad's emulation section, terminating just detaches the debugger. Assuming the debugger wasn't paused or on a breakpoint the chip continues to run the code uploaded to it.


    Could I have done something on my breadboard that's causing the chip to reset as soon as the debugger detaches? I have a 47k pullup resistor and 1nF pulldown cap on RST, and am connecting the launchpad to the breadboard with wires from J3 and J6.




    EDIT: Ignore me! I just realised that I'd connected the decoupling capacitors all kinds of wrong. I think the 2553 must have been running on parasitic power from one of the programming pins!

  8. And I've found a solution - ignore the bit of BCL12 that says if RSEL is 15 you should drop it to 7 before modifying DCOCTL (this advice conflicts with that in the user guide).


    By following the BCL12 advice I had one chip out of four fail after clock setup. Following the user guide (as posted by Rickta59 and roadrunner84 above) caused no failures.

  9. What are you doing that you are messing with the DCO? I've never had a problem setting the clock or seen any glitch or hang.


    I'm working on a program that measures the DCO clock frequency for all valid combinations of RSEL, DCO and MOD bits. 


    It works fine on two chips I've tried it on (a 2553 and a 2001), but fails when I try it on my 2452. I suspect it's just down to tolerances in the chip, so most will work fine.


    The problem occurs when trying to reach the higher DCOCTL settings with RSEL set to 15. Following the old BCL12 advice I was able to reach DCOCTL = 0xCF, BCSCTL1 = 0x8F, but going any higher caused a hang. The new advice lets me reach DCOCTL = 0xD7, but it should be possible to go all the way up to 0xE0.

  10. Hi Oliveira,


    oPossum's uint32hex function is indeed a good way to convert a number to a hex string, but I've been looking at your code and it doesn't appear that you need to do this. As cubeberg said: "The int value itself isn't specifically decimal or hex - those are just ways of displaying a certain value."


    If I have a function like this:

    void test(int i)
    {
        // insert code here...
    }

    I can call it in any of these ways:

    test(12345);   // decimal integer literal
    test(0x3039);  // hexadecimal integer literal
    test(030071);  // octal integer literal (!)
    int number = someFunctionThatReturns12345();
    test(number);  // integer variable

    All with the same result. So as long as analogRead(A3) returns an int in the range 0-1023 the code you posted above should work correctly. That is, unless there's some other function that you want to pass the potentiometer value to which is expecting hexadecimal in a character string.

  11. I added some delays to the RSEL stepping loop, but it didn't help. As I was testing the change I realised that I've been stepping through the code in the debugger anyway, which gives the DCO more than enough time to settle.


    I also tried clocking the CPU with VLO during the DCO setup. This allowed the CPU to continue running past the instruction that was previously killing it (setting RSEL to 15). Unfortunately when I then tried to switch the CPU back to DCO it hung. It looks like the bug might cause the DCO to completely stop oscillating if you're really unlucky.

  12. This is for the DCO, so there's no flag to wait for - AFAIK only the crystal oscillator has a fault flag. Normally the DCO is what's clocking the CPU, so I think changing the DCO settings makes the CPU pause until the DCO is stable.


    Having said that, maybe you're right and I do need to wait between steps so the DCO can settle. I tried the updated workaround on a problematic chip last night, and while it improved matters it didn't fully resolve the issue. I'll see if adding some delay cycles to my RSEL-stepping loop helps any.

  13. But they do not describe in what pace, would it really be required to wait for the oscillator to stabilize (there are flag for that, right?) and then step to the next value. I mean, startup code will get quite a bit bigger now :-(


    This is for the DCO, so there's no flag to wait for - AFAIK only the crystal oscillator has a fault flag. Normally the DCO is what's clocking the CPU, so I think changing the DCO settings makes the CPU pause until the DCO is stable. At least, that's what it does when restarting the DCO after LPM.


    As for the effect on code size, I'm wondering whether the new advice actually simplifies things. If you just use a loop to step RSEL to the desired value then you never need to worry about rules 1 and 2. On startup, rule 3 can be avoided if you set DCOCTL before ramping RSEL up to 15 (if that's what you want to do).


    I expect the main impact will be on code that changes DCO settings after startup. If you're changing between values that aren't known at compile time you'd need a lot of extra code to check for all these different conditions.

  14. This is another errata than the one stating you should set RSEL to 0 in between?


    Hi, I'm not sure which erratum you're describing there. BCL12 used to be:


    1) If changing RSEL from >13 to <=12, switch to 13 first

    2) If changing RSEL from <=12 to >13, switch to 7 first

    3) If changing DCOCTL when RSEL is 15, set RSEL to 7, modify DCOCTL and then reset RSEL to 15


    The updated version says that these original rules fix most cases, but a more reliable method is to always change RSEL step-by-step. By that I think they mean increment or decrement RSEL by 1 until you reach the target value.

  15. Hi guys,


    I just noticed that the errata files for the value line were updated on 17th January. The new files contain a revision to the BCL12 text, adding the following advice:


    "In the majority of cases switching directly to intermediate RSEL steps as described above will prevent the occurrence of BCL12. However, a more reliable method can be implemented by changing the RSEL bits step by step in order to guarantee safe function without any dead time of the DCO."


    Might be of interest if you've been getting DCO lockups as I have recently!