43oh
Posts posted by nickds1

  1. "...or tear more hair out..."

    May I be so bold as to suggest not using Energia in this case?

    For true micropower applications, total control of all aspects of the device's operation is essential. Having a framework doing stuff you don't know about in the background is not helpful when calculating the energy budget or predicting performance.

    I'm doing an FR5969 IoT project at the moment and it's coded from scratch. By careful design and fine-tuning I've managed to get the power consumption so low that it'll run off a CR2032 for at least a year - every little bit of the code is tuned - I couldn't do that if there were a 3rd-party framework behind it. EnergyTrace++, LPM modes and interrupts are your friends.
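
    For illustration, the basic pattern looks like this - a minimal sketch, not lifted from my project (the register names are the standard ones from <msp430.h> for the FR5969; the 1-second ACLK timer is just an example): sleep in LPM3 and let a timer ISR wake the CPU only when there's work to do.

    #include <msp430.h>
    
    int main(void)
    {
    	WDTCTL   = WDTPW | WDTHOLD;				// Stop the watchdog
    	PM5CTL0 &= ~LOCKLPM5;					// Unlock I/O (FRAM parts)
    
    	TA0CCR0  = 32768 - 1;					// ~1s period from a 32768Hz ACLK
    	TA0CCTL0 = CCIE;						// Enable the CCR0 interrupt
    	TA0CTL   = TASSEL__ACLK | MC__UP;		// ACLK source, up mode
    
    	for (;;)
    	{
    		__bis_SR_register(LPM3_bits | GIE);	// Sleep in LPM3 until the ISR wakes us
    		// ... do the periodic work here, then loop back to sleep ...
    	}
    }
    
    #pragma vector = TIMER0_A0_VECTOR
    __interrupt void timerIsr(void)
    {
    	__bic_SR_register_on_exit(LPM3_bits);	// Return main() to active mode
    }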

    Energy harvesting is next up.

  2. I'm late to this thread and haven't read it top to bottom, but there seems to be a slight conceptual gap developing regarding what C & C++ really are.

    They are languages, not environments. The C language was created at Bell Labs and was used to implement the early UNIX kernels. There is a "C Standard Library", which is distinct from the language proper - this is where printf etc. come from.

    The story is similar with C++ - the language is distinct from its support libraries, e.g. the standard library, the STL and the various Boost libraries.

    A huge amount of apocrypha and misinformation surrounds these languages and the pros and cons of their various implementations - most of the arguments are bogus and ill-informed. The truth is, IMHO, far more boring: each has its pluses and minuses.

    The main differences are that C++ is geared to a more object-oriented design methodology and that, generally, the mindsets used are different - which is why some feel that teaching people C first, then C++, is a bad plan.

    When misused, C++ can be memory-hungry, which is one reason some decry its use for embedded work. I feel that this is a fallacy - C++ is absolutely fine for embedded work (I use it all the time) if you understand the consequences of your actions. C++ is a far more powerful & capable language than C, but with great power comes great responsibility (*) - blaming the language for your poor understanding of the consequences of your actions is not an excuse. C++ is easy to abuse; C can be plain dangerous...

    An analogy I like to use is the difference between "mathematicians" and "people who do mathematics".

    A mathematician has an intuitive grasp of numbers and mathematics - they can visualize the problem they are working on and come up with novel ways of solving it; further, when presented with a left-field, previously unseen problem, they will see ways of dealing with and solving it.

    Someone who does maths, OTOH, knows how to follow rules and can solve problems related to ones they have encountered before, but may have real issues dealing with a novel puzzle.

    Same with programmers. There's a world of people who can do C or C++ and have used the support libraries - that does not make them good, or even half-decent, programmers.

    In my career I have found very, very few genuinely good programmers - people who understand architectural issues, who can see solutions, who can visualise novel approaches and who, very importantly, can then implement with excellence. It's the difference between a programmer (a rare beast) and a journeyman who knows how to program (common as anything).

     

    Note: I was involved in the ANSI C process and have been an editor of, or cited as a contributor to, various C++-related books, including Scott Meyers' "Effective STL". I spent 30 years in the City of London and elsewhere as a CTO, designing, developing and implementing some of the world's largest real-time equity, derivative & FX exchange trading systems, mostly in C & C++.

    (*) Attributed to Ben Parker, Peter Parker's uncle...

  3. I highly doubt that this is a HW issue, though it's possible... let's start with the code.

    Is the watchdog timer still running? You've not stopped it (though I don't use Energia and don't know if it's stopped by default).

    The code you provided is a bit messy, but it is essentially the same as in the driverlib manual and looks OK, except that the WDT is on.

    As this seems to be a timing-sensitive issue, I'd start by stopping the WDT.
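
    For reference, stopping it is one line - a minimal sketch (the driverlib equivalent is WDT_A_hold(WDT_A_BASE), if you're using that API):

    #include <msp430.h>
    
    int main(void)
    {
    	WDTCTL = WDTPW | WDTHOLD;	// Hold the watchdog before it can fire
    								// (driverlib: WDT_A_hold(WDT_A_BASE);)
    	// ... rest of the setup and main loop ...
    }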

  4. Hi,

    I needed a way to see how much of my C++ stack was being consumed in my MSP application - the traditional way is to "poison" the stack with a known pattern and then see how much of it gets burnt away.

    So I wrote the following - hope folk find it useful:

    The following code lets you do exactly that: check, at any point, how much of the pre-allocated stack was consumed at peak usage, i.e. how close your app got to the bottom of the stack, or indeed whether it over-ran. (Note that the TI CCS documentation is completely wrong about the names it gives for the global symbols that define the size and start of the stack - it needs updating.)

    Stick this code (or similar) wherever you want to report on/check stack usage; it prints <smallest number of bytes left free on the stack since initialisation>/<configured size of the stack>:

    #if defined(STACK_CHECK)
    	std::printf( "Stack: %u/%u\n", stackMinFreeCount(), stackMaxSize() );
    #endif
    and then, in your main code, poison the stack as early as possible and define the reporting routines:
    // Define STACK_CHECK to include stack usage diagnostics
    #define STACK_CHECK
    
    #if defined(STACK_CHECK)
    #include <cstdint>                      // uint16_t
    #include <cstdio>                       // std::printf
    
    #define STACK_INIT  0xBEEF				// Pattern to use to initially poison the stack
    extern uint16_t  _stack;                // Start of stack (lowest address)
    
    uint16_t stackMinFreeCount(void);
    uint16_t stackMaxSize(void);
    #endif
    
    #if defined(__cplusplus)
    extern "C"
    {
    #endif
    #if defined(__TI_COMPILER_VERSION__) || \
    	defined(__GNUC__)
    int _system_pre_init( void )
    #elif defined(__IAR_SYSTEMS_ICC__)
    int __low_level_init( void )
    #endif
    {
    	//... stuff...
    
    #if defined(STACK_CHECK)
    	//
    	// Poison the stack, word by word, with a defined pattern
    	//
    	// Note that _system_pre_init is the earliest that we can
    	// do this and that it may not be possible in TI-RTOS
    	//
    	// When we call the __get_SP_register intrinsic (same on IAR & CCS), it will return the address
    	// of the RET address for the caller of this routine. Make sure that we don't trash it!!
    	//
    	register uint16_t *stack = &_stack; // Address of lowest address in .stack section
    	register uint16_t *stack_top = reinterpret_cast<uint16_t *>(__get_SP_register());
    
    	do {
    		*stack++ = STACK_INIT;			// Poison stack addresses
    	} while (stack < stack_top);		// Stop before top of stack to leave RET address
    #endif
    
    	return 1;
    }
    #if defined(__cplusplus)
    }
    #endif
    
    #if defined(STACK_CHECK)
    /**
     * Check how deep the stack usage has been
     *
     * \return	\c uint16_t		Minimum number of bytes to bottom of stack
     */
    
    extern uint16_t	__STACK_END;		// End of .stack section (one past the top word)
    extern uint16_t	__STACK_SIZE;		// Linker-set size of stack
    
    uint16_t stackMinFreeCount(void)
    {
    	const uint16_t *stack = &_stack;
    	uint16_t freeCount = 0;
    
    	// Scan up from the bottom of the stack until the poison pattern
    	// has been overwritten; check the bound *before* dereferencing
    	// so we never read past the end of the section
    	while (stack < &__STACK_END && *stack == STACK_INIT)
    	{
    		++stack;
    		++freeCount;
    	}
    	return freeCount << 1;				// Words to bytes
    }
    
    /**
     * Return size of C++ stack
     *
     * Set by the linker --stack_size option
     *
     * \return	\c uint16_t		Configured maximum size of the stack in bytes
     */
    uint16_t stackMaxSize(void)
    {
    	return static_cast<uint16_t>( _symval(&__STACK_SIZE) );
    }
    #endif
    
    
    int main(void)
    {
    	// ... stuff ...
    #if defined(STACK_CHECK)
    	std::printf( "Stack: %u/%u\n", stackMinFreeCount(), stackMaxSize() );
    #endif
    	// ... stuff ...
    }

     

  5. Just wondering what RTOS people have been using (if any) that can take advantage of the MSP430[x] ULP modes?

     

    I've been having a look at TI-RTOS, FreeRTOS, TinyOS & Contiki etc.

     

    Experiences & thoughts welcome,

     

    Thanks

  6. Bit new to the MSP430 - I have the new FR5969 LaunchPad and was playing with RTC_B.

     

    Ideally, I'd like to protect the RTC over a main power failure, i.e. keep it ticking but stop the rest of the processor (think clock that doesn't lose track during a power outage).

     

    Is there a way to do this without an external RTC (e.g. DS3231 etc.)? Some other MSP430s have a "VBAT" pin that helps in this, but not the FR5969...

     

    Thanks

     

    Nick

  7. @nickds1:

     

    you are 100% correct with the cap. I would need an X (or even Y, can't remember which one is which) if I would even dream about making it a "product" and try to certify it. The same applies for the MOV.

     

    To be honest, I wasn't thinking so much about productisation, more about your safety. Even for personal use, I'd never use anything other than an X2 (you don't need X1 or Y-class) in this situation - you have a genuine fire hazard otherwise  :smile:

     

    An X2 cap and a MOV are just a few cents, and could save you a world of pain!

     

    Even better, use a Fairchild FSAR001B http://www.fairchildsemi.com/ds/FS/FSAR001B.pdf - cheap, small, no worries about capacitor types and does the job properly!

  8. I think that it is great that people are using the forum to solicit job offers, but I 200% agree that the hiring party should pay - there should be a fee for the privilege of hiring on the site. A recruiter gets 30% of the first year's salary. I don't know how much LinkedIn, Indeed or Monster get, but I'm sure that it is a few hundred.

     

    In London, UK, recruiters get 18% or thereabouts - they ASK for 30%, but you'd have to be a pretty naive company to actually pay that. I've been recruiting staff for my teams for 30 years - we never pay more than 20%.

  9. I just love clocks (especially cold-cathode and similar...)... Nice project...

     

    And I use a capacitor I can trust; a shorted cap would be a baaad idea. I use one of these, the 1000V version:

    http://www.wima.com/EN/WIMA_MKP_10.pdf

    And last but not least, we do have an RCBO.

     

    That's a good capacitor, but it's not rated for this application - you should really be using an X1- or X2-rated cap for safety... not just for shock, but for the fire hazard etc. "X" capacitors are flame-retardant and mandatory in Europe - I don't know about the US, but both the Microchip app note and the TI blog above also specify an X2-rated cap (as they should!).

     

    A good one here would be the VISHAY MKP3382 X2, 4n7 @ 630VAC, p/n BFC233820472, Farnell p/n 121-5463.

     

    Also, you should probably have a MOV across the mains on the hot side of the cap - the cap will do its work with a nice 50Hz (or 60Hz) sinusoidal waveform, but a mains-borne transient from, say, a motor (fridge/washing machine/nearby lightning etc.) has a far higher dV/dt and will cause a DC spike, quite possibly in the 100+V region. A MOV won't be 100% effective in all cases, but it's a good move.

     

    Have you measured or 'scoped the final DC voltage - is the zener doing the clipping, or is the voltage below 3.9V? The two smoothing caps will also sink current - typically between 50 and 100uA each - and with such a low delivery current to the uP & LEDs these parasitics become very significant and can drop the final DC value considerably - maybe consider upping the 4n7 to 5n6 or 6n8? (Rough numbers below.)
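
    To put rough numbers on that (assuming 230V/50Hz mains - your values may differ - and ignoring the few volts dropped on the output side), the current a capacitive dropper can deliver is limited by the cap's reactance:

    I ≈ V x 2πfC = 230 x 2 x π x 50 x 4.7nF ≈ 0.34mA

    So two smoothing caps leaking 50-100uA each can eat a third or more of everything the 4n7 can supply - which is why going up a size or two helps.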

     

    You can also lose one diode (D2) by replacing the lower two diodes in the bridge by 4.2V zeners and changing R11 to about 470R. With C3 at 22n you'll get a nice steady 3.6V with about 20mV ripple...

  10. Using templates allows the compiler to do compile time optimization that is not possible with C or C++ classes. It knows the pins and ports won't change at run time, so it can generate optimal code for I/O. This results in code that is smaller, faster and uses less RAM.

     

    The compiler will inline and optimise the methods if required (it helps if they're declared "inline") - if the port definitions are const (which they are), the optimisations should be exactly the same as for a templated class - you instantiate a static instance of the LCD class anyway... not sure what, if anything, is gained from using templates here other than a whole bunch of typing... just wondering...

     

    Further, using an initialiser list in the LCD constructor would allow class-local constants to be fixed during construction, which is somewhat neater than using an anonymous namespace or static consts and creates better encapsulation (compile with -O3 or -finline-functions). A sketch of what I mean follows.
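
    Something like this minimal sketch (the class and member names are mine, not from your project): the port & pin are fixed in the initialiser list, so once the accessor is inlined the compiler can constant-fold them just as it would a template argument.

    #include <msp430.h>
    #include <cstdint>
    
    class Lcd
    {
    public:
    	Lcd(volatile uint8_t &port, uint8_t pin)
    		: m_port(port), m_pin(pin) {}			// Constants fixed at construction
    
    	inline void pulseEnable() const				// Small enough to inline away
    	{
    		m_port |= m_pin;						// Enable line high
    		m_port &= static_cast<uint8_t>(~m_pin);	// Enable line low
    	}
    
    private:
    	volatile uint8_t &m_port;					// Bound once, never changes
    	const uint8_t     m_pin;
    };
    
    static const Lcd lcd(P1OUT, BIT0);				// Static instance - all constants known at compile time

    With the instance static and const, an optimising build should reduce lcd.pulseEnable() to the same bis.b/bic.b pair the template version produces.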

     

    There should be no extra costs in memory/stack/efficiency in using this method, but there'd be a bunch less text and the code would arguably be a load clearer   :smile:

     

    Nice project, BTW.

  11. Normally, wafers like this are stepped, as the imagers can't maintain the super-fine focus over the whole wafer (this is certainly the case for fine-resolution chips such as modern CPUs).

     

    The wafer is stepped so that each die is imaged in turn - the tables cost an absolute fortune as they have to have phenomenal repeatability, and the design of the lens is an enormously specialised art - only a few people in the world do it.

     

    Quoting a friend in another forum (who is a lens designer for these machines):

     

     

     ICs are produced by depositing layers of material onto a silicon substrate, coating it with photo-resist, exposing the resist in a thing like a glorified slide projector, then developing it and etching away the original material (or implanting the whole thing in an ion implanter to dope the exposed area of silicon). Anyone who's had a go at making their own PCBs will understand the principle. It's the detail that's astonishing.

     First of all, as mentioned above, the smallest feature printed on the silicon can be as little as 40nm across. Let's get that in perspective. A human hair (the universal indicator of smallness, in the same way that football pitches and double-decker buses are the universal bigness indicators) is about 80 microns in diameter, so one micron is one 80th of a hair. 40nm is four hundredths of one micron, so about 1/2000th of a hair. Features that small have to be printed with perfect definition across a field up to 30mm square. Since the silicon substrate (or wafer, as they're known) we're talking about is up to 300mm in diameter, a grid of exposures is made, with the wafer being moved on a stage under the lens from step to step (hence "stepper") until the whole wafer is covered.

    That's the easy bit. 

    Chips are made up from up to thirty layers of material, each one with its own pattern, which of course has to be aligned to the one below to an accuracy of about .01 microns. Think about that. The wafer is 12" in diameter, and is sitting on a stage made of quartz about 15mm thick. That in turn sits on piezo feet that keep the image in focus (depth of field is around 1 micron). This whole assembly weighs about thirty kilos, and has to be aligned under the lens, focussed, exposed, then moved to the next image, aligned again to .01 micron, focussed and exposed in a cycle that takes around one second. To achieve this, the stage sits on an air cushion on top of a lump of granite that weighs around half a ton, and is driven in x and y by a couple of hefty great linear motors, position being measured by laser interferometers. This one-second cycle covers a wafer in about thirty five shots, so a wafer goes through in about 45 seconds, hour after hour, day after day. Astonishing.
  12. On some series of the MSP430, as we all know, there is a per-device serial number that can be made from the lot/wafer ID plus the die X & Y positions - a total of 8 bytes.

     

    How are the X & Y positions & lot/wafer ID (and the other die-specific data) set for each die? The masks can't be changed easily, so I would assume that some sort of fusible link is used, programmed after testing and before the wafer is sawn into individual dies... are they fusible links or some other mechanism? e.g. the DS2401 uses a laser-programmed ROM...

     

    Thanks

  13. Yes - that was a mistake on my part; I meant MSP430Ware...

     

    I really appreciate the work you've done on BSP430 & mspgcc. :) As a professional programmer myself (though not an embedded one), I can see the amount of effort that has gone into those projects...

     

    Just as a thought, do you know of any plans to produce a clang/LLVM MSP430 back end? I've worked on the gcc compiler in the past, and to say that it's a challenge would be a gross understatement - it's a venerable old warhorse that is still extraordinarily useful, but there was a very good reason that the clang project was started...

  14. Hi,

     

    Getting along merrily with the LaunchPad (EXP430F5529LP) - love the MSP430...

     

    Wanting to do stuff properly, I'd like to use a BSP - I'm using the free version of CCS at the moment...

     

    Which is the "best" BSP? (contentious question!) - I'll be building fairly large apps and targeting several chips (not decided which yet)...

     

    Currently I seem to have two choices: BSP430 (http://pabigot.github.io/bsp430/index.html) or the one lurking in the TI SimpliciTI BSP folders, which has some porting instructions in the TI wiki (http://processors.wiki.ti.com/index.php/MSP430_SimpliciTI_Porting_Guidelines)...

     

    I'd appreciate comments from those far more in the know...

     

    Thanks

  15. Hi - I've been trying to find the source of the RF-BSL used by the Chronos watch - I'm probably looking in the wrong place as I'm a bit new to this...

     

    Anyone know if it was released and, if so, where I can find it? (The source, that is...)

     

    Thanks

     

    Nick

     

  16. Hi - new to MSP430s, though an industry veteran - background as an EE, but I've spent (too much of) my life building large & complex real-time s/w systems...

     

    Worked a lot with PICs and AVRs, but got fed up with their limitations. Now doing a personal remote sensing & control project, so the MSP430 naturally came into the frame.

     

    The more I read, the more I like it - I've got my dev boards on order from TI & Olimex and am looking forward to getting stuck in.

     

    My main concern at the moment is developing an OTAP environment (my nodes are very remote), so any examples of MSP430 OTA programming or proven strategies would be extremely welcome - I'm using GPRS modems and a 2.4GHz local link.

     

    Cheers

     

    Nick
