Reputation Activity

  1. Like
    enl reacted to roadrunner84 in What's more precise: UART via crystal or DCO?   
    The lower your source clock frequency, the less accurately you can tune your baud rate. So using a 32kHz source would be less favourable.
    But the crystal is more stable than an RC oscillator (like the DCO), even if the RC oscillator is temperature compensated.
    So using a high frequency source (like the DCO) will help you get a more accurate clock, but it would be less stable than using the crystal.
    The best way would be to regularly tune your internal oscillator against a stable external crystal clock source. This is where a PLL comes into play.
    I think you can use the LFXT as a PLL source to your DCO, but I'm not sure.
    As you may have noticed, I kept talking about stability and accuracy, not about reliability. If you want your UART to be reliable, you would prefer the baud rate to be as close to the desired baud rate as possible, with as little jitter or drift as possible.
    For real-world scenarios, that drift and jitter are barely a problem at a baud rate as low as 9600.
    Also note that in the case of the msp430g2 Launchpad, you cannot go higher without using an external UART to your PC or other peripheral, because the emulator does not support higher baud rates.
    I'd err on using the DCO over the LFXT, because you can get a better approximation of the desired baud rate.
  2. Like
    enl got a reaction from yyrkoon in Personal CNC PCB routers   
    I have been through a number of generations of in house prototype and hobby scale small volume, and if I can avoid it, I don't do it myself. That said, I don't make a lot of boards anymore, so there is a large grain of salt involved when I say that, with a little care, prototyping on a mill can be viable. It is what I do most of the time these days, as I have the equipment. It isn't as fast as photo, or even toner transfer. There is a mess involved unless dust collection is dead on-- handling the fibreglass dust is a different league than chips and dust from pretty much any other material. What follows is off the top of my head and based on my (probably somewhat outdated) experience.
    Positives include consistent trace width-- UV is also good, but requires good technique or results can get pretty bad--, no shorts or opens like those that plague toner transfer and sloppy UV, lower cost than UV if the equipment is already in house, the afore-mentioned alignment positives, no second setup for drilling, and no wet chem. Double sided isn't a big deal with proper use of alignment holes or fixtures, and other in-house methods have similar issues.
    Negatives are dust control, through hole plating (which is the same for most other in house methods), board properties, workholding issues, and leveling.
    The key to good results is flat and level. A vacuum hold down on a flat bed is pretty much a dead requirement for good results, and accepting that the bed is going to need replacement periodically due to drill damage is part of it. If the bed is dead on, then the feature size can be quite good using a 90 degree point tool for the fine work-- depth controls width between close features. I run two tools for clearing: a 90 deg point for outlines and separation of close features (depth controls cut width), and a 0.75mm bullnose for larger area clearing and wide clearance, followed by a drill bit. If the board isn't held dead flat and level, the results will be bad, with nonuniform feature widths and variable substrate thickness between traces. I use a plugin for Eagle to generate the G-code.
    I would say that the biggest drawback to milling prototypes involving high frequency devices is the change in substrate properties due to the substrate removal, both due to dielectric loss and due to increased moisture pickup. I have never had a major issue, since I have never milled when I anticipated an issue, but I have seen the effects in a few cases and needed to adjust component values to compensate. The key thing is that the prototype board may have characteristics very different from the production board.
    That said, I don't recommend milling unless there is a compelling reason. For the few boards I do that way, it is ok, and 0.2mm traces on 0.5mm centers is very achievable. I prefer to use a service since I can usually wait. In a pinch, for something simple, I might even use a sharpie and etch, though that is a last resort. I can do 0.5mm trace on 1.2mm centerlines that way, which is fine for a lot of one-offs, since I still use as many 0.1" (2.54mm) lead space devices as I can. I'm old and have poor eyesight. Ten seconds with pliers and through hole devices are surface mount.
  3. Like
    enl got a reaction from spirilis in RANT: Cloud of this, IoT of that . . .   
    It already is. They go hand in hand.
  4. Like
    enl reacted to Fmilburn in Audio/voice modules   
    I have had a project in mind that requires speech for some time but haven't progressed it much. I would be interested in what you come up with.
    I have considered this which uses the VS1000: https://www.adafruit.com/products/2342. The tutorial at adafruit includes documentation on using it with an arduino. Their design is open source if you want to make your own PCB.
  5. Like
    enl got a reaction from yyrkoon in GPIO interrupts in both directions   
    That works fine. Eats a couple cycles, but probably about the same as if you needed to test state (and possibly be bamboozled by a short pulse) in the ISR. This is how I coded a quadrature decoder a few years ago for a positioner.
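    The decode step of a quadrature decoder like that can be sketched in a hardware-neutral way with a 16-entry transition table (a generic illustration, not the original code); on an MSP430, the both-edges trick means the ISR flips the PxIES edge-select bits after each interrupt and then runs a step like this:

```c
#include <stdint.h>

/* Quadrature decode step: combine the previous and current 2-bit AB
   states into a 4-bit index; the table yields -1, 0, or +1. Invalid
   transitions (both bits changed, i.e. a missed edge) count as 0. */
static const int8_t qdec_table[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0
};

int32_t qdec_step(uint8_t *prev_ab, uint8_t ab, int32_t count) {
    uint8_t idx = (uint8_t)((*prev_ab << 2) | (ab & 3u));
    *prev_ab = ab & 3u;
    return count + qdec_table[idx];
}
```

    Because every valid transition is a single edge on A or B, servicing an interrupt on each edge in both directions gives four counts per quadrature cycle with no polling.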
  6. Like
    enl got a reaction from tripwire in Wanted: DIY sensor waterproofing ideas   
    Basic guidelines would dictate that if it can't be hermetically sealed, then it needs to shed water and breathe. If attitude can be guaranteed, then an open or screened/perforated bottom should do.
    As a totally different thought, that I have not tried yet, a full hydrophobic coating might do the job, like one of the relatively new hydrophobic coatings for driveways/sidewalks.
    Or, go old school. Tie the thing up in a condom. The urethane ones hold up well over time, better than the latex ones, and are a bit tougher. Put a dab of silicone on the sensor wire at the appropriate location before a zip tie around the opening. As long as it is not drum tight (some slack) and has little air in it, it shouldn't affect pressure readings. The Sensortag may be a little too big for this, but similar solutions can be worked out with other schemes, like a silk/rayon bag coated with tent waterproofing spray, whatever that is called these days (I don't camp much anymore, so it has been a number of years since I had to use the stuff). Again, if there is a bit of slack, it shouldn't significantly affect pressure readings.
  7. Like
    enl reacted to chicken in Compiler optimization traps for the unaware   
    Here's a very interesting presentation about how modern compiler optimization may lead to unexpected results. This goes way beyond the failure of naive delay loops.
    If you ever relied on buffer indices wrapping around (integer overflow), this is a must read. There are many other scenarios discussed.
    For example I'm pretty sure I fell for this trap myself:
    volatile int buffer_ready;
    char buffer[BUF_SIZE];

    void buffer_init() {
        for (size_t i = 0; i < BUF_SIZE; i++)
            buffer[i] = 0;
        buffer_ready = 1;
    }

    It probably works today. But it's a bug waiting to happen when I recompile with different optimization settings or a different compiler. (Hint: buffer_ready = 1 may be moved before the for loop because the loop does not affect any volatile location.)
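    One way to defuse that trap, assuming C11 is available (this is my sketch, not a fix from the presentation), is to give the flag release/acquire semantics so the compiler cannot hoist the store above the buffer writes:

```c
#include <stdatomic.h>
#include <string.h>

#define BUF_SIZE 64

/* The release store on buffer_ready forbids the compiler (and CPU)
   from moving the preceding buffer writes past it; a reader pairs it
   with an acquire load before touching the buffer. Assumes C11. */
static _Atomic int buffer_ready;
static char buffer[BUF_SIZE];

void buffer_init(void) {
    memset(buffer, 0, sizeof buffer);
    atomic_store_explicit(&buffer_ready, 1, memory_order_release);
}

int buffer_is_ready(void) {
    return atomic_load_explicit(&buffer_ready, memory_order_acquire);
}
```

    On a single-core MCU a plain compiler barrier between the loop and the flag store would also do, but the atomic spelling is portable and states the intent.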
  8. Like
    enl got a reaction from tripwire in RANT: Cloud of this, IoT of that . . .   
    I don't follow that end of the universe at this point. I have worked in facilities with good, though AFAIK wired, network lighting control.
    The consumer technology seems to be predominantly gimmick. Historically, the early adopters drove technology, and by the time it made mass market, there was a reasonable level of reliability and utility achieved from the experience of the early adopters. I don't see that with most of the IoT devices, honestly including the lightbulbs. I see a few uses in the home, and in fact have wireless control (formerly X10, now RF keyfob hanging on the wall by the door) for lights in my basement, as I didn't want to run more wire for the switch. I do not trust a system that requires a smartphone to operate the lights in my house (lose or break the phone and you're SOL until it's replaced) via Bluetooth (poor security).
  9. Like
    enl got a reaction from yyrkoon in RANT: Cloud of this, IoT of that . . .   
    I don't follow that end of the universe at this point. I have worked in facilities with good, though AFAIK wired, network lighting control.
    The consumer technology seems to be predominantly gimmick. Historically, the early adopters drove technology, and by the time it made mass market, there was a reasonable level of reliability and utility achieved from the experience of the early adopters. I don't see that with most of the IoT devices, honestly including the lightbulbs. I see a few uses in the home, and in fact have wireless control (formerly X10, now RF keyfob hanging on the wall by the door) for lights in my basement, as I didn't want to run more wire for the switch. I do not trust a system that requires a smartphone to operate the lights in my house (lose or break the phone and you're SOL until it's replaced) via Bluetooth (poor security).
  10. Like
    enl reacted to yyrkoon in RANT: Cloud of this, IoT of that . . .   
    Networked lights are useful. In fact, have you heard of the DALI protocol? https://en.wikipedia.org/wiki/Digital_Addressable_Lighting_Interface
    Networked, a DALI setup can be used to control and monitor lighting in a very large building, which is very useful. As far as bluetooth lights go . . . I have one, and this one is not all that great, but I can see how bluetooth or wifi lights could be very useful.
    Anyway, there is an article in this month's Electronic Design magazine, "The Biggest Security Threats Facing Embedded Designers", and much of it covers IoT. They propose that this cannot be dealt with using software alone, but instead software and hardware together. I disagree. If a decent software protocol were in place, hardware would not matter. The problem *IS* that all the major "consumer grade" networking protocols are garbage. Industrial networking protocols I do not know all that well.
    However, if somehow we could move most or all of these remote sensors to a wired network for any given situation, we would not be having this discussion. So what we really need is a wireless networking protocol that is not so insanely flawed that an 8 year old child could break into it . . .
  11. Like
    enl got a reaction from yyrkoon in RANT: Cloud of this, IoT of that . . .   
    The IOT thing has been around for long enough to become a cliche. As it stands now, in the consumer marketplace, it is a sales point for people that want the latest and greatest technology but have no clue how it works or what it is useful for. Roughly four years ago, I was shopping for a new refrigerator. The big store I went to had nothing that wasn't advertised as IOT (except dorm size), though only some of them were networkable. All of the networkable ones had features like temp setting through a web interface. All identified themselves readily with no security over the connection. A couple allowed Wifi connections direct to them (they acted as hubs if also connected to, say, a home network) 'for convenience'.
    I don't need to know the details to know that a) these devices are a big ol' security hole, b) there is no need for a network connection for a home 'fridge, c) once it is set, I have never changed the temp setting on a fridge or freezer, and don't see the benefit to being able to via a web interface, and d) I want no part of a neighbor, or a neighbor's annoying kid, being able to shut my fridge off when I go away for a couple days while there is food in it.
    I also don't see the point of the same features (and basically the same interface and poor security) in a lightbulb. Or many other products. A toaster oven with wifi and web interface (they exist)? What on earth for?
    This is related to, but different from, the cloud push.
    There are things that can benefit from 'cloud' storage (file server) and always-connected models. In most cases, though, it is a gimmick or a way to rent-seek. Note where Autodesk, for example, is going: subscription and cloud storage, on their server, only. No net connection, no use. Saving backups locally is made awkward to impossible (awkward in Autodesk's case). Drop the contract, and you no longer have access to your files. Since software doesn't wear out, it is a way to ensure an income stream, and a better one, for the provider, than the last generation's upgrade-without-downgrade-path model that sold a new Autocad or Inventor license to most enterprise users every year (upgrade one machine, and all of the others in the organization can no longer work with projects touched on the upgraded machine). Given the market constraints and the need for the company to have an income stream if it is to remain solvent, I don't know how else they can do it, but that doesn't mean that I, as the little guy, like it or can afford it.
    I'll shut up now. <Pshhhh> And have an adult bevvie.
  12. Like
    enl got a reaction from Fmilburn in 4 x 6 cm Projects   
    I highly recommend it, if you can justify the cost. It isn't worth cheaping out. I use mine for a variety of things including fitted tool trays (need models for micrometer or planer gauge trays?), machine parts, dinguses for my teaching job, and a slew of other things, including a couple PC boards. It hasn't paid itself off yet in paying work, and I don't think it ever will, for me. I put the money aside over time because I wanted the tool.
    The learning curve wasn't too bad for me, as I have machining experience and CAD experience, though not much CNC prior to buying the mill. A wooden clock would be a dandy way to learn the basics of CNC and break in a small machine.
  13. Like
    enl got a reaction from tripwire in 4 x 6 cm Projects   
    The main reason for messing with CNC is no wait. Photo-resist etching, same thing. The drawbacks to both of these are the mess and the limitation to double-sided boards with no through plating.
    Sourcing from a commercial shop in China is days to weeks, depending on quantity, complexity, destination, and holidays.
    Quality of a commercially made board is likely to be higher, and generally has the bells and whistles of solder mask and screenprinting, but when you need it tomorrow, or today, you sometimes even go as far as a sharpie and that sludgy bottle of ferric chloride that has been sitting in the cabinet since 1987. This is why I miss my pen plotter. Draw the board. Etch the board. Back in the plotter with a fine point to label the board. Assemble.
  14. Like
    enl reacted to Ekrem in cc3200 injects noise to my thermocouple reading   
    I attached two pictures. One shows the board with a red rectangle indicating the temperature measurement circuitry. In the other picture I placed a U-shaped copper piece on top of the temperature measurement circuitry. The copper piece floats, not connected to any potential, because I haven't seen any change in noise level whether it is connected or floating.

  15. Like
    enl got a reaction from hmjswt in Flow Chart Template   
    I recently (within the last year) got rid of several of my templates, including flow chart and logic symbols, because with the youngest of them being 30+ years old, the plastic was starting to get 'that smell' as it degraded. You all know the smell: somewhere between stale cheese and decaying animal, along with the white crust on the surface. I can't remember the source of the flowchart template (IBM? Digital? Data General? One of the big ones), but the logic symbol templates were the green TI and the blue National. I remember getting them at a recruiting fair in the early '80s.
  16. Like
    enl got a reaction from sanjy005 in fuel theft detection using msp430g2553   
    First thing would be more detail on the sensor and more detail about what you are trying to do. What do you mean 'control GSM and LCD according to' the sensor? Are you trying to display a fuel level? Are you trying to send an alarm message?
    To get you started, I'm going to guess the sensor is resistive with a float, as they are quite common, or a potentiometer with a float, but there are other options. If it is, then the easiest way to read it is to set up a voltage divider, one end to ground, the other to your processor supply, and the wiper to one of the analog capable inputs, and read the analog value using the ADC. The sample for reading the internal temperature of the MSP430 (see the LP documentation) is a good starting point for using the ADC, and there are a number of examples here in the archives, as well as lots of examples in the TI lit and around the web.
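    As a rough sketch of the ratiometric math (hypothetical linear scaling and function name; a real sender is often nonlinear and needs a calibration table), a 10-bit ADC reading of a potentiometer-style sender maps to a level percentage like this:

```c
#include <stdint.h>

/* Map a 10-bit ADC reading of a potentiometer-style fuel sender (one
   end at GND, the other at Vcc, wiper to the ADC pin) to a level
   percentage. Because the divider is ratiometric against Vcc, the
   supply voltage cancels out of the result. */
uint16_t fuel_level_pct(uint16_t adc10) {
    if (adc10 > 1023u) adc10 = 1023u;                  /* clamp to 10 bits */
    return (uint16_t)((adc10 * 100ul + 511u) / 1023u); /* rounded percent  */
}
```

    The level value can then drive the LCD directly, or trigger a GSM alert when it drops faster than consumption would explain.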
  17. Like
    enl got a reaction from hamada in How to control DC motor with ULN2003 and msp430?   
    Presuming the diagram you attached is correct, you have the LED connected to ground. As the ULN2003 switches the other lead to ground, the LED will not light. You need the anode of the LED attached to the 3V rail, and a current limiting resistor in series with it (maybe 100 ohms).
    Again, the outputs switch to ground, not positive supply.
  18. Like
    enl got a reaction from gsutton in MSP430G2553 and viability   
    You will find a lot of projects on this forum, a few of them mine, many other people's. I don't keep a heavy online presence, and many of my projects are either done because they need to be done NOW, so I don't really get around to posting anywhere, or, in a few cases, are client work.
    You'll see a few projects, including one of mine, at http://forum.43oh.com/topic/4511-ended-oct-2013-43oh-halloween-contest/?hl=halloween
    Another of mine (still on its first set of AA batteries after 3 years and as close to dead on time as when it started, showing low power) is http://forum.43oh.com/topic/4068-year-clock/  Note that this isn't really possible with an Arduino, due to the low power requirement. This wasn't a ton of code (and the code isn't real pretty...) so memory wasn't an issue. I might have actually used a 2452 rather than a 2553. I'd have to pull it off the wall and use a mirror to look.
  19. Like
    enl got a reaction from gsutton in MSP430G2553 and viability   
    There are a number of Arduinos available with various capabilities, speeds, and memory capacities.
    The 2553 is comparable to the UNO in many respects: the max clock speed of the '2553 matches the UNO's without needing a crystal, but it has half the RAM and half the program memory, with comparable I/O capability.
    Advantages to the '2553 include lower power and fewer support components required than the ATmega, making it easier to use in your own system without the commercial board.
    The arduino boards have a broad base of daughterboards relative to the launchpads, but if building your own boards, there are advantages to the '2553.
    That said, I have used a good number of 2553's, and a couple of other low end MSP430's, in projects ranging from simple to moderately complex, without running into any issues that an arduino would have resolved other than a couple cases where the extra memory would have been handy.
  20. Like
    enl reacted to tonyp12 in tiny msp430 preemptive multitasking system   
    Tested on a G2553 Launchpad with IAR. I recommend the G2955 with 1K RAM if you want more than 3 tasks.
    #include "msp430.h"
    #include "common.h"
    //=========================(C) Tony Philipsson 2016 =======================
    funcpnt const taskpnt[] = {
      task1,
      task2,
      task3, // <- PUT YOUR TASKS HERE
    };
    const int stacksize[tasks] = {28}; // a blank value defaults to 24 stack words
    //=========================================================================
    int taskstackpnt[tasks];
    unsigned int taskdelay[tasks];
    char taskrun;

    int main( void ) {
      WDTCTL = WDTPW + WDTHOLD;               // Stop watchdog timer
      if (CALBC1_8MHZ != 0xff){               // erased by mistake?
        BCSCTL1 = CALBC1_8MHZ;                // Set DCO to factory calibrated 8MHz
        DCOCTL = CALDCO_8MHZ;
      }
      int* multistack = (int*) __get_SP_register();
      int i = 0;
      while (i < tasks-1){
        int j = stacksize[i];
        if (!j) j = 24;
        multistack -= j;
        *(multistack) = (int) taskpnt[++i];   // prefill in PC
        *(multistack-1) = GIE;                // prefill in SR
        taskstackpnt[i] = (int) multistack-26; // needs 12 dummy push words
      }
      WDTCTL = WDTPW+WDTTMSEL+WDTCNTCL;       // 4ms interval at 8MHz smclk
      IE1 |= WDTIE;
      __bis_SR_register(GIE);
      asm ("br &taskpnt");                    // indirect jmp to first task
    }

    //============= TASK SWITCHER ISR =============
    #pragma vector = WDT_VECTOR
    __raw __interrupt void taskswitcher(void) {
      asm ("push R15\n push R14\n push R13\n push R12\n"
           "push R11\n push R10\n push R9\n  push R8\n"
           "push R7\n  push R6\n  push R5\n  push R4");
      taskstackpnt[taskrun] = __get_SP_register();
      if (++taskrun == tasks) taskrun = 0;
      __set_SP_register(taskstackpnt[taskrun]);
      asm ("pop R4\n  pop R5\n  pop R6\n  pop R7\n"
           "pop R8\n  pop R9\n  pop R10\n pop R11\n"
           "pop R12\n pop R13\n pop R14\n pop R15");
    }

    #include "msp430.h"
    #include "common.h"

    __task void task1(void){
      P1DIR |= BIT0;
      while(1){
        __delay_cycles(800000);
        P1OUT |= BIT0;
        __delay_cycles(800000);
        P1OUT &= ~BIT0;
      }
    }

    #include "msp430.h"
    #include "common.h"

    __task void task2(void){
      P1DIR |= BIT6;
      while(1){
        __delay_cycles(1200000);
        P1OUT |= BIT6;
        __delay_cycles(1200000);
        P1OUT &= ~BIT6;
      }
    }

    #include "msp430.h"
    #include "common.h"

    unsigned int fibo(int);

    __task void task3(void){
      int temp = 0;
      while(1){
        fibo(++temp);
      }
    }

    unsigned int fibo(int n){
      if (n < 2) return n;
      else return (fibo(n-1) + fibo(n-2));
    }

    #ifndef COMMON_H_
    #define COMMON_H_
    #define tasks (sizeof(taskpnt)/2)
    __task void task1(void);
    __task void task2(void);
    __task void task3(void);
    typedef __task void (*funcpnt)(void);
    #endif
  21. Like
    enl got a reaction from sven222 in Very accurate timer   
    Your error calc isn't correct. At 676,46 counts nominal and 676 counts actual, the error is at most half of (1/676)*100, which is 0,0015*100, or 0,15%, for a max error of 0,07% (this matches your calculation). At 121,2bpm, the error will be much less, as the actual rate with a count of 676 is 121,18. You will have a 0,1% worst case error at a freq of roughly 180bpm. At your 121bpm range, you have an accuracy of approximately +/-0,1bpm.
    If you need more accuracy than this, you can jitter the count in a manner analogous to the way the DCO clock and the clock divider for the serial module do. In fact, if it was me (which it is not), I probably would. I would have the timer interrupt at terminal count, and reset the terminal count in the interrupt handler. Each time, rotate a 16 bit word with the appropriate bits set for the nearest 1/16th of a count, mask the low order bit, and add that to the count: when the low order bit is a '0', the count is the shorter one for the higher BPM; when a '1', it is a cycle longer for the lower exact BPM. This will get your error to less than 0,01% (100ppm) for all rates up to approximately 1000BPM. The max error will be less than one half period of the highest audible tone (ok, a bit more than 1/2 period if the listener happens to be a newborn, or a cat, or a dog). This is probably sufficient for most music applications.
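    The rotating-word trick can be sketched in plain C (names are mine, and the actual timer register writes in the ISR are omitted): with k of the 16 bits set, the average terminal count over any 16 interrupts is base + k/16.

```c
#include <stdint.h>

/* Fractional terminal counts via a rotating 16-bit pattern. Each tick
   rotates the word right and adds its low bit to the base reload
   value, so the long and short periods interleave evenly. */
typedef struct {
    uint16_t base;    /* integer part of the desired count       */
    uint16_t pattern; /* k bits set gives a fractional part k/16 */
} frac_timer;

uint16_t frac_timer_next(frac_timer *t) {
    uint16_t bit = t->pattern & 1u;
    t->pattern = (uint16_t)((t->pattern >> 1) | (bit << 15)); /* rotate */
    return (uint16_t)(t->base + bit);
}
```

    For 676,46 counts you would use base 676 with 7 of the 16 bits set (676 + 7/16 = 676,44), cutting the worst-case rate error by roughly a factor of 16 compared with a fixed count.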

  22. Like
    enl got a reaction from tripwire in Very accurate timer   
    This device will not directly drive a high freq crystal, but there are a number of options as an external source can be used for internal timing:
    a) Use an external oscillator or clock generator module. Benefit: a programmable module will give you exactly the frequency you want (within error limit) Drawback: additional components
    b) Calibrate the DCO to the 32KHz crystal. This will remain stable for a reasonable time, and the recalibration can be done periodically if needed. The DCO is rated at about 6% over the operating range of the device, but is quite stable over the moderate term as long as temp and power supply are also fairly stable. In many cases, the calibration is more than stable enough for long term use.
    c) Use the 32KHz clock for the beat timing, and use the DCO for serial, deriving the serial parameters by comparison against the crystal.
    d) if you can count on the other device to talk first, use the crystal for beat timing, use the DCO for serial timing, and sync the serial to the other devices bitrate. This only works if you know you can count on the timing from the other end and you can force the other device to transmit first. I don't recall if the 2553 supports this in hardware, but it is pretty easy to set up in software.
    Of these, I would probably use (c) as a low cost, easy choice. The serial clock can be tuned quite tightly (fraction of a clock cycle), and the calibration is pretty straightforward in software. I think one or another of the application notes deals with this, but if not, and you need a hand on this, many people here can outline methods.
    That said, I have used the factory calibration values for the DCO with good success for serial timing. 8-bit serial needs stability of approx 3% to avoid losing bit frame registration (3% in opposing directions at opposite ends of a link is 6% total, which is about where the last bit frame will be out of timing bounds), which is reasonable for the DCO over most of the qualified operating range, though it is risky.
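    The ~3% figure falls out of a back-of-envelope framing budget: with 8N1 the receiver samples the stop bit about 9.5 bit times after the start edge, and the accumulated drift there must stay under half a bit. A small sketch of that arithmetic (function name is mine):

```c
/* Rough UART framing budget: the receiver's last sample (the stop
   bit) lands about 9.5 bit times after the start edge in an 8N1
   frame. Keeping accumulated drift under half a bit at that point
   bounds the total clock mismatch across the link. */
double uart_mismatch_budget_pct(double bits_to_last_sample) {
    return 0.5 / bits_to_last_sample * 100.0;
}
```

    Evaluating at 9.5 bits gives about 5.3% total, or roughly 2.6% per end once split across both clocks, which is the same ballpark as the ~3%-per-end rule of thumb.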
  23. Like
    enl reacted to yosh in MSP <--> EMULATOR explanation   
    Basically it's something like this:

    In this way you can "switch" the pins for HW or SW UART by just rotating the two jumpers.
  24. Like
    enl got a reaction from scampos in MSP <--> EMULATOR explanation   
    The dedicated hardware pins, on the processors that have a hardware UART, such as the 2553, are swapped from the pins used by the software serial on the non-hardware UART devices. One position is for use with hardware UART *or* software serial that uses the same Tx and Rx as the hardware UART. The other position is for use with the TI software serial Tx and Rx.
    Why are they different? I don't know.
  25. Like
    enl reacted to dubnet in 2015 Black Friday/Thanksgiving Deal List   
    One more: