
First actual fried MSP430


I have done some silly things to these chips in the year or so I have been playing with them. But they always came back; I may have needed to reflash the program and recheck connections, but they were durable and didn't die.


But now, while playing with motor driving, a transient spike somewhere has fried the 430 on the board. When I apply power, it doesn't respond to programming calls from CCS, and it rapidly heats up to the touch, so something internal has melted.


I guess this is a milestone of sorts :smile: Now I have to desolder it and replace it. I just hope it didn't take out the OLED screen at the same time; I only have one more of those left until I have to order more (I have a whole stick of processors).


I get to look forward to desoldering tomorrow, I guess.



As an update, the OLED screen was not killed. However, the battery charging chip and the voltage detect chip were both dead as well, so basically all the chips on the board died at the same time. I'm guessing that "ground" got pulled to a negative voltage, which would look like a large voltage spike to all the chips at once.



I once had a terrible bug to solve. We were using an MSP430 to do power control in a safety device. Some sensors were attached over I2C, but for some reason we could not access them. Yet when we toggled the lines connected to I2C manually, things were fine. It took us two weeks (very expensive boards, and the MSP430 was hard to replace) to convince everyone that the only possible explanation was that the I2C driver on those pins was fried, but not the GPIO logic.

Switching to a second board, everything went smoothly, since the I2C driver on that board was functional. :silent: :roll:


Not sure how prevalent, sane, or moral this is, but back in 1979 (not a typo), as a young software engineer (no EE background at all), I was tasked with writing code to control some things for the Chicago Transit Authority. The custom board was in the early stages of development and had some shorts. They gave me a scope and a meter and said, go find the shorts. After a day or two we all agreed it was not going to happen soon on such a complex board, and we were under a very tight timeline. Since it was an early-stage board, we decided to use the "hydro test" (this was Ontario): we simply plugged it into 120 V and looked for the fireworks. I learned a lesson: sometimes brute force is the best solution.


Back in the days of large cards covered with neat rows and columns of DIP ICs, mostly 7400 series and the occasional MSI or LSI (almost 50,000 transistors! On one IC!!!), the cards looked like little industrial cities when lying on the test bench, full of warehouses and factories. Power supplies were commonly pretty beefy -- my 5 V bench supply is an old 30 A Delta, with a big crowbar bolted to the outside (from the factory) and remote voltage sense lines -- and a failure could turn that nice clean postcard-picture factory city of a board into 1950s Pittsburgh in an instant. It stunk like crazy, but it was actually kind of neat seeing all of those little factories blasting smoke out of their roofs as they turned silicon into slag.


No pics (pre digital cam era last time I saw this), just memories.


And we can't forget: when testing with a variac, plug the variac into the isolation transformer, not the bench outlet. Fortunately the bench had a GFCI, so no smoke, but it took me about an hour to figure out why hooking up the scope ground lead was popping the GFCI Monday night. The bench computer was plugged into the isolation transformer outlet.....


Back in the day, I worked for an electronics repair unit for the Army in my country.

My "fireworks" memory: two colleagues got to fix the "battery conditioner", a purpose-built charge/discharge and measurement unit for all kinds of rechargeable batteries. This was a large rack-mounted unit, with 15-20 or so "outputs" to connect batteries to. The control logic was all TTL, on many cards (probably Eurocard-sized). While fault-finding, they had the control unit partly disassembled on the workbench. They also had a few batteries there (probably to load the outputs). For some reason, they had the voltage from one battery, +55 V DC, on a test lead with a measuring probe on the end. When one of them lost the probe, it dropped onto the +5 V rail on one of the boards. There were a few seconds of fireworks, then black plastic bits (the tops of TTL chips) were scattered all around, several meters away, some of them burning.


To avoid putting all the blame on someone else, my "finest moment" in those days was this: we had just received a new instrument from one of our suppliers, a fancy (and expensive) oscilloscope, logic analyzer, or something. A colleague and I were unpacking it; we couldn't get it out of the box fast enough, we wanted to plug it in and start playing with it. When it was out of the box and on the workbench, I grabbed the power cable, noticed it had the wrong plug (a UK plug), simply grabbed another power cable, and plugged it in, without thinking to check the 120 VAC / 230 VAC switch that all instruments had in those days. Naturally, the instrument lasted only two seconds after being turned on. The only thing to do was to pack it back into the box and return it to the supplier. (Of course the supplier should have changed this switch before sending us the instrument, but finding a power cable with a UK plug in the box should have warned me.)

