• Announcements

    • bluehash

      Forum Upgrade   03/11/2017

      Hello Everyone, Thanks for being patient while the forums were being fixed and upgraded. Please see details and report issues in this thread. Thanks!

yyrkoon

Members
  • Content count

    1,369
  • Joined

  • Last visited

  • Days Won

    24

Everything posted by yyrkoon

  1. I think it's important to always keep the OS and data separate. So maybe once, after setting up a fresh install, make a bootable image of the OS partition. But after that, always keep your data separate from the OS. With Linux, I usually mount /home or /home/username on a separate partition. In Windows, I create an OS partition (~500GB or something) and partition the rest as a pure data drive.
  2. The company I work for uses a private GitHub account, and it seems to work great. I also think the guy I work with who manages the account says it only costs like $20 a month. But this is for business; for personal use, I think it's a lot less. Couldn't hurt to check. In either case, for a solid backup strategy it's always a good idea to have offsite backups too, in addition to local redundancy.
  3. Ok, so in that case, all you really need to do is rsync your git repo to another location. Some place, *maybe*, that is only mounted while backing up, and unmounted while not. This way perhaps you would avoid corrupting your backup. I'd have to think on this some more, to see if there are any holes in my idea here, then perhaps come up with a bulletproof strategy.
  4. LOL, sorry, I keep thinking of things that I'd long forgotten about until just now, thinking about all of this. I've heard tell of people setting up a local git repo, and using that to back up files too. The cool thing about using git in this manner is that you would have file versioning too!
  5. It'll be a steep learning curve figuring it out, but once learned, it's really awesome. Actually, I've known of it for years, and have used it a few times, and still do not know everything about it. There are guides online though. As for Linux, use a systemd timer to fire off once a day (or whatever) that calls a script to run your rsyncs. If not using systemd, then use a cron job once or so a day to fire off that same script.
  6. As far as my process: I avoid RAID arrays like the plague. RAID in home systems is an additional point of failure, and a recipe for disaster. So I use USB 3.0 external drives. Just bare drives, well partitioned and formatted of course, and I just hand-copy important files to these drives. As for my code base: for my Beaglebone, I have a dedicated Debian system that acts as an NFS and Samba file share. This means I can mount the NFS share on my Beaglebone, and through Samba I can mount the file system on my Windows system. From there, I edit files directly off this share, in Windows, using my text editor of choice. Anyway, the files stay on this Debian support system of mine, and never go away until I delete them.
  7. I was wondering if anyone knows of a good read concerning implementing an I2C slave device. What I'm looking for is something that covers a kind of high-level discussion of what needs doing, without a bunch of specification or physical-characteristic discussion. Meaning, I do not care about the electrical / physical characteristics of such a device, and I have a hard time digesting specification-type books. Mostly, what I really need to know is a processor / language agnostic view of how to implement the slave addressing stuff, such as device addresses and register addressing. Code examples would be cool, in any language, but are not, strictly speaking, necessary. Additionally, I'm also interested in implementing a slave device using the 1-wire protocol too. EDIT: Additional information, which may help someone else help me. As a hobby project, for the purpose of learning, I'm attempting to turn an MSP430G2553 into an I2C slave device that could potentially act as an ADC, PWM, GPIO expander, or a combination of all of those plus more. However, the reading material does not necessarily have to be specific to the MSP430G2553, or any MSP430 for that matter.
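For the register-addressing part of the question, many simple I2C slaves follow a "pointer register" convention: after the device is addressed for a write, the first data byte sets a register pointer, and subsequent bytes written or read move through the register file from there, auto-incrementing. A processor-agnostic sketch of just that logic (the class and its names are made up for illustration; real hardware would call these handlers from its I2C interrupt):

```cpp
#include <array>
#include <cstdint>

// Hypothetical, bus-agnostic model of an I2C slave's register interface.
// The hardware/driver layer is assumed to have already matched the device
// address; this class only implements the register-pointer protocol.
class SlaveRegisters {
public:
    // Called when a (re)start condition targets us for a WRITE transfer.
    void startWrite() { expectPointer_ = true; }

    // Master wrote a byte: the first byte after a write-start sets the
    // register pointer, further bytes are data stored at the pointer.
    void onByteWritten(uint8_t b) {
        if (expectPointer_) {
            ptr_ = b % regs_.size();
            expectPointer_ = false;
        } else {
            regs_[ptr_] = b;
            ptr_ = (ptr_ + 1) % regs_.size();
        }
    }

    // Master is reading: return the register at the pointer, then advance.
    uint8_t onByteRead() {
        uint8_t v = regs_[ptr_];
        ptr_ = (ptr_ + 1) % regs_.size();
        return v;
    }

private:
    std::array<uint8_t, 16> regs_{};  // tiny register file for illustration
    uint8_t ptr_ = 0;
    bool expectPointer_ = false;
};
```

A typical "read register 2" transaction then becomes: startWrite(), onByteWritten(2), repeated start, then one onByteRead() per byte the master clocks out.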
  8. Which platform? Windows? Linux? OSX? But rsync is a pretty good way if you use the correct strategy. This can be done on Windows too using the rsync application DeltaCopy, which runs as a service and will automatically sync your watched directory to the backup location.
  9. Yes, and no. I know what you're saying, just not sure how that would work lol . . . But I really need to understand the low levels of both better anyway.
  10. You're talking about the 1-Wire protocol, or just one wire of I2C, as in SDA / SCL?
  11. By default, the Beaglebone buses operate at 100kHz. For the record, 100kHz is I2C standard mode and 400kHz is "fast" mode, with faster modes above that. Anyway, the Beaglebone buses default to 100kHz, but ours doesn't have to stay there. From what I understand, with differential I2C you don't need to worry about all of that. High speed may not work, but 400kHz is supposed to. But that's why we're testing: to find out what we can get away with. Reliably.
  12. So, we've got the capes (Beaglebone) and remote boards (I2C slave boards) made and assembled for testing. For the moment we're testing over cat5e that's only about 1 foot in length, but later we're going to test 300m (~1000 feet) worth of cable. So for those who are not aware of this "technology": we're attempting differential I2C, which is supposed to be capable of up to 300 meters. I really do not want to sound stupid . . . but essentially, you're transmitting over cat5e using CAN bus hardware, and what I think is a reverse-logic 24VDC carrier. So basically, an amplified signal. Anyway, it seems to be working at 1 foot at least, and from what I understand the longer runs would be fairly difficult with normal I2C signalling. So far, I've only queried the remote board via i2cdetect from the i2c-tools Linux utility set. Well, we've also dumped the registers (i2cdump) and everything seems to be in sync with our test jig. This specific board does not have an MSP430 on it, and this specific one probably won't. But in the future we'll be doing all kinds of crazy stuff over differential I2C. So if you're interested, stay tuned! By the way, wulf and I will be selling Beaglebone capes sometime in the future.
  13. Yeah, first off: avoid anything from Harbor Freight if you hate throwing good money after bad. Their parts boxes with clear plastic drawers, where you have say 4-5 across and 8 high (or whatever), are garbage. My buddy bought like ten of these, and less than a year later the drawers and boxes started disintegrating. All of their other stuff is garbage too. Search youtube for "harbor freight" and get the gist. As for the one thing I personally find "most desirable": that would be more work space. Our place has lots of work space, but not where I spend most of my time. In my own area, I'm constantly struggling to keep space clear for development boards, prototypes, etc. as I develop software and test the hardware. I even built a 4'x8' (full sheet of plywood) workbench that is a bit higher than waist high, to make it easy to work on things while standing. Off of one corner, I have one of the 4x4 legs built up with stacked 2x6's (screwed together) to accommodate a custom-built (by me) swing arm for my laptop. Pretty much, I was given the base for this, and I welded together galvanized pipe and angle iron as the post mount, and then 3 other pieces of pipe for the laptop base to swivel on. Anyway, I did not really plan all of this from the start, so it does not work out the way I had hoped. So I think the best possible thing you could do is draw up several plans until you're happy with something that makes the best possible use of your space. That gives you what you want. Also, at first thought it may make sense to keep your electronic design space separate from your software development space, which makes total sense to me too. However, if we're talking about constantly moving between the garage and an in-house room, that could present itself as a problem. So it may make better sense to *somehow* do all this inside your room, and keep the garage for other things like . . . I don't know, project box fabrication, etc., if you're into that sort of thing.
As for a good place to find related tools? I find Amazon a good place to start looking sometimes, though I may not necessarily purchase from Amazon. Or sometimes I'll just google, find something, then check to see if Amazon has comparable prices. But I also am an Amazon Prime member, so I usually get free shipping on everything. So maybe consider buying some cabinets to hang on the wall above your work benches, so all you have to do is stand up to grab something that may not always need to be on the bench. Then have your benches shallow enough to be able to do that - maybe 2-3 feet from the wall out. This way, you could potentially span a whole wall with one long bench, then have storage above in easily accessible cabinets. Or you could do a whole room like this if you wish, which we've done here. Several rooms actually. As for monitors, do you really need more than one? I know, I prefer at least dual monitors too for documentation-and-editor type situations, but you may only need one. But if you need a single, double, or even triple stand, Amazon has a wide variety of stands. Also keep in mind that some 4K monitors can be partitioned into 4 separate 1080p screen areas, basically giving the possibility of having 4 screens displayed on 1, if something like that would work for you. With all that said, I think the most important thing you could do is start thinking about what you need and want, then start drawing up plans until you're happy with what you've come up with.
  14. So this is not meant as a bashing of C or C++. Just some observations I've made in the last couple of days while working with both C and C++ on a Beaglebone, while toying around with reading from a DS18B20 1-wire sensor. Also do keep in mind I'm not exactly an expert with either language, but I would say that I am more proficient in C than C++. First off, I often have to search the web for perhaps some common things related to any language. Not because I do not know how to do something, but because I often switch between languages for various things, and perhaps I do not remember specific details. Other times, maybe I'm not 100% sure what I need to use for a specific situation. Anyway . . . With a DS18B20 1-wire sensor in Linux, one has to set up a pin for 1-wire communications. It's fairly straightforward once you figure out how to create a proper device tree overlay for a Beaglebone. Once that is completed and your kernel modules are loaded, you get a directory listing in /sys/bus/w1/devices/. This sub-directory is named after the sensor's serial number: it starts with "28-", followed by a 12-character hex value representing that serial number. Example:

william@beaglebone:~/dev$ ls /sys/bus/w1/devices/
28-00000XXXXXXX  w1_bus_master1

Once I had this figured out, while I did not know exactly how I was going to deal with this in C, I knew it would be easy. Now, since lately I had been reading up on C++, I figured I'd give it a go with C++. So I did a lot of reading on how to "properly" read from a file in C++. Okay, no problem there, everything seemed about as easy as in C, just different, and perhaps more readable. I really do like using classes when working with things of this nature. Then I brushed up on using strings, and various other C++-isms. Again, no problems, until I ran into a hitch. Traversing directories, and building paths from wildcards . . .
So this is what I've found out so far from reading many posts on stackoverflow and other similar C++ forum sites: C++ has no way to work with directories in this manner, other than falling back on C and the struct dirent type. Now I'm thinking to myself "how in the hell is this possible . . .", all while searching the web more and more, because I can not believe this is true. Right? Wrong! So guys, what am I missing? Past that, I'm following some C code I found on the web to work with the sensor's sysfs file entry to get data from the device. I then decided to attempt to port the code from C to C++, when I ran into another snag. When using open() on an ifstream object, it expects a const char * value for the path . . . So again, I'm thinking to myself "wtf?! I'm using a language where string is preferred over char arrays . . ." A language where Kate Gregory (so-called C++ expert) proclaims to "stop using C" to teach C++ . . . In addition to the above, in order to format a double / floating point value to a specific precision:

std::cout.setf(std::ios_base::fixed, std::ios_base::floatfield);
std::cout.precision(2);

So much for %.02f eh? Anyway, I was able to port most of the code over to C++, at the cost of an additional 14 lines of code - the original C code was only 57 lines. At which point I gave up, at least for the time being. Perhaps I need to read more and get a better understanding of the C++ language. But this bit:

dir = opendir(path);
if (dir != NULL) {
    while ((dirent = readdir(dir))) {
        // 1-wire devices are links beginning with 28-
        if (dirent->d_type == DT_LNK && strstr(dirent->d_name, "28-") != NULL) {
            strcpy(dev, dirent->d_name);
            cout << endl << "Device: " << dev << endl;
        }
    }
    (void)closedir(dir);
} else {
    cerr << "Couldn't open the w1 devices directory" << endl;
    return 1;
}
sprintf(devPath, "%s/%s/w1_slave", path, dev);
time_t start_time = time(0);

mostly remains straight C, because I have not found a C++ equivalent.
Which also brings me to time_t time() . . . I do not know, perhaps I'm being pedantic. But when I use a language, I expect to use that language, and not have to rely on another . . . I am however starting to see why Linus Torvalds has a really bad attitude when it comes to C++. [EDIT] Do keep in mind that when I say "C++ can't x.y.z . . .", I mean through the STL. I know there are libraries such as boost, Qt, etc. But I really do not wish to deal with all that . . . for several reasons.
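For what it's worth, the directory-walking gap complained about here is filled by C++17's <filesystem> (shipped earlier by some compilers as <experimental/filesystem>). A sketch of the "28-" scan using it; the function name and directory layout are just for illustration:

```cpp
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Collect the names of entries under `root` that begin with `prefix`
// (e.g. "28-" for DS18B20 sensors under /sys/bus/w1/devices/).
std::vector<std::string> findByPrefix(const fs::path& root,
                                      const std::string& prefix) {
    std::vector<std::string> found;
    for (const auto& entry : fs::directory_iterator(root)) {
        std::string name = entry.path().filename().string();
        if (name.rfind(prefix, 0) == 0)  // "starts with" idiom pre-C++20
            found.push_back(name);
    }
    return found;
}
```

With that, `root / name / "w1_slave"` replaces the sprintf() path building, and (since C++11) ifstream's open() also accepts a std::string directly.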
  15. I meant to respond to your post here in more detail concerning "event driven". So, I'll usually mix up OO with event driven when talking about event driven, though I do not think that event driven is necessarily a part of what makes a language object oriented. But I can not remember using an object oriented language that did not have some form of events. VB.NET, C#, and JavaScript are all languages I've personally had hands-on experience using events with. In the context of C++, using events is very similar to how you'd do the same thing in C. Which is to say, for me, this does not feel very much like an event at all - just a function that is occasionally called when some condition is met. With languages higher level than C++, such as those I've already mentioned, the message pump loop is all abstracted away, and for me that totally feels like an event. Weird, huh? Using interrupts though, again, feels very naturally event driven to me. But on some very low level, I'm sure there is something similar to a "hardware message pump", or at minimum some sort of conditional checks that fire the interrupts off. I won't pretend to know the hardware on that level however; knowing just enough to know how to use an interrupt is good enough for me. I also do not really know the low level gory details of how these events work, but I'm fairly confident that there is some form of "wrapper" or interface code that's really running a message pump loop, similar to what you'd see in C or C++. Nowadays with C++ there may even be something in the STL, but I do not know for sure. I try to keep myself busy programming, instead of spending all my days keeping up with the C++ STL. Which is one reason why I'll try to avoid C++ most of the time: I do not feel like I have enough time to keep up with the language 100%, and still get done what I need to get done programmatically.
The other part of that equation is that unless I really know something in full detail, I'm not exactly happy about using that thing. This does not mean I think I know everything about C. It means I think I'm proficient enough with C that if I am unfamiliar with a concept or language feature, it will not take me long to brush up on the subject - usually. So here is my take on the whole C++ class ISR thing: it's too complex. Complex code is more likely to be buggy, or have mistakes in it. If, in contrast, you feel more comfortable using C for ISR handlers, then by all means just write the code in C, and use C++ elsewhere where it makes more sense for you. Do keep in mind, I understand the *need* to do things differently in order to learn something, or possibly start thinking about that given thing differently. Complex code is also likely to be slower, unless your compiler is very good at optimizing things out - and C++ compilers do seem to have been working "magic" in this area in the last several years. But that is yet more information you need to overfill that already-full glass of a brain we have . . .
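The "message pump under the events" point above can be shown in miniature. This is not any particular framework's API, just a toy illustrating that registering handlers plus a queue-draining loop is all most event systems amount to at the bottom:

```cpp
#include <functional>
#include <map>
#include <queue>
#include <string>

// A toy message pump: handlers register for named events; the loop
// drains the queue and dispatches. Higher-level languages hide this
// loop, which is why their events "feel" different from plain C callbacks.
class EventLoop {
public:
    // Register a handler for an event name (last registration wins here).
    void on(const std::string& name, std::function<void()> handler) {
        handlers_[name] = std::move(handler);
    }

    // Queue an event for later dispatch.
    void post(const std::string& name) { queue_.push(name); }

    // One pass of the pump: dispatch everything currently queued.
    void run() {
        while (!queue_.empty()) {
            std::string name = queue_.front();
            queue_.pop();
            auto it = handlers_.find(name);
            if (it != handlers_.end()) it->second();  // unknown events are dropped
        }
    }

private:
    std::map<std::string, std::function<void()>> handlers_;
    std::queue<std::string> queue_;
};
```

Higher-level languages effectively run the equivalent of run() for you forever behind the scenes, which is why their events feel like "magic" compared to a hand-written C loop checking conditions.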
  16. @NurseBob Thanks for your input. We should move the I2C talk to the other thread if more conversation presents itself, so we can try to keep this post on topic. Just sayin', and not blaming anyone - if anyone is to blame, it's me, who shifted the conversation towards I2C.
  17. @NurseBob Uh, yeah, no offense, but I think I'll look for a different source to understand I2C better. From what I've seen in that book, the guy is setting up datatypes (enums, etc.) and additional code inside an interrupt handler? Yeah, that's not usually a good sign . . . hehehe. Yeah, never mind, it seems I was mistaken.
  18. So the idea with our hardware is that we're going to have X number of jumpers to represent X number of addresses. So without knowing much about I2C in general, I figure I take the binary value of these jumpers combined, and use that to represent a device address. Which seems obvious to me, an I2C slave implementer newb. So ok, great, I'll check out that read and see if it starts to make sense to me then. From what you're saying though, it sounds like I just pull the first X bits (excluding start bits, etc.) from a transmission, and see if the comm traffic is meant for the device or not?
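For what it's worth, the jumper scheme described here comes down to plain bit logic: the first byte of an I2C transfer carries a 7-bit address in bits 7..1 and the read/write flag in bit 0, so the slave just compares those 7 bits with its own jumper-derived address. All names below are made up for illustration (and on hardware I2C peripherals the comparison is typically done for you once the own-address register is loaded):

```cpp
#include <cstdint>

// First byte of an I2C transfer: 7-bit address in bits 7..1, R/W in bit 0.
struct AddressByte {
    uint8_t address;  // 7-bit slave address
    bool    read;     // true = master wants to read from us
};

AddressByte decodeAddressByte(uint8_t raw) {
    return { static_cast<uint8_t>(raw >> 1), (raw & 0x01) != 0 };
}

// Jumpers supply the low bits of our address; a fixed base supplies the rest.
// e.g. base 0x20 with 3 jumpers allows addresses 0x20..0x27.
uint8_t ownAddress(uint8_t base, uint8_t jumperBits) {
    return base | (jumperBits & 0x07);  // 3 jumpers -> 8 selectable addresses
}

// Is this transmission meant for us?
bool isForUs(uint8_t rawFirstByte, uint8_t myAddress) {
    return decodeAddressByte(rawFirstByte).address == myAddress;
}
```

So yes: read the jumpers once at startup to form the device address, then every transfer starts by checking the received address bits against it and ignoring traffic that doesn't match.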
  19. It shouldn't be. For GPIO the blinking LED is obvious, but for testing other peripherals you just have to use a set of conditionals to test for various conditions, then act accordingly through the LED. I've done this myself too, but prefer to use "printf() debugging" through UART when at all possible. As far as using an oscilloscope, I do not have to do that, and in fact do not really know how to use one. Using a logic analyzer may be different, and I have one, but have never had to use one in the past. Maybe that'll change with this project, we'll see. Anyway, my buddy, who has been an EE for years, has several oscilloscopes and definitely knows how to use them. So I defer to him when (if) his hardware has issues. I know my way around Linux and most of its programming interface (userspace) well enough to usually know when something is "my fault". I2C on the MSP430 is a bit new to me however, so we'll see how that works out. I have seen a few different bits of example code for the MSP430 though, and it makes it all look very trivial. Software-wise, it seems very similar in setup and usage to UART. The one thing I don't know yet is how to set up the device bus address, but maybe I'm a little bit too worried about that. The rest of the code seems almost trivial. For hardware-based I2C, that is.
  20. I've actually used javascript to write test code, which was later ported to C for a Beaglebone, and once for a Launchpad. You have to write your own hardware simulators to test, but those do not usually have to be very complex, since for me, I do not design the hardware - I just have to understand how it's to be used. For my I2C slave project it's a bit different, since the slave device will initially be connected to a Beaglebone for testing. But in this respect the Beaglebone uses file descriptors, ioctl(), and Linux API calls such as SMBus_*, and all of the hardware abstraction is already taken care of by Linux. On the MSP430 I2C slave side though . . . I'll need to understand it all. I had a few capes made for the Beaglebone, which I have attached to a Beaglebone as we speak; I am just waiting for the other half to be finished. In short though, we're experimenting with differential I2C, which I do not want to get into in too much detail at this time, but it'll allow us to do all kinds of cool things. Eventually we'll be open sourcing the hardware, and selling capes + addons. I'll be providing software for the devices in binary form, but may not open source the software. Or I may only open source the Beaglebone / Linux side. I do have my reasons, which are in no way related to greed, and everything to do with responsibility for modified code that could potentially be dangerous.
  21. Sometimes coding is all about experimentation, and figuring out what will work for your own given situation. I find this a really good way to learn too. This is actually what keeps me interested in programming - "learning". However, one of the turn-offs for me concerning C++ is classes. This is one reason Golang is interesting to me. C++ does have lambdas though. I'm not proficient enough with this aspect of C++ to say, or know, whether using lambdas throughout your code in place of classes is a good idea or not. So the whole point of me avoiding C++ in embedded projects is that the language is far more complex when compared to C. For me, with C, you have less to worry about. Also, since these days I realize that C++ is really another language in its own right, I do not want to spend the time to learn another similar (to C) language when I can just use C and be done with it. Then there are all the subtleties between the two, like with const, where you need to be careful that you know what you're actually doing. For me, this is too much to keep track of when doing anything serious. With all that said, I do still keep up a bit with C++. As I mentioned before, I like learning, and I'll probably never stop trying to learn until the day I die. It does seem that this day and age, at least, the GNU C++ compiler has been under heavy development, and the standards are incorporating new and interesting features. But again, that can sometimes be a problem, when "the rug gets pulled out from under you . . .". Another thing I like about C is that because it's so simple by comparison, I do not really need to do much debugging afterwards. Which is to say, I've adopted my own TDD (test driven development) style that works really well for me. But I also modularize my projects as much as possible, which means I do not really have one executable that is very large. *shrug*
  22. I'm actually quite the opposite when it comes to object oriented versus procedural: I prefer the procedural model more often than not. However, I do like the object oriented "event driven" model of Javascript very much. I suppose I just like event driven models in general. However, on some level you have an event loop (message pump if you prefer) that is really procedural at the core. With embedded Linux I tend to stick as much as possible to a procedural model, but on a bare metal platform, such as the MSP430s, I do like using hardware interrupts as much as possible, which is pretty much event driven, at least when thinking from a high level.
  23. @nickds1 Yes, one language is procedural, while the other is object oriented in nature. You'll possibly hear a lot from the crowd that thinks procedural versus object oriented is not something you classify a language with, but is instead a style of coding. I can see good points from both sides of that "argument". But I do think that some languages are better suited for one thing or another - of course, I do not think anyone here was arguing that C or C++ is better for embedded systems. Any programmer with any amount of experience knows there are many languages to choose from, and that every programmer may potentially know, or be familiar with, several. We (experienced programmers) also probably have a language that we're most proficient with. In my own case, that would be C. So when people start talking about C versus C++ strings, like I did here, I may not know every_single_detail, but I am familiar enough with both languages to know which conceptually is better suited for strings. By "conceptually" I mean: if you're aware of a potential issue, and how to solve that same issue, is that issue really an issue at all? Subjective, I say, but in my own opinion, no, it's not a problem. For me. Does that mean that I think one language or another has no place in the programmer's world? No, not necessarily. I think even "BF" has a use, if for nothing else than to help people think differently. I've always said, at least for the last 15 or so years (I've been "coding" since the mid 90's), that every language has its use. Also that every programmer is going to have a favorite language that they'll try to use first, whenever possible. So in this context, when I say I do not like something about any language other than C, that's because I have far more experience in C than most languages. Do I think C++ is garbage? No, not by a long shot. I do think it is odd that a more "RAD" style language would actually take more lines of code, compared to C, to do a given thing.
I also think C++ is far more complex as a language, which in turn could present itself as a problem for a programmer who knows exactly what to expect from another language - in my case, C. Usually, when I'm thinking object oriented, C++ is not at the top of my own list. Golang actually is looking far more suitable nowadays. But it too has its own pluses and minuses. EDIT: By the way, I do think Kate Gregory has many good points on the subject of "Stop teaching C" as well. From my own perspective, I learned C first, then attempted to learn C++ after. Early on, I did not even know the difference between the two languages. To me, C++ was "just another" C, with newer features. I even had a few mentors from IRC trying to explain to me the differences. Here, for me, I think the biggest difference from then until now is - experience. But I do agree with Kate, and I suppose you too, that we do not need to know C to use C++. In fact, it may even be beneficial to teach C++ first, or even only.
  24. So as for Jason Turner, I like his focused talks, but am on the fence where his personal YouTube videos are concerned. Pretty much as he mentioned in one of his talks: he codes too much, and it gets pretty boring. Going to start watching some of his later videos and see if it gets better.
  25. @zeke Thanks. That, more or less, seems to be what I do myself. In this case however, there won't be a user interface, nor anything related to "feedback" over serial, as an I2C slave can't communicate with the master unless the master initiates the communications. There also won't be an LCD, but I do not think that was your point, and I'm not sure if I will be able to use UART for serial debugging or not. However, this very well may be a chance, and a reason, for me to use the logic analyzer I bought a few months ago. Now, I need to try and figure out how to program the 28-pin G2553s. With the 20-pin DIP package MCUs, I've just been using a ZIF socket on the Launchpad, and moving programmed parts to a socket on the board which needed the MCU. Now I'm kind of at a loss. My buddy wulf has put JTAG and Spy-Bi-Wire headers on the prototype, but . . . I've no clue where to start.