yyrkoon

Members
  • Content count: 1,383
  • Days won: 24

yyrkoon last won the day on April 10

yyrkoon had the most liked content!

About yyrkoon

  • Rank: Level 4
  • Birthday: 07/05/1966

Profile Information

  • Gender: Not Telling
  • Location: Arizona

  1. Here is something to think about. The ODROID XU4 has USB 3.0 and non-shared GbE. It's an octa-core ARM-based board with a footprint very similar to a Raspberry Pi, can use Raspberry Pi HATs, and all that. These boards, for what they are, are very fast; they've been compared to an Intel Core i3 in speed, so they're perfectly capable of handling home-server loads with no problems. They're also inexpensive at ~$74 US each bare, or ~$130 US with a power supply and a 32G eMMC 5.0 module. The point here is that it can easily handle a situation like this, at minimal cost. I do own one, but have not powered it up yet, so I haven't physically, personally tested it. However, the board I own will be used in exactly this capacity. But man, coupling one of these with a ~$200 hard drive to get a very robust development system for less than $350 US. Wow . . . I actually bought mine to also serve as a development system for the BeagleBone, which shares a common ABI. e.g. I can compile from source on the XU4 and run those binaries directly on a BeagleBone with no further work needed from me. No cross compiling, no cross-compiler setup, no need to run an emulator anywhere . . .
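
     A minimal sketch of that shared-ABI workflow, assuming both boards run an armhf distro; the hostname, user, and file names are placeholders:

         # On the XU4: build natively, no cross toolchain involved
         gcc -O2 -o sensord sensord.c
         # Both boards are armhf, so the binary runs on the BeagleBone as-is
         scp sensord debian@beaglebone.local:~/
         ssh debian@beaglebone.local ./sensord
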
  2. Yeah, I have hands-on experience with HighPoint, and none of their stuff is above average consumer level. Wulf, my buddy, had two drives fail initially, connected to a HighPoint controller. Brand new drives, mind you. Seagates, at a time when Seagate was making really good drives, lifetime warranty and all that. Then about a year later, when everyone was feeling all warm and fuzzy about their data, *BAM*, the controller failed. No way to replace the controller, as HighPoint had discontinued the model. On a RAID5 array . . . so no way to rebuild the array without losing all that data . . .

     I'll tell you what though. I used to know several people who worked in the IT industry who were moving to all Linux-based software RAID. These people were all using RAID 10 arrays in software, as mentioned. The majority of them seemed very pleased with their setups. Fewer hardware failures, because you're getting rid of the controller from the start, and drives configured in this manner just seemed to be more robust and to yield far fewer failures. My point here though is this: if you MUST go with RAID, unless you can afford costly gear built by the Woz himself ( very, very pricey by the way ), one should seriously consider looking into software RAID.

     Personally, I avoid the whole RAID scene categorically, because using disks as singles avoids losing space to parity, and if I need speed, I'll just toss in a Samsung SSD and be done with it. Assuming I can't get that speed out of a zram ramdisk. Which, coupled with a system that can hold 64G of RAM, can be quite large: up to 128G theoretical using the lz4 compression algorithm. In practice, a 96G ramdisk could easily be obtained while leaving enough RAM available to the system.
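
     Minimal sketches of both ideas, assuming a Linux box with mdadm and util-linux's zramctl installed; device names and sizes are placeholders:

         # Software RAID 10 across four partitions with mdadm
         mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
         mkfs.ext4 /dev/md0

         # zram ramdisk: 96G is the uncompressed size; at roughly 2:1 with
         # lz4, that should fit comfortably in a 64G-RAM machine
         modprobe zram
         zramctl --find --size 96G --algorithm lz4    # prints e.g. /dev/zram0
         mkfs.ext4 /dev/zram0
         mount /dev/zram0 /mnt/ramdisk
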
  3. I think because I've been using Linux daily for the last 4+ years now, instead of semi-often since the 90's, my familiarity level has increased dramatically. It used to be that no one could tell me anything about Windows, because chances were really good I already knew it. But I've stopped caring since Windows 7; as with so many things, I have a life, and I cannot keep up. Plus right now, working with Linux is how I make a paycheck. With everything that has been happening at Microsoft, and how they treat their customers nowadays, I just kind of feel "dirty" using Windows for anything other than in a desktop capacity. Then, with the way I see Windows / Microsoft going, I may end up shifting all my systems to Linux.

     For instance, this laptop I'm using right now came with Windows 8 preinstalled. I ran Win8 for a couple of years and finally had enough. So I bought a copy of Windows 7 Pro OEM and proceeded to install Windows 7 Pro on a new hard drive. Finding drivers was a pain, but when it came time to install the USB 3.0 root hub driver . . . it doesn't work. So I'm stuck using USB 2.0, because neither Microsoft, Intel, nor Asus made Win7 drivers for the USB root hub on this platform. But I bet if I installed Linux on this laptop, USB 3.0 would work fine. Not to mention the direction they're going with UEFI BIOS: you *HAVE* to disable hardware features if you decide you want to install an OS the platform did not originally come with, regardless of whether it's just a slightly older version of Windows, or Linux.

     Anyway, I'm about at my wits' end with Microsoft and their partners for the antics they're forcing upon their paying customers. Sooner or later I'll probably just decide to move to all Linux and be done with it. Do I want to? Not really, but I can't help but feel I'm being forced into this position. Especially since I like to occasionally play modern games, something Linux is catching up on very rapidly compared to Windows. So maybe, in the near future, a moot point.
  4. So, one thing immediately pops to mind. The server you're considering could be used as a Samba server. This way all your workstations could connect directly to it and access the drive very similarly to how a local disk is accessed. Of course you have the network in the middle, but if you use GbE that should not be much of an issue. So immediately, you have remote storage, great. After that, you could use an additional drive on this system for redundancy. You could do this a couple of ways, including RAID1, which, believe it or not, is faster than a single drive on Linux when using software RAID. Another approach, which I would probably prefer myself, would be to have a second drive, keep it as a separate single, and only mount that drive when backups of the original are made. This could all be done automatically, using systemd timers or a cronjob; either way would call a script when it's time. One added benefit of using a Samba share is that all the work is done on a system that's not your workstation, so you would not notice any performance issues on your workstation while a backup was happening. Another is that it would not matter which workstation you were using at a given time; if you set up your share correctly, any system could access any file no matter which workstation the file originated from. A trick I use here personally, since I work with embedded ARM boards all the time. It's pretty handy.
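
     One way that mount-only-during-backup idea could look, as a minimal sketch; the unit names, device, and paths are all placeholders:

         # /etc/systemd/system/backup.timer  -- fires the service nightly
         [Unit]
         Description=Nightly backup of the Samba share

         [Timer]
         OnCalendar=daily
         Persistent=true

         [Install]
         WantedBy=timers.target

         # /etc/systemd/system/backup.service
         [Unit]
         Description=Mount spare drive, rsync, unmount

         [Service]
         Type=oneshot
         ExecStart=/usr/local/bin/backup.sh

         # /usr/local/bin/backup.sh  -- the script the timer calls
         #!/bin/sh
         mount /dev/sdb1 /mnt/backup || exit 1
         rsync -a --delete /srv/samba/ /mnt/backup/
         umount /mnt/backup
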
  5. Not so worried about all that. For now, let's just assume I have a 1-wire temp / humidity sensor in our greenhouse, and I just want a way to connect a cell phone to *something* that can give me that data easily. The product page says the sub-GHz radio range is 2 km, so I'm not worried about that so much. The actual distance from our main building to the greenhouse, line of sight, is not much more than 300 feet. There are trees between here and there, but not so many that it would be an issue, I think. Mainly what I'm trying to figure out is whether I can use two of these to achieve something of the above-mentioned nature, or if I would be better off getting a different device for the remote end that will not need Bluetooth.
  6. Anyone here have hands-on with this board? https://store.ti.com/CC1350STKUS-Simplelink-CC1350-SensorTag-Bluetooth-and-Sub-1GHz-Long-Range-Wireless-Development-Kit-P50807.aspx?HQS=ecm-tistore-promo-diyweek17-null-lp-null-wwe What has me intrigued is that there are both Bluetooth and sub-GHz radios on this board. So I'm kind of thinking "remote control" of some object on our property here, from a cell phone via Bluetooth of course. One thing I'm not sure of is whether I can just use two of these connected together through sub-GHz. If so, what would be the better way to do it, assuming two of these connected together would not be the ideal situation? EDIT: The reason I ask is that I have zero hands-on with these boards, and I would like to set myself up to be able to "play" at some point when I have nothing else going on in my life. The idea is as mentioned above: having *something* remote on our property, e.g. a receiver that performs some task, or several small tasks, and then of course this, the transmitter.
  7. That's not a G2 value line board; sounds like NBob was using a 5529 LP as well. So hey guys, this is off topic to the post, but not to the hardware. I'm curious: I've only really used the G2 value line LPs, so why would I want to use one of these? Honest question, not a trick, just curious.
  8. So, I'm not a fan of Western Digital, but I'd be interested in what you think about that drive over time. Seagate is my brand of choice. I've never had a Seagate fail on me personally, but I have seen them fail in the wild. Mostly having to do with RAID arrays, but not always.
  9. heh, switch to a Linux server Although they're prone to this kind of problem too, if you don't pay attention. But honestly, who has time to keep up with updates, other than installing them? Most of us have lives . . . I can't run Windows in a server capacity any more though; not sure how, or why, "you guys" do this. It's not a hate thing or anything like that ( this post is being posted from my Windows 7 Pro laptop ), it's, IDK, hard to explain.
  10. RAID0 has no parity. Only RAID5 and RAID6 have parity in this context. RAID10 and RAID 0+1 are both pretty much the same thing: a striped mirror. Anyway, no, I'm not talking about RAID zero. I'm talking about RAID5 or RAID6. They're garbage, and they're slow. Try dealing with an array where the controller can't be replaced, too. Yeah, it's not fun. It's also comparatively expensive. blah blah blah Suffice it to say, I have years of hands-on experience with RAID, and I refuse to use it for any of my own storage, for multiple reasons.

      You need to get enterprise hardware and software out of your mind. Most people could not, or will not want to, afford it. It's also not usually available through "normal" channels such as Newegg or Amazon. Newegg, *maybe*. Even people who use SAS controllers run normal stock SATA drives from them, and iSCSI really has no place in this context. I seriously doubt anyone will want to spend money on an initiator just to get something that can be had by simply plugging in a hard drive. Not to mention the network, and all that . . . My main point about RAID though is that it's not simple. A single disk attached via USB, FireWire, eSATA, or internally is simple. Past not being simple, RAID *IS* very prone to break. As I said, forget about enterprise equipment; most people outside of big business can't afford such hardware. An enterprise-grade drive or three, sure.
  11. @zeke The programs used are less important than the results. Multiple-tier local backup, and at least one stage offsite. So two copies local would be sufficient, but typically 3 is about the best, with at least one copy offsite. Offsite is for natural disaster, fire, etc. If you could somehow keep multiple copies "in house" with surety that at least one copy would survive a disaster, that would probably be as good as offsite. Most people do not have that luxury, however.

      So, going by what I know works: for Windows systems you could use DeltaCopy to a local *NIX* system, and OS X to a local system using rsync. Then from that local system, go to some form of remote system, AWS S3 perhaps, though I have no hands-on with that. Or, you could stage a secondary system that stores a second redundant copy, then push from that to "the cloud". I would however advise against storing operating system data in the cloud. The way I handle that, as I mentioned before, is I get my systems ready for *whatever*, perhaps even right after a fresh install with a minimal set of tools installed, then make a 1:1 disk image of that partition. After that, either keep your data on a second partition, or all remotely. That way, if you ever need to reinstall your OS, your data never goes away. Personally, I keep data on a second partition, then on a single local system for important data. I've yet to have a problem with this method, and I do not use cloud storage; I figure if there is something catastrophic happening where my local systems are, I'm done too . . YMMV.

      EDIT: Oh and right, as also said before, RAID is not usually a good way to go for personal backup. JBOD, e.g. "Just a Bunch Of Disks", is not terrible though, as there is no need for parity. With parity, unless done properly and with proper equipment, it turns into a money pit that is unwieldy to maintain and deal with in the event of disk failure. Plus, you end up losing space to the parity disks. God forbid you have a controller fail that turns out to be impossible to replace; you'll lose your data forever. You'd be better off investing in one or a few enterprise drives and storing the important data on those, using them as single drives.
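
      A minimal sketch of those two pieces, with the hostnames, devices, and paths as placeholders:

          # Local tier: copy workstation data onto the local *nix box
          rsync -a --delete /home/user/data/ backupbox:/srv/backup/user/

          # One-time 1:1 image of the freshly set-up OS partition
          # (boot from live media first so the partition isn't mounted)
          dd if=/dev/sda1 of=/srv/backup/os-image.img bs=4M status=progress
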
  12. It's I2C over CANBUS, exactly. I've been told by my buddy that I was wrong, that the carrier is actually 5 V, and that the 24 VDC is for power only. Not quite sure how that works with only two "lines" ( a single twisted pair over cat5e ). But yeah, I do not recall how long a distance CANBUS can handle, only that at the time I was told, it seemed ridiculously long when you consider how long a car is. I do know that the software protocol is 1 Mbit/s max, which I'm not exactly sure is applicable to this situation or not. EDIT: So maybe reverse-logic comms? e.g. an offset starting at 24 VDC, minus 5 VDC when pulled low / high, or something? heh, yeah, I'm lost
  13. You need to remember though, this isn't stock I2C, even though to the devices on each end it looks like stock I2C. This I2C is on a 24 V carrier and runs over CANBUS hardware. Granted, the 24 V is also used to power the devices, or can be. Anyway, we won't know what to expect until we get a 1000-foot spool of cat5e and cut off 100 feet. I think the spool we currently have is nearly gone, so there's not much to test with at this moment. CANBUS is max 1 Mbit/s though, and quite honestly I'm not exactly sure how Mbit and kHz correlate to one another (I2C shifts one bit per clock, so 100 kHz works out to roughly 100 kbit/s raw, before addressing and ACK overhead). Something else I need to look into eventually. Anyway, I've got coding to do. I've pretty much been procrastinating on starting all of this, and now I have hardware in hand, with a test jig, so I need to get busy writing code. It'll probably end up being something similar to how I handled the CANBUS project I was working on months ago: two process halves, where the first half will be reading / writing registers and writing this data to an intermediary file, and the second half, which could be any number of interfaces, would probably start off as a simple C websocket server, for UI purposes.
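
      The real first half would be the C process described above; this is just a minimal shell sketch of the idea, with the bus number, chip address, and register as placeholder values:

          # Poll one register over I2C and publish it through an intermediary file
          while true; do
              i2cget -y 2 0x48 0x00 w > /tmp/sensor.dat.tmp
              mv /tmp/sensor.dat.tmp /tmp/sensor.dat   # rename so readers never see a partial write
              sleep 1
          done
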
  14. I think if we're lucky, we should be able to do at least 100 kHz for 300 m. It would be nice if we could go 400 kHz, or even 800 kHz. I won't hold my breath though . . . I'm not exactly sure how Linux will handle less than 100 kHz, but I know how to change the frequency, so I'll figure that out when / if I need to. Eventually, we're considering open-sourcing the hardware. So that'll be cool too. It'll be a cape with a few other features on it, such as an RTC and external power management. The external power management right now is just a single GPIO that'll take a "pulse train" to enable. Eventually I want to change that to 1-wire for more flexibility. That'll be fun for me to figure out, I think. When I get to it.
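
      For reference, on BeagleBone-style kernels the bus speed comes from the device tree; a fragment roughly like this (the bus label and values are assumptions for illustration) is where it would be changed:

          &i2c2 {
              status = "okay";
              clock-frequency = <100000>;   /* e.g. <50000> to run slower over a long cable */
          };
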
  15. I think it's important to always keep OS and data separate. So maybe once, after setting up a fresh install, make a bootable image of the OS partition, but after that, always keep your data separate from the OS. With Linux, I usually mount /home or /home/username on a separate partition. In Windows I create an OS partition ( ~500GB or something ) and partition the rest as a pure data drive.
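
      A minimal sketch of the Linux side, with the device name as a placeholder; one line in /etc/fstab puts /home on its own partition:

          # /etc/fstab: mount the data partition at /home
          /dev/sda3   /home   ext4   defaults   0   2
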