43oh

Just now, dubnet said:

It also makes drive upgrades, or switching from spinning drives to SSD, a breeze.

Yep.  I've been through a couple of cycles of boot and data drive upgrades.

I love being a kindred spirit! :)

My primary workstation is an i7 x980 with 24GB RAM and 7 hard drives (21TB total storage).  I built this machine to do video and photo editing.  Even at 6+ years old, it will render an hour of 1080p HD video in just under an hour. Every time I think about upgrading, I find myself looking at Xeon processors and about 4 - 6 grand to get any significant improvement in performance.  Since I really only do video for non-commercial stuff, it's not worth the extra cost.  So, since the system is responsive and meets my current needs, I don't get too caught up in chasing the latest and greatest - though I am wanting one of the Wacom Cintiq HD tablets. Maybe for Christmas??? :)

Off topic: I've found that having multiple monitors and running two instances of CCS or IAR is an interesting and productive way to debug comms between a pair of '430 devices.

Link to post
Share on other sites

Wow! My head is spinning trying to understand your description of your backup processes. 

It's cool though. It looks well thought out.

My challenge is trying to come up with a process that will work on Windows, Mac, and Linux simultaneously.

I am reading about AWS S3 services. It's bewildering because of all the options available.

It would be cool if there were an application that could run on Windows, Mac, and Linux and orchestrate the backup process.
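As a sketch of what such a cross-platform orchestrator might look like (the tool choice and flags are assumptions on my part: rsync on Linux/macOS, robocopy on Windows - not something anyone in the thread has actually built):

```python
# Minimal sketch of a cross-platform backup wrapper.
# Assumes rsync exists on Linux/macOS and robocopy on Windows.
import platform
import subprocess

def build_backup_cmd(src: str, dest: str) -> list:
    """Return the mirror-copy command appropriate for the current OS."""
    if platform.system() == "Windows":
        # /MIR mirrors the source tree; /R:1 limits retries on locked files
        return ["robocopy", src, dest, "/MIR", "/R:1"]
    # rsync ships with most Linux distros and with macOS
    return ["rsync", "-a", "--delete", src + "/", dest + "/"]

def run_backup(src: str, dest: str, dry_run: bool = True) -> list:
    """Build the command; only execute it when dry_run is False."""
    cmd = build_backup_cmd(src, dest)
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```

A scheduler (cron, launchd, or Task Scheduler) would then call the same script on each machine.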

Link to post
Share on other sites

 

@NurseBob

For your video work where is the bottleneck? If it's the drive subsystem then perhaps an upgrade to SSD would be helpful. Perhaps not for the whole 21TB but maybe as scratch drives for the rendering process.  Prices keep dropping and are now at the point, in my opinion at least, where the price difference between spinning drives and SSD is pretty easily justified by the performance gains. I have them in my laptops and main desktop and love the performance boost. 

In the past I have recommended adding memory, to a point (and you are north of that point with 24GB), as a way to inexpensively increase performance. Now, it's a toss-up between memory and SSD, and I am leaning more toward the SSD, and not solely due to lower prices. The reason is that even with less-than-optimal memory, swapping to virtual memory on an SSD is so much faster that it mitigates the need for more system memory.

Link to post
Share on other sites
1 hour ago, zeke said:

I am reading about AWS S3 services. It's bewildering because of all the options available.

RE: AWS S3, I'm running with their simplest service - basically a Dropbox type of situation.  AWS S3 suffers from a more complex interface than Dropbox, but it fits my needs.  When I was actively posting to my video blog, I used their service to store the downloadable files rather than paying Network Solutions for exorbitantly priced storage.

As to cross-platform single backup solutions, I've not researched that, so I've really got nothing to offer there.  However, if your machines are co-located, you might consider setting up some type of NAS storage that everyone writes to, and then uploading those backups to a cloud-based option?

While my original plan was to upload all of my backups, they end up in the TB+ range, and uploading a TB of data for one day's worth of changes would take days... a losing proposition. So, I limit what gets sent off site, and pray that having multiple locations (basement, and a couple of other spots at home) means my data will survive a local disaster. Not ideal, but I've yet to see any TB-level solution that works at consumer-based internet speeds. Fingers crossed in that regard.
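The arithmetic behind "would take days" is easy to check; a quick sketch (the 20 Mbit/s upload rate is an assumed consumer figure, not a number from the thread):

```python
# Back-of-the-envelope: how long does 1 TB take to push offsite
# over a consumer uplink?
TB_BITS = 1e12 * 8        # one terabyte expressed in bits
UPLOAD_BPS = 20e6         # assumed 20 Mbit/s consumer upload speed

seconds = TB_BITS / UPLOAD_BPS
days = seconds / 86400    # seconds per day
print(f"~{days:.1f} days to push 1 TB")   # roughly 4.6 days
```

Even at double that uplink speed you are still looking at multiple days per terabyte, which is why limiting what goes offsite makes sense.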

Link to post
Share on other sites

@dubnet

> For your video work where is the bottleneck?

CPU.  Disk and RAM are not a problem. HD video processing is very, very CPU intensive, and the graphics accelerators and GPUs really don't participate in the rendering process. Overall, I'm not at all unhappy.  I've seen other systems that take easily 8 to 10 times longer to render files similar to mine.  Aside from CPU, the real bottleneck is uploading to YouTube, or any other service. A 90 minute video takes about 4-6 hours to upload. On more than one occasion I've had both a render and an upload running. No stress for the machine (all twelve threads will be at 40-70% for the render, but the upload really doesn't register on the resources).

FWIW - the long videos were my recordings of lectures for my nursing students; they were subjected to 3-hour lectures on a weekly basis. Otherwise my goal is less than ten minutes for a topic of interest... :)

 

Link to post
Share on other sites
4 hours ago, NurseBob said:

Off topic: I've found that having multiple monitors and running two instances of CCS or IAR is an interesting and productive way to debug comms between a pair of '430 devices.

...and a third monitor (or laptop/PC) for the logic analyzer watching the comms.  :)  I can truly relate as I had two laptops and a PC in play debugging comms not too long ago.

It is somewhat interesting that we need a handful of computers to play with very inexpensive MCUs.

Link to post
Share on other sites

@NurseBob

I had been playing around with GitHub, but not every program I use works with it, e.g. Altium.

I may have to set up a Subversion server as well.

 

But I am trying not to depend on an external cloud storage option, so I went to a local supplier and purchased a WD 8TB Red NAS drive today. This will allow me to set up my own cloud storage and sync files against it.

Since I want to set something up locally, I looked for options with the following characteristics:

  1. Self-hosted
  2. Linux client
  3. Windows client
  4. Mac client
  5. Open source

These are the possible programs I found today:

  1. Nextcloud
  2. Owncloud
  3. Seafile
  4. Pydio
  5. Syncthing

I've installed Nextcloud but I haven't tried it out yet because I've run out of time today.

The important thing will be the automated process of syncing/backing up all the files on the different sources.
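That automation could be as simple as a script the scheduler runs against the NAS; a minimal sketch, assuming an rsync-capable client and a mounted NAS share (all paths below are made-up placeholders, not zeke's actual layout):

```python
# Sketch: rsync each source directory to a NAS mount on a schedule
# (cron or Task Scheduler would invoke this script periodically).
import subprocess

SOURCES = ["/home/zeke/projects", "/home/zeke/eagle"]   # assumed paths
NAS_ROOT = "/mnt/nas/backups"                           # assumed mount point

def sync_all(dry_run: bool = True) -> list:
    """Build (and optionally run) one rsync command per source dir."""
    cmds = []
    for src in SOURCES:
        name = src.rstrip("/").split("/")[-1]
        # -a preserves permissions/times; --delete mirrors removals
        cmd = ["rsync", "-a", "--delete", src + "/", f"{NAS_ROOT}/{name}/"]
        cmds.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return cmds
```

The NAS then becomes the single staging point that a second job pushes offsite.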

 

Link to post
Share on other sites

@zeke

The programs used are less important than the results: multiple tiers of local backup, and at least one stage offsite. Two local copies would be sufficient, but three is typically best, with at least one copy offsite. Offsite protects against natural disaster, fire, etc. If you could somehow keep multiple copies "in house" with certainty that at least one would survive a disaster, that would probably be as good as offsite - but most people don't have that luxury.

So, going by what I know works: for Windows systems you could use Deltacopy to a local *NIX* system, and OSX to a local system using rsync. Then back up from that local system to some form of remote storage - AWS S3 perhaps, though I have no hands-on with that. Or you could stage a secondary system that stores a second redundant copy, then push from that to "the cloud".

I would, however, advise against storing operating system data in the cloud. The way I handle that, as I mentioned before, is to get my systems ready for *whatever* - perhaps right after a fresh install with a minimal set of tools - and then make a 1:1 disk image of that partition. After that, keep your data either on a second partition or entirely remote. That way, if you ever need to reinstall your OS, your data never goes away. Personally, I keep data on a second partition, then on a single local system for important data. I've yet to have a problem with this method, and I don't use cloud storage - I figure if something catastrophic happens where my local systems are, I'm done too... YMMV.

EDIT:

Oh, and right, as also said before: RAID is not usually a good way to go for personal backup. JBOD ("Just a Bunch of Disks") is not terrible though, as there is no need for parity. With parity, unless done properly and with proper equipment, it turns into a money pit that is unwieldy to maintain and to deal with in the event of disk failure. Plus, you end up losing space to the parity disks. And God forbid a controller fails that turns out to be impossible to replace - you'll lose your data forever. You'd be better off investing in one or a few enterprise drives and storing the important data on those, used as single drives.

 

 

Link to post
Share on other sites
3 hours ago, dubnet said:

and a third monitor...

@dubnet

Well, I do have three monitors... So, yes, I am able to see my logic analyzer, two debuggers, and, off to my right when needed, the oscilloscope. I know there are those out there with more talent, skill and experience who are able to manage with their blinky LED, but I need all the help I can get! :)

 

Link to post
Share on other sites
10 hours ago, yyrkoon said:

Oh, and right, as also said before: RAID is not usually a good way to go for personal backup. JBOD ("Just a Bunch of Disks") is not terrible though, as there is no need for parity. With parity, unless done properly and with proper equipment, it turns into a money pit that is unwieldy to maintain and to deal with in the event of disk failure. Plus, you end up losing space to the parity disks. And God forbid a controller fails that turns out to be impossible to replace - you'll lose your data forever. You'd be better off investing in one or a few enterprise drives and storing the important data on those, used as single drives.

 

It sounds like the RAID you are describing is RAID 0, which is non-redundant striping - used to increase performance but with no failure tolerance.  Although single-disk backups have simplicity, the risk you run is that the drive can fail at any time (usually when you need it most :sad: ).  It happened once when I was doing a customer workstation upgrade from XP to Win7: I copied the customer's data to a USB hard drive, did the OS install, and then during the restore the disk started to fail. After much sweat and coaxing I was able to get the restore finished, but it was yet another reminder of the frailty of hard drives.

RAID 1, 5 or 6, or even some of the RAID xx variants are, however, excellent backup platforms.  For customer near-line server backups we used 8, 12 and 16 drive SAS and high-speed iSCSI based disk arrays, typically configured as RAID 6 with a hot spare.  This allowed up to three drive failures before the array became critical (after the 4th drive failure your data is history). While this might sound like overkill in terms of redundancy, it addressed the problem of drives sourced from the same batch tending to fail around the same time, or a drive failing during a rebuild (which happens with a weak drive due to the stress of the rebuild process).  This gave us a little headroom to get drives replaced and put the array back to optimal status.
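The capacity cost of that redundancy is easy to sketch: RAID 6 spends two drives on parity and a hot spare sits idle, so usable space in an N-drive shelf is (N - spares - 2) x drive size. The drive counts and sizes below are illustrative, not the actual arrays described above:

```python
# Usable capacity of a RAID 6 array with hot spares.
def raid6_usable_tb(drives: int, drive_tb: float, hot_spares: int = 1) -> float:
    """RAID 6 reserves 2 drives' worth of parity; spares hold no data."""
    data_drives = drives - hot_spares - 2
    if data_drives < 1:
        raise ValueError("not enough drives for RAID 6 plus spares")
    return data_drives * drive_tb

# e.g. a 16-bay shelf of 4 TB drives with one hot spare
print(raid6_usable_tb(16, 4.0))   # prints 52.0
```

So a 16-drive shelf gives up roughly three drives' worth of raw space in exchange for surviving multiple failures.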

With regard to controller failure, although I can't say that it never happens, in 20+ years I've had no controller failures but more drive failures (including enterprise drives) than I can count.

Link to post
Share on other sites

Interesting coincidence:  my Windows Server 2008 R2 machine won't boot this morning - according to the diagnostics, a bad driver from a recent update...

This machine is set up with a failover shadow boot drive, and of course there are a month's worth (if I remember correctly) of system images.

However, I'm REALLY BUSY right now. So, I'll limp along without fixing it today, maybe Sunday, or not...

Who has time for this junk. Sigh :(

Superstition ON

Talking about backup strategies triggered this failure...

Superstition OFF

Link to post
Share on other sites
4 hours ago, dubnet said:

It sounds like the RAID you are describing is RAID 0, which is non-redundant striping - used to increase performance but with no failure tolerance.  Although single-disk backups have simplicity, the risk you run is that the drive can fail at any time (usually when you need it most :sad: ).  It happened once when I was doing a customer workstation upgrade from XP to Win7: I copied the customer's data to a USB hard drive, did the OS install, and then during the restore the disk started to fail. After much sweat and coaxing I was able to get the restore finished, but it was yet another reminder of the frailty of hard drives.

RAID 1, 5 or 6, or even some of the RAID xx variants are, however, excellent backup platforms.  For customer near-line server backups we used 8, 12 and 16 drive SAS and high-speed iSCSI based disk arrays, typically configured as RAID 6 with a hot spare.  This allowed up to three drive failures before the array became critical (after the 4th drive failure your data is history). While this might sound like overkill in terms of redundancy, it addressed the problem of drives sourced from the same batch tending to fail around the same time, or a drive failing during a rebuild (which happens with a weak drive due to the stress of the rebuild process).  This gave us a little headroom to get drives replaced and put the array back to optimal status.

With regard to controller failure, although I can't say that it never happens, in 20+ years I've had no controller failures but more drive failures (including enterprise drives) than I can count.

RAID 0 has no parity; only RAID 5 and RAID 6 have parity in this context. RAID 10 and RAID 0+1 are both pretty much the same thing: a striped mirror.

Anyway, no, I'm not talking about RAID 0. I'm talking about RAID 5 or RAID 6. They're garbage, and they're slow. Try dealing with an array where the controller can't be replaced, too - yeah, it's not fun. It's also comparatively expensive. Blah blah blah. Suffice it to say, I have years of hands-on experience with RAID, and I refuse to use it for any of my own storage, for multiple reasons.

You need to get enterprise hardware and software out of your mind. Most people could not, or would not want to, afford it. It's also not usually available through "normal" channels such as Newegg or Amazon - Newegg *maybe*. Even people who use SAS controllers run normal stock SATA drives from them, and iSCSI really has no place in this context. I seriously doubt anyone will want to spend money on an initiator just to get something that can be had by simply plugging in a hard drive. Not to mention the network, and all that . . .

My main point about RAID, though, is that it's not simple. A single disk attached via USB, FireWire, eSATA, or internally is simple. Past not being simple, RAID *is* very prone to break. As I said, forget about enterprise equipment; most people outside of big business can't afford such hardware. An enterprise-grade drive or three, sure.

Link to post
Share on other sites
