43oh


1 hour ago, NurseBob said:

Interesting coincidence: my Windows Server 2008 R2 server won't boot this morning - according to the diagnostics, a bad driver due to a recent update...

This machine is set up with a failover shadow boot drive, and of course there are a month's worth (if I remember correctly) of system images.

However, I'm REALLY BUSY right now. So, I'll limp along without fixing it today, maybe Sunday, or not...

Who has time for this junk. Sigh :(

Superstition ON

Talking about backup strategies triggered this failure...

Superstition OFF

heh, switch to a Linux server ;) Although they're prone to this kind of problem too, if you don't pay attention. But honestly, who has time to keep up with updates beyond installing them? Most of us have lives . . .

 

I can't run Windows in a server capacity any more though; not sure how, or why, "you guys" do this. It's not a hate thing or anything like that (this post is being posted from my Windows 7 Pro laptop), it's, I don't know, hard to explain.


14 hours ago, zeke said:

@NurseBob

I had been playing around with GitHub, but not every program that I use can work with it, e.g. Altium.

I may have to set up a subversion server as well.

 

But, I am trying not to depend on an external cloud storage option, so I went to a local supplier today and purchased a WD 8TB Red NAS drive. This will allow me to set up my own cloud storage and sync files against it.

Since I want to set something up locally, I looked for something that meets the following requirements:

  1. Self-hosted
  2. Linux client
  3. Windows client
  4. Mac client
  5. Open source

These are the possible programs I found today:

  1. Nextcloud
  2. Owncloud
  3. Seafile
  4. Pydio
  5. Syncthing

I've installed Nextcloud but I haven't tried it out yet because I've run out of time today.

The important thing will be the automated process of syncing/backing up all the files on the different sources.

 

So, I'm not a fan of Western Digital, but I'd be interested in what you think about that drive over time. Seagate is my brand of choice; I've never had a Seagate fail on me personally, but I have seen them fail in the wild, mostly in RAID arrays, though not always.

1 hour ago, yyrkoon said:

heh, switch to a Linux server ;)

@yyrkoon

I've thought about restoring my CentOS server - I took it out of service when I stopped writing my own websites; it's easier to use WordPress to manage blogs. Though, as you've noted, nothing is immune to some type of corruption, including my own thought processes...

1 hour ago, yyrkoon said:

So, I'm not a fan of Western Digital, but I'd be interested in what you think about that drive over time. Seagate is my brand of choice; I've never had a Seagate fail on me personally, but I have seen them fail in the wild, mostly in RAID arrays, though not always.

I am not a fan of RAID arrays either. I see them as pointless now that drives are so big and fast compared to 15 years ago, when RAID actually made a difference; back then it was all about transfer speeds and creating large amounts of disk space.

If I really really really cared about backing up stuff then I would go and find an LTO tape drive and figure out how to use it.

 

I have had both Western Digital and Seagate drives fail on me. The latest failure (this week) was a Seagate ST3000NC000 3TB. I believe the magnetic media is peeling off the platters inside. I have tried to rescue it numerous times over this past week, but nothing is helping. I tried getting it to map out all of the bad sectors, but there are just too many of them. I also tried gparted (on Linux) to make a new GPT, but the disk refuses to cooperate.

I tried claiming the warranty, but it's over a year out of warranty, so I'm screwed that way. I may go and harvest the magnets out of it.

So, I checked out the Backblaze hard disk failure data, because they beat the snot out of commodity hard drives and I wanted to see which drive had the lowest failure rate for them. The Western Digital 8TB drives had zero failures, so that's what I went and purchased yesterday.

I am still not certain which backup/syncing process I am going to employ, but I am leaning towards setting up a separate Linux box with the 8TB drive inside, installing NextCloud on it, and then syncing all of my clients against it.

I am trying for the least amount of effort with the greatest chance of success, using something that works for Windows, Mac, and Linux clients.

The Lazy Way can be The Most Efficient Way
;-)

1 hour ago, zeke said:

I am not a fan of RAID arrays either. I see them as pointless now that drives are so big and fast compared to 15 years ago, when RAID actually made a difference; back then it was all about transfer speeds and creating large amounts of disk space.

If I really really really cared about backing up stuff then I would go and find an LTO tape drive and figure out how to use it.

 

I have had both Western Digital and Seagate drives fail on me. The latest failure (this week) was a Seagate ST3000NC000 3TB. I believe the magnetic media is peeling off the platters inside. I have tried to rescue it numerous times over this past week, but nothing is helping. I tried getting it to map out all of the bad sectors, but there are just too many of them. I also tried gparted (on Linux) to make a new GPT, but the disk refuses to cooperate.

I tried claiming the warranty, but it's over a year out of warranty, so I'm screwed that way. I may go and harvest the magnets out of it.

So, I checked out the Backblaze hard disk failure data, because they beat the snot out of commodity hard drives and I wanted to see which drive had the lowest failure rate for them. The Western Digital 8TB drives had zero failures, so that's what I went and purchased yesterday.

I am still not certain which backup/syncing process I am going to employ, but I am leaning towards setting up a separate Linux box with the 8TB drive inside, installing NextCloud on it, and then syncing all of my clients against it.

I am trying for the least amount of effort with the greatest chance of success, using something that works for Windows, Mac, and Linux clients.

The Lazy Way can be The Most Efficient Way
;-)

So, one thing immediately pops to mind: the server you're considering could be used as a Samba server. That way all your workstations could connect directly to it and access the drive much like a local disk. Of course you have the network in the middle, but if you use GbE that should not be much of an issue. So immediately, you have remote storage. Great.

After that, you could use an additional drive in this system for redundancy. You could do this a couple of ways, including RAID1, which, believe it or not, is faster than a single drive on Linux when using software RAID. Another approach, which I would probably prefer myself, would be to keep a second drive as a separate single disk and only mount it when backups of the original are made. This could all be done automatically using systemd timers or a cron job; either way would call a script when it's time.
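For the automated variant, a systemd timer is one way to drive such a script on a schedule. A minimal sketch, with all unit names, paths, and times invented for illustration:

```ini
# /etc/systemd/system/backup-mirror.service  (hypothetical name and path)
[Unit]
Description=Mirror the data disk to the backup drive

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-mirror.sh

# /etc/systemd/system/backup-mirror.timer
[Unit]
Description=Run the mirror script nightly

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now backup-mirror.timer`. The cron equivalent would be a single crontab line, e.g. `0 2 * * * /usr/local/bin/backup-mirror.sh`.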

One added benefit of using a Samba share is that all the work is done on a system that's not your workstation, so you would not notice any performance issues on your workstation while a backup is happening. Another is that it doesn't matter which workstation you're using at a given time: if you set up your share correctly, any system can access any file, no matter which workstation the file originated from. A trick I use here personally, since I work with embedded ARM boards all the time. It's pretty handy.
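As a concrete sketch of the backup script itself, here is a minimal one-way rsync mirror. The function name and paths are made up for illustration; `--delete` makes the destination a true mirror, so deletions propagate too:

```shell
#!/bin/sh
# mirror SRC DST - one-way rsync mirror of SRC into DST.
# --archive preserves permissions, ownership, and timestamps;
# --delete removes files from DST that no longer exist in SRC.
mirror() {
    rsync --archive --delete "$1"/ "$2"/
}
```

A nightly job would mount the backup drive, call something like `mirror /srv/share /mnt/backup`, and unmount again.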

3 hours ago, NurseBob said:

I've thought about restoring my CentOS server

Well, there's magical thinking...

And then there's magic!

While looking for a file I noticed my server had booted and was back online. I checked the logs and discovered that the driver for my 8-drive 24TB JBOD external SATA enclosure had failed to load. This is a HighPoint RocketRAID system. It's junk, and has been from the outset, but I purchased everything while relatively ignorant. And, as @yyrkoon has noted, enterprise hardware is mind-bendingly pricey. So, I use the system not as a backup, but as a secondary copy location for video, images, and audio. Maybe after I win the lottery I can put a proper system in place.

3 hours ago, NurseBob said:

@yyrkoon

I've thought about restoring my CentOS server - I took it out of service when I stopped writing my own websites; it's easier to use WordPress to manage blogs. Though, as you've noted, nothing is immune to some type of corruption, including my own thought processes...

I think it's because I've been using Linux daily for the last 4+ years, instead of semi-often since the '90s, that my familiarity level has increased dramatically. It used to be that no one could tell me anything about Windows, because chances were really good I already knew it. But I've stopped caring since Windows 7 because, as with so many things, I have a life and cannot keep up. Plus, right now, working with Linux is how I make a paycheck.

With everything that has been happening at Microsoft, and how they treat their customers nowadays, I just kind of feel "dirty" using Windows for anything other than a desktop. And with the direction I see Windows / Microsoft going, I may end up shifting all my systems to Linux. For instance, this laptop I'm using right now came with Windows 8 preinstalled. I ran Win8 for a couple of years and finally had enough, so I bought a copy of Windows 7 Pro OEM and installed it along with a new hard drive. Finding drivers was a pain, but when it came time to install the USB 3.0 root hub driver . . . it doesn't work. So I'm stuck using USB 2.0, because neither Microsoft, Intel, nor Asus made a Win7 driver for the USB root hub on this platform. But I bet if I installed Linux on this laptop, USB 3.0 would work fine. Not to mention the direction they're going with UEFI BIOS: you *HAVE* to disable hardware features if you decide to install an OS the platform did not originally come with, regardless of whether it's just a slightly older version of Windows, or Linux.

Anyway, I'm about at my wits' end with Microsoft and their partners for the antics they're forcing upon their paying customers. Sooner or later I'll probably just decide to move to all Linux and be done with it. Do I want to? Not really, but I can't help feeling I'm being forced into this position, especially since I like to occasionally play modern games, something Linux is catching up on very rapidly compared to Windows. So maybe, in the near future, a moot point.

13 minutes ago, NurseBob said:

Well, there's magical thinking...

And then there's magic!

While looking for a file I noticed my server had booted and was back online. I checked the logs and discovered that the driver for my 8-drive 24TB JBOD external SATA enclosure had failed to load. This is a HighPoint RocketRAID system. It's junk, and has been from the outset, but I purchased everything while relatively ignorant. And, as @yyrkoon has noted, enterprise hardware is mind-bendingly pricey. So, I use the system not as a backup, but as a secondary copy location for video, images, and audio. Maybe after I win the lottery I can put a proper system in place.

Yeah, I have hands-on experience with HighPoint, and none of their stuff is above average consumer level. Wulf, my buddy, had two drives fail initially, connected to a HighPoint controller. Brand new drives, mind you; Seagates, at a time when Seagate was making really good drives, lifetime warranty and all that. Then about a year later, when everyone was feeling all warm and fuzzy about their data, *BAM*, the controller failed. There was no way to replace the controller, as HighPoint had discontinued the model. On a RAID5 array . . . so no way to rebuild the array without losing all that data . . .

I'll tell you what, though. I used to know several people in the IT industry who were moving to all-Linux software RAID; these people were all using RAID10 arrays in software, as mentioned. The majority seemed very pleased with their setups: fewer hardware failures, because from the start you're getting rid of the controller, and drives configured in this manner just seemed to be more robust and yield far fewer failures. My point here is this: if you MUST go with RAID, unless you can afford costly gear built by the Woz himself (very, very pricey, by the way), you should seriously consider software RAID. Personally, I avoid the whole RAID scene categorically, because using disks as singles avoids losing space to parity, and if I need speed I'll just toss in a Samsung SSD and be done with it. That's assuming I can't get that speed from a zram ramdisk, which, coupled with a system that can hold 64GB of RAM, can be quite large: up to 128GB theoretical using the lz4 compression algorithm. In practice, a 96GB ramdisk could easily be obtained while leaving enough RAM available to the system.


Here is something to think about: the ODROID-XU4 has USB 3.0 and non-shared GbE. It's an octa-core ARM-based board with a footprint very similar to a Raspberry Pi; it can use Raspberry Pi HATs, and all that.

These boards, for what they are, are very fast. They've been compared to an Intel Core i3 in speed, so they're perfectly capable of handling home-server loads with no problems. They're also inexpensive at ~$74 US each bare, or ~$130 US with a power supply and a 32GB class 5.0 eMMC.

The point here is that one can easily handle a situation like this at minimal cost. I do own one, but have not powered it up yet, so I haven't physically and personally tested it. However, the board I own will be used in exactly this capacity. But man, coupling one of these with a ~$200 hard drive for a very robust development system at less than $350 US. Wow . . .

I actually bought mine to also serve as a development system for the BeagleBone, which shares a common ABI. E.g., I can compile from source on the XU4 and run those binaries directly on a BeagleBone with no further work needed from me: no cross-compiling, no cross-compiling setup, no need to run an emulator anywhere . . .
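That native-build workflow amounts to nothing more than running the ordinary compiler on the board itself; the same commands work on any host whose ABI matches the target. A tiny sketch (file names are just examples):

```shell
# On the XU4 (or any armhf host): compile and run natively,
# with no cross-toolchain involved.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("built natively"); return 0; }
EOF
cc -o hello hello.c
./hello
```

Because the XU4 and BeagleBone share the armhf ABI, the resulting `hello` binary can be copied to the BeagleBone and run as-is.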


So I have finally got my new realtime file backup system installed and operational.

The file server details:

  1. OS: Ubuntu 16.04.2 box
  2. Storage: 8TB Western Digital Red
  3. Software: NextCloud server

The client details:

  1. Client 1 OS: Win10
  2. Client 2 OS: mac OS
  3. Client 3 OS: Ubuntu 16.04.2 Desktop
  4. Software: NextCloud client on each

Usage:

  1. Using NextCloud Client, sign into the Server
  2. Select local directory you want to backup
  3. Add it to the Server
  4. Stand back and it will copy everything over to the server automatically

I have observed that if you create a new file in the local monitored directory, the NextCloud client will almost immediately copy it over to the server without your interaction. I like that.

If desired, you can set up another client machine and have it replicate files from the server to itself locally. Multiple redundancy.

So far, this system has transferred over 110GB to the server unattended.

This configuration will back up the files that I generate on a regular basis. Now I want to set up git and subversion on this same server so that I can take care of files generated during software development (git) or hardware design files generated by Altium (SVN).

So far, I like NextCloud and it fits my work processes.
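For the git half of that plan, a server-side repository is just a bare directory that clients push to over SSH. A minimal sketch, with the repository root and project name as placeholders (the SVN analogue would be `svnadmin create`):

```shell
# Hypothetical repository root; something like /srv/git on a real server.
REPOS=$(mktemp -d)

# A bare repo has no working tree; it exists only to be pushed to and cloned from.
git init --bare "$REPOS/project.git"

# A client would then run something like:
#   git clone ssh://user@server/srv/git/project.git
```

Once the bare repo exists, each workstation adds it as a remote and pushes, which doubles as yet another off-workstation copy of the source tree.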

 


In the 37 years I've been writing code, I've only asked an admin to recover a file for me once.  Turns out that file was on a disk that was being backed up by a SCSI tape drive that had been having problems and of course all the tapes were bad. However, it is always easier to write code the second time  : )

 

1 hour ago, Rickta59 said:

What happens when you do:

$ for rev in $(seq 1000); do cat /dev/null >reallyimportant.c; echo $rev; sleep 1; done

How many revisions does it save before it starts throwing out old copies and you end up with an empty .c file?

 

I don't know. 

1 hour ago, Rickta59 said:

In the 37 years I've been writing code, I've only asked an admin to recover a file for me once.  Turns out that file was on a disk that was being backed up by a SCSI tape drive that had been having problems and of course all the tapes were bad. However, it is always easier to write code the second time  : )

 

I wish I had had the option to recover the two months of work that I lost unknowingly. <sad trombone>

With this, I will at least have the option as I move forward.

