
3 hours ago, dubnet said:

and a third monitor...

@dubnet

Well, I do have three monitors... So, yes, I am able to see my logic analyzer, two debuggers, and, off to my right when needed, the oscilloscope. I know there are those out there with more talent, skill and experience who are able to manage with their blinky LED, but I need all the help I can get! :)

 

10 hours ago, yyrkoon said:

Oh, and right, as also said before: RAID is not usually a good way to go for personal backup. JBOD ("just a bunch of disks") is not terrible, though, as there is no need for parity. With parity, unless it's done properly and with proper equipment, it turns into a money pit that is unwieldy to maintain and deal with in the event of disk failure. Plus, you end up losing space to the parity disks. And God forbid you have a controller fail that turns out to be impossible to replace; you'll lose your data forever. You'd be better off investing in one or a few enterprise drives, storing the important data on those, and using them as single drives.

 

It sounds like the RAID you are describing is RAID 0, which is non-redundant striping, used to increase performance but with no failure tolerance. Although single-disk backups have simplicity, the risk you run is that the drive can fail at any time (usually when you need it most :sad: ). It happened once when I was doing a customer workstation upgrade from XP to Win7. I copied the customer's data to a USB hard drive, did the OS install, and then, during the restore, the disk started to fail. After much sweat and coaxing I was able to get the restore finished, but it was yet another reminder of the frailty of hard drives.

RAID 1, 5 or 6, or even some of the RAID xx variants are, however, excellent backup platforms. For customer near-line server backups we used 8, 12 and 16 drive SAS and high-speed iSCSI based disk arrays, typically configured as RAID 6 with a hot spare. This allowed up to three drive failures before the array became critical (after the fourth drive failure your data is history). While this might sound like overkill in terms of redundancy, it addressed the problem of drives sourced from the same batch that tended to fail at the same time, or a drive failing during rebuild (which happens with a weak drive due to the stress of the rebuild process). It gave us a little headroom to get drives replaced and put the array back to optimal status.

With regard to controller failure, although I can't say that it never happens, in 20+ years I've had no controller failures but more drive failures (including enterprise drives) than I could even count.  

10 hours ago, zeke said:

I may have to set up a subversion server as well.

@zeke

I am working with SVN as well. At times I find it a bit arcane... :(

But no more so than the other code management tools.


Interesting coincidence: my Windows Server 2008 R2 server won't boot this morning - according to the diagnostics, a bad driver due to a recent update...

This machine is set up with a failover shadow boot drive, and of course there are a month's worth (if I remember correctly) of system images.

However, I'm REALLY BUSY right now. So, I'll limp along without fixing it today, maybe Sunday, or not...

Who has time for this junk. Sigh :(

Superstition ON

Talking about backup strategies triggered this failure...

Superstition OFF

4 hours ago, dubnet said:

It sounds like the RAID you are describing is RAID 0, which is non-redundant striping, used to increase performance but with no failure tolerance. Although single-disk backups have simplicity, the risk you run is that the drive can fail at any time (usually when you need it most :sad: ). It happened once when I was doing a customer workstation upgrade from XP to Win7. I copied the customer's data to a USB hard drive, did the OS install, and then, during the restore, the disk started to fail. After much sweat and coaxing I was able to get the restore finished, but it was yet another reminder of the frailty of hard drives.

RAID 1, 5 or 6, or even some of the RAID xx variants are, however, excellent backup platforms. For customer near-line server backups we used 8, 12 and 16 drive SAS and high-speed iSCSI based disk arrays, typically configured as RAID 6 with a hot spare. This allowed up to three drive failures before the array became critical (after the fourth drive failure your data is history). While this might sound like overkill in terms of redundancy, it addressed the problem of drives sourced from the same batch that tended to fail at the same time, or a drive failing during rebuild (which happens with a weak drive due to the stress of the rebuild process). It gave us a little headroom to get drives replaced and put the array back to optimal status.

With regard to controller failure, although I can't say that it never happens, in 20+ years I've had no controller failures but more drive failures (including enterprise drives) than I could even count.  

RAID 0 has no parity. Only RAID 5 and RAID 6 have parity in this context. RAID 10 and RAID 0+1 are pretty much the same thing: a striped mirror.

Anyway, no, I'm not talking about RAID 0. I'm talking about RAID 5 or RAID 6. They're garbage, and they're slow. Try dealing with an array where the controller can't be replaced, too; yeah, it's not fun. It's also comparatively expensive. Blah blah blah. Suffice it to say, I have years of hands-on experience with RAID, and I refuse to use it for any of my own storage, for multiple reasons.

You need to get enterprise hardware and software out of your mind. Most people can't afford it, or won't want to. It's also not usually available through "normal" channels such as Newegg or Amazon (Newegg, maybe). Even people who use SAS controllers run normal stock SATA drives from them, and iSCSI really has no place in this context. I seriously doubt anyone will want to spend money on an initiator just to get something that can be had by simply plugging in a hard drive. Not to mention the network, and all that...

My main point about RAID, though, is that it's not simple. A single disk attached via USB, FireWire, eSATA, or internally is simple. Past not being simple, RAID *is* very prone to break. As I said, forget about enterprise equipment; most people outside of big business can't afford such hardware. An enterprise-grade drive or three, sure.

1 hour ago, NurseBob said:

Interesting coincidence: my Windows Server 2008 R2 server won't boot this morning - according to the diagnostics, a bad driver due to a recent update...

This machine is set up with a failover shadow boot drive, and of course there are a month's worth (if I remember correctly) of system images.

However, I'm REALLY BUSY right now. So, I'll limp along without fixing it today, maybe Sunday, or not...

Who has time for this junk. Sigh :(

Superstition ON

Talking about backup strategies triggered this failure...

Superstition OFF

Heh, switch to a Linux server ;) Although they're prone to this kind of problem too, if you don't pay attention. But honestly, who has time to keep up with updates, other than installing them? Most of us have lives...

 

I can't run Windows in a server capacity any more, though; I'm not sure how, or why, "you guys" do this. It's not a hate thing or anything like that (this post is being posted from my Windows 7 Pro laptop), it's, I don't know, hard to explain.

14 hours ago, zeke said:

@NurseBob

I had been playing around with GitHub, but not every program that I use can work with it, e.g. Altium.

I may have to set up a subversion server as well.

But I am trying not to depend on an external cloud storage option, so I went to a local supplier today and purchased a WD 8TB Red NAS drive. This will allow me to set up my own cloud storage and sync files against it.

Since I want to set something up locally, I looked for something that meets the following requirements:

  1. Self-hosted
  2. Linux client
  3. Windows client
  4. Mac client
  5. Open source

These are the possible programs I found today:

  1. Nextcloud
  2. Owncloud
  3. Seafile
  4. Pydio
  5. Syncthing

I've installed Nextcloud but I haven't tried it out yet because I've run out of time today.

The important thing will be the automated process of syncing/backing up all the files on the different sources.

 

So, I'm not a fan of Western Digital, but I'd be interested in what you think about that drive over time. Seagate is my brand of choice. I've never had a Seagate fail on me personally, but I have seen them fail in the wild; mostly in RAID arrays, but not always.

1 hour ago, yyrkoon said:

Heh, switch to a Linux server ;)

@yyrkoon

I've thought about restoring my CentOS server - I took it out of service when I stopped writing my own websites; it's easier to use WordPress to manage blogs. Though, as you've noted, nothing is immune from some type of corruption, including my own thought processes...

1 hour ago, yyrkoon said:

So, I'm not a fan of Western Digital, but I'd be interested in what you think about that drive over time. Seagate is my brand of choice. I've never had a Seagate fail on me personally, but I have seen them fail in the wild; mostly in RAID arrays, but not always.

I am not a fan of RAID arrays either. I see them as pointless since the new drives are so big and fast compared to 15 years ago when RAID actually made a difference. It was all about transfer speeds and creating large amounts of disk space.

If I really really really cared about backing up stuff then I would go and find an LTO tape drive and figure out how to use it.

 

I have had both Western Digital and Seagate drives fail on me. The latest failure (this week) was a Seagate ST3000NC000 3TB. I believe the magnetic media is peeling off of the platters inside. I have tried to rescue it numerous times over this past week, but nothing is helping. I tried getting it to map out all of the bad sectors, but there are just too many of them. I have also tried to get gparted (on Linux) to make a new GPT, but the disk refuses to co-operate.
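(For anyone curious, the usual tools for this sort of thing look like the following; /dev/sdX stands in for the real device, and they need root.)

# See what the drive reports about itself.
smartctl -a /dev/sdX
# Read-only scan for bad sectors.
badblocks -sv /dev/sdX
# Image whatever is still readable before experimenting any further.
ddrescue /dev/sdX rescue.img rescue.map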

I tried to get warranty service on it, but it's over a year out of warranty, so I'm screwed that way. I may go and harvest the magnets out of it.

So, I checked out the Backblaze hard disk failure data, because they beat the snot out of commodity hard drives. I wanted to see which drive had the lowest failure rate for them. The Western Digital 8TB drives had zero failures for them, so that's what I went and purchased yesterday.

I am still not certain which backup/syncing process I am going to employ, but I am leaning towards setting up a separate Linux box with the 8TB drive inside, installing NextCloud on it, and then syncing against it with all of my clients.

I am aiming for the least amount of effort with the greatest amount of success, in a way that works for Windows, Mac, and Linux clients.

The Lazy Way can be The Most Efficient Way
;-)

1 hour ago, zeke said:

I am not a fan of RAID arrays either. I see them as pointless since the new drives are so big and fast compared to 15 years ago when RAID actually made a difference. It was all about transfer speeds and creating large amounts of disk space.

If I really really really cared about backing up stuff then I would go and find an LTO tape drive and figure out how to use it.

 

I have had both Western Digital and Seagate drives fail on me. The latest failure (this week) was a Seagate ST3000NC000 3TB. I believe the magnetic media is peeling off of the platters inside. I have tried to rescue it numerous times over this past week, but nothing is helping. I tried getting it to map out all of the bad sectors, but there are just too many of them. I have also tried to get gparted (on Linux) to make a new GPT, but the disk refuses to co-operate.

I tried to get warranty service on it, but it's over a year out of warranty, so I'm screwed that way. I may go and harvest the magnets out of it.

So, I checked out the Backblaze hard disk failure data, because they beat the snot out of commodity hard drives. I wanted to see which drive had the lowest failure rate for them. The Western Digital 8TB drives had zero failures for them, so that's what I went and purchased yesterday.

I am still not certain which backup/syncing process I am going to employ, but I am leaning towards setting up a separate Linux box with the 8TB drive inside, installing NextCloud on it, and then syncing against it with all of my clients.

I am aiming for the least amount of effort with the greatest amount of success, in a way that works for Windows, Mac, and Linux clients.

The Lazy Way can be The Most Efficient Way
;-)

So, one thing immediately pops to mind. The server you're considering could be used as a Samba server. That way all your workstations could connect directly to it and access the drive much like a local disk. Of course you have the network in the middle, but if you use GbE that should not be much of an issue. So immediately, you have remote storage; great. After that, you could use an additional drive on this system for redundancy. You could do this a couple of ways, including RAID 1, which, believe it or not, is faster than a single drive on Linux when using software RAID. Another approach, which I would probably prefer myself, would be to have a second drive, keep it as a separate single drive, and only mount it when backups of the original are made. This could all be done automatically, using systemd timers or a cron job; either way would call a script when it's time.

One added benefit of using a Samba share is that all the work is done on a system that's not your workstation, so you would not notice any performance issues on your workstation while a backup was happening. Another is that it would not matter which workstation you were using at a given time; if you set up your share correctly, any system could access any file, no matter which workstation the file originated from. That's a trick I use here personally, since I work with embedded ARM boards all the time. It's pretty handy.
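For what it's worth, here is a minimal sketch of that "mount, copy, unmount" idea, assuming the second drive shows up as /dev/sdb1 and the share lives under /srv/share (both names made up for the example):

#!/bin/sh
# backup-share.sh - rough sketch: bring the spare drive online, mirror the share, take it offline again.
set -e
mount /dev/sdb1 /mnt/backup
rsync -a --delete /srv/share/ /mnt/backup/share/
umount /mnt/backup

A nightly cron entry (run as root) to drive it could be as simple as:

30 2 * * * /usr/local/sbin/backup-share.sh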

3 hours ago, NurseBob said:

I've thought about restoring my CentOS server

Well, there's magical thinking...

And then there's magic!

While looking for a file I noticed my server had booted and was back online. Checked the logs and discovered the driver for my 8-drive 24TB JBOD external SATA enclosure had failed to load. This is a HighPoint RocketRAID system. It's junk, and has been from the outset, but I purchased everything while relatively ignorant. And, as @yyrkoon has noted, enterprise hardware is mind-bendingly pricey. So, I use the system not as a backup, but as a secondary copy location for video, images and audio. Maybe after I win the lottery I can put a proper system in place.

3 hours ago, NurseBob said:

@yyrkoon

I've thought about restoring my CentOS server - I took it out of service when I stopped writing my own websites; it's easier to use WordPress to manage blogs. Though, as you've noted, nothing is immune from some type of corruption, including my own thought processes...

I think because I've been using Linux daily for the last 4+ years now, instead of semi-often since the '90s, my familiarity level has increased dramatically. It used to be that no one could tell me anything about Windows, because chances were really good I already knew it. But I've stopped caring since Windows 7 because, like with so many things, I have a life and I cannot keep up. Plus, right now, working with Linux is how I make a paycheck.

With everything that has been happening at Microsoft, and how they treat their customers nowadays, I just kind of feel "dirty" using Windows for anything other than a desktop. And with the way I see Windows / Microsoft going, I may end up shifting all my systems to Linux. For instance, this laptop I'm using right now came with Windows 8 preinstalled. I ran Win8 for a couple of years and finally had enough. So I bought a copy of Windows 7 Pro OEM and proceeded to install it along with a new hard drive. Finding drivers was a pain, but when it came time to install the USB 3.0 root hub driver... it didn't work. So I'm stuck using USB 2.0, because neither Microsoft, Intel, nor Asus made Win7 drivers for the USB root hub on this platform. But I bet if I installed Linux on this laptop, USB 3.0 would work fine. Not to mention the direction they're going with UEFI BIOS: you *HAVE* to disable hardware features if you decide to install an OS the platform did not originally come with, regardless of whether it's just a slightly older version of Windows, or Linux.

Anyway, I'm about at my wits' end with Microsoft and their partners for the antics they're forcing upon their paying customers. Sooner or later I'll probably just decide to move to all Linux and be done with it. Do I want to? Not really, but I can't help but feel I'm being forced into this position, especially since I like to occasionally play modern games, something where Linux is catching up to Windows very rapidly. So maybe in the near future it will be a moot point.

13 minutes ago, NurseBob said:

Well, there's magical thinking...

And then there's magic!

While looking for a file I noticed my server had booted and was back online. Checked the logs and discovered the driver for my 8-drive 24TB JBOD external SATA enclosure had failed to load. This is a HighPoint RocketRAID system. It's junk, and has been from the outset, but I purchased everything while relatively ignorant. And, as @yyrkoon has noted, enterprise hardware is mind-bendingly pricey. So, I use the system not as a backup, but as a secondary copy location for video, images and audio. Maybe after I win the lottery I can put a proper system in place.

Yeah, I have hands-on experience with HighPoint, and none of their stuff is above average consumer level. My buddy Wulf had two drives fail initially, connected to a HighPoint controller. Brand new drives, mind you: Seagates, at a time when Seagate was making really good drives, lifetime warranty and all that. Then about a year later, when everyone was feeling all warm and fuzzy about their data, *BAM*, the controller failed. No way to replace the controller, as HighPoint had discontinued the model. On a RAID 5 array... so no way to rebuild the array without losing all that data...

I'll tell you what, though. I used to know several people who worked in the IT industry who were moving to all-Linux-based software RAID. These people were all using RAID 10 arrays in software, as mentioned. The majority of them seemed very pleased with their setups: fewer hardware failures, because from the start you're getting rid of the controller, and drives configured in this manner just seemed to be more robust and to yield far fewer failures. My point here, though, is this: if you MUST go with RAID, unless you can afford costly gear built by the Woz himself (very, very pricey, by the way), you should seriously consider looking into software RAID. Personally, I avoid the whole RAID scene categorically, because using disks as singles avoids losing space to parity, and if I need speed, I'll just toss in a Samsung SSD and be done with it. That is, assuming I can't get that speed from a zram ramdisk, which, on a system that can hold 64GB of RAM, can be quite large: up to 128GB theoretical using the lz4 compression algorithm. In practice, a 96GB ramdisk could easily be obtained while leaving enough RAM available to the system.
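For reference, the software side of that is pretty small these days. A rough sketch of a four-disk RAID 10 with mdadm, using made-up device names:

# Create the array (device names are placeholders).
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Put a filesystem on it and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array
# Check on its health later.
cat /proc/mdstat
mdadm --detail /dev/md0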


Here is something to think about. The ODROID XU4 has USB 3.0 and non-shared GbE. It's an octa-core ARM-based board with a footprint very similar to a Raspberry Pi; it can use Raspberry Pi HATs, and all that.

These boards, for what they are, are very fast. They've been compared to an Intel Core i3 in speed, so they're perfectly capable of handling home-server loads with no problems. They're also inexpensive, at ~$74 US each bare, or ~$130 US with a power supply and a 32GB eMMC 5.0 module.

The point here is that it can easily handle a situation like this at minimal cost. I do own one, but I have not powered it up yet, so I haven't personally tested it. However, the board I own will be used in exactly this capacity. But man, coupling one of these with a ~$200 hard drive gives you a very robust development system for less than $350 US. Wow...

I actually bought mine to also serve as a development system for the BeagleBone, since they share a common ABI. That is, I can compile from source on the XU4 and run those binaries directly on a BeagleBone with no further work needed from me. No cross-compiling, no cross-compiling setup, no need to run an emulator anywhere...
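As a rough illustration of that workflow, assuming both boards run the usual armhf Debian/Ubuntu userspace (hostnames, usernames, and file names below are placeholders):

# On the XU4: confirm the userspace architecture, then build natively.
dpkg --print-architecture      # expect armhf on both boards
gcc -O2 -o blinky blinky.c
# Copy the binary to the BeagleBone and run it as-is; no cross toolchain involved.
scp blinky debian@beaglebone.local:~/
ssh debian@beaglebone.local ./blinky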

1 hour ago, yyrkoon said:

I do own one, but I have not powered it up yet, so I haven't personally tested it. However, the board I own will be used in exactly this capacity.

I'd like to know what comes of your testing when you get to it.


So I have finally got my new real-time file backup system installed and operational.

The file server details:

  1. OS: Ubuntu 16.04.2 box
  2. Storage: 8TB Western Digital Red
  3. Software: NextCloud server

The client details:

  1. Client 1 OS: Win10
  2. Client 2 OS: mac OS
  3. Client 3 OS: Ubuntu 16.04.2 Desktop
  4. Software: NextCloud client on each

Usage:

  1. Using the NextCloud Client, sign into the Server
  2. Select the local directory you want to back up
  3. Add it to the Server
  4. Stand back and it will copy everything over to the Server automatically

I have observed that if you create a new file in the local monitored directory then the NextCloud Client will almost immediately copy it over to the Server without your interaction. I like that.

If desired, you can set up another client machine and get it to replicate files from the server to itself locally. Multiple redundancy.

So far, this system has transferred over 110GB to the server unattended.

This configuration will back up files that I generate on a regular basis. Now I want to set up git and subversion on this same server so that I can take care of files generated during software coding (git) and hardware design files generated by Altium (SVN).
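The server-side part of that should be fairly small; something along these lines, with the user, host, and repository paths as placeholders:

# Bare git repository on the server, pushed to over SSH.
ssh zeke@server 'git init --bare /srv/git/firmware.git'
git remote add origin zeke@server:/srv/git/firmware.git
git push -u origin master

# Subversion repository for the Altium projects, accessed over svn+ssh.
ssh zeke@server 'svnadmin create /srv/svn/altium'
svn checkout svn+ssh://zeke@server/srv/svn/altium altium-wc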

So far, I like NextCloud and it fits my work processes.

 


What happens when you do:

$ for rev in $(seq 1000); do cat /dev/null >reallyimportant.c; echo $rev; sleep 1; done

How many revisions does it save before it starts throwing out old copies and you end up with an empty C file?

 


In the 37 years I've been writing code, I've only asked an admin to recover a file for me once.  Turns out that file was on a disk that was being backed up by a SCSI tape drive that had been having problems and of course all the tapes were bad. However, it is always easier to write code the second time  : )

 

1 hour ago, Rickta59 said:

What happens when you do:

$ for rev in $(seq 1000); do cat /dev/null >reallyimportant.c; echo $rev; sleep 1; done

How many revisions does it save before it starts throwing out old copies and you end up with an empty C file?

 

I don't know. 

1 hour ago, Rickta59 said:

In the 37 years I've been writing code, I've only asked an admin to recover a file for me once.  Turns out that file was on a disk that was being backed up by a SCSI tape drive that had been having problems and of course all the tapes were bad. However, it is always easier to write code the second time  : )

 

I wish I'd had the option to recover two months of work that I lost unknowingly. <sad trombone>

With this, I will at least have the option as I move forward.

6 hours ago, zeke said:

I wish I'd had the option to recover two months of work that I lost unknowingly. <sad trombone>

With this, I will at least have the option as I move forward.

This is where (possibly) git could come in handy; let's say you need a specific file from a specific day, for whatever reason. But anyhow, I personally do not necessarily agree with the strategy you chose. The point, though, is that we do not have to agree, because what you're getting is what *you* want, or at minimum, what you think you want right now.
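To make that concrete: if the files lived in a git repository, pulling back a file as it looked on a given day is only a couple of commands (the path and date here are made up):

# Find the last commit that touched the file on or before that date.
git log --until="2017-04-20" -1 --format=%H -- firmware/main.c
# Check that file out, as of that commit, into the working tree.
git checkout <commit-hash> -- firmware/main.c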

Still, I urge you to look at your process and think about it objectively, which I think is what Rick was also trying to do: finding a corner case where your backup strategy would fail you. But let's say it does not fail, and does end up copying 1,000 iterations of the same slightly modified file... that may also be less than desirable, if that corner case just makes 1,000 copies of the same file with a different timestamp, or whatever.

Really, what you need to think about is exactly what you want, and whether your strategy fulfills what you want / need. For me, having a file saved whenever actual code differences have been made is a must; meaning, if I change a single line, I may want that change to stick and be persisted. For you, that may not be appropriate?

So, for me personally, I think a local git server may be exactly what I want, but having a redundant copy of that local git would also be necessary. For you... it may be different. Have I beaten this horse a bit too much?


Nope, the horse is alive and kicking still. :) 

I firmly believe that ideas need to battle other ideas to the death in order for the best idea to survive so it's all good with me.

I looked further into the versioning question because it is a valid and intriguing question.

I know that I did some rapid save/edit cycles as I edited some graphics files yesterday, and this is what I found: Nextcloud has an interesting file structure.

The designers decided to separate the current version from the past version by using directories.

The current version is in the "../nextcloud/data/zeke/files/" directory while the past versions are located in the "../nextcloud/data/zeke/files_versions/" directory.

The past versions are suffixed with a unique identifier. For example, the past versions of my graphics file are named:

  1. "../nextcloud/data/zeke/files_versions/logo.ai.v1492765859"
  2. "../nextcloud/data/zeke/files_versions/logo.ai.v1493766306"

The current version is just "../nextcloud/data/zeke/files/logo.ai"

This seems to suggest that there is some way to retrieve an older version of a file but I have not looked into that yet.
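At a guess (untested), a manual restore from the server side would look something like the following, given the layout above; the web UI apparently exposes the same thing through its version history, and the occ invocation can differ per install:

# Copy the wanted version back over the current file, then have Nextcloud rescan it.
cp data/zeke/files_versions/logo.ai.v1492765859 data/zeke/files/logo.ai
sudo -u www-data php occ files:scan zeke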

So to fully answer your question @Rickta59, if I did this:

$ for rev in $(seq 1000); do cat /dev/null >reallyimportant.c; echo $rev; sleep 1; done

Then I would expect the files_versions directory to start filling up with files named reallyimportant.c.v<nnnnnnnnnn>.

How does that sound?

 


I had taken a quick glance at the feature list and had noticed that it is supposed to support that feature. I thought I read a further comment stating at some point it starts throwing away old versions to make room. I didn't dig deeper. I thought you might have.

While it is all well and good to back up files as they change, it seems like it would make sense for some scenarios to only create a backup if the contents change. However, I can imagine other situations where it would be important to note file timestamp changes.
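rsync-style tools already make roughly that distinction, for example:

# Default rsync decides by size + modification time, so a touched-but-identical file still gets re-copied.
rsync -a source/ backup/
# With checksums it only copies files whose contents actually changed (slower, but no false positives).
rsync -ac source/ backup/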

I've noticed that simplistic approaches to backup schemes lead to data loss, while giving the user a false sense of safety that really isn't there.  They only find out about an unrecoverable data loss after the fact.

Write open source software, share it with all your friends. If you ever lose a file you just reach out for a little help from your friends.

 

9 hours ago, Rickta59 said:

I had taken a quick glance at the feature list and had noticed that it is supposed to support that feature. I thought I read a further comment stating at some point it starts throwing away old versions to make room. I didn't dig deeper. I thought you might have.

While it is all well and good to back up files as they change, it seems like it would make sense for some scenarios to only create a backup if the contents change. However, I can imagine other situations where it would be important to note file timestamp changes.

I've noticed that simplistic approaches to backup schemes lead to data loss, while giving the user a false sense of safety that really isn't there.  They only find out about an unrecoverable data loss after the fact.

Write open source software, share it with all your friends. If you ever lose a file you just reach out for a little help from your friends.

 

If someone loses data because their strategy is simplistic, then they're not thinking about how to do backup properly (not thinking the process through fully), or they're doing something silly. You know the reflow oven Wulf and I fiddled about with five or so years ago? I still have that code safely tucked away on a removable USB hard drive. My backup strategy? About as simple as it gets: manually copy files directly to my USB hard drive. Is it foolproof? Not by a long shot, but it works for me.
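A single rsync invocation covers that sort of copy (the paths below are just an example):

# Mirror the projects directory onto the mounted USB drive; nothing is ever deleted from the backup.
rsync -av ~/projects/ /media/usb-backup/projects/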

Now all my BeagleBone embedded Linux code sits on a dedicated server that's actually an Asus Eee PC with a broken screen. But...

william@eee-pc:~$ uptime
 17:21:38 up 190 days, 22:13,  2 users,  load average: 0.00, 0.01, 0.05


I'd say it's doing a pretty good job of what it's doing. Do I have a redundant copy of all my work? In some cases, kind of, but not really. A lot of my important code is in a private git repository on GitHub. Some of the other "important" stuff is not redundant, but the code I classify as "important" that way is mostly experimental code I wrote while getting familiar with *something*, be it CAN bus, SMBus/I2C stuff, or whatever.

By the way, the reason that server only has a 190-day uptime is because I shut it down while I was on vacation for a month... unplugged it from power, and from the network while I was at it too (thunderstorms...).

