I've had a NAS (Network Attached Storage) box for years. I've been using a 1-TB RAID 1 setup for it. That 1 TB got used up. 99% full.
Since the manufacturer, D-Link, doesn't sell this particular model anymore, they haven't updated the firmware in years. Their firmware limits the disk size to 2 TB.
I thought I'd have to buy a new NAS device to be future-proof for bigger storage media.
But, I recently became aware of an Open Source firmware for these D-Link NAS boxes, called Alt-F. Alt-F supports disks up to 4 TB.
I installed Alt-F on my D-Link NAS last weekend. It worked without problems right out of the box.
So, on Monday, I ordered two 4-TB SATA drives to put in the NAS. They arrived today. I put them into the box, selected RAID 1, and off we go.
Right now, I am transferring the data from the old 1-TB drives to the new 4-TB disk array, having connected one of the old 1-TB disks to my Linux box.
I initially had some minor issues with mounting the old drive, because it's one disk out of a Linux Software RAID.
As it turns out, Linux detects the drive as a RAID member and creates a /dev/mdX entry, but doesn't really tell you about it. And it doesn't let you mount it, either, because it's only one disk from the RAID.
The technical description:
After googling a bit, I found that 'cat /proc/mdstat' shows whether Linux detected the drive. If I stop the /dev/md127 device that Linux gave me on bootup, with 'mdadm -S /dev/md127', I can then use 'mdadm --assemble --force' on the device, and after that I can mount it normally as an ext3 partition. That was all I needed to copy the files from the old drive to the new drives in the NAS. Even with a GBit network connection, it takes a while to transfer 1 TB...
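For reference, the whole sequence looked roughly like this. The md device names are the ones from my machine; /dev/sdb1 is a placeholder for wherever the old disk actually shows up on yours:

    cat /proc/mdstat                              # check whether Linux auto-detected the RAID member
    mdadm -S /dev/md127                           # stop the array Linux assembled at bootup
    mdadm --assemble --force /dev/md0 /dev/sdb1   # reassemble from the single RAID 1 disk
    mount -t ext3 /dev/md0 /mnt/hd                # now it mounts like any ext3 partition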
This is what df reports right now:
Filesystem               Type 1M-blocks    Used   Available Use% Mounted on
/dev/md0                 ext3   936374M  918270M     18104M  99% /mnt/hd
192.168.2.110:/mnt/md0/  nfs   3754944M   57527M   3697401M   2% /mnt/nas
This has probably happened to a lot of people:
You think you have backed up your files with tar, but when disaster strikes, and you need to restore the files, you find that your tar archive is corrupted...
This happened to me this weekend, and seeing the dreaded "bzip2: Data integrity error when decompressing" error (the tar file was compressed with bzip2), I had pretty much resigned myself to an hour or two of hex editing to skip the damaged files and stitch the tar file back together...
But the Internet came to the rescue.
It turns out somebody already did the hard work, and has a handy Perl script on his website to find the next file header in the tar archive.
So, all that needed to be done was to run the Perl script and then use tail to skip the broken parts. The details are on the aforementioned website.
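In case it helps anyone, the recovery boiled down to something like the following. The file names and the script name here are placeholders (the real script is on the site linked above), and I'm assuming the script prints the byte offset of the next intact tar header:

    bzip2 -dc backup.tar.bz2 > partial.tar            # decompress whatever survives the integrity error
    OFFSET=$(perl find-next-header.pl partial.tar)    # placeholder name; prints offset of next good header
    tail -c +$((OFFSET + 1)) partial.tar > rest.tar   # skip everything up to that header
    tar -xvf rest.tar                                 # extract the good files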
I had the good files out of the tar file in a few minutes. This is why I love Open Source.
Update: the author of the Perl script I used has now made available an even easier tool: repair corrupt tar archives - the better way.
20 years ago today, Linus Torvalds posted the first message about this new operating system he was writing.
Linux has come a very long way from these humble beginnings.
Nowadays, it powers everything from phones (Android) to mainframes. Whole companies rely on Linux, and probably wouldn't exist in their current form if it weren't for Linux (e.g., Google).
Thanks, Linus Torvalds, for this awesome OS.
Linus Torvalds has tagged a commit as Linux 3.0-rc1.
That marks the end of the 2.6.x kernel series.
In the apparently never-ending SCO saga, the judge has dealt SCO another blow that hopefully puts this zombie to rest once and for all. They don't get a new trial, and Novell can shut down SCO's lawsuit against IBM.
Ars Technica has the details in their article aptly titled "SCOwned".
My favorite piece of the ruling:
Finally, while SCO's witnesses testified that the copyrights were "required" for SCO to run its SCOsource licensing program, this was not something that SCO ever acquired from Novell.