
Drudge Retort: The Other Side of the News
Friday, September 13, 2024

The music industry traded tape for hard drives and got a hard-earned lesson.


Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the article...

... Music industry publication Mix spoke with the people in charge of backing up the entertainment industry. The resulting tale is part explainer on how music is so complicated to archive now, part warning about everyone's data stored on spinning disks.

"In our line of work, if we discover an inherent problem with a format, it makes sense to let everybody know," Robert Koszela, global director for studio growth and strategic initiatives at Iron Mountain, told Mix. "It may sound like a sales pitch, but it's not; it's a call for action."

Hard drives gained popularity over spooled magnetic tape as digital audio workstations and mixing and editing software took hold, and as the perceived downsides of tape, including deterioration from substrate separation and fire risk, became harder to ignore. But hard drives present their own archival problems. Standard hard drives were not designed for long-term archival use, and you can almost never decouple the magnetic platters from the reading hardware inside, so if either fails, the whole drive dies.

There are also general computer storage issues, including the separation of samples and finished tracks, or proprietary file formats requiring archival versions of software. Still, Iron Mountain tells Mix that "If the disk platters spin and aren't damaged," it can access the content.

But "if it spins" is becoming a big question mark. Musicians and studios now digging into their archives to remaster tracks often find that drives, even when stored at industry-standard temperature and humidity, have failed in some way, with no partial recovery option available.

"It's so sad to see a project come into the studio, a hard drive in a brand-new case with the wrapper and the tags from wherever they bought it still in there," Koszela says. "Next to it is a case with the safety drive in it. Everything's in order. And both of them are bricks." ...


#1 | Posted by LampLighter at 2024-09-13 08:04 PM | Reply

Um, tape degrades faster than hard drives.

"You can almost never decouple the magnetic disks from the reading hardware inside"

I'm sure there are plenty of data recovery companies that call -------- on that whopper.

#2 | Posted by LegallyYourDead at 2024-09-13 08:37 PM | Reply

Why don't they just copy everything to solid-state drives?

#3 | Posted by LegallyYourDead at 2024-09-13 08:37 PM | Reply

@#2 ... I'm sure there are plenty of data recovery companies that call -------- on that whopper. ...

My guess would be that the record industry has ample money to employ such services.

But, apparently, even those services came up with nothing.

#4 | Posted by LampLighter at 2024-09-14 12:10 AM | Reply

@#3 ... Why don't they just copy everything to solid-state drives? ...

Have SSDs been around long enough to ascertain that they are OK for archiving?


My approach, with all of the audio and video that I have digitized over the years...

A server in the basement (properly protected by UPSes) that houses the 10 TB (yeah, 10 terabytes) of media that I have.

That server runs FreeBSD and ZFS.

So, what happens as a result of that?

Well, every couple of years, I get an email from that server informing me that it is having issues ensuring the integrity of the data it is charged with preserving.

If you want details, just ask.

But the long story short version...

FreeBSD and ZFS inform me of a problem with a disk drive, and I am able to address that problem well before any data are lost.
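
For anybody who wants the details without asking: on FreeBSD, the nightly periodic(8) run can be told to scrub ZFS pools and include the pool status in its daily mail. A minimal sketch of /etc/periodic.conf, assuming a pool named "tank" (the pool name is a placeholder; periodic mails its reports to root, so alias root's mail to a real mailbox):

    # /etc/periodic.conf -- ZFS health reporting (a sketch; pool name is a placeholder)
    daily_status_zfs_enable="YES"           # include "zpool status" in the daily report
    daily_scrub_zfs_enable="YES"            # scrub pools on a schedule to surface latent errors
    daily_scrub_zfs_pools="tank"            # which pools to scrub (empty means all)
    daily_scrub_zfs_default_threshold="35"  # days between scrubs of the same pool

A scrub reads every block and verifies its checksum, which is what turns silent corruption into an email.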

But back to the topic at hand...

Yeah, when all the data are placed on a disk drive with no redundancy, you are asking for issues to crop up. (I'm being kind)


#5 | Posted by LampLighter at 2024-09-14 12:22 AM | Reply

That's undeniable. RAID 5 or better!

#6 | Posted by LegallyYourDead at 2024-09-14 10:12 AM | Reply

@#6 ... That's undeniable. Raid 5 or better! ...

Yeah, it is that "or better" aspect that drew me towards ZFS 15 years ago for the data server on my home network.

ZFS - Zettabyte File System.

ZFS
en.wikipedia.org

... ZFS (previously Zettabyte File System) is a file system with volume management capabilities. It began as part of the Sun Microsystems Solaris operating system in 2001.

Large parts of Solaris, including ZFS, were published under an open source license as OpenSolaris for around 5 years from 2005 before being placed under a closed source license when Oracle Corporation acquired Sun in 2009 - 2010. During 2005 to 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD.

In 2010, the illumos project forked a recent version of OpenSolaris, including ZFS, to continue its development as an open source project.

In 2013, OpenZFS was founded to coordinate the development of open source ZFS.[3][4][5] OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems.[6][7][8]

Overview

The management of stored data generally involves two aspects: the physical volume management of one or more block storage devices (such as hard drives and SD cards), including their organization into logical block devices as VDEVs (ZFS Virtual Device)[9] as seen by the operating system (often involving a volume manager, RAID controller, array manager, or suitable device driver); and the management of data and files that are stored on these logical block devices (a file system or other data storage). ...


Yeah, there are some things that drew me to ZFS. The first was the inherent integrity checking.

Every time ZFS retrieves a block of data from disk, it compares a freshly computed checksum of that block against the checksum recorded when the block was originally written. If they do not match, ZFS reads the block from the redundant storage instead, and (this is the important part) other things happen.

Those other things include writing a fresh, good copy of that block so the redundancy is restored, and (in my configuration) sending an email to let someone know a disk drive has had a problem. An early warning, so to speak. This is the point where I buy a disk drive to replace the faulty one.
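
If you want to watch this machinery from the command line, the standard ZFS commands are (the pool name "tank" is a placeholder):

    zpool status -v tank   # per-device read/write/checksum error counters
    zpool scrub tank       # verify the checksum of every block in the pool now
    zpool clear tank       # reset the error counters once the problem is handled

The checksum column in zpool status is exactly the counter that ticks up when the self-healing described above kicks in.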

Oh, this is way cool.

At this point the operating system does not know of the problem because ZFS resolved it.

But when ZFS begins to have trouble resolving disk drive errors within its RAID (in my experience, this is about two weeks later), it bumps the messaging up to the OS level, and I will see FreeBSD (the OS) sending me emails about drive errors. This is not A Good Thing.

Still, no data loss at this point, because of the redundancy. But there are warnings that the redundancy is beginning to fail. By this time I have the replacement drive in hand. (thank you, www.bhphotovideo.com)

So I replace the questionable drive with the new one and run the appropriate ZFS command to re-establish the redundancy.
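
The command in question is zpool replace. A sketch, assuming the pool is named "tank", the questionable drive is da2, and the new drive is da3 (all three names are placeholders):

    zpool replace tank da2 da3   # swap the failing disk for the new one
    zpool status tank            # watch the resilver restore full redundancy

The resilver runs while the pool stays online, so the data server never goes down for it.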

With my usage of ZFS (because of my backup strategy), I have only single-drive redundancy. ZFS allows multiple-drive redundancy.
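
The redundancy level is chosen when the pool is created. A sketch of the two ends of that choice, with placeholder disk names (run one or the other, not both):

    zpool create tank raidz  da0 da1 da2 da3   # single parity: survives one failed drive
    zpool create tank raidz2 da0 da1 da2 da3   # double parity: survives two failed drives

ZFS also offers raidz3 (triple parity) and n-way mirrors for those who want even more headroom.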

And, regarding ZFS and FreeBSD...

If you watch videos on Netflix, guess the origin of those videos? Yeah, FreeBSD and ZFS. And major kudos to Netflix: their techies have contributed a great deal back to FreeBSD and ZFS.

#7 | Posted by LampLighter at 2024-09-15 08:39 PM | Reply

Comments are closed for this entry.
