Monday, April 26, 2010

Contest: If you were an intern, how would you destroy data?

Here is the premise of the question: how many ways can you think of that an intern could destroy a hard drive or LTO tape?

Okay, okay...I am picking on interns here. In reality, we've all mindlessly destroyed data by being careless or reckless. I know I've dropped my share of drives, disconnected spinning Firewire disks, and even caused a DV tape to unspool itself inside a Sony VTR.

Some other methods here for destroying data:

1. Falling off a bookshelf
2. Tumbling down a staircase
3. Being knocked off a 2nd-story railing
4. Getting run over by a car
5. Being thrown by a little kid

What do you say? What is the strangest, oddest, or most bizarre way your data was destroyed (either by you or someone else) and did you have a backup? Put your best story down in the comments below.

The best entry will get a free RED 101 special 2-Disc DVD collection by DreamWelder Productions.

9 comments:

Virgil Kastrup said...

Many years ago I bought a copy of Doom 2 and left the floppy disks on the dashboard of a car in midsummer. That data was GONE! (or at least inaccessible...)

J Simoneau said...

1. By letting Fry's try to recover a drive (they turned it on and off about 10 times, scratching the platters beyond recovery).

2. If an iPhone counts... A week or so after we finally got out of the hospital following our first baby's birth (9 days of issues), I decided to clean out one of our bathtubs that was never used. After cleaning it, I thought I'd fill it with water to rinse out any remaining Soft Scrub residue, so I turned the water on high and plugged the drain. When I returned to the full tub, I looked down and saw my iPhone sitting at the bottom. It was unsynced and held the only pictures of my baby from those tragic days at the hospital (we didn't allow visitors, and they weren't allowed in with the baby, so no pictures from anyone else).

After letting it sit for 2 days, I plugged it in and turned it on. The problem was that the touch screen was only half working: I could swipe my finger across it, but the phone was code-locked and the broken screen wouldn't let me enter the code. Long story short, there was no way around that back then — in iTunes you had to unlock your phone with the passcode before syncing; there was no way to do it from iTunes itself. After days of research, I was pointed to a program called "iphone drive" that ignored the password, and I was able to retrieve our pictures. So in this case the data wasn't lost, although it should have been, but I thought it might fit in as a weird story.

Buck said...

Well, some of my best work is done on the toilet, lol. While I was working on a comp in AE while finishing my biz, I placed my notebook on the floor and flushed the toilet, and when I stepped around the FireWire cable connected to the drive on the counter near the toilet, I snagged it and into the toilet it went... splash!! It also wet my notebook a lil 2!! Down to Davy Data Jones' locker she went, arggghh matey!!

Joshua Brown said...

My most bizarre data loss would have to have happened on a RAID array.
I had recently taken over the tech department at my organization and was asked to get some footage off an external HD. The man who had run the department before me was very smart and had used an HD enclosure with hardware RAID. Unfortunately, the enclosure was very cheap and had horrible reviews. I found out why when it simply stopped working one day. Apparently, the hardware inside used some awkward proprietary method of striping the drives together, and I am completely unable to mount or even see the drives, even when they're connected through an identical model of enclosure. So now I have two 1 TB drives sitting in a box, in a race against data degradation to try to recover corrupted RAID data... when the drives won't even let themselves be recognized by a computer... *wiiiIIIIIRRRrrrrr*... *click click click*

Axel Mertes said...

OK, you wanted to hear the most astounding version of losing data?

Here it is:

We have a bunch of Infortrend RAID systems over here, all identical. For minimal reaction time in case of a part failure, I keep an entire set of replacement parts on the shelf: PSUs, fans, a RAID controller, drive shelves...

Once in the past, a controller failed with defective RAM. So it was pulled, the spare controller went into the RAID system, and everything was powered up again.

The failed controller got new RAM and was put back on the shelf - for next time.

A good year later, I was emptying and preparing a RAID to be moved to our new facility in Berlin. All data was cleaned up and archived; nothing bad so far. It turned out that this RAID's controller had less RAM than the others, and it seemed to occasionally misbehave. So we decided to use the controller from the shelf...

I therefore disconnected the RAID from the local network and ran a direct Ethernet cable to my notebook, sitting behind the RAID's 19" rack, to administer the new controller. I set up the new partitions and restarted the controller as required.

I walked around the rack to watch it reboot on the LCD screen. Everything looked fine, so I went back to the notebook.

For some reason - after logging in again - I saw the original configuration of the RAID - again!

My changes seemed not to have been applied. Everything was exactly as it was before I changed all the settings, created the partitions, etc.

I wondered for a few moments and then decided to try again.

Would you do the same?

Keep in mind:

This RAID is DIRECTLY connected by a single cable between the RAID and the notebook in my hands!

Would you?

I bet on it!

So I reconfigured it all again, hit the execute button for everything to apply and for the RAID to reboot, walked around the rack again, and...

PANIC!

A completely different RAID, holding roughly 2 TB of data, was already at 3% of a RAID initialization...

PANIC! I ran back around and simply pulled the power, so as not to wait a single second longer.

Would you have done the same?
I think so...

So what had happened?
Why did a different RAID look the same?
Why could I access it at all, with only a single cable between me and the RAID I actually wanted to work on?

Well, it took me about 15 minutes to find out:

My clever colleague administrator (no names here...) had had the - not so cool - idea of installing a WiFi repeater in our office without letting me know. It was for another colleague with a notebook who did not want to plug in the Gigabit cable (preferring "easy" WiFi at 100 Mbit instead - argh!).

So what happened is the following:

I rebooted the controller I had configured, so it became unavailable over TCP/IP on my local Ethernet cable. My notebook then tried to reach it, accidentally found the WiFi, and found THERE: the original controller from the shelf (remember my opening words?) that had EXACTLY THE SAME CONFIG, BECAUSE IT HAD BEEN SWAPPED OUT DURING A FAILURE AND NEVER RECONFIGURED!

Who would shut down a RAID just to reconfigure a spare controller for future use?
Without even knowing what its future config would need to be (since it was kept as a spare part for 5 identical RAIDs)?
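In hindsight, the trap - a notebook silently falling back from a dead Ethernet link to WiFi and reaching a different box with an identical config - could have been caught with a quick route check before applying anything destructive. A minimal Python sketch (the helper name and the example IP are made up, not part of any Infortrend tool):

```python
import socket

def egress_address(target_ip, port=9):
    """Return the local IP the OS would use to reach target_ip.

    Connecting a UDP socket triggers the kernel's route selection
    without actually sending any packets.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((target_ip, port))
        return s.getsockname()[0]
    finally:
        s.close()

# Before hitting "apply" on the controller at, say, 192.168.0.50
# (hypothetical address), check which local interface the route uses:
# print(egress_address("192.168.0.50"))
```

Comparing the returned address against the point-to-point link's subnet would have flagged that the traffic was no longer leaving over the direct cable at all.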

After all, I am nearly over it.

WHY DID YOU HAVE TO REMIND ME OF THAT TRAGEDY????

And where the heck is the number of my psychiatrist?

All the best,
Axel

PS: Everything I wrote really happened! Unfortunately...

Axel Mertes said...

Well,

where to start?

I've been through quite a number of ways of destroying data - with myself AND my colleagues involved.

Not to mention the guy who accidentally formatted an 8 TB SAN drive by installing Windows XP64 on a Friday the 13th to "save time", since "he knew me" and "it will otherwise take longer". Well, I told him his new z800 (equipped with an NVidia SDI card I got from Silverado - honestly!) was already ordered. He still insisted on doing it - and forgot to disconnect the FC cables before installing. What shall I say? Good work. We used several recovery tools, but the few important project files on those disks were gone, forever. Don't tell me I'm wrong; I have successfully recovered 3 RAIDs in the past (12-bay RAID 5s that had 2 drives fail...).

I really knew that business...

Sometimes it feels like I invented RAID 6 myself - at last. Btw, where has triple-parity RAID gone?
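For context, RAID 5's single-drive tolerance comes from plain XOR parity: any one missing block in a stripe is the XOR of all the surviving blocks (RAID 6 adds a second, independent parity so two failures survive). A toy Python sketch of the idea:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def raid5_parity(data_blocks):
    """RAID 5 parity for one stripe: the XOR of all its data blocks."""
    return xor_blocks(data_blocks)

def reconstruct(surviving_blocks):
    """Any single missing block (data or parity) is the XOR of the rest."""
    return xor_blocks(surviving_blocks)

# Example stripe: three data blocks plus one parity block.
stripe = [b"abcd", b"1234", b"wxyz"]
parity = raid5_parity(stripe)
# Lose the middle block; rebuild it from the survivors and the parity.
assert reconstruct([stripe[0], stripe[2], parity]) == stripe[1]
```

Lose two blocks from the same stripe, though, and XOR alone can't help - which is exactly why a backplane falsely reporting a second failed drive is so dangerous.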

And what about that darn defective RAID backplane? Not a single drive was actually bad, but one was reported as failed. We replaced it, and the backplane then told the RAID controller a second drive had failed!

Well, I managed to get >95% of all files back safely using R-TT Studio (it has worked for me several times - I can recommend it!).

Clearly you don't want to hear about that high-voltage incident that fried the electronics of a 50,000-buck workstation and lost us a 20,000-buck job with it (hey, we had insurance on that - 3 weeks later everything was finished - again).

What about the one-year job we did for an André Heller theme park, which we lost about a month before the deadline due to a RAID 5 malfunction? And which - after we successfully reproduced everything in time - was rejected for a total remake "because it did not work out in the installation as originally imagined", so: new idea, new project, and all done in another 4 weeks?

No, I am sure none of the above stories interests you...

J. Truesdale said...

I once kicked a portable hard drive across a room - while footage was being ingested.

It was a dark room (TV control room) and the black power cable was clotheslined about 1 foot above the dark carpet floor.

I came through the room at speed, my foot caught the cord and whipped the drive across the room.

Luckily 1 TB drives are cheap these days, but unfortunately no backup had been made and we lost 6 hours of multicamera footage...