Zenskas

Seagate Barracuda 7200.11 Troubles

Recommended Posts


Has anyone fixed their hard disk and updated to the SD1A firmware? For those who have done so: can you check in Seagate SeaTools whether the long DST fails? Mine is failing the long DST test, but the disk is working fine. I managed to format it to 100% and to do a surface scan in Windows XP as well, and everything looks OK. Just wondering if anyone else is failing the long DST too.

Thanks!

And in any case, the "objective" of the speculation:
Without official data, and as clearly stated, the above numbers are just speculative; but while they might be inaccurate, the order of magnitude seems relevant enough to rule out that the 100-150 reports here on MSFN represent a significant fraction (1/7 or 1/8) of all affected drives.

was NOT to determine exactly:

  • how many drives were produced
  • how many drives are affected and have developed the problem

only to check whether it is a matter of a few hundred (a "few" or a "handful") as opposed to several thousand (a "lot" or "too many to count").

jaclaz

But those last two questions ARE what I'm trying to figure out. I'm not arguing with the figures you have come up with - they seem perfectly reasonable to me.

I should probably state what I'm looking for like this: If I bought a 7200.11 recently or I plan to,

1) what are the chances that it is/will be one of the affected ones (per Seagate), and

2) If it is one of the affected ones, what are the chances that it will fail by locking on boot?

#2 is a no-brainer: if I had a known affected drive, I would just update the firmware immediately because there is a substantial enough (to me) chance of it locking up in the near future.

#1 is easy if you have the machine or drive right in front of you - just punch the number in on the Seagate site. You either have an affected one or you don't.

#1 becomes a little trickier if the machines are not local, you don't have network access to them, and you don't know the exact model, S/N or firmware version, but you know they're almost all aftermarket 7200.11s. This is a problem one of my coworkers 'inherited' from a different company. These machines will all be replaced in the next year or two, and they can probably get away with just replacing whichever ones go down first. But how many are likely to be in the affected group now? They may choose to take their chances if 1% are affected, but they would need to accelerate the replacement schedule if 30% of the machines have an affected drive and might lock up.
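To make the trade-off concrete, here is a back-of-envelope sketch in Python. The fleet size is purely hypothetical (the thread never states how many machines the coworker inherited), and the rates are just the candidates mentioned: Seagate's claimed 0.2%, a modest 1%, and the 30% worst case from bulk-buyer reports.

```python
# Hypothetical fleet size; not a figure from this thread.
fleet = 200

# Expected number of affected (at-risk) drives under each candidate rate.
for rate in (0.002, 0.01, 0.30):
    expected = fleet * rate
    print(f"{rate:6.1%} affected -> ~{expected:.0f} drives at risk")
```

At 0.2% the expected count rounds to zero and waiting is defensible; at 30% roughly a third of the fleet is a ticking clock, which is exactly the "accelerate the replacement schedule" scenario.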

The population he's interested in is whatever the distributors/retailers, i.e., TigerDirect/Newegg/Amazon/Big Box stores (USA) have been selling the last year - that's where the drives supposedly came from. I think that also applies to a lot of the people posting here as well, except I have no idea what the big retailers are outside the US.

The only other information (aside from here) is the occasional customer review or posting in other forums where someone bought these in bulk from the same sources. There have been reports of a 30-40% failure rate, or at least that fraction being in the affected group. I would guess that's at the high end of the range, simply because unhappy customers are more likely to post reviews than satisfied customers. But is the 'real' figure 0.2%, as Seagate claims for the entire product family? It appears to be higher than that, but does that make it 0.5% or 10% or 35% for retail drives? I was just scrounging around for a reasonable number in the absence of anything useful from the vendors or Seagate.

The issue of if or how many OEM drives are affected has no real impact on me, personally. I'm just curious about that one since it came up.

My experience (quite meaningless): six 7200.11s, two affected, one bricked. I unbricked it successfully with the instructions here, so thanks to Gradius, fatlip and the hddguru forum people for your help.

My experience (quite meaningless) is six 7200.11s, two affected one bricked. I unbricked it successfully with the instructions here, so thanks to Gradius, fatlip and the hddguru forum people for your help.

I also have six 7200.11 drives and one went down a week ago to the BSY bug after about 5 months of use. I am waiting on a TTL adapter so that I can attempt to unbrick it, and the reports here seem very encouraging. I don't want to go through the hassle of RMAing the drive and waiting an eternity to get it back. I currently have all of the drives powered down until I can fix the affected one and make sure my RAID is still OK before I patch each individual drive's firmware.

It's really frustrating how your drive can be perfectly fine one day and then *zap*, the data is inaccessible. HDDs are not something I play around with -- I purchased an expensive high-end PSU and RAID controller just to help prevent random failure/data loss from occurring. Seagate really dropped the ball badly here and they darn well better have it fixed with the new firmware. I'm not totally convinced yet. Thank %deity% I went with RAID-6. Funny coincidence, though, that I lost a drive within a couple of days of first reading about the issue.

I also think it's a total crock that only 0.2% of all drives are affected. It's public relations damage control. If the root cause analysis on this forum is correct, then it's only a matter of time before EVERY affected drive prematurely fails. You are rolling the dice every single time you reboot, and I hit the mark only rebooting about once a week for five months. Most people power down every night. Seagate even recommends proactively patching the firmware, but NO ONE who isn't already "in the know" is going to even think about patching a HDD firmware until after their drive is already borked. It seems crazy that they'd ask someone like my grandmother to patch firmware on a drive that already contains sensitive data that (probably) isn't backed up. This whole issue is a timebomb and it's going to get much, much worse as regular non-technical folks start losing all of their family photos and documents, then start flipping out and suing Seagate. This "free data recovery" is really going to put the hurt on Seagate as well. Repairing and patching all of these drives and attempting to copy data from them is a laborious process, I'm sure. We are only at the onset of the crapstorm -- I expect this to make mainstream headline news this year.

I should probably state what I'm looking for like this: If I bought a 7200.11 recently or I plan to,

1) what are the chances that it is/will be one of the affected ones (per Seagate), and

2) If it is one of the affected ones, what are the chances that it will fail by locking on boot?

#2 is a no-brainer: if I had a known affected drive, I would just update the firmware immediately because there is a substantial enough (to me) chance of it locking up in the near future.

#1 is easy if you have the machine or drive right in front of you - just punch the number in on the Seagate site. You either have an affected one or you don't.

Well, your solution for #1 is not an answer to the question; it's the solution to the problem, which is a different thing.

It is not possible to calculate the chances of a given drive being part of the affected lot, as we do not know the exact production volumes, nor the ratio of problematic to good testers in the actual factories.

If we knew this, we would have some of the factors involved. For example (all assumptions):

Seagate produced 5,000,000 drives of the family in 2008.

On average, 5,000,000/360 ≈ 13,000 drives/day (on a 24/7 timetable, though I cannot say whether that is how the factories actually work).

Let's say that these drives are manufactured in 3 factories, thus each factory outputs roughly 4,333 drives/day, i.e. 180 drives/hour

How long does it take to produce/assemble a single drive? :unsure:

Let's say that 10 minutes are needed per drive; each line then produces 6 drives per hour, so the parallelism of the production line has to be 30, i.e. they need to have 30 identical lines in the factory: 30×6 = 180.

If just one of the testers is of the faulty type, 1/30 of drives will be affected, i.e. 3.33%

If half an hour (30 minutes) is needed, the parallelism becomes 90 (2×90 = 180) and the chances become 1/90 ≈ 1.11%.

But without actual data, all the above cannot represent the "real world". Still, the initially assumed 0.002 would mean an overall parallelism of 1/0.002 = 500, which you may imagine as 3 factories with 166 production lines each, or 10 factories with 50 lines each, in which just one single tester is of the defective type. A bit hard to believe (or not?) :unsure:
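The arithmetic above can be condensed into a tiny Python sketch. The inputs (180 drives/hour per factory, one faulty tester, 10 or 30 minutes per drive) are jaclaz's speculative assumptions, not Seagate figures:

```python
def lines_needed(drives_per_hour, minutes_per_drive):
    """Number of parallel production lines needed to sustain the output."""
    drives_per_hour_per_line = 60 / minutes_per_drive
    return drives_per_hour / drives_per_hour_per_line

def affected_fraction(drives_per_hour, minutes_per_drive, faulty_testers=1):
    """Fraction of a factory's output passing through a faulty tester."""
    return faulty_testers / lines_needed(drives_per_hour, minutes_per_drive)

# 10 minutes per drive -> 30 lines, 1/30 of output affected (~3.33%)
print(lines_needed(180, 10), affected_fraction(180, 10))
# 30 minutes per drive -> 90 lines, 1/90 of output affected (~1.11%)
print(lines_needed(180, 30), affected_fraction(180, 30))
```

The point of the sketch is only the order of magnitude: the slower the per-drive cycle time, the more parallel lines are needed, and the smaller the share of output any single faulty tester can touch.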

As well, your solution for #2 is not an answer to the question: the chances that an affected drive will develop the problem are 100% given enough time. This is due to the nature of the problem; a circular buffer will hit one of the (320+n*256) values sooner or later.
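That "sooner or later" can be made concrete with a toy model. Assuming (a simplification on my part, not a confirmed Seagate figure) the log pointer is equally likely to sit at any of its 256 wrap positions at each power-on, every boot is an independent 1/256 chance of landing on a fatal (320 + n×256) value:

```python
P_FATAL = 1 / 256  # assumed per-boot chance; the real distribution is unknown

def prob_bricked_within(boots, p=P_FATAL):
    """Probability an affected drive has locked up within `boots` power-ons."""
    return 1 - (1 - p) ** boots

# Weekly reboots for ~5 months (about 22 boots): roughly an 8% chance.
print(prob_bricked_within(22))
# Daily shutdowns for a year (365 boots): roughly a 76% chance.
print(prob_bricked_within(365))
```

Under this assumption the probability never reaches 100% at any finite boot count, but it approaches it relentlessly, which is exactly why flashing an affected drive immediately is the only sane choice.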

But is the 'real' figure 0.2%, as Seagate claims for the entire product family? It appears to be higher than that, but does that make it 0.5% or 10% or 35% for retail drives? I was just scrounging around for a reasonable number in the absence of anything useful from the vendors or Seagate.

Yep, that is the most "questionable" data:

some percentage

can mean ANYTHING bigger than 0% and smaller than 100% (i.e. 0.000% < p < 100.000%).

jaclaz


Oh my...

It seems that there are still some issues with Seabrick...

It is not clear, but from the thread title I can suppose it is the SD1A.

Comment by warpandas from 01-30-2009 07:24 AM:

http://forums.seagate.com/stx/board/messag...ing&page=10

"I completed a successful firmware upgrade on my ST31000340AS one week ago after receiving problems of an I/O device error in Windows Vista when I tried to access my data on the harddrive. Now, one week later, I am completely unable to access my harddrive. BIOS will not detect it.

Anyone else?"

There is another guy (avivahl) reporting similar problems with ST3500320AS in this same thread.

Edited by SpXuxu


If the data is important and cannot be reproduced, you had better back it up; you never know what may happen. For example, if a power failure happens during the firmware update, you might be staring at a brick. Good luck!

Do I have to back up my 1 TB hard disk before I update the firmware? Has anyone had a failure using the update for the 1 TB SD15 firmware?

TIA

Regards

Leigh

[quoting SpXuxu's post above reporting possible problems after the SD1A update, from warpandas and avivahl]

Well, a couple of my posts have been deleted on the Seagate forums for calling into question whether their FW was written for the wrong revision of the drives (i.e., written for 7200.11 drives that are actually 7200.9). After talking to Seagate support, I found out that even though the label on my drive says ST3500320AS (in two places) and 7200.11, according to Seagate's computers my serial number belongs to a ST3500641AS, which, after looking it up, is a 7200.9. The funny thing is that, as far as I can tell, all the problems are with the 7200.11; so if the ST3500641AS really is a 7200.9, why am I sitting here with a bricked drive that went out just like a 7200.11? Then the question is: how many drives out there are labeled wrong? Could it be all the ones with all the troubles?

So when I posted that maybe they should be looking into whether they were writing code for the wrong drives, they deleted the posts.

Whenever I update any other FW, the instructions always say to make sure you have the right model number and revision number, because if not you may render the device inoperable; that sounds a lot like these hard drives to me. But which is worse for Seagate: admitting they have a bug in the FW, or selling drives as one type when they are not that type? Could it have been a corporate decision to label the drives wrong, or was it an accident, and when did they find out?

Edited by Jake36


We could probably figure out some of this information from the controller diagnostics port (the RS-232 / terminal thing) using the output of the Ctrl+L command, e.g.:

[UNKNOWN MODEL]
TetonST 2.0 SATA Moose Gen 3.0 (RAP fmt 10) w / sdff (RV)
Product FamilyId: 27, MemberId: 03
HDA SN: 9QK01LX2, RPM: 7206, Wedges: 108, Heads: 6, Lbas: 575466F0, PreampType: 47 A8
PCBA SN: 0000C816C5P6, Controller: TETONST_2(6399)(3-0E-3-1), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY Rev 51, BufferBytes: 2000000

[UNKNOWN MODEL]
TetonST4 SATA Brinks Gen3.1 (RAP14)
Product FamilyId: 2D, MemberId: 07
HDA SN: 6SZ<censored>, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0, PreampType: 59 21
PCBA SN: <censored>, Controller: TETONST_4(63A0)(3-0E-4-0), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY DESKTOP LITE Rev 94, BufferBytes: 1000000

7200.11 ST3320613AS SD22
TetonST4 SATA Brinks Gen3.1 (RAP14)
Product FamilyId: 2D, MemberId: 07
HDA SN: 6SZ02ZXZ, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0, PreampType: 73 01
PCBA SN: 0000M847KPRX, Controller: TETONST_4(63A0)(3-0E-4-0), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY DESKTOP LITE Rev 94, BufferBytes: 1000000

7200.11 ST3320613AS
TetonST4 SATA Brinks Gen2 1-Disc (RAP14)
Product FamilyId: 2D, MemberId: 03
HDA SN: 9SZ081HN, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0, PreampType: 73 21
PCBA SN: 0000C832VTX8, Controller: TETONST_4(63A0)(3-0E-4-0), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY Rev 94, BufferBytes: 1000000

We would be left guessing what this reveals about the drive internals, but we could probably figure out some kind of patterns with enough data.
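For anyone collecting these dumps, a throwaway parser like the following could tabulate the interesting fields across many drives. The field names are taken verbatim from the sample Ctrl+L output above; the parsing approach itself is just my guess at a useful shape, not anything Seagate documents:

```python
import re

# One of the dumps quoted above (ST3320613AS, SD22).
SAMPLE = """7200.11 ST3320613AS SD22
TetonST4 SATA Brinks Gen3.1 (RAP14)
Product FamilyId: 2D, MemberId: 07
HDA SN: 6SZ02ZXZ, RPM: 7203, Wedges: 108, Heads: 2, Lbas: 2542EAB0, PreampType: 73 01
PCBA SN: 0000M847KPRX, Controller: TETONST_4(63A0)(3-0E-4-0), Channel: AGERE_COPPERHEAD_LITE, PowerAsic: MCKINLEY DESKTOP LITE Rev 94, BufferBytes: 1000000"""

def parse_ctrl_l(dump):
    """Extract 'Key: value' pairs from a Ctrl+L diagnostic dump."""
    fields = {}
    for key, value in re.findall(r'([A-Za-z ]+):\s*([^,\n]+)', dump):
        fields[key.strip()] = value.strip()
    return fields

info = parse_ctrl_l(SAMPLE)
# The Lbas field is the sector count in hex; x 512 bytes gives capacity.
sectors = int(info['Lbas'], 16)
print(info['HDA SN'], info['Heads'], sectors)
```

With enough of these collected, one could group by FamilyId/MemberId, PCBA serial prefix, or PowerAsic revision and look for the pattern everyone is guessing at.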

And if you want to see your posts deleted quickly on the Seagate forums, just try asking about ANY of the information above. They will not even acknowledge the diagnostics port exists and certainly don't want users poking around in the drive controller's ROM.

Edited by Gibby

[quoting SpXuxu's post above reporting possible problems after the SD1A update, from warpandas and avivahl]

Well, so far I have had no reports of that, thanks to the guide at: http://www.msfn.org/board/index.php?showtopic=128807

I estimate it has helped fix over 1000 HDDs so far, without any problem, and absolutely no errors have been reported.

Someone with 150 HDDs with BSY and LBA problems told me he recovered 100% of them, without issues or data loss.

I'm using 6 HDDs; 2 of them had the BSY error and 1 had the wrong firmware. I updated those 4 (SD15) to SD1A and they are all running just fine at this very moment.

Gradius

Edited by Gradius2

[quoting Gradius's reply above: the guide has helped fix over 1000 HDDs with no errors reported]

Glad to hear that.

Maybe 2 in one million is not that significant (unless it happens to you).

I'm using 6 HDDs, 2 them were with BSY error, and 1 with wrong firmware, I updated those 4 (SD15) to SD1A and they all are running just fine at this very moment.

Gradius

I thought the firmware version out there now was SD1B and not SD1A...

I upgraded my ST31000333AS from SD15 to SD1B, although my drives didn't fail before the upgrade.

Edited by mikesw

I thought the firmware version out there now was SD1B and not SD1A... I upgraded my ST31000333AS from SD15 to SD1B, although my drives didn't fail before the upgrade.

So far it seems there is a new 7200.11 symptom with SD1A:

http://www.msfn.org/board/index.php?showtopic=129459

Well, SD1B and SD2B are for other HDDs.

SD1B:

ST31500341AS: http://support.seagate.com/firmware/Brinks-4D8H-SD1B.ISO

ST31000333AS: http://support.seagate.com/firmware/Brinks-3D6H-SD1B.ISO

ST3640323AS: http://support.seagate.com/firmware/Brinks-2D4H-SD1B.ISO

ST3640623AS: http://support.seagate.com/firmware/Brinks-2D4H-SD1B.ISO

While these use SD2B:

ST3320613AS: http://support.seagate.com/firmware/Brinks-1D2H-SD2B.ISO

ST3320813AS: http://support.seagate.com/firmware/Brinks-1D2H-SD2B.ISO

ST3160813AS: http://support.seagate.com/firmware/Brinks-1D1H-SD2B.ISO
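To avoid grabbing the wrong ISO, the lists above can be expressed as a small lookup table. This is a convenience sketch built only from the links in this post; any model not listed here (and especially any drive already on CC firmware, per the Seagate note below) should not be flashed with these images:

```python
# Model -> (firmware version, ISO filename), reconstructed from the
# download links quoted in this post. Not an official Seagate mapping.
FIRMWARE = {
    'ST31500341AS': ('SD1B', 'Brinks-4D8H-SD1B.ISO'),
    'ST31000333AS': ('SD1B', 'Brinks-3D6H-SD1B.ISO'),
    'ST3640323AS':  ('SD1B', 'Brinks-2D4H-SD1B.ISO'),
    'ST3640623AS':  ('SD1B', 'Brinks-2D4H-SD1B.ISO'),
    'ST3320613AS':  ('SD2B', 'Brinks-1D2H-SD2B.ISO'),
    'ST3320813AS':  ('SD2B', 'Brinks-1D2H-SD2B.ISO'),
    'ST3160813AS':  ('SD2B', 'Brinks-1D1H-SD2B.ISO'),
}

def firmware_for(model):
    """Look up the firmware image for a model; refuse anything unlisted."""
    if model not in FIRMWARE:
        raise ValueError(f'{model}: not in this list - do not flash blindly')
    return FIRMWARE[model]
```

For example, `firmware_for('ST31000333AS')` returns the SD1B image, while an unlisted model such as ST3500320AS raises an error rather than guessing.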

From Seagate:

Note: If your drive has CC firmware, your drive is not affected and no further action is required. Attempting to flash the firmware of a drive with CC firmware will result in rendering your drive inoperable.

Funnily enough, they removed LC from the note above.

Gradius

It seems CC drives are affected after all:

[attached screenshot: 31000333asxf4.jpg]

Known issue:

Clicking: the drive doesn't read/write while clicking, leading to a BSOD or other errors.

Edited by puntoMX

