The Samsung met its "write death" on August 20, so I figured I would check it again for readability on September 20. I plugged the drive in and powered up my computer, but neither Windows nor the BIOS could see the Samsung SSD. I tried rebooting a couple of times, to no avail.

Then I powered down, disconnected the SSD, and brought it to another computer with an eSATA port and an external SATA power connector. Disaster! When I went to plug the power into the SSD, my hand slipped and bent the connector sideways, snapping off the plastic ridge of the SATA power connector. The metal pins are still there (and still soldered to the PCB), but they are no longer stabilized by the plastic ridge. I found that it is still possible to get the SSD to power up by working a SATA power connector onto the pins at the right position (they have a warp/bend to them that actually helps a little), but I am not certain that they are all making contact. It is enough, though, that when I "hot plug" the SSD, Intel RST notices something, although it never manages to mount the drive.

I've been contemplating trying to repair the connector, but I have not yet come up with a good plan. Possibly I can super-glue the plastic ridge back on, but since it is in two pieces, I think it is going to be difficult to line up properly. I'm also thinking about soldering on another SATA power connector (if I can salvage one from a dead HDD), but there is a lot of solder there, and if I get it hot enough to desolder, I worry I might disturb some of the other components on the SSD PCB. So I haven't done anything yet.

Actually, if anyone reading this is experienced at this sort of thing and would like to contribute to this thread, I'd be happy to send the SSD to you for repair. You could then keep it (if you are willing to run the read-only tests yourself) or send it back, whichever works best for you.

Johnw - I have a pretty high-end forensics/data-recovery lab over here. A SATA connector is very easy for me to repair. Furthermore, I actually have the ability to take the NAND chips right off the Samsung and read them directly with a specialized device to see how bad it is. I will be doing that to my Intel when it finally dies.

I am in Canada though.

Your lab sounds awesome! Is Canadian customs difficult about shipping electronic components from the US? Or can I just fill out a customs form and ship the drive to you?

I'm starting to think the drives either like your motherboard or they hate it. End of story.
...

I'm not convinced about that; it's more like some SSDs are OK and some are not (it could be a combination, of course).

My 240GB SF-2281 drives have never caused BSODs, just the 60GB Agility and the 120GB Force 3.
(not sure about the 120GB Force GT, it might have had issues)
None of the 240GB drives have been used in endurance testing though.

Although the Samsung performed admirably, I can't help thinking that it should have flagged a warning (via SMART) once a critical endurance threshold had been reached, and then switched the drive to read-only after a warning period. At least it would then have failed gracefully.
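
If anyone wants to keep an eye on that kind of threshold themselves, here is a rough Python sketch that shells out to smartctl (from smartmontools). The attribute ID is an assumption on my part: 177 (Wear_Leveling_Count) is what Samsung drives typically report, while Intel uses 233, so check your own drive's attribute table first.

Code:
import subprocess

# Rough sketch: poll a drive's SMART wear attribute via smartctl
# (smartmontools). The attribute ID is an assumption: 177
# (Wear_Leveling_Count) on Samsung drives; Intel uses 233
# (Media_Wearout_Indicator); other vendors differ.
WEAR_ATTR_ID = 177
WARN_THRESHOLD = 10   # normalized value at which to start worrying

def wear_remaining(device="/dev/sda"):
    """Return the normalized value (100 = new) of the wear attribute."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == str(WEAR_ATTR_ID):
            return int(fields[3])   # VALUE column of the attribute table
    return None

value = wear_remaining()
if value is not None and value <= WARN_THRESHOLD:
    print(f"Wear indicator at {value}: back up and plan a replacement!")
else:
    print(f"Wear indicator: {value}")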

According to JEDEC JESD218A, "The SSD manufacturer shall establish an endurance rating for an SSD that represents the maximum number of terabytes that may be written by a host to the SSD." It then outlines integrity conditions that the SSD must retain after the maximum amount of data has been written:

1) The SSD maintains its capacity
2) The SSD maintains the required UBER for its application class
3) The SSD meets the required functional failure requirement (FFR) for its application class
4) The SSD retains data with power off for the required time for its application class

The requirement for retention of data in a powered-off condition is specified as 1 year for Client applications and 3 months for Enterprise (subject to temperature boundaries).
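
For a rough sense of how such a rating comes together, the usual back-of-the-envelope arithmetic looks something like the sketch below. To be clear, this is not the JEDEC verification procedure itself, and the input numbers are made up for illustration, not Samsung's real figures.

Code:
# Back-of-the-envelope endurance estimate; NOT the JEDEC test procedure.
# Every input below is an illustrative assumption, not Samsung's figure.
capacity_gb = 64           # user capacity of the drive
rated_pe_cycles = 5000     # rated program/erase cycles for the NAND
write_amplification = 1.5  # average NAND writes per host write

tbw = capacity_gb * rated_pe_cycles / write_amplification / 1000
print(f"Estimated host-writable endurance: ~{tbw:.0f} TB")
# ~213 TB with these made-up inputs. JEDEC then requires the drive to
# still meet its UBER, FFR and power-off retention targets at that point.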

I'm really not sure why the MWI (media wearout indicator) appears to be so conservative. Does it really represent the point at which the endurance threshold to maintain integrity (per the JEDEC spec) has been passed? The Samsung wrote over 3½ times the data required to expire the MWI. Are you really supposed to throw the drive away when the MWI expires?
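
If the MWI works the way Intel documents its Media Wearout Indicator (and I'm only assuming Samsung's works similarly), it just counts down linearly from 100 as average erase cycles approach the rated figure, pegging at 1 thereafter. That would be consistent with the drive writing 3.5x past expiry and carrying on regardless:

Code:
# Sketch of a linear wear indicator, the way Intel documents its Media
# Wearout Indicator. Assuming (unverified) that Samsung's is similar.
def media_wear_indicator(avg_erase_cycles, rated_cycles):
    """Counts down from 100 (new) and pegs at 1 once the rating is reached."""
    return max(1, 100 - round(100 * avg_erase_cycles / rated_cycles))

print(media_wear_indicator(2500, 5000))    # half the rated cycles -> 50
print(media_wear_indicator(5000, 5000))    # rating reached -> pegged at 1
print(media_wear_indicator(17500, 5000))   # 3.5x the rating -> still 1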

It will be really interesting to see what One_Hertz can uncover on the condition of the NAND.

Anyway, I came across an interesting paper from SMART Modular Technologies. This is the second time I've seen compressibility referred to in terms of data randomness. Does anyone know why randomness of data is linked to compressibility?
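
My rough understanding is that compression works by squeezing out redundancy, and truly random data has no redundancy to squeeze, so "random" and "incompressible" end up being two names for the same property. A quick Python/zlib test shows the gap (and why a compressing controller like SandForce's would care so much about data randomness):

Code:
import os
import zlib

# Random bytes have maximal entropy (no redundancy), so compression
# cannot shrink them; highly repetitive data collapses to almost nothing.
random_data = os.urandom(1_000_000)
repetitive_data = b"A" * 1_000_000

for label, data in (("random", random_data), ("repetitive", repetitive_data)):
    compressed = zlib.compress(data, 9)
    print(f"{label:>10}: {len(data)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(data):.1%})")
# The random input typically comes out slightly LARGER than it went in,
# which is why a compressing controller gets no help from random data.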

With regards to the Samsung, surely only enough blocks have to go bad that the drive can't enforce its own ECC or data integrity scheme. Not every block, or even 25% of blocks, could be bad... right?

Anvil,

I have to think that a lot of SF2281 drives are just BSODs waiting to happen... I'm not sure why some drives seem to have problems, but endurance testing seems to tease it out. I wouldn't be at all surprised to learn that SF just can't handle days on end of endurance loads, regardless of motherboard. However, I've seen marked improvement with the H67, if not complete rock-solid stability. Even with a normal desktop load, some motherboards and drives just don't work well together.

I really want another Mushkin Chronos Deluxe to play with, but I'm running out of systems to use in such a small space. I'd need a bigger apartment for another system. Might be worth it though.

OCZ is the poster child for spontaneous bluescreen interruptus, but only because it's lonely at the top -- they must sell more 2281s than everyone else put together. There won't be a recall, because no one knows why it happens. As long as SandForce can lay this at the feet of Intel, nothing is going to happen. The fact that other drives don't have the same problem is a further indictment of SandForce.

The pentagrams drawn in virgin's blood didn't help either... the drive crashed a few minutes after I posted the update. Aggravating.

EDIT:
I've given up on the MSAHCI drivers. They're slow, and although I was able to hack the ports to internal-only, I then had to manually power-cycle the system to get the drive back. So I'm running back to RST, where I'm at least getting 127+ MB/s average. And it only crashes after 24-32 hours. That sucks, but that's life. I'm just glad my very first SSD wasn't a SF2281... it would have ruined me.

There is little chance of a recall. That is what SHOULD happen, but because SandForce is all about the $$$, they will never do it, and they'll keep claiming that only 10% of drives are affected when it is clearly more like 50%. I was tempted to buy SF, but this issue is too much to overlook.

LOL, you guys may need to start thinking that the host may have something to do with the BSODs, or that it's something the host and the drive do together.

Also, I just posted a blurb about talking to Asus: they are stuck on OROM 10.5 and lock down the MEI. GBT and MSI are not locked down and do not lock the MEI, and they have far fewer issues with UEFI on the newer OROMs (the 11 series, for example). Note that Intel has moved all of its Sandy Bridge boards to the 11-series OROM as well.

Funny how the 11 series was supposed to be for X79 boards only.

Back to writing voodoo posts, ta ta.

The move to the 11-series OROM hasn't yielded much in the way of results, if that's indeed the case. I finally gave up on my Intel mobo and switched to a Biostar H67, to somewhat more satisfaction, but only when using the RST drivers. I should point out that my Intel DP67BG had a recent UEFI release that was recalled due to incompatibility with many late-model nVidia GPUs. One of the only changes in it was to RST, so I've wondered whether that pulled release would have helped in some way. I still have it, and I think I can get it to work, so after the next crash I may give it a shot.

I don't want to speak for Anvil, but I think our thoughts are similar as to whether our drives would behave properly under more pedestrian workloads. I certainly believe that they would, and I think I am right in saying that Anvil thinks along those lines as well. The current longest consecutive run between crashes for the two SF2s is 51 hours, on Anvil's X58 rig; runs for both drives in 1155 boards are substantially shorter. It's not a question of if, but when. I think some drives just despise some motherboards, and you can mask the instability... but it seems that around here it's going to pop up consistently, if not in a timely fashion. I have tried a few of your suggestions, and in fact I'm doing a little testing for science now... a little twist on the Tony style. I'm not really expecting much, but I'm optimistic that I'll stumble into something, and I'm not giving up.

By the way, no one here coined the moniker "voodoo" thread -- that illustrious honor belongs to "scottylans". I'm not dissing OCZ for anything they don't deserve (ahem... plastic chassis...), and collectively I've only had positive experiences with the crew over there, so they deserve some praise as well.

Good luck with the voodoo; I'll be prepping the goats and practicing my incantations.

I believe we are trying most of the suggestions made on your forum; I just found "voodoo science" to be a funny expression, that's all.

I'm pretty sure that my Asus M4E-Z (Z68) has the 10.6 OROM; there have been almost no updates for that particular MB, so I'm looking for 11.x as well.
The M4E-Z is the next one on my list. If the current setup fails, I'll try disabling hot plugging; that is probably the last thing I can try on the current OROM that could make a difference.

Originally Posted by Tony

Also, I just posted a blurb about talking to Asus: they are stuck on OROM 10.5 and lock down the MEI. GBT and MSI are not locked down and do not lock the MEI, and they have far fewer issues with UEFI on the newer OROMs (the 11 series, for example). Note that Intel has moved all of its Sandy Bridge boards to the 11-series OROM as well.

Could you talk to EVGA as well, please? All of their recent boards (P67, Z68 and X58) are stuck with 10.5 as well, which is... not good.

Originally Posted by Tony

LOL, you guys may need to start thinking that the host may have something to do with the BSODs, or that it's something the host and the drive do together.

Innuendo blaming everyone (including end users) and the dog is not helpful, and it just comes across as an attempt to deflect the problem away from the SF2xxx products.

Originally Posted by Tony

Also, I just posted a blurb about talking to Asus: they are stuck on OROM 10.5 and lock down the MEI. GBT and MSI are not locked down and do not lock the MEI, and they have far fewer issues with UEFI on the newer OROMs (the 11 series, for example). Note that Intel has moved all of its Sandy Bridge boards to the 11-series OROM as well.

Funny how the 11 series was supposed to be for X79 boards only.

More innuendo with no substantiation... oh, let me guess: NDA.

I have updated the firmware on my V3 whenever f/w versions have been released. I ran the endurance app for 7 days straight (~168 hours). I have installed Win 7 and Win 8 in my main PC without any tweaks whatsoever. I’ve installed the V3 as an OS drive in two different laptops, again with zero tweaks. I have used sleep in all instances.

Not a single problem to report. SF2xxx drives seem to either work or not work regardless of the configuration.

Despite the fact that some drives appear to work OK, the potential for data loss, combined with the fact that neither SF nor any of the vendors knows what the problem is, makes the situation unacceptable. Forget trying voodoo; in my view, SF2xxx products should be recalled.