No initialization delay. Ultra-fast performance. Scalable using larger thumb drives. Hackable by enthusiasts. What do folks think? Should I do this? Would folks buy it if I did? Any input or feedback is appreciated.
You will also run into very strange performance trouble, as you have no way of ensuring that the drives will actually keep performing in sync.
Flash memory does not write and erase in sync without chips controlling that. Can you point me to a reference? This is an issue I need to drive to ground. I was thinking there would not be a reliability issue because the MLC chips come from only a few vendors and all have the same fundamental reliability assuming similar wear leveling.
SLC is, of course, more reliable, but its use is diminishing due to cost and capacity. There are, by the way, SLC thumb drives, but I wasn't planning on using them. As for sync, I do have independent PHYs on all thumb drives and am writing them in sync.
Their internal clocks are independent, so I would expect to see them out of phase by a few edges of the 60 MHz clock, but the PHYs and my own internal elastic buffers will "hide" this from the user. I appreciate inputs.
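Purely to make the elastic-buffer idea concrete, here is a rough Python sketch, not the actual PHY/FPGA logic; the 100 ppm frequency offset and the simple FIFO model are assumptions. It shows two nominally 60 MHz domains whose small mismatch is absorbed by a FIFO sitting between them:

    from collections import deque

    # Rough sketch: two "clock domains" at nominally 60 MHz, one assumed to
    # run 100 ppm fast.  A small FIFO (the elastic buffer) between them
    # absorbs the short-term phase drift; because the average rates match,
    # its occupancy only wanders by a few entries.
    PRODUCER_HZ = 60_000_000
    CONSUMER_HZ = 60_000_000 * (1 + 100e-6)   # 100 ppm fast (assumed)
    SIM_TIME = 1e-3                           # simulate 1 ms

    fifo = deque()
    produced = consumed = 0
    next_prod = next_cons = 0.0
    max_depth = 0
    t = 0.0

    while t < SIM_TIME:
        t = min(next_prod, next_cons)
        if t == next_prod:                    # producer domain pushes a word
            fifo.append(produced)
            produced += 1
            next_prod += 1 / PRODUCER_HZ
        else:                                 # consumer domain drains a word
            if fifo:
                fifo.popleft()
                consumed += 1
            next_cons += 1 / CONSUMER_HZ
        max_depth = max(max_depth, len(fifo))

    print(f"produced={produced} consumed={consumed} max FIFO depth={max_depth}")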
The wear leveling is in the controller, not in the memory chip, unless they've combined the USB interface and memory into a single chip. I expect the simplest of wear-leveling algorithms in a USB flash drive, but significantly improved wear leveling in even the cheapest SSD.
USB thumb drives are bottom-of-the-barrel quality flash chips. I thought everyone knew this. Now I have to find numbers. It quotes between 10, and , writes per cell; since flash only overwrites a cell once all cells have been used, you do get a bit more in reality.
Many sources I find, like this one and this one, quote between 1 and 5 million writes as typical for SSD flash. However, this might differ wildly. Most wear-leveling controllers simply have a set maximum of safe writes and will lock down as soon as that is reached and no spare cells are left. Very primitive controllers might not even have that safeguard; these would likely be memory cards, which may allow writing until the cells actually fail, which should not be far off what the wear-leveling controller would have expected anyway.
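To illustrate the kind of behaviour being described, here is a toy wear-leveling sketch in Python. The block count and the 10-erase endurance limit are made-up numbers, and no vendor's firmware actually looks like this; the point is only the least-worn-block redirection and the lockdown once every block hits its limit:

    # Toy wear-leveling sketch -- illustrative only, with made-up numbers
    # (16 physical blocks, a 10-erase endurance limit); real controller
    # firmware is far more involved.
    BLOCKS = 16
    ENDURANCE = 10

    erase_count = [0] * BLOCKS      # erases seen by each physical block
    mapping = {}                    # logical block -> current physical block

    def write(logical_block):
        """Redirect a logical write to the least-worn usable physical block."""
        usable = [b for b in range(BLOCKS) if erase_count[b] < ENDURANCE]
        if not usable:
            raise IOError("locked down: every block is at its rated erase limit")
        target = min(usable, key=lambda b: erase_count[b])
        erase_count[target] += 1
        mapping[logical_block] = target
        return target

    # Hammering one logical block still spreads wear over every physical
    # block, so the device survives BLOCKS * ENDURANCE writes, not ENDURANCE.
    writes = 0
    try:
        while True:
            write(0)
            writes += 1
    except IOError as exc:
        print(f"{writes} writes before lockdown ({exc})")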
Which just HAS to be very wrong. But reading a bit more, I now understand that he wrote a small number of bytes on a gigabyte USB flash drive, and it moved the write to the next least-used block so many times before he actually did the first over-write that it kind of makes sense. SD, thanks for digging that material back up. I didn't mean for you to go do research on my behalf; I thought you'd have it handy. In any case, I appreciate your effort.
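A rough back-of-the-envelope check of why that makes sense, using assumed geometry (a 1 GB drive with 128 KB erase blocks; real figures vary by part):

    # Back-of-the-envelope with assumed geometry: a 1 GB drive and 128 KB
    # erase blocks.  Every small write lands on a fresh least-used block,
    # so the first physical overwrite only happens after every block has
    # been written once.
    drive_bytes = 1 * 1024**3
    erase_block = 128 * 1024
    blocks = drive_bytes // erase_block
    print(f"{blocks} small writes before the first block is rewritten")   # 8192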
The manufacturer obviously can't test every unit, both because it would destroy the device and because of the added cost.

Microsoft's in-house solution, Disk Management, does not allow me to make a RAID of removable drives, so I tried to find third-party software that does this, but (am I blind?) I couldn't find one, and bit flipping or whatnot to trick removable drives into appearing as fixed doesn't work. I hope someone knows a great third-party tool that creates a RAID-0 out of whatever drives; that would be the best. But enough babbling: how do I make a RAID-0 of two USB flash memories?
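Not a Windows answer, but for reference, on Linux this is a couple of mdadm calls. A minimal sketch follows, assuming two sticks at /dev/sdb and /dev/sdc, a 64 KiB chunk, and a /mnt/usbraid mount point; those names are placeholders, and the commands wipe whatever is on the drives:

    import subprocess

    # Hedged sketch of the Linux route (Windows Disk Management refuses
    # removable drives).  /dev/sdb, /dev/sdc, the 64 KiB chunk and the
    # mount point are assumptions; adjust for your own sticks.  This
    # destroys whatever is currently on them.
    devices = ["/dev/sdb", "/dev/sdc"]

    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=0",                        # RAID-0 (striping)
         f"--raid-devices={len(devices)}",
         "--chunk=64",                       # stripe chunk size in KiB
         *devices],                          # listing order = striping order
        check=True,
    )

    # Put a filesystem on the array and mount it.
    subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
    subprocess.run(["mount", "/dev/md0", "/mnt/usbraid"], check=True)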
It seems that even though seek times are close to zero, there is initialization and bus contention to deal with; this causes the optimal chunk size to be somewhere in the 32k range. But what about write speeds? And what about in practice? At first there are linear gains when adding devices. However, by the time we added the fourth device we started to see some bus limitation penalties.
It does seem that chunk size has very little impact on the speed of write operations. But wait a second: this motherboard has two buses. We plugged half of the devices into the first host controller and half into the second host controller. When creating the RAID, you have to choose the order in which the RAID uses the devices. Tests showed that interleaving the device-bus order had no effect over the non-interleaved device-bus order.
We chose to interleave them anyway. It appears the optimal chunk size actually depends on the number of devices used. As we saw earlier on the single bus, when there are very few devices on the bus the performance gain from adding an additional device is nearly linear. An interesting note is that Reiser4 shone in the benchmarks. However, it performed very poorly when used as the root and home partition of a desktop system. On our desktop system we tried Reiser4 with LZO compression and Reiser4 with default settings.
In general performance was extremely good, especially with the LZO compression. The problem we encountered was very long pauses when writing to files.
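One hedged way to surface those pauses rather than average them away: time each individual write while streaming a test file, then report overall throughput alongside the single slowest write. The path and sizes below are arbitrary assumptions:

    import os
    import time

    # Illustrative sketch with assumed path and sizes: stream 256 MB to a
    # file in 1 MB writes, timing each one, so a long stall shows up as a
    # large worst-case write time instead of being averaged away.
    PATH = "/mnt/usbraid/bench.tmp"
    CHUNK = 1024 * 1024            # 1 MB per write()
    TOTAL = 256 * 1024 * 1024      # 256 MB overall
    buf = os.urandom(CHUNK)

    worst = 0.0
    start = time.monotonic()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            t0 = time.monotonic()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())   # force the data out to the device
            worst = max(worst, time.monotonic() - t0)
    elapsed = time.monotonic() - start
    os.remove(PATH)

    print(f"throughput {TOTAL / elapsed / 1e6:.1f} MB/s, slowest single write {worst * 1000:.0f} ms")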