I've tried a bunch.
Seems like the default NTFS format is best. I checked my speed with USBDeview
www.portablefreeware.com/?id=1004
Using a StarTech 8GB ($12); it works OK. I got around 12 MB/s write / 18 MB/s read, though I know that is a simplistic measurement. What was the program that does the more thorough testing?
Also, does size matter? And, what is the best economy stick on the market now, as in around $1/GB to $2/GB.
4GB or 8GB is definitely the sweet spot, last I checked. I got a little fewer GB's for my $$'s with my 16GB drive, but I wanted the capacity.
Mine's formatted FAT32; y'all think it'll work better as NTFS? Is NTFS good for portability, as long as I only use Windows XP and newer (which I do)? I always thought FAT32 was more portable-friendly and NTFS was more local-friendly (all my desktop PC's hard drives are NTFS).
Now I'm in a predicament. I know that eventually I'll need a 64GB drive, but FAT32 doesn't support that much. How will I live?
(BTW, what keeps me from switching to NTFS is that I back up my flash drive to a Linux box with Back in Time. Haven't found a good alternative for Windows yet. Hm... Maybe I can just set it to NTFS and not do any kind of writing on it with Linux...)
Insert original signature here with Greasemonkey Script.
That isn't entirely accurate. Windows 2000 and XP won't format a FAT32 volume over 32GB, but they can read from one up to the format's hard limit of ~127GB. A third-party tool must be used to format the volume. After that, you're good to go, but Microsoft says operations on large FAT32 volumes can become slow and tedious. [source]
I'd get a portable hard drive, like I said in the other topic, and partition it. Linux can read NTFS, as I understand it; it just can't write to it (?). But both can read/write FAT32. So make a 32GB FAT32 partition on a 320GB hard drive (which will come out closer to 300GB) and a ~270GB NTFS partition. Put PortableApps apps on the NTFS partition, back up to the FAT32 partition, and have Linux pull the backup from there. Would that do ya?
I just read a tutorial on encrypting an entire dual-boot space-saving computer with TrueCrypt, and it said that the NTFS-3G driver can write in NTFS filesystems, so... NTFS it is.
Insert original signature here with Greasemonkey Script.
FAT32 doesn't have a 127GB limit. The 137GB limit was a limitation of Windows 98 and older motherboards. I'm not sure what the disk size limit is, but I've used a 1TB external HD with no problems. There is a file size limit of 4GB.
Also, any Linux distro with kernel 2.2.0 or later can read and write NTFS.
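For reference, the arithmetic behind those limits (a back-of-envelope sketch in Python; the field widths come from the on-disk FAT32 format, not from any particular OS's formatter):

    # FAT32 ceilings: the total-sector count is a 32-bit field and the
    # file size is a 32-bit byte count.
    SECTOR = 512
    MAX_SECTORS = 2**32
    MAX_FILE = 2**32 - 1

    print(f"max volume: {MAX_SECTORS * SECTOR / 2**40:.0f} TiB")  # 2 TiB with 512-byte sectors
    print(f"max file:   {MAX_FILE / 2**30:.2f} GiB")              # just under 4 GiB

So a 1TB FAT32 volume is well within the format's limits; the 32GB ceiling is just Windows' own formatter refusing to go higher.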
I have a terabyte USB external drive, and I'm sure it is FAT32. (It's plugged into my Linux box, anyway.)
I am not my signature.
It's called LBA formatting support. Grab any recent Linux live CD (I've only tested this with Ubuntu, but I suppose most distros will have it, especially the GNOME ones).
I've been using CrystalDiskMark.
http://crystalmark.info/software/CrystalDiskMark/index-e.html
It's a freeware program that is portable. It tests read and write speeds for both large and small files.
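If you just want a rough number without installing anything, here is a crude sketch of the same idea in Python (the test path is hypothetical; fsync is needed so the OS cache doesn't flatter the results, and it will still be rougher than CrystalDiskMark):

    import os, random, time

    PATH = "E:/speedtest.bin"   # hypothetical: a file on the stick under test
    SIZE = 64 * 2**20           # 64 MB test file

    # Sequential write, 1 MB at a time
    buf = os.urandom(2**20)
    t = time.time()
    with open(PATH, "wb") as f:
        for _ in range(SIZE >> 20):
            f.write(buf)
        f.flush(); os.fsync(f.fileno())
    print("seq write:     %6.2f MB/s" % (SIZE / 2**20 / (time.time() - t)))

    # 4K random writes into the same file
    n = 256
    t = time.time()
    with open(PATH, "r+b") as f:
        for _ in range(n):
            f.seek(random.randrange(SIZE >> 12) << 12)  # random 4K-aligned offset
            f.write(os.urandom(4096))
            f.flush(); os.fsync(f.fileno())
    print("4K rand write: %6.2f MB/s" % (n * 4096 / 2**20 / (time.time() - t)))

    os.remove(PATH)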
When it comes to flash drive speed, around 33MB/s is the upper limit for any USB 2.0 device. Size doesn't seem to be a major factor.
I've tested all of my flash drives with this program. The best scores I got were from an OCZ Diesel 16GB that I bought from Newegg for $16.99 after rebate. It's more than that now, but still on the cheap side for its size. The worst results I got were from an older PNY 8GB drive.
Does any program offer a testing solution that mimics real world usage?
Perhaps reading and writing small files simultaneously?
How bout one that measures usage data on the fly? As you download torrents, files, use a browser, stream radio, whatever.
I suppose seek time might be a more important factor than transfer rates in normal operations.
NTFS is more stable than FAT32.
I tried various cluster sizes. The defaults seemed to be best.
;>jamvaru
The problem with "real world" is what you are trying to emulate.
For testing flash drive writes, I tend to just time the install of the big PortableApps suite. With an A-Data single-channel 8GB drive formatted FAT32 w/ 4K clusters:
Install to bare stick: 32:22
Install to USB-SuperCharger volume: 5:21 (about 6X faster)
We see similar times for other sticks. Basically faster sticks can be 2-4X faster than these numbers, both for bare and USB-SuperCharger times.
A table of test times for the suite installer might be a useful benchmark.
In terms of stability, I tend to prefer FAT32 because it allows "hot unplugs". NTFS requires an explicit unmount.
The problem is we do not know if the user wants to write big files to it or lots of small files.
Writing big files can be fast, but writing the same amount of data as small files can seem slow, because of the constant rewriting of the file allocation table. Since the FAT cannot be written in place like on a hard drive, the stick will, after each file, delete the old version and write the new version, then delete the copy of the old version and write the copy of the new version, and only then go on to write the next small file.
Something similar happens on a hard drive too, but there the FAT can simply be overwritten in place; here it cannot, and on flash, erase cycles are the result.
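To put rough numbers on that (an illustrative sketch only; the erase-block and file sizes below are assumptions, not measurements):

    # Each small file write touches the file data, the directory entry, and
    # both FAT copies. On a simple controller each of those typically lands
    # in a different erase block, costing a full erase-block rewrite.
    ERASE_BLOCK = 2 * 2**20      # assume a 2 MB erase block
    FILE_SIZE = 16 * 2**10       # assume 16 KB files
    N_FILES = 10_000

    erases_per_file = 1 + 1 + 2  # data + directory entry + 2 FAT copies
    bytes_erased = N_FILES * erases_per_file * ERASE_BLOCK
    bytes_written = N_FILES * FILE_SIZE
    print(f"wear amplification ~ {bytes_erased / bytes_written:.0f}:1")  # ~512:1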
Otto Sykora
Basel, Switzerland
The underlying issue with Flash is that you have a very large "erase block size". With USB sticks, this can easily be 1MB to 4MB in size depending on how the Flash chips are striped internally.
This leads to a characteristic called "write amplification". When you write a small block, the drive's controller has to merge the small write with existing data to update a full erase block. With "simple" controllers, this can result in a very high level of amplification. For example, the SanDisk Cruzer (plus a lot of CF cards) tends to update 4MB when doing a 4K write, for an amplification of 1024:1. This literally means that the 4K update has 1/1000th the available bandwidth. You can also look at it as wearing the drive out 1000 times as fast.
Most USB sticks are not as bad as the SanDisk, but they are still terrible. Most run 250:1 for 4K random updates, although some cheat and can take multiple updates that are interleaved and coalesce them. In the end, with a completely unfragmented FAT32 file system, writes of new ~150K files tend to run at about 1/6th the speed the drive would reach if it were being updated linearly. If the average file size is larger, the differential is smaller. If the average file size is smaller, the write amplification gets bigger.
And remember that this slowdown is not just performance but also wear.
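The arithmetic is simple enough to sketch (using the erase-block sizes quoted above; actual sticks vary):

    # Worst-case write amplification for a simple controller: a sub-block
    # write forces a rewrite of the whole erase block it lands in.
    def amplification(erase_block: int, write_size: int) -> float:
        return erase_block / write_size

    print(amplification(4 * 2**20, 4 * 2**10))  # Cruzer-style 4MB block: 1024.0
    print(amplification(1 * 2**20, 4 * 2**10))  # 1MB block: 256.0

Effective bandwidth for those small writes is then roughly the linear write speed divided by the amplification factor, and wear scales up by the same factor.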
We have tested 1st gen SSDs with simple FTLs (Flash Translation Layers), such as JMicron controllers and MTron drives. Some of these SSDs have SMART registers that show counts for erase cycles. Testing with random writes verified the predicted write amplification factors based on erase block sizes.
2nd gen SSDs have controllers that don't map the flash blocks the same way. Thus Intel, Samsung, and Indilinx controllers get better ratios. They are still not 1:1, but having a couple hundred or a couple thousand IOPS for small random writes matters a lot. Then again, if you throw white-noise writes at one of these drives over the whole surface of the drive, they still degrade a lot.
What we have done with our software is at one level similar to the 2nd gen SSD controllers, but we have a lot more resources available to us on the host. Let's face it, an SSD controller is lucky to have a 200 MHz ARM and 32 MB of DDR, where we have essentially unlimited RAM and multi-core GHz CPUs to play with. This lets us "best case" the remapping of the drive, allowing for close to theoretical performance numbers.
Theoretical performance numbers actually mean two different things. For the "first pass" when the drive starts out empty, it means you can random write white noise aligned blocks to the logical capacity of the volume with 1:1 wear amplification and linear speed. After the drive is full, any controller has to start re-organizing looking for free space. At this point, the best case is to degrade to the inverse of the free space available on the drive. The Flash-SuperCharger logic actually comes within a percent or two of theoretical on both tests.
Long term, with real data, we tend to run at linear speed for applications that have "down time". If the drive is quiet, we reorg in the background. This gives you a large burst, usually equal to about 3/5ths of the amount of free space on the drive. After this, we degrade down to the inverse of free space.
If you measure the long term wear amplification on a Windows drive, we usually see about 3:1 wear amplification. This is 10X or more better than 2nd gen SSDs and often is a bigger differential than the wear ratio of MLC vs SLC Flash.
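As a worked example of that steady-state model (the 12 MB/s linear speed is just an assumed figure):

    # Once the drive is full, best-case random-write speed degrades to the
    # linear speed scaled by the free-space fraction.
    def sustained_write(linear_mb_s: float, free_fraction: float) -> float:
        return linear_mb_s * free_fraction

    for free in (0.50, 0.25, 0.10):
        print(f"{free:.0%} free -> {sustained_write(12.0, free):.1f} MB/s")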
The PA Suite install is about 6X faster with a USB-SuperCharger volume formatted as FAT32 with 4K clusters and configured "Optimize for Fast Removal". Since our software runs on "real" disks as well (not just USB sticks), we have tested ext2/3/4, xfs, ntfs, and others. When writes collapse to a pure bandwidth model, the file system tends to stop waiting for disk and becomes a pure CPU exercise.
I suppose someone will make a GPL equivalent eventually, but for now, you have a good thing going.
One thing somewhat unclear. It runs from the stick, no computer changes necessary?
Does it improve function of portable browsers like opera or firefox?
So, what about cross-linking in the FAT32 system? I thought NTFS was supposed to fix that, so why use FAT32? (or FAT16, yuck)
If the only problem is unmounting, then I'd stick with NTFS. I use "EjectUSB" to eject when unmounting is problematic. Seems to work pretty well.
The Overwriting problem is definitely a big deal. That would take precedence, since the life of the stick is at stake!
So, FAT32 then? What cluster size? I understand it doesn't matter to the stick, but maybe something gets lost in translation or whatever. I suppose a cluster size equal to the erase block size would make sense, or smaller.
I'm still confused about the "overwriting" thing. FAT32 doesn't? It is all a bit much... I guess we are still in the infancy of stick technology.
Back to the scratching board.
;>jamvaru
>I suppose a cluster size equal to the erase block size would make sense, or smaller than.
Otto Sykora
Basel, Switzerland
And to make it even more "fun", Windows pre-Vista/7 can't even create a properly aligned partition on flash media. My guess is that's the reason for m$ not allowing multiple partitions on removable flash media.
This is a big deal on SSDs; see http://www.ocztechnologyforum.com/forum/showthread.php?t=48309 for tech details. The author claims to have experienced a 300% (!!) improvement just by properly aligning the partition on an NTFS sys-drive.
I tried doing the same to my FAT32 flash drive (with the RMB flipped, as I like it).
The improvement was about 20% when writing multiple small files... and it didn't cost me a dime.
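The alignment arithmetic itself is trivial (a sketch; the 4MB erase block is an assumption, and XP's default partition start of sector 63 is what breaks alignment):

    # Round a partition's start sector up to the next erase-block boundary.
    SECTOR = 512
    ERASE_BLOCK = 4 * 2**20                      # assumed erase-block size
    SECTORS_PER_BLOCK = ERASE_BLOCK // SECTOR    # 8192

    def aligned_start(requested_sector: int) -> int:
        return -(-requested_sector // SECTORS_PER_BLOCK) * SECTORS_PER_BLOCK

    print(aligned_start(63))   # 8192: the XP default lands mid-block; this fixes it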
Anyway, I get the same performance from a SanDisk Cruzer as the benchmark shows with this new commercial closed-source software, by encrypting the entire RAW DEVICE with TrueCrypt. In TrueCrypt there ARE no random writes, thanks to the driver; everything is written in nice 512-byte sequential blocks.
how'd you flip it?
Too many lonely hearts in the real world
Too many bridges you can burn
Too many tables you can't turn
Don't wanna live my life in the real world
There is a small utility for it, but it does not work with all controllers.
Otto Sykora
Basel, Switzerland
>by encrypting the entire RAW DEVICE with TrueCrypt. In TrueCrypt there ARE no random writes thanks to the driver, everything is written in nice 512b sequential blocks.
Otto Sykora
Basel, Switzerland
I should probably say that I don't own a flash drive larger than 4GB;
it's likely that the larger the capacity, the larger the erase-block size...
Otto wrote:
> Does the raw device encryption behave as a big single file? Obvious, but what happens with changes to such a file?
No, it's not a file, it's the ENTIRE blank device encrypted.
Otto Sykora
Basel, Switzerland
Yes, and that's why "trim" probably won't do anything for a TC-encrypted SSD;
there was a discussion about it on the OCZ forum.
It probably does degrade performance on a USB flash drive also, but I can't say I notice it, maybe because the gain from having a properly aligned partition, the use of exFAT, and no random writes is much greater than the degradation caused by a "full" device. (Yeah, mounted TC volumes can be re-formatted to exFAT from within Windows.)
My guess is something will be done when USB3 arrives and people start to wonder why flash-drive performance STILL sucks.
Forgive my misunderstanding, but isn't it not actually random with flash drives? It uses wear-leveling to spread out the data. So doesn't using TrueCrypt to write sequential blocks defeat the purpose of wear-leveling?
Sequential blocks in this context are not the thing used to assure wear leveling; they are a property of the file system. Those are then written as changes to a file, and thus will end up as write operations to free blocks anyway, clearing the relevant erase blocks.
Wear leveling is hardcoded in the controller and cannot be cheated. However, writing a big 'file' will use sequential blocks when they are free.
Otto Sykora
Basel, Switzerland
Ok, thanks.
An interesting app, USB-SuperCharger.
Does installing it destroy all data on the USB stick? Can it be removed if one doesn't like its performance change? If so, does one lose all their data in the removal process? If it is removed from a USB stick, can it then be tried on a different stick?
I don't see these items addressed on the webpage's FAQs.
Ed
Sorry all, I have been off-line for a couple of days. I will try to answer a bunch of questions in one shot.
First, to correct a misconception. TrueCrypt, which is a very good package, does not change the order in which data is written to the underlying device. If your application does random writes, then TrueCrypt will not change this.
Regarding USB-SuperCharger.
I hope this answers most of the questions I saw go by.
Doug Dumitru
EasyCo LLC
Thanks for responding, DougDumitru, but this feature pretty much kills its usefulness.
* Admin privs are required.
- There is a driver after all.
Ed
I think there is a sizable minority of users with admin rights that could make use of it. Of course, as with TrueCrypt, the majority of users can't, but some will find it useful.
Sometimes, the impossible can become possible, if you're awesome!
Given that USB flash drives do not perform background/idle-time garbage collection, nor do they support Trim, I have a few questions. If either of these functions were supported, could only the flash controller perform these operations?
1. Does your USB SuperCharger program write to ALL the NAND blocks (for the size selected) during the FAT32 formatting function?
2. How does your product perform if installed to a USB flash device that has been entirely filled and erased several times? I believe in this case the NAND blocks would be severely fragmented and any write would require at least a block erase before write and probably garbage collection before the NAND erase block/write operation.
USB Flash sticks have very simple FTLs (Flash Translation Layers). As such, concepts such as trim don't really exist.
Trim is used with more complicated controllers to increase the percentage of unallocated space so that the controller can "find" larger areas to write to without requiring flash-to-flash copying. With very simple devices like USB sticks and SD cards, the situation is a bit different.
Starting with the simplest FTL in a flash device, the arrangement of blocks is such that the controller keeps track of "erase blocks" and builds an LBA table of them. Thus a 2GB CF card with 2MB erase blocks will have a 1024-entry LBA table. This table translates logical to physical addresses. When you write to the device, the table has 2 pointers to the current write location: one for the data that has already been written and one for existing data after the write point. As you write linearly, the pointer progresses down the erase block.
This is why even small deviations from linearity kill these devices.
Some devices are a little smarter than this. They might have multiple write locations or multiple channels that interleave differently. Regardless, they don't really have the concept of "free" space beyond the extra erase blocks they allocate to swap in when cells die. 2nd gen SSD controllers are different. They play all sorts of games trying to get random writes reasonable.
In the end, if you write 100% linearly to either a dumb or a smart device, you tend to end up with perfect use of the FTL. This means that true linear writes can actually get to 100% of the theoretical write speed of the device.
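A toy model of that single-write-point scheme makes the linear-vs-random difference obvious (a sketch of the behavior described above, not any vendor's actual firmware):

    import random

    class SimpleFTL:
        """One active write line; any non-sequential write costs an erase."""
        def __init__(self, n_blocks: int, pages_per_block: int):
            self.ppb = pages_per_block
            self.open_block = None   # logical block with the active write line
            self.write_page = 0      # next expected page within that block
            self.erases = 0

        def write(self, lpn: int) -> None:
            block, page = divmod(lpn, self.ppb)
            if not (block == self.open_block and page == self.write_page):
                self.erases += 1     # close/merge the old block, open a new one
                self.open_block = block
            self.write_page = page + 1

    # ~2 GB device: 1024 blocks of 2 MB, written in 4K pages
    linear, rand = SimpleFTL(1024, 512), SimpleFTL(1024, 512)
    for lpn in range(10_000):
        linear.write(lpn)
    for _ in range(10_000):
        rand.write(random.randrange(1024 * 512))
    print("linear erases:", linear.erases)   # ~20 (one per block crossed)
    print("random erases:", rand.erases)     # ~10,000 (one per write)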
So how does this impact flash sticks and USB SuperCharger?
a. If the flash stick is old, the NAND blocks don't really fragment like an SSD's. They might be halfway worn out, but the performance should still be close to a new stick's unless error correction is slowing things down a lot.
b. When you install USB SuperCharger onto a stick, it is best if the stick starts out reasonably defragmented. The USB SuperCharger installer does not use the system FAT32 code for installation (assuming you are installing in "fast" mode) and builds its backing files using "best fit" to find the biggest linear spaces possible.
If your stick is badly fragmented, you should probably defrag it first. In practice, it is usually easier to just erase all of the files and then copy your stuff back in.
c. If you look at the SuperCharger code running on top of SSDs, we tend to actually defragment the SSD itself over time. Because our writes are long and 100% linear, this gives the SSD enough churn to get free space back.
d. Our latest posted release has support for "trim". This allows us to spot deallocated blocks in the file system and return them to free space within the USB SuperCharger management tables. We don't "trim" to the physical drive itself because 1) the drive has no clue, and 2) even if it did, our linear writes make trim much less important.
We have been stress testing "trim" by running a program that builds 32K files in a directory, with random sizes averaging 100K. This creates a data set 3.2GB in size; put this on a 4GB stick and it is 80% full. The program then picks a file at random. This file is deleted and re-created at a new random size. Run this for a few hours and the FAT32 file system ends up as badly fragmented as you have ever seen.
If you do this to a bare stick, writes slow to 0.5 delete/creates per second. With USB SuperCharger, performance stays around 80 delete/creates per second. This is on a dual-channel Patriot XP stick. So far, our longest single test has been 4+ days (several tens of millions of file writes).
The test starts fast when the stick is empty and then quickly levels off after the first couple of passes. If you stop and restart the test, the restart is again fast at first (USB SuperCharger background-defrags during quiet periods) and then drifts back to the expected level of performance for the free-space percentage of the drive. The behaviour is actually amazingly predictable and smooth.
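For anyone who wants to reproduce this, the churn test is easy to sketch (file counts and sizes taken from the description above; the mount point is hypothetical, and the loop runs until interrupted):

    import os, random, time

    ROOT = "E:/churn"            # hypothetical: a directory on the stick
    N_FILES = 32_000
    AVG = 100 * 1024             # ~100 KB average file size

    def blob() -> bytes:
        return os.urandom(random.randint(AVG // 2, AVG * 3 // 2))

    os.makedirs(ROOT, exist_ok=True)
    for i in range(N_FILES):     # initial fill: ~3.2 GB
        with open(f"{ROOT}/f{i:05d}", "wb") as f:
            f.write(blob())

    t0, ops = time.time(), 0
    while True:                  # delete a random file, re-create at a new size
        path = f"{ROOT}/f{random.randrange(N_FILES):05d}"
        os.remove(path)
        with open(path, "wb") as f:
            f.write(blob())
        ops += 1
        if ops % 100 == 0:
            print(f"{ops / (time.time() - t0):.1f} delete/creates per second")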
Doug Dumitru
EasyCo LLC
Doug
Thanks for your reply. I have researched how SSD NAND-based hard drives work and assumed flash sticks would be similar except for not supporting TRIM, background garbage collection, etc.
If I understand your post, typical flash sticks have erase blocks that can be quite large (due to the limited resources of the on-board flash controllers). For example, the minimum flash stick erase block might consist of four 512KB NAND blocks (2MB) tied together with one control line (very serious write amplification!!!), i.e. making the erase block size 2MB instead of the 512KB which is the native NAND erase size in this example. I believe ALL SSDs handle the erase block at the single native NAND block level (which can be 128KB, 256KB, 512KB, etc.). Also, SSD hard drive NAND writes are at the page level, which is typically 4KB (same as the default NTFS cluster size).
Regarding SSD hard drives, I have never run into any discussion of two write pointers as per your previous post. Are the two write pointers referring to writes to normal flash sticks (i.e. without your USB SuperCharger wherewithal) or to how linear writing is handled when using your USB SuperCharger driver?
Referring to d. in your post: only the flash controller can actually erase a NAND block (or ganged blocks), and this erase function can only be performed when new data is written to the NAND flash (i.e. the controller does not know what files/NAND pages have been released/deleted, and background garbage collection is typically not supported on flash-based sticks).
Again, thank you very much for your post.
Regards, Ron
The USB flash controllers need to be really cheap. They have zero DRAM. This limits what they can do. If you look at the first JMicron SSD controller, it had zero DRAM and similar performance issues.
You are correct in how blocks can be interleaved. Ganging stuff together gets you better linear speed, but at the expense of a really long erase block and really bad amplification. From external tests that we have run with 4GB SanDisk Cruzers, they literally have amplification of 1000:1 (4MB erase block for 4K writes).
Some sticks seem to do better with FAT than the underlying "single update point" FTL would imply. This is why I suspect some controllers have multiple "active lines" or some other tweak that helps, at least a little. Patriots seem to behave like this.
When you actually fragment a stick, they all tend to equalize at very bad values. We wrote a "fragmenter" program and even old SLC sticks have a hard time getting above 1 delete/write operation/sec. SuperCharger runs about 80/sec.
A flash block gets erased when a new write line is set up. If you seek and then write, or write past the end of the current block, a new block is selected from the available pool. Some controllers might erase early and some might erase late, but it still yields about the same throughput. The SSDs that pre-erase (i.e., the Indilinx Barefoot controller) and give you extra performance on the first pass of the disk are a bit of a scam: unless you have an environment that does active trim, you won't see that throughput ever again. Trim, while good in theory, is a bear to implement and actually have work right. No one is driving trim through RAID controllers or software layers yet. Some drives try to fake it by guessing at the NTFS bitmap tables (this really scares me: what if the bitmap table is actually split apart because of LVM in Linux, or spread across two drives in RAID-0? I sure hope you can turn this off). SuperCharger's use of zeroed blocks is at least easy to propagate to the disk and does not create strange RAID issues. How exactly is trim to be handled with RAID-6? What pattern do the 2 parity drives need to read so that the RAID XORs are "correct"?
Now I am starting to ramble.
Doug Dumitru
EasyCo LLC
Doug
Thank for the previous posts.
I have been experimenting with booting WinXP from an NTFS-formatted RiDATA USB/eSATA 32GB stick. This configuration boots Windows XP on my ASUS netbook via the SATA I (not SATA II) interface (NOT from the USB side) of the RiDATA SSD stick. I installed the FlashFire driver, which seems to help; however, if memory serves, the FlashFire driver simply caches random writes to system DRAM (i.e. it does not speed up the actual random-write performance to the NAND flash pages), which can become exhausted and can also require a long period to flush the cached data back to the flash device. When/if the upcoming "USB SuperCharger" driver supports NTFS, and assuming it could work via the eSATA interface, I would consider more experimentation.
The RiDATA stick can write at up to 50MB/sec using the eSATA interface, but the 4K random writes are much slower. Not sure what the write rates would drop to when the virgin NAND blocks have been exhausted (i.e. writing to previously used NAND blocks, requiring an erase before write).
Since I normally boot Puppy Linux (not a big Windows OS fan) from a factory-formatted FAT32 USB stick, it would also be nice if a driver were available to let me mount and access the contents of the USB SuperCharger volume. NOTE: Puppy can read/write FAT16/32 and NTFS partitions.
Last question: any idea why I can't mount the USB SuperCharger volume when booted into 64-bit Windows 7 RC? I can manually mount the same SuperCharger volume under 64-bit Vista and 32-bit XP without any problem. The error I get during the mount under 64-bit Windows 7 RC indicates the stick is not licensed?
Thanks again for your cycles!
Regards, Ron
A couple of comments:
FlashFire does just cache writes. It then re-orders them. The re-ordering can make the writes more efficient. It also will cause data corruption if you pull a stick before it flushes.
SuperCharger will run with NTFS now at a basic level, but there are a couple of issues that we have to deal with on our side before you could use it in a production sense. We need to complete a "dismount" tray icon function so that you can dismount the stick. FAT32 allows for hot unplugs. NTFS always requires an explicit dismount. We also need to sense NTFS and run our "trim" function there. With Vista and later, this is really easy as there is an NTFS IOCTL call to "zero on dealloc". With earlier releases, we will have to trim the same way we do FAT32, but the counters to sense when are different.
We have SuperCharger code that runs on Linux. It is currently designed for enterprise array use, so adapting it to USB will take some effort. Our "plan" is to run USB through the FUSE layer. This should also work on Macs and should allow non-root users to operate. One issue with supporting Linux is "which one", so feedback from you would be appreciated on that.
We have "committed" plans to short-term implement two new features in the current Windows USB SuperCharger. These are:
* a read-only browser so that you can at least access your files without being admin.
* simple encryption. Basically, AES block encryption of each 512 byte sector in the backing store with a single, non-recoverable, passphrase which creates the AES key. Probably not as "good" as TrueCrypt, but really easy to implement and one less drive letter to deal with.
Doug Dumitru
EasyCo LLC
Doug
Might be off base here, but I think the Linux kernel version and its compiled-in wherewithal are what matter most, not the exact Linux distro. That said, I would suggest you test your alpha/beta software against Ubuntu 9.04 or later with the kernel delivered in the standard download. Puppy Woof can now be built with the Ubuntu packages, and it is called UPup (Ubuntu Puppy). With any luck, if it works with Ubuntu it might work with Puppy Woof UPup.
Let me know if you need a beta tester for the NTFS version of USB SuperCharger or the Linux driver for FAT32/NTFS. USB SC might be faster than FlashFire but not as easy to use, because FlashFire is a minifilter driver (I think) that sits directly on top of the NTFS file system (typically the C: drive), i.e. not mounted with an alternate drive letter. Also, a USB SuperCharger Linux driver that only works with FAT32 could be useful, with NTFS to follow later as a free customer upgrade.
Regards, Ron
Well, NTFS is OK; it can get more data onto big drives, but on small drives there's no big advantage in this direction.
But one has to consider that NTFS will write a little bit more to a flash drive, thus causing more wear. The journaling process needs more small write operations than the updating of the FAT on a FAT32 system.
Otto Sykora
Basel, Switzerland
This is a pretty good, simple app to benchmark with. You should pay as much attention to the small random writes as to the large linear operations.
For example, a SanDisk cruzer will test at about:
        Read    Write   (MB/s)
Lin:    24.91   11.80
512K:   24.84    4.94
4K:      5.90    0.01
The small random writes really kill the drive.
Hi Doug
What would you then recommend for the case of running an operating system from rather less sophisticated flash? (SSDs often have nice features built in; cheap stuff has not.)
Linux running on ext2 rather than ext3? Swapping off, sure, but would running Linux from FAT (loaded with syslinux) be more economical with respect to wear on average flash?
Otto Sykora
Basel, Switzerland
>>> What would you then recommend for the case of running an operating system from rather less sophisticated flash? (SSDs often have nice features built in; cheap stuff has not.) Linux running on ext2 rather than ext3? Swapping off, sure, but would running Linux from FAT (loaded with syslinux) be more economical with respect to wear on average flash?
I mean running the OS stationary 'forever', in small systems where things have to run 24/7 for years. In such a case, RAM operation is not very sensible.
The thing is that ext3 probably does many more writes than it would on FAT, but FAT has other problems when it comes to recovery etc.
Otto Sykora
Basel, Switzerland
So, can it run entirely from memory and accept modifications to the flash such as custom settings and saves, say when downloading mail or files?
;>jamvaru
Yes. In Puppy, the base operating system files are unchanged, but all additions (custom settings, added programs, etc.) are put in a separate save file. The save file can be encrypted. If you want, the save file can be put on different media than the OS. For example, you can boot the OS from a CD and put your save file on a flash drive or a hard drive. When booting, it searches all media for save files. If it detects more than one, it will prompt you to select one.
How bout a version of Linux that runs from Windows and has the features you describe, like running from memory and having persistent saves?
So, you go to a generic random computer with Windows (that doesn't allow for alternate boot options), click "Linux GO!" and boom, you are running an OS within the OS (in a window).
I suspect memory management would be problematic in most cases.
Perhaps it wouldn't absolutely require running from memory. Also, it could use a second USB stick as a swap partition. Or use a file on the "desktop" as a swap file, as the desktop is usually accessible on most public computers.
;>jamvaru
Earlier this month, someone packaged a Puppy Linux distro with QEMU pre-configured in the Puppy forums:
http://murga-linux.com/puppy/viewtopic.php?t=48488
That sounds like it would fit your definition. I haven't tried it; I don't find booting from flash to be a problem.
I recently bought a 16G flash drive from Staples figuring that I would be able to load all the portable apps I could find here just in case I needed (read: wanted) them. I didn't give any thought to performance. After loading it up, and with the beta3 Portable Launcher running I ran the test on mine:
        Read    Write   (MB/s)
Seq:    30.73   3.527
512K:   30.78   1.295
4K:      6.424  0.025
I guess it wasn't too bad of a deal. I don't know (yet) who makes the actual flash drive, just that Staples (an office supply store) has it branded. I'll have to do some more digging. I was pleasantly surprised by the results, but now I want it faster!
Thanks for the info for a comparison, and it does appear that there is a significant difference between drives.
I used to sign here, but the ink keeps smudging on my screen.
The read numbers are very good. In fact, they look to be limited by USB 2.0. I am suspicious that the benchmark is somehow getting fooled.
The write numbers are very pedestrian.
What I suspect is happening here is that you are the proud owner of some new, really cheap, X3 or X4 MLC (3 or 4 bits per cell) flash. These have quite good read speeds but really lousy write speeds.
There was quite a bit of "rumor mongering" at MemCon and Flash Summit about these new chips. One concern is that some of the chips are testing as having endurances of [link removed by mod JTH. no signature links or signature-style links at the end of your posts are permitted.]
I appreciate the feedback. Always good to know something about the life-expectancy of one of the most important tools I use everyday. Now can you estimate how long I am going to last? That would be a trick!
Seriously though, it has been my experience that "life-time" warranties aren't really good for anything. It's quite often the data that's more valuable, and when the drive dies, they are only obligated to replace it with one of the same size and speed, by which time I am hoping bigger drives (and hopefully USB 3.0) are available. (Moore's law estimates, depending on who you believe, say doubling every 18-24 months, which gives me a little bit of time before I want something better.)
I used to sign here, but the ink keeps smudging on my screen.
If the stick really is made of
Moore's law says that the technology companies have agreed to ONLY allow multiplication of capacities at a rate of 2x/18months, not that more (ha) is not possible.
It is the "culture shock" phenomenon, so to speak. They say we can't "handle" it, but in reality, it just gives them time to milk the population for all they can.
So, really, it takes 18 months for the population to grow weary of the technology companies feeding them pablum and to demand more!
Don't buy JUNK. boycott junk dealers.
haha... ranting is fun.
take music cd's for example! How stupid is it that "they" don't offer music DVD's with 11.7 surround sound? Because people are still stupidly sucking on the music industry bottle.
;>jamvaru
Like the headline says:
exFAT is MADE for NAND, smokes virtually everything else in real-life usage, and has none of the annoying limits of FAT.
Try "installing" (timed) the PA suite to an NTFS-formatted drive and then compare to "installing" on an exFAT-formatted device. The problem with exFAT is that only Vista and Win7/W2k8 support it natively (there is a Linux driver, but it only reads atm), but it can be installed on XP/W2k3 via a m$ hotfix:
http://support.microsoft.com/kb/955704 .
http://en.wikipedia.org/wiki/ExFAT
Unfortunately, as you mentioned, it's on Windows Vista and up only by default, which means it won't work on any of the XP machines you encounter (80% of the world, including nearly all net cafes, libraries, hotel business centers, school computer labs, etc). Plus, it won't work on Mac. And it's patent-encumbered, so it'll probably never be built into Linux/FreeBSD.
Sometimes, the impossible can become possible, if you're awesome!
I have a 20GB exFAT drive formatted with a Tux OS...
Is it possible to find stats on various sticks' erase block sizes? Would a smaller erase block size help with speed or longevity, or both?
;>jamvaru
If one could read the type of the chips, one could find more details. But some of the sticks I opened had no marks on the chips, others had faked type on them, others had some not-really-existing part number, etc.
It is possible to find out the manufacturer of the controller indirectly from the USB device list, but for the flash chips you will have to open the device and read the type from the marking on the chip. On some sticks this is not possible any more, since they use directly bonded chips without any enclosure, stabilized with epoxy glue.
Otto Sykora
Basel, Switzerland
http://wiki.davincidsp.com/index.php/Get_the_Flash_Erase_Block_Size#NAND
Too nice to be so simple.
This all does not work when one tries to find out about just your USB stick. It works when the operating system accesses the flash directly through its own modules, which is not the case for a generic user. Without direct access to the flash file system, we are not able to get this info just like that.
But yes, particularly under Linux, a lot of development seems to be underway to make file systems compatible with flash structures, even interpreting everything via the particular device controller.
So far I have not found a preconfigured Linux OS with this feature enabled, so I had to do it via data sheets or questions to the manufacturer, etc.
And then it also depends on the controller whether the info is passed out or not. Direct use of flash on the bus is seldom seen; often they come as a DOM with a generic IDE controller or, as people here use them, as USB-driven devices.
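One indirect trick, in the spirit of tools like flashbench: time aligned raw writes at doubling sizes and look for the point where throughput stops improving, which hints at (but does not prove) the erase-block size. A sketch, assuming Linux, root privileges, and a stick whose contents you can destroy:

    import os, time

    DEV = "/dev/sdb"             # hypothetical: the stick as a raw device -- DESTROYS DATA

    fd = os.open(DEV, os.O_WRONLY | os.O_SYNC)
    for exp in range(12, 24):    # write sizes from 4 KB up to 8 MB
        size = 1 << exp
        buf = os.urandom(size)
        t = time.time()
        for i in range(8):       # 8 aligned writes at this size
            os.lseek(fd, i * size, os.SEEK_SET)
            os.write(fd, buf)
        print(f"{size >> 10:6d} KB: {8 * size / 2**20 / (time.time() - t):6.2f} MB/s")
    os.close(fd)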
Otto Sykora
Basel, Switzerland
I am a big fan of vfat for a couple of reasons:
1) It does not journal the drive (saving read/write operations on the drive).
2) It does not embed permissions right onto the file like NTFS has a tendency to do.
3) Plus, the ntfs-3g driver will not read from an NTFS drive that has not been properly shut down (ejected), and therefore will not mount the drive. vfat does not have this problem.
Please search before posting. ~Thanks
Exactly! For use on portable drives, all the other options have more or less problems. OK, one can work around the rights problem of NTFS, but the journaling makes more writes.
However, one thing we have to remember: on vfat, after every small change to any file, the two copies of the file allocation table have to be cleared and rewritten, and this is the actual cause of wear stress on vfat as well.
Otto Sykora
Basel, Switzerland
(vfat is FAT32, for the non-*nix people.)
Please search before posting. ~Thanks
CrystalDiskMark is a USB speed tester.
I am hoping for a website (à la notebookreviews.com or something) that ranks all sticks and their internal memory, providing all pertinent stats, etc.
We should be able to access such a database and sort for feature(s) we want to have to identify sticks we want to buy, rather than relying on newegg or other review sites and word of mouth.
I'd like to see a list with
1. Price
2. Random read/write speed (simulating actual usage, as in web browsing, reading email)
3. erase block size (assuming this matters, i suppose smaller is better)
4. # of channels (shouldn't usb 3.0 support up to 256 channels from one device?)
5. some other stats
6. sortable options including multi-sort by priority
7. a blogiki
;>jamvaru
lol, whoopsie, I just realized somebody else already posted that
Sorry.