
Defragment Thumb Drive

BErG123 - September 29, 2008 - 6:46am

Just to help people and bring up the subject (and maybe to suggest that it be added to the support pages for making portable apps, at least Firefox, run faster): I suggest using a defragmenting program on your thumb drive. I don't know which programs can and can't do it, but I tried Diskeeper Lite 9 (available in a huge 89 MB accessories download at http://downloadcenter.intel.com/confirm.aspx?httpDown=http://downloadmir... or wherever else you can find it, if you can) and it did just fine. It has an option to analyze the selected drive to show you how fragmented it is, and then to defragment it. I've never tried making Diskeeper portable, though.



But

most current flash drives use wear levelling to prolong their life span. On these drives, defragmenting isn't useful; it just makes them die faster.

"What about Love?" - "Overrated. Biochemically no different than eating large quantities of chocolate." - Al Pacino in The Devils Advocate

...

RAM means "Random Access Memory" and has no moving parts.
This means that fragmentation is not an issue with flash drives,
wear-levelling or not.
You should NOT defragment your flash drive; it doesn't help anything
and requires A LOT of write operations.

I know

but wasn't the original poster talking about flash drives?

"What about Love?" - "Overrated. Biochemically no different than eating large quantities of chocolate." - Al Pacino in The Devils Advocate

I guess he was. I guess RMB

I guess he was.
I guess RMB Fixed thinks that flash is random access memory. It's not, but almost...it's performance doesn't suffer from random access, but addressing unit is too big to call it RAM.

Anyway:
Formatting flash gains nothing; it wastes your time and reduces the lifespan. I guess that would be a good thing to note in the usage guide.

"Those people who think they know everything are a great annoyance to those of us who do." Asimov

Ah

I get it now.

"What about Love?" - "Overrated. Biochemically no different than eating large quantities of chocolate." - Al Pacino in The Devils Advocate

flash file system

Some people at first had the quite reasonable idea to defrag the flash, since wear leveling does not normally happen over the whole chip, but rather in smaller islands, their number depending on the kind and size of the chips involved.
So at first it looks promising to get at least some files together, so that some addressing cycles could be saved.
This would be very hard to measure anyway, but the further problem is that the sticks use a flash file system, which is the manufacturer's scheme for letting the built-in controller read and write to all those chips and islands on them. So in fact, what we see is virtual: the formatting, the size of the flash, etc. are often just parameters stored in the controller.
So one can buy a 16 GB stick: nice, new, very cheap!! It will display 16 GB of space in Windows; one can write to it and read it. On Linux with dd, though, things often start to go fishy, since the number of sectors it can read out suddenly seems much smaller than needed for 16 GB.
Then data written to the drive under Windows starts to disappear somewhat randomly once you try to write more than 4 GB to it. The files are still listed, and they have the right size, but when it comes to retrieving them ....

This happened to me recently; luckily the 16 GB stick was just a well-meant gift from someone.
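
For anyone who wants to check a suspect stick, here is a minimal fill-and-verify sketch in Python (the same idea tools like H2testw use). The drive letter and file size are assumptions; run it only on a stick that holds nothing you care about.

    # Fill-and-verify sketch for a suspect flash stick.
    # Writes numbered pattern files until the drive is full, then reads them
    # back; a fake-capacity stick fails the verify pass on the later files.
    import os, hashlib

    MOUNT = "E:\\"          # assumption: the stick's drive letter
    CHUNK = 100 * 2**20     # 100 MB per test file

    hashes = []
    count = 0
    try:
        while True:
            data = os.urandom(CHUNK)
            with open(os.path.join(MOUNT, "test%04d.bin" % count), "wb") as f:
                f.write(data)
            hashes.append(hashlib.md5(data).hexdigest())
            count += 1
    except OSError:         # disk full (or a write error): stop filling
        pass

    for i in range(count):
        with open(os.path.join(MOUNT, "test%04d.bin" % i), "rb") as f:
            ok = hashlib.md5(f.read()).hexdigest() == hashes[i]
        print("test%04d.bin" % i, "OK" if ok else "CORRUPT")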

OK, the geometry of the real physical storage on a rotating disk is also not what we see on our screens; the controllers are getting so smart that they can tell us anything we like rather than the truth. So even here, things will become somewhat theoretical in the near future. Since we can only defragment what Windows (or whatever) tells us about, we still often do some defragging; we have been told to since DOS times.
But we have to be aware of the fact that, more and more, we are defragging only virtual volumes and not the hard drive itself. That is already handled by the controller of the hard drive, without us having any control over it.

Otto Sykora
Basel, Switzerland

Wrong. There are no

Wrong. There are no contiguous regions on a flash drive. Everything is being dynamically remapped; you're not dealing with islands but with sets of points.
If you feel like helping your wear leveling along, you'd do much better by moving everything off your flash drive and back. The effect is far better, and because, unlike with a defrag, you mostly make big writes this way, you'll likely write less overall.
I doubt it's worth it, though.

"Those people who think they know everything are a great annoyance to those of us who do." Asimov

yes, but

Zones are defined before any of the space is used. They are not physically insulated or anything; they are simply defined from-to.
And wear leveling will happen only inside such a zone or island, or simply an assigned range of sectors, not over the whole chip or array of chips; that would be too complex and probably slow, I don't know.
The question remains how big those sets of sectors are. Some old, small flash chips, like the ones I have in some embedded units, have around 8; others apparently 16; others more. But those are small compared to present-day USB sticks. I have some of 12 MB here, some 32 MB, some 64 MB. From the outside they all look similar; inside they can all differ, depending somehow on the specific ideas of the manufacturer, it seems.

Otto Sykora
Basel, Switzerland

sorting

I reformatted my 8 GB stick (Super Talent) as NTFS @ 64K/cluster

it is much faster...

it seems to be even faster (at some things, like transferring whole folders) if I use jkdefrag to sort by file name (folder/file)

however, this means a certain portion (the "front") of the drive is used more frequently

would this "kill" my flash drive, or just the parts that get used more frequently?

also, what about when you have it nearly or completely full? what if it is mostly fragmented?

are there any benchmarks for this? or online information regarding benchmark testing with fragmentation and without? with different sorting algorithms?

thanks

good thread, a little angry, though

;>jamvaru

you did nothing good

to your drive.

NTFS does a lot of extra writing, so it uses the drive more; however, this is less and less important nowadays, as the drives and the firmware inside them get better and better.
Non-journaling file systems such as FAT are more suitable, unless you need to transport big files (bigger than 4 GB).
Whether something writes to the front or not, you will never discover. You have absolutely no way to address a specific part of your drive from your file system. The file system you format the drive with is just virtual; the drive's internal file system is different and depends on the manufacturer, etc.

There is no way to defragment your drive either. All you can do is defrag the virtual file system, and that is completely pointless.

Otto Sykora
Basel, Switzerland

Please do never defragment a

Please don't ever defragment a USB flash thumbdrive. You get no speed gain that way.

Also...

Also, like people said up at the top, it REALLY shortens the life of your USB drive. When I got my first one I was stupid and defragged it every week, and by the end of the month it was dead -_-

 iLike Macs, iPwn, However you put it... Apple is better ^_^ 
"Claiming that your operating system is the best in the world because more people use it is like saying McDonalds makes the best food in the world..."

Best way to kill your

Best way to kill your thumbdrive. Fragmentation causes no speed loss on these devices.

A quote from MyDefrag (AKA JKDefrag)

According to the MyDefrag website (MyDefrag used to be JkDefrag):

Yes. Flash memory disks (such as USB memory sticks and Solid State Disks (SSD)) have a limited number of erase-write cycles. The MyDefrag defragmentation and optimization will move files to new locations, which involves erasing and writing, so it will reduce the lifespan of your flash memory.

But there is no cause for alarm. Modern flash memory disks have at least 10,000 write cycles, more expensive types use different hardware that is guaranteed for a minimum of 100,000 cycles. All flash memory disks use a technique called wear-leveling. The controller in the memory disk will automatically reassign blocks in the memory so that all the memory is worn down evenly. For a good explanation of how this works see the * Corsair USB Flash Wear-Leveling and Life Span article on the Corsair website. In order to wear out a cheap 10,000 cycle flash memory disk in ten years, you would have to write to EVERY BLOCK in the device about 2.7 times per day, every single day. This does not take into account error correction, which will extend the life even further, and the fact that the 10,000 cycles is a guaranteed minimum, typical flash memory will handle an order of magnitude more write cycles.

The MyDefrag script to defragment and optimize Flash memory is specially designed to move as little data as possible. Fragmented files are defragmented (this takes just a single write cycle), unfragmented files are not touched at all. Gaps are filled by moving all the files together, if there are no gaps then MyDefrag will do nothing.

Nevertheless, my advice is to use some discretion and not defragment/optimize flash memory disks every day, but only incidentally, for example once per month.
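
The "about 2.7 times per day" figure in that quote checks out; here is the arithmetic in a few lines of Python, using only the 10,000-cycle and ten-year numbers from the quote itself:

    cycles = 10000        # guaranteed erase-write cycles (from the quote)
    days = 10 * 365       # ten years
    print(cycles / days)  # ~2.74 writes to every block, every day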

The guy who acts like he knows what he's talking about.

all sounds great

but:

>Fragmented files are defragmented (this takes just a single write cycle), unfragmented files are not touched at all. Gaps are filled by moving all the files together, if there are no gaps then MyDefrag will do nothing.<

There is simply no way for Windows software, or a script, or whatever, to defragment anything on NAND flash, nor is it possible to move files together by filling gaps, etc., since there is no access to the location of any stored data.

There might be an exception when everything runs under one of those new JFFS3 file systems for Linux, which is apparently designed specifically for operating directly on NAND flash components; but from what I have read about it so far, it seems to be kind of an experimental thing, and I have not met it anywhere, not even on any embedded Linux system.

For all other operating systems, including standard Linux distros, Windows/DOS, macOS, or whatever, the file system we can read and write is only virtual, sitting on top of a low-level file system that has different names depending on the manufacturer.

Otto Sykora
Basel, Switzerland

well...

nevertheless,

it seems to make a bit of a difference in general to reorganize the drive so files/folders are in alphanumeric order

I have decided to do this and go one (or two) steps further

using a simple batch file and regular jkdefrag (not sure about the portable version)

do a sort by file/folder

then:

whenever you feel like it, as often as you desire, do the two steps below (a rough sketch of the batch job follows them):

1. a defrag only
2. a move up (to end of drive)
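
Here is a minimal sketch of that two-step job in Python. The drive letter is hypothetical, JkDefrag.exe is assumed to be in the PATH, and the -a action numbers (2 = defragment only, 6 = move everything to the end of the disk) should be checked against the documentation of your JkDefrag version:

    # Two-step JkDefrag run; drive letter and action numbers are assumptions,
    # so verify them against your JkDefrag version's documentation first.
    import subprocess

    DRIVE = "U:"  # hypothetical drive letter of the stick

    subprocess.run(["JkDefrag.exe", "-a", "2", DRIVE], check=True)  # 1. defrag only
    subprocess.run(["JkDefrag.exe", "-a", "6", DRIVE], check=True)  # 2. move to end of drive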

when this is complete and redone, there is no more movement; the files are optimized, and there is free space at the "beginning" of the drive (or virtual drive)...

the result is faster access and less fragmentation.

when you do decide to run the batch again (just steps 1, 2 above), there will be some minor defragmentation, and everything loose will be collected at the "top" of the drive.

If there is "wear leveling", then there should be an internal record of what has been written to and what needs to be written to. When your drive is full only a few bits can be written to, period. So, eventually some bits will go and others will remain.

The question is: does this mean the drive will just die, or just a few bits at a time?

thanks,

j

;>jamvaru

No no no

There is no performance difference on a flash drive whether the data is fragmented or not. Unlike with a hard drive, nothing has to physically move to reach another sector.
Also, internal wear leveling makes it impossible to write the bits in a specific physical order, which means you can't truly defrag it anyway. The OS cannot choose where data is physically stored on a flash drive.

You will kill your flash drive without any benefit. It really is as simple as that. It is absolutely unnecessary and nearly impossible to defrag a flash drive. Any attempts will only shorten the drive's lifespan.
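
To make that concrete, here is a toy model of the remapping layer inside the drive. It is purely illustrative (no real controller works exactly like this): the OS writes logical blocks 0-3 "contiguously", yet the physical placement ends up scattered, and nothing the OS does can change that.

    # Toy flash translation layer: logical block -> physical block.
    # Illustrative only; real controllers pick blocks by wear, not at random.
    import random

    class ToyFTL:
        def __init__(self, nblocks):
            self.free = list(range(nblocks))
            random.shuffle(self.free)   # stand-in for wear-based block selection
            self.table = {}             # logical -> physical mapping

        def write(self, logical):
            # every (re)write of a logical block lands on a fresh physical block
            self.table[logical] = self.free.pop()

    ftl = ToyFTL(16)
    for lba in range(4):                # the OS writes blocks 0..3 back to back
        ftl.write(lba)
    print(ftl.table)                    # e.g. {0: 11, 1: 3, 2: 14, 3: 6} - scattered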

Vintage!

not so simple

>when this is complete and redone, there is no more movement; the files are optimized, and there is free space at the "beginning" of the drive (or virtual drive)...<

provided you have someone who can find out where the beginning is and where the end is, or what is up and what is down ...

>the result is faster access and less fragmentation.<
Absolutely not. The attempts to defrag flash drives are well meant, but even in the case of some success, the impact will be on writes only, never on read access, since the read process is so fast that no controller can pass the data out in real time. To speed up data transfer on reads, many flash systems even have an additional 8-bit 'half-bus' which assists the original 8-bit bus in throwing the data out to the controller.

>If there is "wear leveling", then there should be an internal record of what has been written to and what still needs writing.<
Sure there is, often more than one; but we are not able to read that info unless we have the production test bench for that particular model.

>The question is: does this mean the drive will just die, or just a few bits at a time?<

Depends on what you mean. If the controller chip dies, then everything is dead. Otherwise, NAND flash chips are not that perfect, so they are allowed to have some bad cells from production already. They have a kind of spare cells, well, not single cells, but groups of them. And when some parts do not work correctly, they are marked as bad and may also be replaced with spares, very similar to what goes on inside a normal hard drive. So in that case the drive will die rather slowly; sometimes the coming death will not be clearly noticed.

Otto Sykora
Basel, Switzerland

thank you

I appreciate your professional approach to this discussion.

I think 10 to 100 thousand writes (or is it reads and writes?) is enough to consider the point moot. I still believe consolidating free space and making files contiguous is worth the small extra wear involved in the process. Maybe the benefit is a small one, or perhaps nothing at all. I'd like to see some scientific studies under controlled conditions. Speed testing before and after, with lots of fragmentation and little free space, etc.

very interesting discussion

(oh, I am referring to the virtual file system, not the random one)

;>jamvaru

Definite benefits

One member found a significant improvement in app speed when he defragged an app's files. See his comments here:

http://portableapps.com/node/20385

Doubtful this would hold true for all apps but it may for those that are large and/or perform a lot of IO.

As for the wear and tear on a flash drive, with prices decreasing every day they are a commodity. When they wear out, replace them with new ones, which in most cases will be bigger, faster, and less expensive.

Ed

there is one important point

in thread 20385.

No defrag was done! The drive was cleared and the files then copied back. That is a very big difference. This kind of defrag happens within the low-level file system of the NAND flash.
Removing a big bunch of files and pasting them back into the now half-empty space on the flash will more or less write the files into adjacent eraseblocks, if possible, thus defragging them in a way.

Otto Sykora
Basel, Switzerland

So it can be done

So defragging a USB drive can be done, and it can yield improvements in accessing the files on it. It just can't be done the same way Windows defrags a hard drive. Instead, use a script to xcopy the USB drive's files and folders to the hard drive, erase the files from the USB drive, and then xcopy everything back to it. A piece of cake to set up.
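
A rough sketch of that round trip (in Python rather than an xcopy batch file; the drive letter and paths are assumptions, it needs Python 3.8+ for dirs_exist_ok, and you should verify the staged copy before deleting anything from the stick):

    # Copy-off / wipe / copy-back round trip for a stick mounted at E:\.
    # Verify the staged copy before clearing the stick!
    import os, shutil, tempfile

    STICK = "E:\\"                                  # hypothetical drive letter
    stage = os.path.join(tempfile.mkdtemp(), "stick")

    shutil.copytree(STICK, stage)                   # 1. copy everything to the hard drive
    for name in os.listdir(STICK):                  # 2. clear the stick (no reformat needed)
        path = os.path.join(STICK, name)
        shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
    shutil.copytree(stage, STICK, dirs_exist_ok=True)  # 3. one big sequential write back
    shutil.rmtree(os.path.dirname(stage))           # clean up the staging copy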

Ed

yes and no

Though it will work only partially, and only on the very cheap ones; and it can work only if a large part, or all of it, is removed and rewritten.
It will still not position anything; it will simply write into adjacent eraseblocks, if they are marked as free, for as long as the data stream keeps coming in. Also, our file allocation table will be rewritten during that procedure, moving from one eraseblock to another the whole time and disturbing the consecutive data writes into the free space. The compulsory backup copy of the file allocation table will do the same.
(We cannot write over existing data; an eraseblock has to be cleared completely before it can be reused.)
So if many small files are written that way, the same or even worse fragmentation can result. If large files are written, this might be more useful.
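
That erase-before-reuse rule is also why small in-place updates are so expensive. A back-of-the-envelope sketch with assumed (but typical) sizes:

    ERASEBLOCK = 128 * 1024       # assumed eraseblock size: 128 KB
    SECTOR = 512                  # one FAT sector update...
    # ...forces a read-erase-rewrite of the whole surrounding block:
    print(ERASEBLOCK // SECTOR)   # 256x write amplification in the worst case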

***None of this helps the data already written on the flash; the access times will stay the same. No improvement in file access will result at all; no improvement is possible.***

The only thing that will marginally speed up is the later writing of large data files, since the empty space may then consist of more connected eraseblocks than otherwise.

More sophisticated ones use a table where they record how much every eraseblock has been used in the past, and they allocate writes to the eraseblocks with the least usage so far; there, all these tricks will not work.

Otto Sykora
Basel, Switzerland

depends on

>10 to 100 thousand writes (or is it reads and writes?)<

No, reading is not meant here; it is relevant only to writes, or more precisely, erase cycles.
If no wear leveling were included, 100 thousand writes would not be much; depending on the application, such a drive might last only a few days or even hours. (Think of the FAT sector, which is rewritten on nearly every file operation: at one update per second, a 100,000-cycle block would be worn out in just over a day.)

>I still believe consolidating free space and making files contiguous is worth the small extra wear involved in the process. Maybe the benefit is a small one, or perhaps nothing at all. <

Not right. If you managed to do that, there might be a significant result; but since we cannot manage to do it, we will not see any results.

>I'd like to see some scientific studies under controlled conditions.<
Sure, there are many, but they cannot be compared well, since everything depends on the actual low-level file system and the logical architecture of the storage, and this is not anything general: every manufacturer can choose his own way of doing things. There is no common system like there is for CDs or floppy disks. So comparing things means comparing different products.
Sure, the manufacturer can defrag free space and do lots of things at the low level, and then see what happens. The point is that we cannot, since the drives have to emulate a normal hard drive or something similar so our operating systems can read them. If you examine such storage with a hex editor, for example, you will find plain sectors, a partition table based on cylinders, heads, and sectors; you might find an MBR and other things as well, simply making it possible to host our FAT or ext2 or NTFS or whatever.
There is nothing in it about zones or domains or eraseblocks or the like. That is why we cannot access them and do any manipulation there.

Otto Sykora
Basel, Switzerland

what happens then?

when you do a defrag, several small chunks are copied to a separate location on the drive, contiguously.

so, isn't this satisfying the concept of adjacent erase blocks? Or, does the drive split the file up into separate, non-adjacent erase blocks randomly, even though you are telling it to write adjacently?

so, the flash drive, if the second is true, does not care what the 'virtual' file system says/thinks, it is always random...?

i think i get it now...

is that right?

so, the virtual fs is only an indicator of the true fs, something windows can manage

the true (random) fs couldn't care less what windows thinks

whatever time it takes to translate from the random to the virtual to windows must be infinitesimally small (wow i spelled that right)

so,

1. the random fs cannot be manipulated at all, because it is random, even in entire move/move operations.
2. no real time is spent in the virtual operations, so none is saved by any modification of the virtual fs, nor any space

i do notice a lag in my firefox, for example, typing this... i'm going to try the move/move test and see if it works.

later,

j

;>jamvaru

yes

>when you do a defrag, several small chunks are copied to a separate location on the drive, contiguously.<

Yes, on magnetic drives this is the aim. On flash it is not so simple, since you cannot give the drive a command telling it to do so.

>so, isn't this satisfying the concept of adjacent erase blocks? Or, does the drive split the file up into separate, non-adjacent erase blocks randomly, even though you are telling it to write adjacently?<

Actually, I don't like calling the file system random, since it is not; it certainly has a proper order.
But I can give you some other examples.
I have an account at www.box.net. I can mount that account as a drive in Windows, and it will look like an additional drive. If I send it a command to write some clusters next to some other clusters, what do you think will happen on the big, powerful server of the www.box.net company? They probably have some very big Unix system there, or whatever. My command will not do anything there.

I have a home file server. It will not let me send such commands either, but even if it did, what would happen? It is a small machine with Linux partitions on it; there is no way Linux will follow any such commands sent from Windows. The drives I can see are simple Samba shares, all just virtual.

Let's take a virtual machine. I have VMware on my XP, and on it I am running another copy of XP. The virtual machine also has drives, and here too I can run the defrag built into Windows. What will happen? It will do something; it will 'move' files together; it will apparently 'defrag' something. But when it is finished, I can look at the partition from the real Windows and will find that it holds a lot of heavily fragmented files. Surprise? No: the defrag was running on the virtual system and not on the real drive. I can now defrag the real partition, and this will make my system faster, since the real seek motions of the hard drive's head may become much shorter.

So you see, performing such supposedly low-level operations on a virtual system has no impact on the real system.

So you cannot, from the side of the virtual system, tell the real system to place clusters in some particular spot; the clusters will not even be the same size.

The file system on the flash will have a certain structure and is in fact not random. Depending on size, price, intended use, etc., it will simply try to assign the next free eraseblock in a particular logical zone when writing, and erase a block once it is considered empty. This alone already yields quite a bit of wear leveling.
Note that wear leveling is not done across the whole chip today, chips are too big for that, but rather across a logical zone only.
More sophisticated setups also keep track of the usage of the individual eraseblocks in one way or another. Writes are then assigned according to predefined priorities, so the wear leveling is more advanced there.
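
A sketch of that more sophisticated allocation, again purely illustrative rather than any real controller's firmware: keep an erase counter per eraseblock and hand out the least-worn free block.

    # Wear-aware block allocation sketch (illustrative; real firmware differs).
    erase_counts = {0: 17, 1: 3, 2: 9, 3: 3}   # per-eraseblock erase counters
    free_blocks = [0, 1, 3]                    # blocks currently marked free

    def pick_block():
        # predefined priority: the free block with the fewest erases wins
        return min(free_blocks, key=lambda b: erase_counts[b])

    block = pick_block()        # -> block 1 (or 3): least worn
    free_blocks.remove(block)
    erase_counts[block] += 1    # it will be erased before reuse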

>so, the virtual fs is only an indicator of the true fs, something windows can manage<

Yes, our operating systems can make sense of things like sectors and clusters, common on magnetic hard drives, as a base, and then build a file system on top of that.
They can also handle things like the CD file system, probably because when the CD was invented, the manufacturers agreed on a common system for presenting the contents to the user's system.

The flash people seem not to have done so; therefore everybody can do what he wants: use more complex wear leveling as suggested by the manufacturer of the actual flash chip, or leave it out completely.

>i do notice a lag in my firefox, for example, typing this... i'm going to try the move/move test and see if it works.<
Yes, but that is something else; it's a kind of bug apparently common to all Mozilla apps, regardless of where they are stored.

Otto Sykora
Basel, Switzerland