With a lot of writing and deleting on a hard disk one should defrag regularly. I have started using USB sticks a lot and would be grateful to know what maintenance one should do.
While it is flash memory being used, I imagine that a space is left by a deleted file and that space is not necessarily used by another. So I would assume fragmentation still occurs and shortens the life of a USB stick.
thanks
No. Fragmentation only affects allocated data. Unallocated data is meaningless in this regard since it can be overwritten. Not that fragmentation affects the life of a disk anyway. Do you even know what fragmentation is?
Vintage!
Do you even know what fragmentation is?
Good question...I'm not sure of that either.
jessejazza: fragmentation is really only an issue on hard drives or media where something has to physically move to get to the data (the heads in a hard drive, for example). The more they have to move to read data that is spread out over the platters and non-contiguous, the longer the seek time and the longer it takes to read said data in. Electronically accessed media like a memory stick doesn't really "need" to be defragmented since the media access is all electronic and nothing actually physically moves.
Besides, defragmenting a USB memory stick would just add more write cycles to the memory on top of what you already do and reduce the flash memory's life expectancy.
Cancer Survivors -- Remember the fight, celebrate the victory!
Help control the rugrat population -- have yourself spayed or neutered!
I know what fragmentation is on hard drives. The time isn't so much the issue as the space left behind. As I understand it, when one deletes files a space is left, and only a smaller file can be put in that space. With lots of transferring of data one can be left with spaces between data which can add up to quite an amount, depending on the size of the files.
Defrag simply lifts files and puts each file end-to-end, freeing up space.
But how does a memory stick actually store data? When something is deleted, do all the files move to take up the space left, or... what?
thanks
They don't move to "take up the slack". They stay where they are and there's nothing wrong with that as I explained above. Having pieces of your files sprinkled everywhere on a solid state memory device doesn't matter.
One of the BIGGEST software ripoffs ever was computer memory defragmenters that were sold to unsuspecting people years ago (and after looking at this Google search, they're still out there being sold). Read the description of some of these programs for a good laugh.
http://www.google.com/search?source=ig&hl=en&q=defragment+computer+memor...
Cancer Survivors -- Remember the fight, celebrate the victory!
Help control the rugrat population -- have yourself spayed or neutered!
Well, I think I've got an analogy to explain exactly what fragmentation is.
You have a desk (storage media) with a lot of drawers (clusters), each having the same space volume (4KB). Imagine that you have, for example, a bunch of t-shirts (a file) to put in. You'll try to put them in the same drawer so you can have access to them all in one place.
But if you have too many t-shirts (a big file), you can't put them all into one drawer, but into several. For easy access, you'll want to group them in the following drawers. But if the next drawer was already filled with pants (another file), you'll have to scatter your t-shirts into another drawer.
For future reference, you could put a Post-It/label in the drawer with the location of the next drawer with t-shirts (a reference pointer) so you can open it without searching too much (for the entire file).
Since you have several drawers with t-shirts, it will take you some time to manually open each drawer (seek time in hard drives), which creates some latency.
Back to flash drives, imagine that your desk has a control panel with several buttons labelled t-shirts, pants, etc. When you press the t-shirts button, all the drawers with t-shirts open almost instantly. Since you don't have to open them manually, it doesn't matter whether they are scattered across several drawers or not.
I hope you got the point.
PS: What makes a USB thumbdrive slow isn't fragmentation, but the quality and speed of the storage media. USB thumbdrives are relatively cheap nowadays, but their speed isn't great compared to SSD drives.
I think he gets that "defragmenting" electronic flash media can be done, but isn't needed at all... that's why the name of this thread is "USB - defrag equivalent"
I, personally, can't help you much, jessejazza. But you're asking if there's "phantom" space left on your USB drive after you delete/modify a file... and if so, if there's a way/utility that can remove the "almost-empty" space...
That sound about right?
American by birth.
Christian by choice.
You could always just move everything off the USB drive and then move it back. This will defragment the files and move all empty space to one end of the drive.
Just so you know, when you save a new file that's larger than a previously deleted file, it will use the first available space and the overflow will go to the next available space until the file is saved; that's how fragmentation happens.
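To picture what that first-available-space behaviour does, here is a tiny Python sketch of a first-fit allocator (the 12-cluster "drive", the file names and the routine itself are made up purely for illustration; this is not how FAT is actually implemented):

    # Toy drive of 12 clusters; each slot holds a file id or None (free).
    drive = [None] * 12

    def first_fit_write(drive, file_id, clusters_needed):
        """Fill free clusters from the start; overflow spills into later gaps."""
        placed = []
        for i, slot in enumerate(drive):
            if slot is None:
                drive[i] = file_id
                placed.append(i)
                if len(placed) == clusters_needed:
                    break
        return placed

    def delete(drive, file_id):
        for i, slot in enumerate(drive):
            if slot == file_id:
                drive[i] = None

    first_fit_write(drive, "A", 4)   # A A A A . . . . . . . .
    first_fit_write(drive, "B", 4)   # A A A A B B B B . . . .
    delete(drive, "A")               # . . . . B B B B . . . .
    first_fit_write(drive, "C", 6)   # C C C C B B B B C C . .  <- C is now in two pieces
    print(drive)

File C ends up split around B, which is exactly the situation a defragmenter tidies up on a hard disk.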
Fragmentation doesn't matter on a flash or solid-state drive. It only affects drives that have to move parts to read different sectors. To defrag a flash drive will not only not improve performance, it will significantly shorten the drive's life.
--
If there were a serial make-up artist putting make-up on random women, you'd expect to see millions of women walking around with make-up on. What do you find? Millions of women walking around with make-up on.
Vintage!
Thanks for your comments and putting me right. I was just wondering that's all.
This post is full of warnings against defragging a USB flash drive of any shape and size:
https://portableapps.com/node/15899
An Old Irish Blessing
May the road rise up to meet you. May the wind always be at your back. May the sun shine warm upon your face, and rains fall soft upon your fields. And until we meet again, May God hold you in the palm of His hand.
MickeyJ4J
Some smart defrag software will even simply refuse to do any defrag operations on flash; it will gray out the drive or mark it in a different colour, etc., and so stop the less informed user from further damaging his flash stick.
Otto Sykora
Basel, Switzerland
But not all of them do, so it is nice to give people warnings just in case; with so many different programs available, someone will just download another one that will let them, and not realise.
An Old Irish Blessing
May the road rise up to meet you. May the wind always be at your back. May the sun shine warm upon your face, and rains fall soft upon your fields. And until we meet again, May God hold you in the palm of His hand.
MickeyJ4J
It seems to improve performance to sort by file/folder, at least when transferring whole folders rather than just one file at a time, or when playing a movie.
There must be some time savings associated with contiguous files.
Are there benchmarks for this? Another consideration is the cost of your flash drive.
A cheap one can be thrown away and another bought.
Though, losing data is a bummer. (see 2nd question)
so, I suppose the idea of using a flash drive as virtual memory is anathema?
will the whole drive die or just parts of it at a time?
;>jamvaru
When it comes to reading files off a flash drive, there is no benefit at all in them being sorted. In fact, for read operations, there isn't even much benefit in them being defragmented. It saves on a few IO syscalls, but the actual read time is barely different. On traditional disks, it is the seek times that cause such a performance hit with fragmentation, and flash doesn't suffer from seeking.
However, for write operations, the time taken is proportional not to the size of the file, but to the number of controller domains that need to be written. These domains are typically somewhere around 64k in size, whereas the filesystem clusters are 4k in size.
That means that for a clean flash drive, writing a large file to a single contiguous space, the time taken is proportional to the file size.
For the worst possible case on a very badly fragmented drive, it is possible that all the clusters of the file could be written to different domains, meaning that the overall write takes sixteen times longer.
So, basically, for a file that you are not going to write to (movie, music etc.) fragmentation is unimportant on a flash drive. For a database file that changes a lot, it is more important to defrag it, and for writing new files to the drive, it is most important to defragment the free space, to maximise the chances that it will find a single space large enough.
With regard to lifespan: the flash chips in most drives are good for around 100,000 write operations. USB connectors are required by the specification to be good for around 1,500 insert/remove operations. So unless you re-write to the same place on the disk more than 60 times every time you plug it in, your drive will die of mechanical failure (usually the circuit tracks breaking at the end of the connector) well before the flash chips wear out. Indeed, every dead drive I've seen so far has been a mechanical failure or was trashed by removal while writing. I've never seen one yet that had just worn out its flash.
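The arithmetic behind that "more than 60 times" figure is easy to check; a quick Python sketch (using the rough cycle counts quoted above, not data-sheet values for any particular drive):

    flash_write_cycles = 100_000    # rough endurance per flash location, as quoted above
    connector_insertions = 1_500    # rough USB connector rating, as quoted above

    # Rewrites to the same spot you could "afford" per plug-in before the
    # flash would wear out ahead of the connector.
    print(flash_write_cycles / connector_insertions)   # ~66.7, i.e. "more than 60"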
Note: this all applies to standard consumer-grade USB thumbdrive type flash devices. For hard-drive-replacement SSD class devices, the rules are totally different and none of the above applies.
But in any case this is not so simple on NAND flash, as it does not have the address and data bus known from other computer devices, and thus does not use a conventional address counter to go to a certain position and, when finished there, move on to the next, etc. The multiplexed bus does all the operations in more or less the same time, or at least much faster than the cells are able to do the actual job. You can imagine it as a kind of serial bus or similar, with the controller just throwing commands with data onto it. So there is no time difference between writing 3 zones in one corner, then 3 zones in another corner and 1 zone in the middle, and writing them all in one corner.
>and for writing new files to the drive, it is most important to defragment the free space, to maximise the chances that it will find a single space large enough.
Otto Sykora
Basel, Switzerland
there is really no reason to defrag at all, except for speed, and it looks pretty and is hypnotizing
so, what are the speed issues in flash drives?
there is a virtual file system over a random one
each would have a rated speed of operation. The random one might be limited by things a defragmenter cannot address, such as cluster size, domain size (a larger grouping of clusters). On my drive I've seen 8 or so red areas of exactly the same size. Perhaps these are bad domain areas?
The virtual drive manager might benefit from a logical reorganization of the contents of the drive, such as sorting and defragmenting. The speed benefit might not be as great as on a disk drive, but it would have to be something, a few nanoseconds, perhaps.
surely there is a benchmarking of these ideas?
harumph...
;>jamvaru
This is a reasonably detailed post that will explain the process of defragmenting and explain why defragmenting a flash drive is pointless.
http://www.worldstart.com/tips/tips.php/4663
We should forget about small efficiencies — Donald Knuth
Hi,
Unfortunately, the author of that article is wrong.
While most of what he says is correct, he makes one flawed assumption.
He assumes that there is no extra work to do when reading / writing to scattered areas across a flash drive, since there is no physical head to move.
This is untrue.
The smallest area that a hard drive can write is a single sector, 512 bytes.
The smallest area that a windows filesystem (fat or ntfs) can write is a cluster, usually 4096 bytes. Since clusters are made up of contiguous blocks of sectors, these fall together on the disk, and the time to write them is seek-time + sector-write x 8. So, for fragmented files, you get many seeks, and a lot of wasted time over and above the required write time.
The smallest area that a usb flash drive can write is a controller domain, typically around 65,536 bytes. This is much larger than a filesystem cluster.
This means that if you write a small file, say 1k in size, the filesystem uses one cluster for it and thinks it has to write 4k, but the flash controller ends up writing a whole 64k domain. The time taken to write this domain is pretty much fixed, no matter whether you are writing to one 4k cluster within it or to 16 of them.
That means that for a file consisting of, say, 128k of data, that is 32 4k clusters, the write time to flash can be anything between 2 x domain-write-time (for the ideal case) and 32 x domain-write-time, if the file is badly fragmented and each cluster is in a different controller domain.
That means that on typical current hardware, a badly fragmented drive can show anything up to a 16x performance degradation compared to a clean one for write operations.
In other words, if the free space on your drive is fragmented, writing will be slower than it could be.
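To put rough numbers on it, here is a small Python sketch of the best-case versus worst-case domain writes for the 128k example above (the 4 KB cluster and 64 KB domain sizes are the typical figures mentioned earlier; real controllers vary):

    CLUSTER = 4 * 1024           # filesystem allocation unit
    DOMAIN = 64 * 1024           # smallest unit the flash controller rewrites

    def domain_writes(file_bytes, clusters_per_domain_touched):
        """Number of 64k domains rewritten, given how many of a file's clusters
        happen to share each domain (16 = perfectly contiguous, 1 = fully scattered)."""
        clusters = -(-file_bytes // CLUSTER)                  # ceiling division
        return -(-clusters // clusters_per_domain_touched)

    size = 128 * 1024
    print(domain_writes(size, DOMAIN // CLUSTER))   # best case: 2 domain writes
    print(domain_writes(size, 1))                   # worst case: 32 domain writes, 16x slower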
In usual usage patterns, for most people, it is unlikely that their flash drives will ever get fragmented enough for this to matter, so, in general, it isn't a big issue if you never defrag.
For heavy users, especially if their drives get close to full, which tends to induce a lot more fragmentation, then it can become useful, and performance enhancing, to defragment the drive on occasion. Even for heavy users, I would recommend that you only defrag after checking that the fragmentation levels are quite bad, and that you only even bother to check this once every 3-6 months.
Also, for the people who claim that defragmenting doesn't help since the filesystem is "virtual" over a "random" layout due to wear leveling. Please be aware that for most consumer drives, if there is any wear leveling done at all, it is done at the controller domain level. i.e. there will be 70-80k of flash behind the 64k domain, which is rotated through when writes are made. This means that within each domain the wear is distributed, but that sequential file system clusters are still always contiguous at the domain level.
But there is simply no known mechanism that would enable the user to defragment anything on a NAND flash, nor is it possible to find out whether something is fragmented or not.
While earlier almost all tasks were done by the actual controller of the flash, in today's chips more and more of the organizing is done directly on the flash chip itself, so even the controller has only partial access to what is going on.
All the fragmentation we can see or read or change is the fragmentation at the file system level, since we have no software other than common operating systems, which are simply not able to read any fragmentation or similar from a flash drive.
If you take some defrag program running under Windows, let's say, it will show you how the files are placed so that Windows can understand it. This is just a fake and gives no picture of what is on the drive at all. The idea that you can somehow see that some file is written over so and so many domains (or whatever your manufacturer likes to call them) is nice, but it is just fake and virtual. It has nothing to do with real life, and this you will probably not discover unless you have a test environment not only for the controller but for the NAND chips as well.
There is apparently something in development called jffs3 for Linux, but in all other cases there is no such thing as a fragmented drive.
And consider for example 'mixed' wear leveling: the use of some static wear leveling procedures in fact deliberately produces fragmentation in the cyclic reallocation of some data.
Otto Sykora
Basel, Switzerland
All of the on-chip smarts, and the controller smarts, and the flash-designed filesystems such as jffs[23] come down to management of what I was calling the controller domains, in the above post. These are what the JFFS people refer to as eraseblocks.
NAND Flash chips, and controllers, do not shuffle data around between these eraseblocks internally; it is fundamental to the chip design that they can only be accessed blockwise in these large chunks. The chips themselves only allow entire domains to be erased in toto, not bit by bit (or byte or sector or cluster).
JFFS attempts to optimise flash access (and level wear) by always ensuring that writes are to the next empty block in the cycle. It uses idle time to pick out partially used blocks to concatenate the content together into new blocks on the end of the cycle, freeing up whole blocks for use on the next rotation.
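A very rough Python sketch of that cycle, purely to illustrate the idea (this is not the actual JFFS2 code or its data structures, just a toy model of append-to-the-next-empty-block plus an idle-time compaction step):

    from collections import deque

    BLOCK_CAPACITY = 4                    # toy number of data chunks per eraseblock
    free_blocks = deque(range(8))         # rotation of erased, empty blocks
    blocks = {}                           # block id -> list of [chunk_id, live?] entries
    current = None

    def _next_block():
        global current
        current = free_blocks.popleft()
        blocks[current] = []

    def write_chunk(chunk_id):
        """Append-only: new data goes into the current block, moving on to the
        next empty block in the cycle when it fills up."""
        if current is None or len(blocks[current]) >= BLOCK_CAPACITY:
            _next_block()
        blocks[current].append([chunk_id, True])

    def delete_chunk(chunk_id):
        """Deleting only marks the old copy dead; nothing is rewritten in place."""
        for entries in blocks.values():
            for e in entries:
                if e[0] == chunk_id:
                    e[1] = False

    def garbage_collect():
        """Idle-time step: copy live chunks out of partially-dead blocks into
        fresh blocks at the end of the cycle, then free the old blocks whole."""
        victims = [b for b, entries in blocks.items()
                   if b != current and any(not live for _, live in entries)]
        for blk in victims:
            for chunk_id, live in blocks.pop(blk):
                if live:
                    write_chunk(chunk_id)
            free_blocks.append(blk)

    for i in range(6):
        write_chunk("file-%d" % i)
    delete_chunk("file-1")
    garbage_collect()                     # the block holding file-1 is reclaimed whole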
What matters for performance, in a FAT or NTFS environment, is that you do not get into a situation where the free space is scattered around in partial blocks, or where the large file you are working on is scattered across more blocks than it needs to be.
I completely agree that you cannot have control over where a file is, or whether it is totally contiguous or not. In fact you are correct in that the terms are largely meaningless with flash drives.
However, what you can do is to ensure that the file (or free space) is as un-scattered as possible, since each individual eraseblock is presented to the storage device driver, and to the filesystem, as being contiguous within itself.
In other words, if you defragment a file, it will utilise space in fewer eraseblocks than if you leave it as made up of 4k clusters. It doesn't matter if these blocks are contiguous, or if they are scattered across every chip in the drive.
Likewise, compacting the files together to remove small gaps will cause the free space to consist of a greater percentage of totally empty blocks, meaning that new files will cover fewer blocks, and therefore be faster to write.
Defragmenting isn't ideal. It does cause unnecessary writes, it does move data that is an entire block-worth. But, in the case of a badly fragmented drive, it can show significant write-performance increases. Until someone writes an app that can interface with the USB mass storage driver to determine the exact boundary points for the eraseblocks, and then defrag/optimize based on those regions, filesystem defragmenting is the only choice. Though, as I mentioned, most people won't need to do it... ever.
Finally, as I've mentioned in other posts, but forgot to include above. All this is about cheap consumer thumbdrives. High speed SSD hard drive replacement units are a totally different beast, with far more complex controllers that behave very differently internally, and all bets are off for working with them.
I understand now what you mean.
Otto Sykora
Basel, Switzerland
I'm going to defrag my stick
The major annoyance is that an operating system chooses to write a file in a fragmented manner, rather than finding the "first" free space large enough to store it in its entirety.
I suppose it is something they "try" to do. Just as the flash manufacturers and so-and-so's TRY to make their drives work better, save the life of the cells, not need defragmentation, etc.
And we hackers, and users, want to be able to stay ahead of this curve of learning the manufacturers impose on us, hence a plethora of defragmenters.
What can we do? Until some magic bullet appears that can manage this "random" chaotic mess inside a flash drive to improve performance and save space, we have the option of using our "virtual" defragmenters to do some of the work needed.
I have decided to move all my data on my drive to the "end" and run a batch of "defrag-only" followed by "move-up" on a regular basis. Once the data is at the end of the drive, there is plenty of space for windows to do its thing, make its mess.
It has been pointed out that flash memory is not RAM, exactly. There is also a virtual file system between it and the user. Defragging this and moving it to the end of the drive seems to be a logical choice. What else can be done? Isn't doing SOMETHING better than doing nothing? You want a mishmash of files all broken up and making new files break up due to lack of contiguous free space?
You are aiming for intentional fragmentation. Yet you say this is a good thing or at least not a bad thing. It seems there is a consensus that there is some performance to be gained from a defragmentation scheme.
What scheme would you use? If you weren't SCARED to do it!
;>jamvaru
>I have decided to move all my data on my drive to the "end" and run a batch of "defrag-only" followed by "move-up" on a regular basis. Once the data is at the end of the drive, there is plenty of space for windows to do its thing, make its mess.
Otto Sykora
Basel, Switzerland
Defraggler (from the developers of CCleaner) will allocate data to the beginning of the disk. I do not know if this applies to flash drives. Windows 2000, XP, 2003 and Vista. 64-bit OSs are also supported. Defraggler.com
We should forget about small efficiencies — Donald Knuth
On a hard drive there are cylinders, heads and sectors, which give an approximate idea of where the data might be. Approximate, because here too this is kind of virtual; we don't have 255 heads in a drive, after all, and spare sectors are kept in reserve in case some have to be marked as unusable, etc.
On flash there is no such thing, so I just wonder to which cylinder, head... the data are transferred when it says it brings them from sector 801 to sector 500.
Otto Sykora
Basel, Switzerland
Each cell/block on a flash drive can survive tens or hundreds of thousands of writes before it dies. Defragmenting a flash drive can significantly shorten the lifespan of the device. Defragmenting a flash drive will not give you any noticeable increase in performance, no matter how fragmented it may be. Much like with an iPod, it would be more feasible to simply cut/paste all of the data contained on the device to the computer, reformat/restore the flash drive or iPod, and then reload the data back onto the device.
iPods are not recommended to be defragmented because the hard drive is not meant for such a vigorous operation. A hard drive in a computer is larger and has larger components. A hard drive in an iPod is quite petite and is meant for the daily seek of music/pictures/video on the device for a short duration. If iPods were meant to be defragmented, they would come with a defrag program within the device OS.
We should forget about small efficiencies — Donald Knuth
It depends on how the operating system and its filesystem are 'talking' to the actual storage system on the hard drive.
For example: there was once someone who created a defrag program for Linux and was surprised that nobody took him seriously, since normally no defrag is needed on Linux.
Probably the same is valid for the iPod too; the system is probably designed so that no fragmentation will occur.
Otto Sykora
Basel, Switzerland
probably
basically, it annoys me to see the yellow color on my jkdefrag
;>jamvaru
>windows accesses the virtual file system, not the random one, so if you defrag the virtual file system (whatever that does to the random one, i don't know, don't care) then windows will be happier the next time it has to write (or read) from the virtual file system as the file will be all in one piece and there will be lots of contiguous space to write a file in one write operation, not lots of little spaces requiring windows to break the file into separate chunks and write to several locations.
Otto Sykora
Basel, Switzerland
Theory
By this you might get some degree of defragmentation, though you will never be able to see the results really.
Reality
this helped me greatly and I am back to micropauses instead of 10 second long pauses. https://portableapps.com/node/20385
A 1MB file stored as 1,000,000 separate bytes will take longer to read than one stored as a single 1,000,000-byte file, regardless of the media it's stored on. Nanoseconds add up to seconds when there are enough of them.
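If anyone wants to test that claim on their own stick, here is a quick-and-dirty Python timing sketch (paths, file counts and sizes are placeholders; it scales the "million separate bytes" down to a few thousand small files so it finishes in reasonable time):

    import os, time, tempfile

    root = tempfile.mkdtemp()            # point this at a folder on your flash drive instead
    N, CHUNK = 2000, 512                 # 2000 small files vs. one file of the same total size

    with open(os.path.join(root, "big.bin"), "wb") as f:
        f.write(os.urandom(N * CHUNK))
    for i in range(N):
        with open(os.path.join(root, "small_%d.bin" % i), "wb") as f:
            f.write(os.urandom(CHUNK))

    t0 = time.perf_counter()
    with open(os.path.join(root, "big.bin"), "rb") as f:
        f.read()
    t1 = time.perf_counter()
    for i in range(N):
        with open(os.path.join(root, "small_%d.bin" % i), "rb") as f:
            f.read()
    t2 = time.perf_counter()
    print("one big read: %.4fs   many small reads: %.4fs" % (t1 - t0, t2 - t1))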
Ed
Some people too, though there is no way to do it!
>Reality
>this helped me greatly and I am back to micropauses instead of 10 second long pauses. https://portableapps.com/node/20385
>A 1MB file stored as 1,000,000 separate bytes will take longer to read than one stored as a single 1,000,000-byte file, regardless of the media it's stored on. Nanoseconds add up to seconds when there are enough of them.
Otto Sykora
Basel, Switzerland
How about a script (roughly sketched below) that:
1. runs jkdefrag analyze only
2. selects text of files that are fragmented
3. copies those files to hard drive
4. erases files
5. copies files back to flash drive
6. erases files on hd
it could repeat this on a timer basis, say every half-hour?
(I know you're going to say: but jkdefrag really has no idea of the fragmentation level of any file on a flash drive; OK... had to try.)
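For what it's worth, a bare-bones Python sketch of the copy-off/copy-back part only (it skips the JkDefrag analysis entirely and just round-trips everything; the drive letter and staging folder are placeholders, and as noted earlier this rewrites every file, so it is exactly the extra wear people are warning about):

    import os, shutil, tempfile

    FLASH = "E:\\"                                   # placeholder: your USB drive
    staging = tempfile.mkdtemp(prefix="usb_roundtrip_")

    def copy_all(src_dir, dst_dir):
        for name in os.listdir(src_dir):
            src, dst = os.path.join(src_dir, name), os.path.join(dst_dir, name)
            if os.path.isdir(src):
                shutil.copytree(src, dst)
            else:
                shutil.copy2(src, dst)

    copy_all(FLASH, staging)                         # 1. copy everything off the stick
    for name in os.listdir(FLASH):                   # 2. wipe the stick
        path = os.path.join(FLASH, name)
        shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
    copy_all(staging, FLASH)                         # 3. copy it back, each file contiguous
    shutil.rmtree(staging)                           # 4. clean up the staging copy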
defrag addict
;>jamvaru
>i know you're going to say: but jkdefrag really has no idea the defragmentation level of any file on a flash drive, ok... had to try)
Otto Sykora
Basel, Switzerland
you will not get any improvement in accessing your files
You must be a professor. You know the theory well but fail to acknowledge the results of real world experiences.
https://portableapps.com/node/20385
Ed
Haha, then you must be a creationist, because there is NO evidence whatsoever that it really was fragmentation causing the problems:
Quote: "This more than likely was caused ...."
@Ed: there are thousands of people in the world who will tell you that their PC works much faster when they defrag their RAM.
In fact there are some people I know who definitely achieved the same by holding an Australian opal stone in their mouth during the boot process of their machine!
And well, yes, you can read that thread all the way down, and you will discover what the secret of all that was.
>You must be a professor.
Otto Sykora
Basel, Switzerland