I have heard it is possible to set the cluster size when formatting an NTFS partition. I am going to create a partition for video files, which are normally several hundred megabytes and sometimes gigabytes in size. So is it correct that I should use as large a cluster size as possible to make the hard drive faster? Any recommendations?
-
Ronny
-
ronnylov,
Although I have only set cluster sizes on FAT systems, I would think it is possible for NTFS as well. You might give Partition Magic a try and see.
However, as far as speeding up your hard drive goes, I do not think larger cluster sizes will accomplish that.
IMHO the most critical item for disk speed is keeping the HDD defragmented. Average seek time for the entire disk is set by several mechanical and electrical conditions. To illustrate, a 9ms-seek drive will still take 9ms to seek to a 512-byte cluster or a 64K-byte cluster. So in an absolute worst-case scenario, where you have small cluster sizes and a highly fragmented disk, the HDD has to make many, many more 9ms seeks to run all over the disk and retrieve your data, and those 9ms seeks add up. However, if you have small cluster sizes and a completely defragmented HD, the HDD can pull in your data very rapidly because it does not have to jump all over the disk to gather it. The time to transfer a 512-byte cluster versus a 64K-byte cluster is insignificant compared to the 9ms seeks needed to move the read head all over the disk.
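As a back-of-envelope sketch of the point above (using the 9ms figure from the example, not a measured drive spec), the seek overhead scales directly with the number of fragments:

```python
# Hypothetical calculation: total seek overhead for a file split into
# N scattered fragments, using the 9 ms average seek time from the example.
SEEK_MS = 9

def seek_overhead_ms(fragments: int) -> int:
    """Milliseconds spent on seeks alone to gather `fragments` pieces."""
    return fragments * SEEK_MS

# A badly fragmented file in 1000 pieces vs. one contiguous run:
print(seek_overhead_ms(1000))  # 9000 ms of pure seeking
print(seek_overhead_ms(1))     # 9 ms
```

Nine seconds of seeking versus nine milliseconds, before a single byte of payload transfer is counted.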
IMHO the biggest impact of large cluster sizes is on wasted disk space. If you have many files that are smaller than the cluster size, you effectively lose available disk space. For example, if your cluster size is 2K and you have 1000 files of only 1024 bytes each, you are losing 1024 bytes of available disk space per file.
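That slack-space loss is easy to work out. Here is a small Python sketch of the 2K-cluster example above (the helper name is illustrative):

```python
# Slack space: the unused tail of a file's last cluster.
def slack_bytes(file_size: int, cluster: int) -> int:
    return (cluster - file_size % cluster) % cluster

# 1000 files of 1024 bytes each on 2K (2048-byte) clusters:
per_file = slack_bytes(1024, 2048)
print(per_file)          # 1024 bytes wasted per file
print(1000 * per_file)   # 1024000 bytes (~1 MB) of lost disk space
```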
However, in video editing, where file sizes are quite large, the clustering effect is not very great. Another key point to keep in mind is cluster allocation by the OS. The OS is usually very simple-minded when trying to figure out how to allocate HD space for files. It generally grabs the first available cluster, then the next, and so on until the file is completely stored on the HDD. This should bring fragmentation to mind, that is, files stored in non-contiguous clusters/sectors. Remember, each seek from a worst-case position can take 9ms (for example). The message is that a HDD should be tuned for the application at hand and routinely defragmented in order to keep its performance at optimum. -
With larger clusters each file uses fewer clusters, so the file may be less fragmented. So I gain speed through fewer fragments but I lose more space, though with large files the loss is not big compared to the file size. But I have also heard that some defragging software does not like cluster sizes larger than 4K in NTFS. And if I can't defrag, then I lose speed. However, the built-in defragger in WinXP should work with larger clusters. I guess defragging will be faster with fewer clusters to handle, so if the defragger works, then a big cluster size is better?
So I guess it is a compromise. I have googled a bit and it seems that a cluster size of 64K may be a good compromise for video editing and video capture. But the system partition will benefit from a smaller cluster size because of its many small files.
Ronny -
ronny,
Right, it is a compromise at best. If video capture and editing of large files is all that is done, then I agree: the bigger the cluster the better (from a fragmentation perspective).
Losing space only matters if you have a lot of small files (smaller than the cluster size, or not exact multiples of it). But if you have ~1 GB files, for example, and a cluster size of, say, 64K, then the only possible spot for loss is the last cluster. If the file does not completely fill that last cluster, the remainder is lost, which is insignificant compared with the total file size.
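To put a number on that last-cluster loss (assuming a hypothetical file of exactly 1,000,000,000 bytes on 64K clusters):

```python
# Round the file size up to whole 64 KiB clusters and see what the tail costs.
cluster = 64 * 1024
file_size = 1_000_000_000                 # a ~1 GB video file
clusters_used = -(-file_size // cluster)  # ceiling division
waste = clusters_used * cluster - file_size
print(waste)                              # 13824 bytes
print(100 * waste / file_size)            # ~0.0014 percent of the file
```

At worst the loss per file is one cluster minus one byte (just under 64K), still a rounding error against a gigabyte.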
Having used the XP defragger from M$, it wants to group your files into three categories of use: system files, executables and data. Those that get accessed the most or require immediate access (i.e., system/executable) get primo spots on the disc, while poor data files get the non-primo spots because they do not require (presumably) the same timing considerations as executables and system files. It is a strategy, but one that I do not wholly agree with. Now complicate this with the fact that many files have been designated as unmovable, so they stay exactly where they were when they were installed, regardless. The defrag software has to work around these problem children as well. Whether or not the defrag is quicker is debatable. The more often it is done (i.e., the less likely any severe fragmentation has had a chance to occur), the less there is to do, hence it is perceived as quicker.
64K as a cluster size just happens to match, most often, the maximum amount that can be read with one seek. As a test, write a small program to do a direct read of a file into a 128K buffer and try to read it all at once. The system will make at least two reads to get all of that data, depending on how the file has been stored (i.e., spread across non-contiguous clusters/sectors).
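That test can be sketched roughly like this in Python. Note this counts read() calls at the application level; the number of physical seeks underneath still depends on how the clusters are laid out on disk:

```python
# Read a file through a 128 KiB buffer and count the read() calls needed.
BUF = 128 * 1024

def count_reads(path: str) -> int:
    reads = 0
    with open(path, "rb") as f:
        while f.read(BUF):
            reads += 1
    return reads
```

A 300 KiB file, for instance, needs three passes through this buffer even before any fragmentation comes into play.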
Personally, on a home computer that has many uses, I think a 64K cluster is a bit overkill (unless the disk is SOLELY dedicated to video work). My system is set to use 8K clusters, which I find a good compromise size. While it is not optimum for video processing, video processing is but one small portion of what my computer is used for. Under no circumstances would I suggest going below 2K clusters unless one likes to hear their HDD thrashing all of the time. Take a system with a small amount of RAM plus the demands of the page/swap file, and it is not difficult to see the problems encountered there. That is a good reason to have a dedicated video disk, separate from the OS disc.
What I am leading to is that even though there are separate partitions, one for video and one for the OS, there is STILL only one set of read heads, and it must split its time between the necessary OS puts and takes and the demands of disc access for video processing. Ever wonder why the HDD light stays on all of the time?
Ed. -
Here is the syntax, with some explanations, in XP. The allocation-size parameter and its consequences are the parts that may be useful here.
---------------------------------------------------------
C:\Documents and Settings\Ripper>format /?
Formats a disk for use with Windows XP.
FORMAT volume [/FS:file-system] [/V:label] [/Q] [/A:size] [/C] [/X]
FORMAT volume [/V:label] [/Q] [/F:size]
FORMAT volume [/V:label] [/Q] [/T:tracks /N:sectors]
FORMAT volume [/V:label] [/Q]
FORMAT volume [/Q]
volume Specifies the drive letter (followed by a colon),
mount point, or volume name.
/FS:filesystem Specifies the type of the file system (FAT,
FAT32, or NTFS).
/V:label Specifies the volume label.
/Q Performs a quick format.
/C NTFS only: Files created on the new volume will be compressed
by default.
/X Forces the volume to dismount first if necessary. All opened
handles to the volume would no longer be valid.
/A:size Overrides the default allocation unit size. Default settings
are strongly recommended for general use.
NTFS supports 512, 1024, 2048, 4096, 8192, 16K, 32K, 64K.
FAT supports 512, 1024, 2048, 4096, 8192, 16K, 32K, 64K,
(128K, 256K for sector size > 512 bytes).
FAT32 supports 512, 1024, 2048, 4096, 8192, 16K, 32K, 64K,
(128K, 256K for sector size > 512 bytes).
Note that the FAT and FAT32 file systems impose the
following restrictions on the number of clusters on a volume:
FAT: Number of clusters <= 65526
FAT32: 65526 < Number of clusters < 4177918
Format will immediately stop processing if it decides that
the above requirements cannot be met using the specified
cluster size.
NTFS compression is not supported for allocation unit sizes
above 4096. -
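As a sketch of how those switches fit together, the helper below is hypothetical; it just builds the command line and checks the requested size against the NTFS values listed in the help text above:

```python
# Allocation unit sizes the format help above lists for NTFS.
NTFS_SIZES = ["512", "1024", "2048", "4096", "8192", "16K", "32K", "64K"]

def format_command(volume: str, size: str) -> str:
    if size not in NTFS_SIZES:
        raise ValueError(f"NTFS does not support a {size} allocation unit")
    if size not in ("512", "1024", "2048", "4096"):
        # per the help text: compression is unavailable above 4096
        print("note: /C (compression) will not be available at this size")
    return f"format {volume} /FS:NTFS /A:{size} /Q"

print(format_command("E:", "64K"))  # format E: /FS:NTFS /A:64K /Q
```

So a 64K video partition would be formatted with `format E: /FS:NTFS /A:64K /Q`, with the trade-off that NTFS compression is off the table.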
Thanks for the help!
I have 2 separate drives in my system and have ordered a third disc. I have my OS and programs on one partition of the main drive and the rest of the disc as a storage partition. My second hard drive is for video only and my third drive will also be video only. I have noticed that editing is faster when the source files are on one physical disc and the destination files on another, so I want to separate my video files from the main drive. I have ordered a 160 GB Seagate SATA disc with 8MB cache. I don't think the extra cache will help much, but the difference in price was not much either. I already have a SATA controller on my motherboard, but I use one of the channels with an IDE converter for my DVD burner. I will not use RAID because I think it is safer and better to work from one source disc to another destination disc when editing video. So I guess it is quite a good idea to format my video hard drives with a big cluster size like 64K. I will not use NTFS compression and I will disable the indexing service on those drives. I will format to NTFS because I use files larger than 4 GB. I don't think I will run out of RAM because I have 1024 MB installed.
Ronny