
Everything posted by DiracDeBroglie
-
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
Based on your experience, which recovery apps/software are worth having at hand when it comes to this sort of partition (table) damage? johan

PS: In Win7 I deleted the very first (primary and WinXP-created) partition on my 2TB HDD, and then created a New Simple Volume in Disk Management in Win7. After checking the 2TB HDD with GParted I noticed a gap of exactly 1MB of unallocated disk space between the newly created primary partition and the next extended partition shell (which was never damaged). I could only see the 1MB gap in GParted, not at all in Disk Management in Win7. So, in GParted I got rid of the 1MB gap by expanding/extending the newly created primary partition. But the very last partition (the primary one following the extended (shell) partition) was also followed by an unallocated area of 2MB!!?? I got rid of that too by extending the last primary partition in GParted. Here as well, the 2MB gap was not visible in the Win7 Disk Management. -
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
In ATTO Disk Benchmark I tried out Queue Depth (QD) = 4 and 10. On my recently purchased Win7 notebook there was no difference in the performance test between QD=4 and QD=10. However, on my 8 year old WinXP notebook, the difference between QD=4 and QD=10 is clearly visible in the output graph. I am not sure what the exact relation is between QD in ATTO and NCQ. Furthermore, my HDDs have an NCQ depth of 32, while QD in ATTO only goes up to a maximum of 10. On my Win7 notebook, I also don't see the relevance of NCQ or QD if large chunks of data (from sector X to sector Y) from the HDD are being dumped into the RAM-DMA during the performance test with ATTO; there is no need for complicated seeking where the read/write heads have to move back and forth over the platters. Well, the performance of the HDDs (internal and external) on my Win7 notebook is what it is; there is no way to go beyond their physical limits. To me, the most important thing is that I can get the maximum out of the HDDs and also understand why and how particular hardware (HDD) parameters relate to performance (optimization).

By the way, some time ago we had a discussion about misaligned partitions on advanced format HDDs (4K sector drives), like my 2TB external USB3.0 drive. I now performed an ATTO Disk Benchmark test on a partition (on the 2TB HDD) which was first correctly partition-aligned after being created under Win7, and did the ATTO test again after the partition was deleted and re-created under WinXP (so misaligned). The WinXP-created partition had a slightly lower read performance (1% less, maybe not even that) compared to the Win7-created partition. However, the write performance for the WinXP-created partition was something like 10% less than for the Win7-created partition. I get the impression that partition misalignment may not be that much of a performance issue on advanced format drives. All performance tests (including for the WinXP-created partition) were done on the Win7 notebook.

In the process of doing those tests, I ran into trouble with my 2TB HDD, which is also my data backup drive. The first partition is a primary partition, followed by an extended partition containing 4 logical partitions; all partitions were created under Win7. Then, on my WinXP notebook, after having deleted and re-created the first (primary) partition, the second, third and fourth logical partitions disappeared; the first logical partition and the extended (shell) partition, however, remained intact (checked that with PTEdit32 and PartInNT). I tried to retrieve the data from the lost partitions using GParted but after almost 10 hours of "retrieving", GParted gave up. So, all my data on those 3 logical partitions is gone. Lesson to be learned: WinXP and Win7 are not quite compatible when it comes to partitioning (which I already knew) and that can have disastrous consequences (which I have now learned the hard way). johan
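
For anyone who wants to check the alignment without firing up GParted, a quick sketch from an elevated Win7 command prompt (the numbers you get will differ per system); a partition is aligned for a 4K-sector drive when its starting offset is divisible by 4096:

    wmic partition get Name, Index, StartingOffset
    :: aligned for a 4K-sector drive when StartingOffset is divisible by 4096
    :: Win7 typically starts partitions at 1048576 bytes (1 MiB); WinXP at 32256 bytes (63 x 512), which is misaligned

-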
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
I'm not familiar with SSDs, but I'm very much interested in getting deeper into the workings and fine-tuning of SSDs, as I may purchase an SSD in the near future. Hence the question: do you know of any documents, websites, links, references, or whatever reading, which could give me deeper insight into how SSDs are designed and how they work? In particular, I need to get a better grasp of notions like line size, page size, partition alignment in SSDs, and cluster size (and how it differs from the Allocation Unit Size in Win7). I would like to gain a deeper insight into SSDs and how they differ from HDDs, especially in the context of optimizing their performance. thanks, johan -
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
I've done the same test with a 2TB external SATA-III (6Gbit/s) drive in a USB3.0 enclosure, and the results are the same as with my internal 1TB system HDD: the read/write performance of the 2TB HDD (Write = 105MB/s; Read = 145MB/s) seems to be independent of the AUS under Win7. So, from your explanation I infer that large chunks of data, linearly scooped up from the HDD (from sector X to sector Y), are being dumped into the RAM area set aside for DMA on the motherboard!? (Correct me if I'm wrong.) Seen that way, it is understandable that HDD performance should be insensitive to the AUS of the filesystem on the HDD.

However, I still have a question about how the HDD performance test is implemented in software. I've been using ATTO Disk Benchmark (v2.47) and there are several options like *Direct I/O*, *I/O Comparison* and *Overlapped I/O*. The option *Direct I/O* was always checked during my tests. According to the HELP in ATTO Disk Benchmark, *Direct I/O* means that no system buffering or system caching is used during the HDD performance test. I assume that by buffering or caching ATTO means the RAM-DMA buffering on the motherboard (I cannot imagine they're talking about the HDD's cache); I tacitly assumed that ATTO Disk Benchmark tested the performance between the motherboard (RAM-DMA) and the HDD, meaning the performance over the SATA-III link itself. I've done a performance test on my 2TB external HDD (and earlier on my 1TB system HDD too) with *Direct I/O* UNchecked and the results were stunning: the graphical performance reading in ATTO Disk Benchmark went up to 1600 MBytes/s, almost 3 times the maximum SATA-III bandwidth!!! Hence I think that by buffering or caching ATTO means the RAM-DMA on the motherboard. Consequently, in order to see any realistic performance output in ATTO, I think the option *Direct I/O* needs to be checked, thereby deactivating any RAM-DMA buffering or caching on the motherboard.

As a result, I'm a bit confused here. If the read/write performance of the HDD is insensitive to the HDD's AUS because of the use of the RAM-DMA on the motherboard, then this argument seems to conflict with the assumption that the RAM-DMA is deactivated in ATTO Disk Benchmark by its *Direct I/O* option. I'm sure I got it wrong somehow somewhere, but where exactly did I make a mistake in my assumptions or my reasoning? regards, johan
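
As a cross-check of the ATTO figures, Win7 also has a built-in disk benchmark (winsat) that can be run from an elevated command prompt; a minimal sketch, assuming the standard winsat disk switches (-seq/-ran, -read, -drive):

    winsat disk -seq -read -drive c
    winsat disk -ran -read -drive c
    :: winsat's disk assessment uses unbuffered I/O, so the figures should be roughly comparable to ATTO with *Direct I/O* checked

-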
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
I was under the tacit (and maybe naive) impression that HDparm was a benchmark, but indeed, it isn't, although it can do some HDD performance testing. I think for benchmarking HD Tune Pro is the most suitable. For getting the Disk ID, features, commands and settings of the HDD, HDparm and HDDScan are OK. j -
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
I tested the Win32 version of HDparm on Win7; note that you need to run HDparm via "(right click) Run as administrator". A plus point is that it can show the Disk ID, features and settings. A minus point is that there is no control over the size of the test file, nor over its block size (at least, I could not find any info about it). I also couldn't find any test feature to verify the burst transfer rate over the SATA link of the drive. I just wonder if the most recent Linux version of HDparm is available somewhere on a Live CD (running on some Linux kernel)? Maybe the most recent Linux version does more than the Win32 version.

I've also found another, maybe interesting, HDD performance tester called HDDScan: see http://hddguru.com/software/2006.01.22-HDDScan/ That app also needs to be run as administrator. It can also show the Disk ID, features and settings, but with this utility too I couldn't find any feature for measuring the burst transfer rate. johan
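
For reference, when running hdparm from a Linux live CD, something along these lines covers the Disk ID and the read timings (the device name /dev/sda is just an example and will differ per machine); note that neither switch measures the SATA burst rate directly:

    sudo hdparm -I /dev/sda    # identify-device info: model, firmware, NCQ queue depth, supported features
    sudo hdparm -tT /dev/sda   # -t: sustained device reads; -T: OS-cached reads (mainly a memory-bandwidth figure, not the SATA burst rate)

-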
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
Yes, I just had a look at the LeCroy equipment; it turns out to be very costly: $100,000 !? That is 4 times the price of a normal car, and ... 100 times my notebook. With HD Tune Pro v5, tab |Benchmark|, I managed to get a burst of 155MB/s; the graphical data, on the other hand, shows bursts up to 240MB/s. However, that is still a long way from the SATA III burst speed. Maybe I could give it one last try with HDparm, mentioned by *allen2*. It seems to be a Linux utility, but is there any recent, up-to-date version for Windows 7 available? johan -
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
Hi Jaclaz, how're you doing? I know that benchmarks can sometimes be only marginally representative of "real world" applications. I'm using benchmarks merely to get an idea of the hardware specs, and to check whether they come close to what manufacturers claim in their marketing material. My system drive is a SATA III (600MB/s) drive, but I haven't yet seen any indication of that in the benchmarks. I did some tests with HD Tune Pro v5, IOmeter, CrystalDiskMark and HDSpeed (all the latest versions, and where possible the 64-bit version). They all give me the same result: a sustained data transfer rate of 120MB/s, which is OK compared to the specs in Seagate's documentation.

The problem, however, is measuring the burst transfer rate on the SATA link of the drive; that measurement should at least give me an idea of the absolute maximum bandwidth between the HDD and the chipset on the motherboard. With HD Tune Pro, tab |File Benchmark|, the top graph shows peaks up to 240MB/s. I just wonder whether that can be considered the burst rate? I have no idea how reliable the graph is. Then there is also the first tab |Benchmark|, which gives a burst rate figure in a little [burst rate] bar on the right side of the pane, along with Access time and Transfer rate: Minimum, Maximum, Average. In my test the [burst rate] figure was lower than the Maximum Transfer rate, and sometimes even lower than the Average Transfer rate. Has anyone seen something similar with HD Tune Pro?? johan -
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
Just did a test with the HD Tune Pro 5.0 trial version; more specifically, I used File Benchmark. The *Sustained data transfer rate OD (max)* comes very close to the 120MB/s I measured with ATTO Disk Benchmark. HD Tune shows burst rates of up to 240MB/s, which is still a lot less than the *I/O data-transfer rate (max)* of 600MB/s (SATA III). I wonder if there are any other software tools more effective than HD Tune at measuring burst rates. johan -
HDD performance <-> Allocation unit size
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
The HDD I have is a Seagate 1TB, Model ST31000524AS. The specs can be found at http://www.seagate.com/staticfiles/support/docs/100636864b.pdf The 120MB/s data transfer rate I measured is consistent with the specs from Seagate, where the Sustained data transfer rate OD (max) = 125MB/s. It's just that I was expecting to see the maximum data transfer rate at an AUS of 64KiB only. Instead, I see that 120MB/s for a whole range of AUSs. (Note that the 1GB test partition is not even located on the outermost tracks of the HDD.) Well OK, good then. I just wonder why large AUSs were introduced at all if a smaller AUS gives the same data rate? From my tests I assume the ATTO Disk Benchmark software (32 bit) measures the Sustained data transfer rate OD (max). I was wondering whether there are any software utilities to measure the maximum burst data rate of the HDD (i.e. using data chunks smaller than the HDD buffer size, which is 32MB in my HDD's case)? regards Johan -
Hi,

In an attempt to find out the relationship between HDD performance and Allocation unit size (AUS), I formatted a 1 GB partition with a range of AUSs going from 512 bytes to 64 KiB, and for every AUS I ran the ATTO Disk Benchmark software (32 bit) to determine the read/write performance of that 1GB partition as a function of the AUS. Strangely enough, the R/W transfer rate saturated at 120 MB/s for every AUS from 4 KiB to 64 KiB. So whatever the AUS (between 4KiB and 64KiB), the HDD transfer rate (performance) remained approximately the same, at 120 MB/s. This is not what I expected; I was hoping to see performance go through the roof at an AUS of 64 KiB, but apparently that is not the case. Is there anybody who can confirm such a "flat" relationship between HDD performance and Allocation unit size?? regards
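
For anyone repeating the test: the AUS can be set when (re)formatting the test partition from an elevated command prompt, along these lines (drive letter X: is a placeholder for the test partition):

    format X: /FS:NTFS /A:4096 /Q /V:AUSTEST
    format X: /FS:NTFS /A:64K /Q /V:AUSTEST
    :: for NTFS, /A: accepts 512, 1024, 2048, 4096, 8192, 16K, 32K and 64K; /Q = quick format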
-
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
I don't expect any *big* or *serious* problems with an MFT zone reservation that flips back and forth between large (GBs) and small (MBs) when one switches between WinXP and Win7 machines, especially when the drive is meant for data storage and not as a boot drive; Microsoft would have detected it years ago if it were otherwise. But still, you never know whether this issue may have some nasty, unexpected side effects.

Just imagine: you have a dual boot system with WinXP installed first, followed by a Win7 installation on another partition. After booting into Win7, Win7 can see the WinXP partition and shrinks the MFT zone to just over 200MB, from an initial size of GBs. If, under a Win7 boot, one then drops a file onto the WinXP partition, it may very well be that this user data file ends up right behind the 200MB MFT zone on the WinXP partition, blocking any possible expansion of the MFT zone the next time one boots into WinXP. If the user later installs more apps onto the WinXP partition, no doubt it becomes more likely that the MFT will become fragmented and scattered all over the partition, leading to some WinXP performance degradation.

I also have a question about RAIDs and NAS servers, which I have no experience with. Imagine a home network with WinXP and Win7 machines all communicating with a RAID or NAS configuration for data storage. Are RAID and NAS servers transparent enough to let the WinXP and Win7 machines change the MFT zone reservations on the drives in the servers? If yes, one second a WinXP machine creates a large MFT zone when accessing the server, and a fraction of a second later the MFT zone gets shrunk to just 200MB when a Win7 machine gets access. A process like that could flip-flop the size of the MFT zone back and forth many times per minute until there is enough data on the drive to encapsulate the smallest MFT zone reservation (200MB). I'm not sure whether this could happen; it is just that in a year or two I may consider purchasing a RAID or NAS configuration.

BTW, I just tried *fsutil behavior set mftzone <n>* in an elevated cmd prompt in Win7. It works, but I got a message saying the new setting requires a reboot to take effect. In WinXP the command works as well but does not give a notification to reboot, although a reboot is required in WinXP too. Note that in WinXP the fsutil command does not appear in the command list after typing help at the prompt. It really makes me wonder about the usefulness and effectiveness of the possibility to change the MFT Zone Reservation, especially on servers that never reboot (as mentioned earlier by jaclaz). Johan
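
For completeness, this is the command pair I mean, run from an elevated prompt (the value 2 is just an example):

    fsutil behavior query mftzone
    fsutil behavior set mftzone 2
    :: valid values are 1 (default, smallest reservation) through 4 (largest); the new value only takes effect after a reboot, as noted above

-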
Shrinking Extended Partition in Win7
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
I just found what I was looking for, and it is called GParted; Ponch, many thanks. I've tested the GParted LiveCD, and it passed with flying colors; LiveCD version (gparted-live-0.12.0-2.iso) from http://sourceforge.net/projects/gparted/files/gparted-live-stable/0.12.0-2/ See also the links http://gparted.sourceforge.net/ http://partedmagic.com/doku.php?id=programs&s[]=gparted

GParted can perform operations on the extended partition without touching the logical volumes inside it. I've done the test with an extended partition with 5 logical volumes in it. GParted can shift the beginning as well as the end of the extended partition to the left and to the right, provided there is unallocated space outside the extended partition and free space inside it. GParted also allows aligning partition boundaries to the legacy 63-sector cylinders, or to the NT6 rules (Win Vista, Win7). In the Resize/Move pane there is an option for alignment: Cylinder or MiB (Cylinder = 63 sector offset, MiB = NT6 rule).

GParted: great flexibility, up to date, lean and mean; just what I was looking for. Johan -
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
I fully agree with your critical view, jaclaz. Still working on another problem now. I'll be back. Johan -
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
In Win7 the problems I experienced with the 2TB USB3 drive are exactly the same as those with the internal 1TB drive in my laptop. For both drives (internal and external), Win7 does not apply the new MFT zone reservation settings, not even when reformatting or creating whole new volumes. New MFT zone reservation settings are only applied after a reboot. I've now done the tests again on my WinXP laptop, only with the external 2TB drive, and the problems are exactly the same as under Win7. Under WinXP I did not perform the tests on my internal (boot) drive, though; that drive is 95 percent full and the risk that something could go wrong with my data on it is too high.

It would be nice if someone could reproduce my findings about the MFT zone reservation. The processes that read the HKLM\System\CurrentControlSet\Control\Filesystem registry keys are FBAgent.exe, Explorer.exe, SearchProtocolHost.exe, and GoogleUpdate.exe. Johan
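
For anyone who wants to reproduce this, the currently set value (if any) can be checked directly from a command prompt; if the value was never set, NTFS falls back to its default:

    reg query "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsMftZoneReservation
    :: if the value has never been set, reg query simply reports that it cannot find the specified value

-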
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
I've done several tests with ProcMon, but they didn't go very well. With *Enable Boot Logging* active, the computer very often freezes during boot; I then have to reboot a second time, which works fine. Very often the Logfile.PML is corrupted and I have to redo the whole thing. During normal operation, long after the computer has been rebooted, ProcMon freezes the computer; the only option is then to reboot the hard way, that is, power off, as Ctrl Alt Delete does not work.

Anyhow, I managed to get several boot-time logfiles with the appropriate filters. I did the test with 2 filters:
Filter 0: Path = HKLM\System\CurrentControlSet\Control\Filesystem
Filter 1: Path = HKLM\System\CurrentControlSet\Control\Filesystem, Detail = NtfsMftZoneReservation
In the boot-time log file I didn't see anything referring to the key NtfsMftZoneReservation being read [I got a screenshot but I can't insert it]. Is there any way to point ProcMon more precisely at the key NtfsMftZoneReservation? Johan -
Shrinking Extended Partition in Win7
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
Just had a look at the Parted Magic and GParted websites. It looks like GParted can align partitions according to the NT6 rules. However, I didn't see any info about the possibility of installing GParted, or PM for that matter, on Windows 7. I only need to do tests on an external drive, so I'd rather not spend time burning Live CDs or DVDs. In your experience, is there a version of GParted and/or PM that can be installed on Win7? Have you used GParted or PM before on extended NTFS partitions? Johan -
Shrinking Extended Partition in Win7
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
Shrink querymax did not help. It simply did not work, probably because I selected the extended *partition*, and that is not a volume. As far as I know, any direct action of Diskpart is on volumes, not on partitions. With list partition I can see all my primary partitions, and the one extended partition too, with all its logical partitions. Shrink and extend work on all primary partitions, probably because Diskpart sees those as volumes too. (I've not tried to shrink or extend a primary partition without any filesystem on it.)

I find it very strange that an extended partition does get extended if need be by extending the last logical volume it contains, yet shrinking an extended partition does not work. I've also tried shrink and extend on an *empty* extended partition, meaning there are no logical volumes in it, only 100GB of free space; it still does not work, the extended partition is untouchable for any direct action in Diskpart. Pretty frustrating that an extended partition cannot be shrunk if one needs to free up extra space for an additional primary partition.

The end sector of the extended partition must be stored somewhere in the EPBR, or in the MBR, or maybe in both (I don't know for sure). Are there any (free and user-friendly) system utilities that act on the EPBR or the MBR with which the user could shift the end sector of the extended partition? I've tried Partition Wizard, the free edition as well as the boot CD version, but that was a total mess; the beginnings and ends of all my volumes were *not* according to the NT6 (Win7) rules, so all the offsets followed the 63-sector legacy offset rule (as in WinXP). Johan -
Under Win7, on an external 2TB USB3 MBR-formatted drive: in Disk Management, extending or shrinking a logical volume beyond the boundary of the extended partition does not work. However, with diskpart.exe, extending a logical volume beyond its extended partition works just fine; the end of the extended partition simply gets "pushed" deeper into the unallocated area outside the extended partition. If the last (rightmost) logical volume in the extended partition is then shrunk, the right-hand end of the extended partition does not move back; the result is some free space at the end of, and within, the extended partition.

I've tried to move the right-hand end of the extended partition towards the left, so as to add that (extended) free space to the unallocated area, where I want to use it to create a primary partition; it simply does not work!! Why is that? What is the technical reason why diskpart can move the end of an extended partition to the right but NOT to the left, although there is plenty of free space directly adjacent on the left?? Why can diskpart calculate a new end position when expanding the extended partition but not when it needs to shrink it? Johan
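
For clarity, this is the kind of diskpart sequence I mean; saved to a text file it can be run with diskpart /s (disk and volume numbers are examples and will differ):

    rem select the disk and the last logical volume inside the extended partition
    select disk 1
    list partition
    select volume 5
    rem extending past the extended partition works: its end sector gets pushed to the right
    extend size=10240
    rem shrinking the logical volume works too, but the end of the extended partition stays where it is
    shrink desired=10240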
-
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
I've downloaded Process Monitor v2.96 from http://technet.microsoft.com/en-us/sysinternals/bb896645 I installed it and ran it, but what a huge amount of data. I've been searching for the string NtfsMftZoneReservation but got no hit. I generated a boot-time log file which is many GB in size. How do I proceed from here? Johan -
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
I already did the defrag test some time ago, on Win7 as well as WinXP, but with no result whatsoever; the new MFT settings did not take effect after defragging the volumes. I tested it again this afternoon on a logical volume and a primary volume under Win7; still the same result as before: the new MFT settings do not take effect. Reformatting the 2 test volumes also resulted in an MFT zone reservation according to the old MFT settings (the ones in force at boot time); the new MFT settings were simply ignored. I did the whole test again by deleting the 2 test volumes and re-creating them. Again, the MFT zone reservation for the newly created volumes followed the old MFT settings (at boot time). I really had to reboot the computer to let the new MFT settings take effect. I was under the impression that upon reformatting a volume the new MFT settings would take effect, but it has now become clear to me that this is definitely not the case. (I can't remember why I previously thought reformatting would pick up the new MFT settings.)

Anyhow, is there any Microsoft forum, website, email address, or whatever, where users can just drop/post suggestions to the developers/programmers of Win7 to improve Win7? Johan -
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
Subject: enabling a new MFT zone reservation *without* reformatting the volume.

In Win7, I've used CHKDSK and MOUNTVOL with several switches; I also used Diskpart offline/online, and I ejected and physically reconnected my 2TB USB drive several times. All those commands work perfectly, but the newly set MFT zone reservation was definitely NOT applied after using them. The only way to get the new MFT zone reservation applied was by rebooting the computer. In WinXP I used CHKDSK on the volumes, but here too the new MFT zone reservation didn't get applied; only a reboot of the computer did the job.

As for expanding and shrinking the MFT zone reservation (without reformatting, and after rebooting): shrinking is never a problem. When expanding the MFT zone reservation, however, it expanded up to the size determined by NtfsMftZoneReservation=1->4, or until the expansion of the MFT zone bumped into at least one cluster holding a file (system or user). Hence an MFT zone that is encapsulated by files won't expand beyond those files at all; files were definitely *not* moved to make room for the MFT zone.

I noticed something else, and I'm not sure what the implications could be. I reformatted the test volumes (2TB USB drive) on my WinXP laptop with NtfsMftZoneReservation=1 (which is the default value). Then I connected the drive to my Win7 laptop and all MFT zone reservations got shrunk (a lot). New user data was then put right after the shrunk MFT zone, thereby blocking any expansion of the MFT zone when connecting my 2TB drive back to my WinXP laptop. I don't know what to think of this; I find it quite obvious that different volumes can have different MFT zone reservations set by the user, and those MFT zone reservations should not change when NtfsMftZoneReservation gets a new value (1->4). I find it obvious that the MFT zone reservation should *only* change during the (re)formatting of a volume. The behavior of Win7 and WinXP is consistent with http://support.microsoft.com/kb/174619 , though. However, I am still of the opinion that the automatic resizing of the MFT zone for ALL volumes in the system (without (re)formatting) upon changing the NtfsMftZoneReservation value is a serious drawback; the user unnecessarily loses control. Johan
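
To be concrete about the Diskpart offline/online part (the CHKDSK part is just chkdsk X: /f on the volume), this is the kind of remount sequence I mean, run inside diskpart (the volume number is an example):

    select volume 5
    offline volume
    online volume
    rem the volume remounts fine, but the new MFT zone reservation is still not applied

-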
$MFT zone reservation; glitches?
DiracDeBroglie replied to DiracDeBroglie's topic in Software Hangout
What should the CHKDSK switches be? /R or /X or something else? Johan -
Moving the beginning of a Partition
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
Aaahaaa. OK then, now we're getting somewhere. It is not cluster #786432, but DEFAULT cluster #786432, which disconnects the number #786432 from the user's choice of cluster size. So in my case 786432 x 4096 bytes = 3.22GB (decimal), i.e. exactly 3 GiB, and that 3.2GB is what I saw all the time, whatever cluster size I chose during formatting.

I am not sure, but I think I understand why the number #786432 was picked and not some other number. The cluster sizes are 4KiB, 8, 16, 32 and 64KiB. Dividing those numbers by 4KiB gives you: 1, 2, 4, 8 and 16. Dividing 786432 by 1, 2, 4, 8 or 16 always gives you an integer; in other words, putting the beginning of the MFT at default cluster #786432 means that the beginning of the MFT is always "aligned" with the user's choice of cluster size, whatever it is. Hence the MFT will never start in the middle of a cluster, whatever the user's cluster size. Every number derived from 786432 by dividing (or multiplying) it by 2 would do the job too. When testing Partition Wizard I noticed PW puts the MFT at the very beginning of the volume (almost no offset), so at a default cluster number a lot smaller than #786432. For *user* cluster sizes below the default cluster size, I think any number could be used, even one that cannot be divided by 2, because the alignment issue doesn't arise there.

Still have to do some digging with tiny hexer. Johan
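
Rather than eyeballing it in UltraDefrag, the MFT start LCN and the MFT zone extents can also be read directly from an elevated prompt (drive letter X: is a placeholder); if the MFT really sits at default cluster #786432, the Mft Start Lcn field should read 0xc0000 (hex for 786432):

    fsutil fsinfo ntfsinfo X:
    :: the lines of interest are Mft Start Lcn, Mft Zone Start and Mft Zone End (values are reported in hex clusters)

-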
Moving the beginning of a Partition
DiracDeBroglie replied to DiracDeBroglie's topic in Hard Drive and Removable Media
I've just done the test again with a 100GB and a 1.7TB volume, with two different cluster sizes: 4KiB and 64KiB. The offset of the MFT and the size of the MFT zone are insensitive to the cluster size; in UltraDefrag the MFT stays at a fixed position. If the start location were fixed at cluster #786432, then with increasing cluster size the start location would shift closer to the "end" of the volume, but that is clearly not the case here. I did the test in Win7. Do you have the time and infrastructure to try to reproduce my findings, jaclaz? johan