Everything posted by jaclaz
-
A strategy based on a theory without any real-world, practical, reliable data supporting it is not "reasonable". Again, magnetic bit rot exists (in theory), and it may also exist (in practice), which is why hard disks have built-in ECC and many modern hard disks have other additional self-check routines, of which we know nothing or next to nothing and which may vary greatly with the several different technologies used in different generations of hard disks and different makes/models of them. In theory, some of this bit rot may (somehow) escape the ECC and/or the other provisions. In practice, there is no real-world data supporting (or denying) this.

Refreshing the data on disk would logically prevent this if done "often enough", but no one knows how often "often enough" is; if performed not often enough, it serves no purpose whatsoever (as the data is already lost). And we don't know if this (rewriting the same value over itself) actually makes the sector "weaker"[1] or affects this (or that) other aspect of the hard disk's functionality.

About S.M.A.R.T., we actually have real-world data supporting the finding that it is mostly meaningless as a predictor of hard disk failures. For all we know, you don't need to refresh data to have the disk do whatever it does (that we don't know): powering it on periodically[2] and leaving it idle for some time[3] will make it do whatever it is supposed to do (that we don't know), and this might (or might not, we don't know) update some relevant S.M.A.R.T. attributes, which are anyway largely meaningless.

Now the real questions are: Should non-powered hard disks be stored horizontally or vertically?[4] Should they be protected from cosmic rays?[5]

jaclaz

[1] there are also theories about "weak" sectors that can be "revived" by writing different patterns to them
[2] but we don't know how often
[3] "time enough", which we don't know how much it is
[4] which is loosely linked to the question whether they should be mounted horizontally or vertically in our PCs
[5] Heck, if we have muon tomography and it works, what will muons do to our bits (and does perpendicular recording affect the way we should store our disks)? https://en.wikipedia.org/wiki/Muon_tomography
-
Copying the boot files manually should be irrelevant. The intermittent cursor - generally speaking - is caused by a mismatch between CHS and LBA addressing; it can happen even with original MS bootsectors on some hardware, and there is a patch for FAT32 and NTFS bootsectors (FAT16 ones are not affected) to avoid the issue. It is possible that one version of Partition Wizard introduces only this mismatch (or *something else*) that Bootrec /fixboot can fix, while the other version introduces *something else* that Bootrec /fixboot cannot fix. There is no way to know without comparing the various bootsectors.

Vista and later - again, generally speaking - expect partitions to be aligned to the megabyte (and NOT to a cylinder); for primary partitions this is not usually an issue, but using the disk manager can lead to logical volumes inside the extended partition being lost.

jaclaz
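As a side note (mine, not part of the original post): the CHS-related values sit at well-known offsets in the BPB shared by FAT32 and NTFS bootsectors, so a suspected mismatch can at least be inspected. A minimal Python sketch, run here against a synthetic sector (the function name and the 255/63 sample values are only illustrative):

```python
import struct

def bpb_geometry(sector: bytes):
    """Return the CHS-related BPB fields of a FAT32/NTFS bootsector."""
    assert len(sector) >= 512 and sector[510:512] == b"\x55\xaa", "not a bootsector"
    bytes_per_sector, = struct.unpack_from("<H", sector, 0x0B)
    sectors_per_track, = struct.unpack_from("<H", sector, 0x18)
    number_of_heads, = struct.unpack_from("<H", sector, 0x1A)
    hidden_sectors, = struct.unpack_from("<I", sector, 0x1C)
    return {
        "bytes_per_sector": bytes_per_sector,
        "sectors_per_track": sectors_per_track,  # CHS geometry as the formatter saw it
        "number_of_heads": number_of_heads,
        "hidden_sectors": hidden_sectors,        # should equal the partition start LBA
    }

# Synthetic example: 255/63 geometry, partition starting at LBA 63
sector = bytearray(512)
struct.pack_into("<H", sector, 0x0B, 512)
struct.pack_into("<H", sector, 0x18, 63)
struct.pack_into("<H", sector, 0x1A, 255)
struct.pack_into("<I", sector, 0x1C, 63)
sector[510:512] = b"\x55\xaa"
print(bpb_geometry(bytes(sector)))
```

If the geometry the BIOS reports and the one recorded here disagree, that is exactly the kind of mismatch being discussed above.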
-
Yep, I would also like to have scientific answers (possibly backed up by real-world data); with all due respect to the respective Authors, a thread on Vogons or a page on io.bikegremlin.com are only (baseless) opinions.

The whole point revolves around the lack of proper data about real-world "bit rot" on hard disks. Many people believe that the built-in ECC in modern hard disks is enough to prevent it, and that the reasons why you lose your data are largely not related to bit rot (i.e. the hard disk will fail for a number of other reasons long before magnetic bit rot comes into play). Some other people believe in the magic of data refreshing (since we have nothing to use for comparison, we have no way to know if it is effective, how effective it is, and for all we know it could even be worse than doing nothing[1]).

Both kinds of people should actually have redundant backups for their important data, as that is the only surely working mitigation policy.

jaclaz

[1] only as a purely fictional example: you take a hard disk that has been kept unpowered for several months (if you follow the advice of one of the two software makers in the world of a program for this) or years (if you follow the advice of some random internet user publishing a blog or a technical journalist writing an article in an online computer magazine) and you start reading and writing data to it for several hours continuously; would this stress accelerate its failure (for other reasons[2])?
[2] the "other reasons" are relevant since, if any of these "other reasons" happens, you have lost your data anyway, and it makes no real difference, once you have lost your data, whether it was lost due to bit rot/lack of refreshes or to any of the "other reasons".
-
Maybe there is some misunderstanding going on; I have only seen and read (via Google Translate) the single German article you posted a link to:
https://www.computerwoche.de/a/der-langsame-tod-von-festplatten-und-ssds,3549906
And it was written to inform users *like you*, not *like me*, as you can read German and I can't (I actually can, but only a little bit - ein bisschen).

BTW, that article first says: and later states: powering up the hard disk once a year or every two years is recommended to (supposedly) prevent stiction (which may only happen on some given types of motors/bearings), and the refreshing is recommended.

If you want to follow the recommendations (opinions) in that article and use DiskFresh, then you should use it 3-4 times a year, as recommended by the manufacturer of the tool.

Surely the answer to the ultimate question about Physics and Magnetism (and Life, the Universe, and Everything) is 42.

jaclaz
-
Evidence is not a (single, apocryphal) article that expresses opinions. But no problem whatsoever: everyone is free to believe whatever he/she wishes to believe, as long as he/she is happy. And now, for no apparent reason, the Get Perpendicular movie by Hitachi (2005) featuring the super-para-magnetic effect:

jaclaz
-
Well,
1) run bootsect /NT60 C:
2) verify that Vista boots normally (via BOOTMGR)
3) make a copy of the active partition bootsector
4) run Bootrec /Fixboot
5) verify that Vista doesn't boot anymore
6) make a copy of the active partition bootsector
7) compare the file copied in #3 with the one copied in #6

It is very possible that (for *whatever reasons*) the bootsector that Bootrec /fixboot is supposed to be writing is "bad" (or outdated/not suitable to your specific hardware/partitioning) whilst the one contained in the version of bootsect.exe you are using is "right". As seen here, different versions of bootsect.exe may have different versions of the bootsector embedded (though they should all work); the same could be true with different versions of bootrec, and *for some reasons* the one in the bootrec you are using is not good.

jaclaz
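Step 7 (the comparison) can be done with any hex editor, or with a few lines of Python; a minimal sketch (the names are mine and the two sample sectors are synthetic, not real bootsector dumps):

```python
def diff_sectors(a: bytes, b: bytes):
    """Return (offset, byte_in_a, byte_in_b) for every byte that differs
    between two 512-byte sector dumps."""
    assert len(a) == len(b) == 512, "expected two 512-byte dumps"
    return [(i, a[i], b[i]) for i in range(512) if a[i] != b[i]]

# Synthetic demo: two identical sectors, then flip one byte at offset 0x3E
good = bytearray(512)
bad = bytearray(good)
bad[0x3E] ^= 0xFF
diffs = diff_sectors(bytes(good), bytes(bad))
for off, x, y in diffs:
    print(f"offset 0x{off:03X}: {x:02X} -> {y:02X}")
```

Differences in the first ~90 bytes would point at the BPB DATA, differences after that at the boot CODE.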
-
Well, you completely missed the point. There is no doubt that data - over time - may be lost due to failure of the media/device it resides on. The point is that there is no reliable real-world data on what exactly causes the media or device to fail, nor how soon (if ever) this may happen. ALL other possible reasons are relevant, as the point is whether the data is readable or not.

With CDs and DVDs the media is separate from the device, so if the media is prone to failure, it makes a lot of sense to copy the data from the media and write it to some new media. On a hard disk the media and the device are the same thing; refreshing data (on the "same" device) at an elemental level (one single byte) means you are going to read (say) AA and write in the same exact place an exactly same AA. There is no evidence that the newly written AA is "better" than the previous AA or that it will last any longer, and anyway there is no way to know in advance if any of the tens of other issues the device may experience (both at hardware and software/firmware level) may make that data unreadable.

Now, if you copy the data (as files, or as a dd copy or image) to a new device, you have somewhat better chances that the new device will last longer, but no certainty whatsoever. Since you (we, no one) do not know how often this is needed (could be a year, 2-3 years, 5 years, we simply don't know), how often are you going to do this refresh? Let's take two years: you go for your refreshing and find out that for *some reasons* your AA is not readable anymore; your data is lost and you cannot refresh anything. Let's take one year: you go for refreshing and perform it successfully. Then you do it again, one year later, but this time you find out that for *some reasons* your AA is not readable anymore; your data is lost and you cannot refresh anything.

Now, if you had (as theoretically needed) another two copies of the data, you could attempt making a new third copy from one of the other two; without this you are essentially flipping a coin every n months/years.

So if we had thousands of (accurate/correct) reports by thousands of people that use ALL these approaches:
1) never refresh
2) refresh every year
3) refresh every 2 years
4) refresh every 3 years
with the SAME data, using the SAME make/model of hard disk, kept in the same room/climate, then after a few years we would have some data to decide which strategy is the best one (and that would be accurate only for the one, or possibly even two, generation(s) of hard disks, and possibly not applicable to the current generation of disks). In a nutshell, choosing this (or that) strategy with the reports we actually have (none) is simply an act of faith in something intangible.

Then we will start with anecdata: people that lament losing their data (and made no data refreshes) will be criticized by those that refresh their data every 3 years and never lost their data, people that lament losing their data even if they do a refresh every 3 years will be told that they should have done it yearly, etc., etc.

@D.Draker Unfortunately I am old enough not only to know about the Deskstar/Deathstar, but I also had quite a few of them failing in the firm I was working with at the time.

@Astroskipper Here is one source (recommending DiskFresh) stating that data refresh should be performed much more often (the page is referred to on the DiskFresh page by Puran Software but it is long dead):
https://www.puransoftware.com/DiskFresh.html
https://web.archive.org/web/20160413062810/http://www.fact-reviews.com/info/diskfresh.aspx

>In order to keep the data signal from fading, you need to re-write the data. This is often known as "hard disk maintenance", and should be done 3 or 4 times a year. ...
>A regular (quarterly) refresh of all hard disk drives will help the drive detect and fix errors before they turn into problems, and keep the data integrity intact. Don't forget to refresh any external USB drives you may use for backup purposes.

The procedure is recommended (by the actual seller of the software that can do it, who should know how it works) every 3-4 months. How were the 1 year or the 2-3 years intervals determined, then? (3 to 12 times the recommended interval.) Isn't it queer that, if the procedure is so needed and needed so often, there is only one program to do it (besides SpinRite[1])?

jaclaz

[1] which has its own list of doubtful claims; there are endless critiques of Steve Gibson and his works/programs, the most benevolent ones saying that he tends to exaggerate greatly (either the seriousness of the issues or the capabilities of his software to fix them)
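Purely as an illustration (mine) of what such a "refresh" amounts to logically - read each block and write the very same bytes back in place. This sketch works on an ordinary file for safety; a real tool like DiskFresh works on raw sectors, which this does not attempt:

```python
import os
import tempfile

def refresh_file(path: str, block: int = 4096) -> int:
    """Read each block of the file and rewrite the identical bytes in
    place ("the same AA over the same AA"). Returns bytes rewritten."""
    rewritten = 0
    with open(path, "r+b") as f:
        while True:
            pos = f.tell()
            data = f.read(block)
            if not data:
                break
            f.seek(pos)
            f.write(data)          # same content, same place
            rewritten += len(data)
        f.flush()
        os.fsync(f.fileno())
    return rewritten

# Demo on a throwaway temp file filled with 0xAA
fd, tmp = tempfile.mkstemp()
os.write(fd, b"\xAA" * 10000)
os.close(fd)
n = refresh_file(tmp)
with open(tmp, "rb") as f:
    unchanged = f.read() == b"\xAA" * 10000
os.remove(tmp)
print(n, unchanged)
```

Note that the demo can only verify that the content is byte-identical afterwards - which is exactly the point of the post: there is no way, from software, to tell whether the "refreshed" copy is magnetically any "better" than the old one.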
-
Yep, but we also need (at least) one that works, without copying, to (maybe) spot some differences. I quickly checked only the first (FreeDOS Live) and the last (AROS Live). The first one is made with (good ol') mkisofs, the latter with genisoimage. Both are NOT hybrid images (i.e. the first 16 sectors are 00's). Both are bootable (BootId 88) with bootmedia 0 (no-emulation boot). The first one has no Joliet (but Rock Ridge); the second one has both Joliet and Rock Ridge (and additionally uses the ISO 9660:1999 Relaxed Format). One of the good things about mkisofs (that later programs stupidly removed) is the default feature of writing into the image the actual command line used to build the .iso, so the FreeDOS iso has been made with the (rather "normal") commands:

jaclaz
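The two quick checks mentioned above (all-00 system area, i.e. non-hybrid, and a valid descriptor at sector 16) can be sketched in a few lines of Python; the sample image here is synthetic and the function name is mine:

```python
SECTOR = 2048

def inspect_iso(image: bytes):
    """Minimal look at a .iso: is the 16-sector system area all zeroes
    (i.e. non-hybrid), and is there a 'CD001' descriptor at sector 16?"""
    system_area = image[:16 * SECTOR]
    pvd = image[16 * SECTOR:17 * SECTOR]
    return {
        "hybrid": any(system_area),   # any non-zero byte => MBR or other data
        "cd001": pvd[1:6] == b"CD001",
        "descriptor_type": pvd[0],    # 1 = Primary Volume Descriptor
    }

# Synthetic, minimal image: zeroed system area plus a bare PVD header
img = bytearray(17 * SECTOR)
img[16 * SECTOR] = 1
img[16 * SECTOR + 1:16 * SECTOR + 6] = b"CD001"
info = inspect_iso(bytes(img))
print(info)
```

On a real image you would `open("file.iso", "rb").read(17 * 2048)` instead of building the bytes by hand.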
-
This whole stuff:
1) makes no sense whatsoever
2) should it make sense in some very specific, niche situation, we don't have any meaningful, valid data to support the method, let alone the frequency at which it should be implemented

If you feel good refreshing your data, do it. If you feel good refreshing your data every year, do it every year; if you feel good doing it every 2-3 years, do it every 2-3 years. You will anyway lose some data sooner or later (or possibly you will never lose any), but there is no way to know in advance, nor any way to know whether this strategy contributed in any meaningful way to the outcome (whatever that happens to be).

Replicating data (having multiple copies, on different media, stored in different locations) is an effective strategy, though it is difficult to implement, let alone maintain over the years.

The only thing that promised (but it remains to be seen whether they delivered) long enough data retention were (are?) M-DISCs:
https://en.wikipedia.org/wiki/M-DISC

jaclaz
-
Yep, and I have been around here for almost 20 years and have nearly 21k posts and 1.8k reputation[1], what gives? To be picky (as I am), the user you linked to did nothing but repost the opinion of a known expert (Kim Komando). I prefer Armand Gracious as an expert:
https://www.dedoimedo.com/computers/experts.html

jaclaz

[1] which is anyway a meaningless metric.
-
Risk is relative; there are much riskier things. Only as an example, I use Opera as a browser, Kaspersky as an antivirus, and SoftMaker as an office suite (without having checked the German ancestry of the programmers of the latter company). jaclaz
-
A good alternative could be avoiding altogether posting this kind of news on MSFN which is (or has been till now, or should be) a technical board and not a social media site, nor a news aggregator (of course IMHO). jaclaz
-
Still essentially OT, here: https://msfn.org/board/topic/182116-winsetupfromusb-problem-installing-xp-on-legacy-system/?do=findComment&comment=1191406 is a small batch that allows for testing iso creation with different parameters and with "tricky" file names (but this won't touch other possible issues such as el-torito emulation or no emulation, multivolumes, etc.) Though the batch could be useful (maybe) as a base to develop on, the issue remains that we don't actually (yet) know <what exactly> is triggering the misbehaviour, so it is not reproducible. jaclaz
-
It could well be something connected with the various standards (original ISO, Joliet, Rock Ridge, but also iso-level and El Torito emulation or no-emulation, multi-volume disks and whatnot). There are many possibilities, and the actual references/documentation are far from "crystal clear". As a completely unrelated example, it took us (actually rloew and cdob) years to find out that the part of the spec giving the size of floppy images for El Torito emulation had been read "wrong" by everyone, and we could actually have 36 MB superfloppies, JFYI:
https://msfn.org/board/topic/152399-on-bootable-cds-floppy-emulation/

jaclaz
-
Yep, you are using hex offsets, 0x8000 = 32768 = 16 x 2048. After the CD001, and from relative offset 0xBE or 190 onwards, there is normally a huge data field (reaching almost to the end of the sector), and many iso tools use space or 0x20 to fill it. This should be "normal": the last 512+653 bytes should be zero, but before those the presence of 0x20's is OK. See Volume Descriptors here:
https://pierrelib.pagesperso-orange.fr/filesystems/iso9660_simplified.html
http://www.dubeyko.com/development/FileSystems/ISO9960/ISO9960.html
It must be *something else*, not those values in the Volume Descriptor.

jaclaz
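Those space-padded text fields can be pulled out programmatically to see what filled them; a minimal sketch against a synthetic PVD (offsets as per ECMA-119/ISO 9660, the field selection and names are mine):

```python
SECTOR = 2048
PVD_OFFSET = 16 * SECTOR   # 0x8000

def read_pvd_strings(image: bytes):
    """Extract a few of the space-padded text fields from the Primary
    Volume Descriptor at sector 16."""
    pvd = image[PVD_OFFSET:PVD_OFFSET + SECTOR]
    assert pvd[1:6] == b"CD001", "no volume descriptor at 0x8000"
    return {
        "volume_id": pvd[40:72].decode("ascii").rstrip(),       # 32 chars, space padded
        "volume_set_id": pvd[190:318].decode("ascii").rstrip(), # starts at offset 0xBE
        "application_id": pvd[574:702].decode("ascii").rstrip(),
    }

# Synthetic PVD with space-padded fields, as mastering tools write them
img = bytearray(17 * SECTOR)
img[PVD_OFFSET] = 1
img[PVD_OFFSET + 1:PVD_OFFSET + 6] = b"CD001"
for start, end, text in ((40, 72, b"MY_DISC"), (190, 318, b""), (574, 702, b"MKISOFS")):
    img[PVD_OFFSET + start:PVD_OFFSET + end] = text.ljust(end - start, b" ")
print(read_pvd_strings(bytes(img)))
```

An unexpected string in one of these fields (an application id such as "Smart Storage, Inc.", say) would show up immediately here.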
-
It seems more like *something else*. Offset 8000 is within the 32,768 leading (normally) empty bytes/16 sectors; there would be no reason to have part of them filled with spaces. Around the volume label (I presume you mean near the volume descriptor, i.e. "CD001") it could well be something like a fixed-length field filled with trailing spaces.

I have no idea what "Smart Storage, Inc." could be related to. UltraISO is a known tool that may well not be fully compliant with the specs (and actually I seem to remember a few issues with it when using it to edit a .iso); I have no experience with GEAR software.

An easy test you can try is zeroing the first 32,768 bytes of a copy of a (non-working) .iso and seeing if it still misbehaves. Also, checking the actual .iso files with isoinfo (from the cdrtools package) might (or might not) find some particular inconsistency in the non-working files. The cdrtools port to Windows should be available here:
https://sourceforge.net/projects/cdrtfe/files/tools/binaries/cdrtools/

jaclaz
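The zeroing test is trivial to script; a hedged Python sketch (the demo "iso" here is just a synthetic file filled with 0x20, not a real image, and it always works on a copy so the original is untouched):

```python
import os
import shutil
import tempfile

def zero_system_area(src: str, dst: str) -> None:
    """Copy src to dst and blank the 32,768-byte system area of the copy,
    leaving everything from sector 16 onwards untouched."""
    shutil.copyfile(src, dst)
    with open(dst, "r+b") as f:
        f.write(b"\x00" * 32768)

# Demo on a throwaway fake 'iso': space-filled system area, then payload
d = tempfile.mkdtemp()
src, dst = os.path.join(d, "a.iso"), os.path.join(d, "b.iso")
with open(src, "wb") as f:
    f.write(b"\x20" * 32768 + b"CD001 payload after system area")
zero_system_area(src, dst)
with open(dst, "rb") as f:
    head, tail = f.read(32768), f.read()
print(head == b"\x00" * 32768, tail)
```

If the zeroed copy still misbehaves, the spaces in the system area were not the culprit.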
-
It is strange, but I am not sure I understand. 20 means "space" in ASCII and "20,00" means "Unicode space"; neither is used as "empty" (which remains 00 or 00,00), and this is not related to the ISO (ISO 9660) file format, AFAIK. I have never seen a .iso where 20 is used for empty sectors (if this is what you mean)[1]. What would happen with a .txt file filled with spaces (either ASCII or Unicode)? Or, if you prefer, what happens if you change the file extension from .iso to <something else>?

jaclaz

[1] the standard has the first 16 sectors (2048 bytes each, i.e. 32,768 bytes) "unused" or - maybe better - "reserved for other uses"; they are (for normal CD/DVD images) all 00's or (as an example) contain a MBR or other data in "hybrid" images. Though it would probably be fine according to the standard to fill this area with 20 hex, it never happened to me to find one such .iso image, and it would anyway make very little sense.
-
Personally (JFYI) I use (an old version of) CutePDF Writer for that (it needs Ghostscript installed, which is anyway a useful tool). But AFAIK/AFAICR it doesn't need a service running; it appears as a normal printer in the print dialog. jaclaz
-
Windows XP 2022 Edition is everything Windows 11 should be
jaclaz replied to msfntor's topic in General Discussion
And if I may, a crappy article, with a (intentionally, I believe, to gain traction/clicks) misleading title. The XP 2022 is only a mock-up of what a non-existent OS might look like; this crap should NOT be reposted, especially on a technical board like MSFN is (or should be, or was until the recent trend by one or a few users to submerge it with random articles about anything they find or that crosses their mind). jaclaz -
It could well be that all TTL devices are not suitable, or you are connecting them improperly, or whatever else, but if we assume that at least one of them works and the connections are fine, the PCB is most probably defective or bricked beyond what you can do. I see you have asked where to look for professional repair:
https://msfn.org/board/topic/184610-where-to-send-drives-for-repair/#comment-1242291
It is very difficult to give a suggestion; there are a few large firms, besides smaller shops (that could be either better or worse). Seagate has their own data recovery service:
https://www.seagate.com/products/rescue-data-recovery/
Drivesavers seems to have a lab in Houston:
https://drivesaversdatarecovery.com/cms-category/location/

jaclaz
-
I seem to remember that the 7200.12 boards work when completely detached from the heads and motor. You could try (after searching and verifying the above) to connect to the unmounted PCB and see if you get a terminal response. Shorting the pins is - again, if I recall correctly - not used when the terminal does not connect/respond, but rather when the terminal continuously outputs an error message (a sort of loop), and the shorting allows interrupting the loop. On other models the two points to short correspond to the head read channel; here is some info:
https://msfn.org/board/topic/157329-barracuda-lp-no-not-a-720011-nor-a-720012/?do=findComment&comment=1003759
But I have no specific, let alone surely working, info/details.

jaclaz
-
With all due respect, you seem to be mixing together a lot of things of which you have only a minimal understanding. It is very unlikely (please read as "it won't work") that you will ever be able to start a later OS with the boot files of a previous one. As well, your attempt (if I get right what you attempted doing) to repair a Windows 7 boot using Vista files/tools is completely futile.

It is the first time I read about partitions disappearing when deleting the boot files, and it makes very little sense. The NT 5.x boot files (NTLDR, NTDETECT.COM and BOOT.INI) are completely offline/not accessed once the OS is booted. The NT 6.x file BOOTMGR is as well completely offline/not accessed, BUT the \boot\BCD (which is actually a Registry hive) is instead online when the OS is booted, as it is mounted as HKLM\BCD00000000; so only deleting this latter file may (though I doubt it) create issues (like preventing the deletion or crashing), but I cannot see how it could modify anything connected to partitioning/filesystems.

Partitions DO NOT disappear. What may happen is that data is corrupted/changed in the MBR partition table, or the Magic Bytes become invalid, or - much more rarely, and the symptom is having RAW partitions - some data in the PBR is corrupted/changed. It is possible, since you are using (why?) XP on a disk with partitions aligned to the megabyte (2048 sectors), that it corrupts something, but it is unlikely; the only reports we have are about the XP disk manager corrupting logical volumes inside extended partitions, never primary partitions.

About this, there are only two "conventions" about disk partitioning:
1) up to XP, partitions start and end aligned to heads and cylinders (and since the most common H/S geometry is 255/63, this means that the first partition starts at CHS 0/1/1 - i.e. LBA 63 - and partitions end on n/254/63)
2) starting from Vista, partitions are aligned to whole megabytes (actually mebibytes), i.e. 2048 sectors x 512 bytes each = 1,048,576 bytes, so the first partition starts at CHS 0/32/33 - i.e. LBA 2048

Clusters are related only to filesystems, and their size has nothing to do with disk partitioning. The NTFS default cluster size is 4096 bytes (no matter which OS, on any practical size of volume); FAT32 cluster size affects (or is affected by) volume size: the FAT32 default cluster size of 4096 bytes applies only up to around 8 GB, and larger volumes will have a larger cluster size.

If you want to troubleshoot the issue, you will need to learn how to make backups/copies of the MBR and of the PBRs and compare them, in order to pinpoint what actually changes. When you use "automagical" tools (Easeus/Eassos/etc.) you don't know what they do; for all we know they might fix the actual issue but create another one.

About bootrec, you have to understand what it does:
https://ss64.com/nt/bootrec.html
Bootrec /fixboot writes new bootsector CODE, leaving the DATA in it untouched.
Bootrec /fixmbr writes new MBR CODE, leaving the DATA in it untouched.
So if the issue is with either the MBR or PBR DATA, they won't do anything.

The above link also explains how to use Bootrec /rebuildBCD (i.e. making a backup and then deleting the existing BCD in order to force the tool to completely rebuild a new BCD). Still, given the mixing of files and operating systems, it is entirely possible that the installed OS is not detected, or not detected properly, so it is often necessary (though they both require some experience) to use BCDboot and/or BCDedit manually; in any case you must use the tools coming from the exact same OS you are trying to boot/repair. A possible alternative (easier to use because it is a GUI, but still needing some knowledge of the way the BCD works) could be BOOTICE:
http://reboot.pro/files/file/592-bootice-v1332/

jaclaz
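The two alignment conventions are easy to check against an actual MBR dump; a small Python sketch (the partition table here is synthetic, and the classification labels are mine):

```python
import struct

def partition_entries(mbr: bytes):
    """Parse the four MBR partition table entries (offset 446, 16 bytes
    each) and classify the start alignment of each used entry."""
    assert len(mbr) == 512 and mbr[510:512] == b"\x55\xaa", "invalid Magic Bytes"
    parts = []
    for i in range(4):
        base = 446 + 16 * i
        ptype = mbr[base + 4]
        if ptype == 0x00:                 # empty slot
            continue
        start_lba, sectors = struct.unpack_from("<II", mbr, base + 8)
        if start_lba % 2048 == 0:
            align = "megabyte"            # Vista-and-later convention
        elif start_lba % 63 == 0:
            align = "cylinder"            # up-to-XP convention, 255/63 geometry
        else:
            align = "other"
        parts.append((hex(ptype), start_lba, sectors, align))
    return parts

# Synthetic table: an XP-style NTFS partition at LBA 63 and a
# Vista-style one at LBA 2048 (sizes are arbitrary)
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"
mbr[446 + 4] = 0x07
struct.pack_into("<II", mbr, 446 + 8, 63, 204800)
mbr[462 + 4] = 0x07
struct.pack_into("<II", mbr, 462 + 8, 2048, 204800)
print(partition_entries(bytes(mbr)))
```

Comparing the output of this on a "before" and an "after" MBR dump would show exactly which DATA changed, which is what neither /fixboot nor /fixmbr touches.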
-
BOOT.INI is ONLY a settings file; it tells NTLDR what to boot.[1] You are loading the NTLDR of your Shorthorn, which (evidently) can boot the (previous) operating systems (2000 and XP); the inverse does not work, i.e. the 2K NTLDR can boot both NT4 and 2K but not XP, the XP NTLDR can boot XP and 2K (and possibly also NT4, but it would need to be tested), and so on. Vista and later can ONLY boot via BOOTMGR. You can add to your 2K or XP BOOT.INI an entry loading GRLDR (the grub4dos boot manager) and have in its configuration file (menu.lst) a choice to chainload BOOTMGR.

Other point: the "press any key to boot from CD" message comes from a file on the CD called bootfix.bin. The idea is that you may (or may not) be booting a CD to install Windows for the very first time, i.e. on a blank/unpartitioned hard disk (the intended use), OR you might be booting it on a machine with an already partitioned disk, OR (common enough) you forgot the CD in, left the boot priority unchanged, and booted to the CD by mistake on reboot after install. So, once the CD has been booted, the bootloader/bootsector of the CD runs bootfix.bin, which checks that the MBR is valid, and only if it is, it shows the message. If the MBR is blank or invalid, the CD continues booting without need for a keypress. Some more details here:
http://reboot.pro/index.php?showtopic=9540

jaclaz

[1] to be fair, though unrelated, non-arcpath entries in BOOT.INI are also parsed by BOOTMGR, I believe up to the one in Windows 8 or 8.1, surely up to 7.
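Just to illustrate the kind of entry meant above - a sketch, not taken from any actual system: the timeout, partition numbers and descriptions are only examples, and it assumes the grub4dos GRLDR file has been copied to the root of the active partition:

```ini
[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
C:\GRLDR="grub4dos (menu.lst can then chainload BOOTMGR)"
```

A matching menu.lst entry could then be something like `title Vista` followed by `chainloader /bootmgr`, rooted on the partition holding BOOTMGR.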