jaclaz

Member
  • Posts: 21,294
  • Joined
  • Last visited
  • Days Won: 53
  • Donations: 0.00 USD
  • Country: Italy

Everything posted by jaclaz

  1. Spinrite may (or may not) be the best program around, but in any case it has by now become very, very outdated. The current version, 6.0, is around 18 years old, and there are many reports (even on its Wikipedia page: https://en.wikipedia.org/wiki/SpinRite) about issues with some newer BIOS/firmware and "large" disks (which in some cases may even mean disks larger than 128 GB). In any case, whatever "proprietary methods" it (maybe) uses, it is improbable that those methods are fully compatible with or suitable to the hard disk technologies that have in the meantime been used in the manufacturing of hard disks, so either it reverts to "normal methods" when it finds something new, or it risks doing more damage than good. All in all, using it (today) on a disk manufactured in the last 15-18 years is risky, even if one manages to access large disks with it; while its use could still be attempted as a last resort (say, in attempting to recover data that no more recent tool can), it isn't (IMHO) advisable to use it for "maintenance" tasks (such as this disk refreshing).
     Diskfresh (also not particularly current, the last version is seemingly from 2013) on the other hand seems like a "normal" program (i.e. one not using any particular proprietary method), so it should work just fine on more modern hard disks, as it likely only uses conventional read/write procedures. It remains a mystery (to me) why, if this disk refreshing is so useful/advised by "experts", only two dedicated programs (for DOS/Windows) exist that perform it.
     The badblocks program (Linux), in its read-write test mode, does *something more* (from what I can understand):
     1) it reads the contents of a given amount of blocks from the hard disk and caches them
     2) it writes a "random" pattern to the blocks
     3) it verifies that the pattern has been written correctly
     4) it writes back the cached contents
     5) it verifies that the cached contents have been written correctly
     so it will likely be much slower (three reads/two writes), though, if we follow the line of reasoning about bytes on a hard disk needing to be "exercised" (which personally I consider a form of voodoo), it would actually make sense to test the random pattern. Besides, like with many command line tools (Linux but not only), there is the risk of issuing a destructive command by mistake.
     Having re-stated that (in my opinion) it makes no sense to "refresh" data on disk, a hypothetical suitable procedure would be more like this (a hedged command-line sketch is given after this post):
     1) procure a (new) disk bigger than the original
     2) make a RAW (dd-like) copy of the original onto the larger disk
     3) calculate the checksums of both the original and the dd-copy and verify that the checksums are the same
     4) dd the image back onto the original
     5) re-calculate the checksum of the newly written "original" and verify it is still the same
     6) keep the RAW image on the new disk as a further backup
     This will take LOTS of time on today's large hard disks and put a lot of stress on the two hard disks involved, so you'd better have a setup where the hard disks are properly cooled (a fan pointing directly at them).
     jaclaz
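     A minimal command-line sketch of the above, assuming a Linux environment where the original disk is /dev/sdX and the larger disk is mounted at /mnt/bigdisk (both names are placeholders, double-check them, dd is unforgiving):
         # badblocks in non-destructive read-write mode (the disk must not be mounted)
         badblocks -nsv /dev/sdX
         # 2) raw copy of the whole original disk into an image file on the larger disk
         dd if=/dev/sdX of=/mnt/bigdisk/original.img bs=1M status=progress
         # 3) checksum both the source disk and the image, the two values must match
         sha256sum /dev/sdX /mnt/bigdisk/original.img
         # 4) write the image back onto the original disk
         dd if=/mnt/bigdisk/original.img of=/dev/sdX bs=1M status=progress
         # 5) re-checksum the rewritten original, it must still match the value from step 3
         sha256sum /dev/sdX
         # 6) keep /mnt/bigdisk/original.img as a further backup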
  2. OT (but not much) I sleep on the left side of the bed and always put down my left foot first when I get up. I started this in the early 1960's and so far it has proved very effective. For a short period of time in the late 1980's I had to sleep on the right side (and thus had to put down my right foot first) and my laptop hard disk suddenly started developing bad sectors. I am not saying the two are connected, still, as soon as I was able to re-arrange the bedroom and got back to good ol' left foot first, those disk errors stopped. To be fair, maybe it was due to the new laptop I bought at the time, though since then I have had many laptops and their disks never started developing bad sectors again[1].
     jaclaz
     [1] there was an exception in late 1993 (or maybe it was early 1994?) when I found 3 (three) bad sectors on a Compaq laptop hard disk (if I recall correctly it was a Seagate, 120 MB in size)
  3. It is not like you have thousands of disks stored horizontally and thousands of disks stored vertically and you found a substantial difference in their lifetime that can be connected to their storage spatial orientation (i.e. independent of the various drives' make, model and year of manufacture); for all we know, (say) Seagates from 2007 are better stored vertically whilst WDs from 2006 are better stored horizontally. What you offer (like any of us can only do) is some anecdotal, very limited, data. I am very happy that you have very old hard disks still working (horizontally) and that you keep an eye on their S.M.A.R.T. values, but for all we know this may have nothing to do with the disks' longevity: it could well be that they are properly cooled, that you have good air where they are kept/used, that your PSU's are very good; we cannot know for sure.
     jaclaz
  4. Yep, that GUID decodes to:
     uuid -d 72c19bc3-c024-11ed-ad1e-806e6f6e6963
     encode: STR:     72c19bc3-c024-11ed-ad1e-806e6f6e6963
             SIV:     152537264095622615413589416556765276515
     decode: variant: DCE 1.1, ISO/IEC 11578:1996
             version: 1 (time and node based)
             content: time:  2023-03-11 15:50:29.500000.3 UTC
                      clock: 11550 (usually random)
                      node:  [redacted]
     so it definitely has been recreated when you originally posted about it. But what are the contents of that key in the Registry under MountedDevices? If in regedit you double click on the value it should open a pop-up window with a title *like* "Modify binary data": on the left side you will see the actual bytes as hex values, on the right side how they render as ASCII; since the values in that section are usually Unicode you will see each letter separated by a dot, but it should be readable.
     The drivecleanup tool is not particularly difficult to use. You open a command prompt, navigate to where the program is, then you issue the command:
     drivecleanup -t
     and it will list all the devices it finds "orphaned". If your "ghost device" is among them, you re-run it as:
     drivecleanup
     and it will clean up the entries listed before. According to the docs, it checks these Registry paths: so, if you cannot find your ghost device GUID in those, it won't do anything.
     jaclaz
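     As an alternative to regedit, a hedged sketch of dumping that value from a command prompt (the volume GUID is the one decoded above; note that reg.exe prints REG_BINARY data as one long hex string, so regedit's binary editor remains easier to read):
         reg query HKLM\SYSTEM\MountedDevices /v "\??\Volume{72c19bc3-c024-11ed-ad1e-806e6f6e6963}"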
  5. A strategy based on a theory without any real world, practical, reliable data supporting it is not "reasonable". Again, magnetic bit rot exists (in theory), and it may also exist (in practice), which is why hard disks have built-in ECC and many modern hard disks have other additional self-check routines, of which we know nothing or next to nothing and which may vary greatly with the several different technologies used in different generations of hard disks and different makes/models of them. In theory, some of this bit rot may (somehow) escape the ECC and/or the other provisions. In practice, there is no real world data supporting (or denying) this. Refreshing the data on disk would logically prevent this if done "often enough", but no one knows how often "often enough" is; if not performed often enough it serves no purpose whatsoever (as the data is already lost). And we don't know if this (rewriting the same value over itself) actually makes the sector "weaker"[1] or affects this (or that) other aspect of the hard disk's functionality.
     About S.M.A.R.T., we actually have real world data supporting the finding that it is mostly meaningless as a predictor of hard disk failures:
     For all we know you don't need to refresh data to have the disk do whatever it does (that we don't know); powering it on periodically[2] and leaving it idle for some time[3] will make it do whatever it is supposed to do (that we don't know), and this might (or might not, we don't know) update some relevant S.M.A.R.T. attributes that are anyway largely meaningless.
     Now the real questions are: Should non-powered hard disks be stored horizontally or vertically?[4] Should they be protected from cosmic rays?[5]
     jaclaz
     [1] there are also theories about "weak" sectors that can be "revived" by writing different patterns to them
     [2] but we don't know how often
     [3] "time enough", and we don't know how much that is
     [4] which is loosely linked to the question whether they should be mounted horizontally or vertically in our PC's
     [5] Heck, if we have muon tomography and it works, what will muons do to our bits (and does parallel recording affect the way we should store our disks)? https://en.wikipedia.org/wiki/Muon_tomography
  6. Copying the boot files manually should be irrelevant. The intermittent cursor - generally speaking - is caused by a mismatch between CHS and LBA addressing; it can happen even with original MS bootsectors on some hardware, and there is a patch for FAT32 and NTFS bootsectors (FAT16 ones are not affected) to avoid the issue. It is possible that one version of Partition Wizard introduces only this mismatch (or *something else*) that Bootrec /fixboot can fix, while the other version introduces *something else* that Bootrec /fixboot cannot fix. No way to know without comparing the various bootsectors.
     Vista and later - again generally speaking - expect partitions to be aligned to the megabyte (and NOT to a cylinder); for primary partitions this is not usually an issue, but using disk manager can lead to logical volumes inside the extended partition being lost (a sketch of how to check the alignment is given below).
     jaclaz
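     A hedged sketch of checking the megabyte alignment from a Windows command prompt (wmic is assumed to be available on the system; a partition is megabyte-aligned when StartingOffset is an exact multiple of 1,048,576 bytes):
         rem list partitions with their starting offsets in bytes
         wmic partition get Name,StartingOffset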
  7. Yep, I would also like to have scientific answers (possibly backed up by real world data); with all due respect to the respective Authors, a thread on Vogons or a page on io.bikegremlin.com are only (baseless) opinions. The whole point revolves around the lack of proper data about real world "bit rot" on hard disks. Many people believe that the built-in ECC in modern hard disks is enough to prevent it and that the reasons why you lose your data are largely not related to bit rot (i.e. the hard disk will fail for a number of other reasons long before magnetic bit rot comes into play). Some other people believe in the magic of data refreshing (since we have nothing to use for comparison, we have no way to know if it is effective, how effective it is, and for all we know it could even be worse than doing nothing[1]). Both kinds of people should actually have redundant backups for their important data, as that is the only surely working mitigation policy.
     jaclaz
     [1] only as a purely fictional example: you take a hard disk that has been kept unpowered for several months (if you follow the advice of one of the two software makers in the world of a program for this) or years (if you follow the advice of some random internet user publishing a blog or a technical journalist writing an article in an online computer magazine) and you start reading and writing data to it for several hours continuously; would this stress accelerate its failure (for other reasons[2])?
     [2] the "other reasons" are relevant because if any of these "other reasons" happen, you have lost your data anyway, and it makes no real difference, once you have lost your data, whether it was lost due to bit rot/lack of refreshes or to any of the "other reasons".
  8. Maybe there is some misunderstanding going on; I have only seen and read (via google translate) the single German article you posted a link to: https://www.computerwoche.de/a/der-langsame-tod-von-festplatten-und-ssds,3549906 And it was written to inform users *like you*, not *like me*, as you can read German and I can't (I actually can, but only a little bit - ein bisschen). BTW that article first says: and later states: powering up the hard disk once a year or every two years is recommended to (supposedly) prevent stiction (which may only happen on some given types of motors/bearings); the refreshing is recommended. If you want to follow the recommendations (opinions) in that article and use Diskfresh, then you should use it 3-4 times a year, as recommended by the manufacturer of the tool. Surely the answer to the ultimate question about Physics and Magnetism (and Life, the Universe, and Everything) is 42.
     jaclaz
  9. Evidence is not a (single, apocryphal) article that expresses opinions. But no problem whatsoever, everyone is free to believe whatever he/she wishes to believe, as long as he/she is happy. And now, for no apparent reason, the Get Perpendicular movie by Hitachi (2005), featuring the super-para-magnetic effect:
     jaclaz
  10. Well,
     1) run bootsect /NT60 C:
     2) verify that Vista boots normally (via BOOTMGR)
     3) make a copy of the active partition bootsector
     4) run Bootrec /Fixboot
     5) verify that Vista doesn't boot anymore
     6) make a copy of the active partition bootsector
     7) compare the file copied in #3 with the one copied in #6
     (a hedged sketch of how to copy and compare the bootsectors is given after this post)
     It is very possible that (for *whatever reasons*) the bootsector that Bootrec /fixboot is supposed to be writing is "bad" (or outdated/not suitable for your specific hardware/partitioning) whilst the one contained in the version of bootsect.exe you are using is "right"; as seen here: different versions of bootsect.exe may have different versions of the bootsector embedded (though they should all work), and the same could be true with different versions of bootrec, and *for some reasons* the one in the bootrec you are using is not good.
     jaclaz
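     A hedged sketch of steps 3, 6 and 7, assuming dd for Windows (the chrysocome build) is used to copy the first sector of the active partition and fc to do the binary compare; C: is just the drive letter from step 1:
         rem 3) save the current bootsector (512 bytes) of C:
         dd if=\\.\c: of=bootsector_before.bin bs=512 count=1
         rem 4)-5) run Bootrec /Fixboot and verify Vista no longer boots, then:
         rem 6) save the bootsector again
         dd if=\\.\c: of=bootsector_after.bin bs=512 count=1
         rem 7) binary compare of the two copies
         fc /b bootsector_before.bin bootsector_after.bin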
  11. Well, you completely missed the point. There is no doubt that data - over time - may be lost due to failure of the media/device it resides on. The point is that there is no reliable real world data on what exactly causes the media or device to fail, nor on how soon (if ever) this may happen. ALL other possible reasons are relevant, as the point is whether the data is readable or not.
     With CD's and DVD's the media is separate from the device, so if the media is prone to failure, it makes a lot of sense to copy the data from the media and write it to some new media. On a hard disk the media and the device are the same thing; refreshing data (on the "same" device) means that, at an elemental level (one single byte), you are going to read (say) AA and write in the same exact place exactly the same AA. There is no evidence that the newly written AA is "better" than the previous AA or that it will last any longer, and anyway there is no way to know in advance whether any of the tens of other issues the device may experience (both at the hardware and at the software/firmware level) may make that data unreadable.
     Now, if you copy the data (as files, or as a dd copy or image) to a new device you have somewhat better chances that the new device will last longer, but no certainty whatsoever. Since you (we, no one) do not know how often this is needed (could be a year, 2-3 years, 5 years, we simply don't know), how often are you going to do this refresh? Let's take two years: you go for your refreshing and find out that for *some reasons* your AA is not readable anymore, your data is lost and you cannot refresh anything. Let's take one year: you go for refreshing and perform it successfully. Then you do it again, one year later, but this time you find out that for *some reasons* your AA is not readable anymore, your data is lost and you cannot refresh anything. Now, if you had (as theoretically needed) another two copies of the data you could attempt making a new third copy from one of the other two; without this you are essentially flipping a coin every n months/years.
     So, if we had thousands of (accurate/correct) reports by thousands of people that use ALL these approaches:
     1) never refresh
     2) refresh every year
     3) refresh every 2 years
     4) refresh every 3 years
     with the SAME data, using the SAME make/model of hard disk, kept in the same room/climate, then after a few years we would have some data to decide which strategy is the best one (and that would be accurate only for the one, or possibly two, generation(s) of hard disks and possibly not applicable to the current generation of disks). In a nutshell, choosing this (or that) strategy with the reports we actually have (none) is simply an act of faith in something intangible. Then we will start with anecdata: people that lament losing their data (and made no data refresh) will be criticized by those that refresh their data every 3 years and never lost their data, people that lament losing their data even though they do a refresh every 3 years will be told that they should have done it yearly, etc., etc.
     @D.Draker Unfortunately I am old enough not only to know about the deskstar/deathstar, but also to have had quite a few of them failing in the firm I was working with at the time.
     @Astroskipper Here is one source (recommending DiskFresh) stating that the data refresh should be performed much more often (the page is referred to on the DiskFresh page by Puran Software but it is long dead):
     https://www.puransoftware.com/DiskFresh.html
     https://web.archive.org/web/20160413062810/http://www.fact-reviews.com/info/diskfresh.aspx
     >In order to keep the data signal from fading, you need to re-write the data. This is often known as “hard disk maintenance”, and should be done 3 or 4 times a year.
     ...
     >A regular (quarterly) refresh of all hard disk drives will help the drive detect and fix errors before they turn into problems, and keep the data integrity intact. Don't forget to refresh any external USB drives you may use for backup purposes.
     The procedure is recommended (by the actual seller of the software that can do it, who should know how it works) every 3-4 months. How were the 1 year or the 2-3 year intervals determined then? (they are 3 to 12 times the recommended interval) Isn't it queer that, if the procedure is so needed and needed so often, there is only one program to do it (besides Spinrite[1])?
     jaclaz
     [1] which has its own list of doubtful claims; there are endless critiques of Steve Gibson and his works/programs, the most benevolent ones saying that he tends to exaggerate greatly (either the seriousness of the issues or the capabilities of his software to fix them)
  12. Yep, but we also need (at least) one that works, without copying, to (maybe) spot some differences. I quickly checked only the first (FreeDOS Live) and the last (AROS Live). The first one is made with (good ol') mkisofs, the latter with genisoimage. Both are NOT hybrid images (i.e. the first 16 sectors are 00's). Both are bootable (BootId 88) with bootmedia 0 (no emulation boot). The first one has no Joliet (but Rock Ridge), the second one has both Joliet and Rock Ridge (and additionally uses the ISO 9660:1999 Relaxed Format); see the isoinfo sketch below for a way to check these properties. One of the good things about mkisofs (that later programs stupidly removed) is the default feature of writing into the image the actual command line used to build the .iso, so the FreeDOS iso has been made with the (rather "normal") commands:
     jaclaz
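     A hedged sketch of how such properties can be checked with isoinfo from cdrtools (the file name is a placeholder); -d prints the volume descriptors, which is where Joliet, Rock Ridge and the El Torito boot record show up:
         isoinfo -d -i image.iso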
  13. This whole stuff:
     1) makes no sense whatsoever
     2) should it make sense in some very specific, niche situation, we don't have any meaningful, valid data to support the method, let alone the frequency at which it should be applied
     If you feel good refreshing your data, do it. If you feel good refreshing your data every year, do it every year; if you feel good doing it every 2-3 years, do it every 2-3 years. You will anyway lose some data sooner or later (or possibly you will never lose any), but there is no way to know in advance, nor any way to know whether this strategy contributed in any meaningful way to the outcome (whatever that happens to be). Replicating data (having multiple copies, on different media and stored in different locations) is an effective strategy, though it is difficult to implement, let alone maintain over the years. The only thing that promised (but it remains to be seen whether they delivered) long enough data retention were (are?) M-DISCs: https://en.wikipedia.org/wiki/M-DISC
     jaclaz
  14. Yep, and I have been around here for almost 20 years and have nearly 21k posts and 1.8k reputation[1], what gives? To be picky (as I am), the user you linked to did nothing but repost the opinion of a known expert (Kim Komando). I prefer Armand Gracious as an expert: https://www.dedoimedo.com/computers/experts.html
     jaclaz
     [1] which is anyway a meaningless metric.
  15. Risk is relative, there are much riskier things. Only as an example, I use Opera as a browser, Kaspersky as antivirus, and Softmaker as office suite (without having checked the German ancestry of the programmers of the latter company).
     jaclaz
  16. A good alternative could be to avoid altogether posting this kind of news on MSFN, which is (or has been till now, or should be) a technical board and not a social media site, nor a news aggregator (of course IMHO).
     jaclaz
  17. Still essentially OT, here: https://msfn.org/board/topic/182116-winsetupfromusb-problem-installing-xp-on-legacy-system/?do=findComment&comment=1191406 is a small batch that allows testing iso creation with different parameters and with "tricky" file names (but this won't touch other possible issues such as el-torito emulation or no emulation, multivolumes, etc.). Though the batch could be useful (maybe) as a base to develop on (a reduced hypothetical sketch is given below), the issue remains that we don't actually (yet) know <what exactly> is triggering the misbehaviour, so it is not reproducible.
     jaclaz
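     Only as a hypothetical, reduced sketch of the kind of loop such a test batch could contain (mkisofs.exe in the path and a testdir folder containing the "tricky" file names are assumptions), building the same source tree at different iso levels:
         rem build the same source folder at iso levels 1 to 4, with Joliet and Rock Ridge
         for %%L in (1 2 3 4) do mkisofs -iso-level %%L -J -R -o test_level%%L.iso testdir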
  18. It could well be something connected with the various standards (original ISO, Joliet, Rock Ridge, but also iso-level and el-torito emulation or no emulation, multi-volume disks and what not). There are many possibilities, and the actual references/documentation are far from being "crystal clear"; as a completely unrelated example, it took us (actually rloew and cdob) years to find out that the part specifying the size of floppy images for el-torito emulation had been read "wrong" by everyone and that we could actually have 36 MB superfloppies, JFYI: https://msfn.org/board/topic/152399-on-bootable-cds-floppy-emulation/
     jaclaz
  19. Yep, you are using hex offsets, 0x8000=32768=16x2048. After the CD001, starting at relative offset 0xBE or 190, there is normally a huge run of data fields (reaching towards the end of the sector) and many iso tools use space, i.e. 0x20, to fill them. This should be "normal": the last 512+653 bytes should be zero, but before those the presence of 0x20's is OK, see Volume Descriptors here (a quick way to dump that area is sketched below):
     https://pierrelib.pagesperso-orange.fr/filesystems/iso9660_simplified.html
     http://www.dubeyko.com/development/FileSystems/ISO9960/ISO9960.html
     It must be *something else*, not those values in the Volume Descriptor.
     jaclaz
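     A hedged sketch of how to look at that area yourself, assuming xxd is available and image.iso is the file to inspect; the dump starts at byte 32768, i.e. at the Volume Descriptor, so the CD001 signature and the space-filled fields after it are visible:
         xxd -s 32768 -l 512 image.iso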
  20. It seems more like *something else*: offset 8000 is within the 32,768 leading (normally) empty bytes/16 sectors, so there would be no reason to have part of them filled with spaces; around the volume label (I presume you mean the volume descriptor, i.e. "CD001") there could well be something like a fixed length field filled with trailing spaces. I have no idea what "Smart Storage, Inc." could be related to. UltraISO is a known tool that may well not be fully compliant with the specs (and actually I seem to remember a few issues with it when using it to edit a .iso); I have no experience with Gear software.
     An easy test that you can try is zeroing the first 32,768 bytes of a copy of a (non-working) .iso and seeing if it still misbehaves (a hedged sketch is given below). Also, checking the actual .iso files with isoinfo (from the cdrtools package) might (or might not) find some particular inconsistency in those non-working files. The cdrtools port to Windows should be available here: https://sourceforge.net/projects/cdrtfe/files/tools/binaries/cdrtools/
     jaclaz
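     A hedged sketch of that test, assuming dd is available and the copy of the image is named copy.iso (conv=notrunc overwrites the first 16 sectors in place without truncating the rest of the file):
         # zero the 16 leading 2048-byte sectors of the copy
         dd if=/dev/zero of=copy.iso bs=2048 count=16 conv=notrunc
         # then look for inconsistencies in the copy with isoinfo
         isoinfo -d -i copy.iso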
  21. It is strange, but I am not sure I understand. 20 means "space" in ASCII and "20,00" means "Unicode space"; neither is used as "empty" (which remains 00 or 00,00), and this is not related to the ISO (ISO 9660) file format, AFAIK. I have never seen a .iso where 20 is used for empty sectors (if this is what you mean)[1]. What would happen with a .txt file filled with spaces (either ASCII or Unicode)? Or, if you prefer, what happens if you change the file extension from .iso to <something else>?
     jaclaz
     [1] the standard has the first 16 sectors (2048 bytes each, i.e. 32,768 bytes) "unused" or - maybe better - "reserved for other uses"; they are (for normal CD/DVD images) all 00's or (as an example) contain a MBR or other data for "hybrid" images; though it would probably be fine according to the standard to fill this area with 20 hex, it has never happened to me to find one such .iso image and it would anyway make very little sense.
  22. Personally (JFYI) I use (an old version of) CutePDF Writer for that (it needs Ghostscript installed, which is anyway a useful tool). But AFAIK/AFAICR it doesn't need a service running; it appears as a normal printer in the print dialog.
     jaclaz
  23. And if I may, a crappy article, with an (intentionally, I believe, to gain traction/clicks) misleading title. The XP 2022 is only a mock-up of what a non-existing OS might look like; this crap should NOT be reposted, especially on a technical board like MSFN is (or should be, or was until the recent trend by one or a few users of submerging it with random articles about anything they find or that crosses their mind).
     jaclaz
  24. It could well be that all the TTL devices are not suitable, or that you are connecting them improperly, or whatever else, but if we assume that at least one of them works and the connections are fine, the PCB is most probably defective or bricked beyond what you can do. I see you have asked where to look for professional repair: https://msfn.org/board/topic/184610-where-to-send-drives-for-repair/#comment-1242291 It is very difficult to give a suggestion; there are a few large firms, besides smaller shops (that could be either better or worse). Seagate has their own data recovery service: https://www.seagate.com/products/rescue-data-recovery/ Drivesavers seems to have a lab in Houston: https://drivesaversdatarecovery.com/cms-category/location/
     jaclaz
  25. I seem to remember that the 7200.12 boards work when completely detached from the heads and motor. You could try (after searching and verifying the above) to connect to the unmounted PCB and see if you get a terminal response (a heavily hedged sketch of opening the terminal is given below). Shorting the pins is - again, if I recall correctly - not used when the terminal does not connect/respond, but rather when the terminal continuously outputs an error message (a sort of loop) and the shorting allows you to interrupt the loop. On other models the two points to short correspond to the head read channel; here is some info: https://msfn.org/board/topic/157329-barracuda-lp-no-not-a-720011-nor-a-720012/?do=findComment&comment=1003759 But I have no specific, let alone surely working, info/details.
     jaclaz
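     Only as a heavily hedged sketch (the settings are what is commonly reported for Seagate diagnostic terminals, not verified for this specific model): the serial port is usually driven at 38400 baud, 8 data bits, no parity, 1 stop bit, so from a Linux box with the TTL adapter showing up as /dev/ttyUSB0 (placeholder) one would try:
         screen /dev/ttyUSB0 38400
     and then press CTRL+Z to see whether a prompt appears.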