Everything posted by SlugFiller

  1. @Queue: "Visual", by definition, denies any "speed and resource benefits". It is probably many times slower than JIT-compiled Java code, and while I can't speak to the library size, the memory usage is undoubtedly much higher. If this were C, you'd at least have good performance, and it wouldn't be proprietary. It would also be easily portable, although not necessarily cross-platform (depending on whether the C runtime or the Win32 API is used). Also, I hope I don't have to explain the issues with a language that has a proprietary dynamically linked runtime (although I suppose Windows itself may also count as a "proprietary runtime"). Especially not on this forum, in this thread (*cough* VCRun2008 *cough*).
  2. Why are you using Delphi? The JDK is available for free from Sun's Java website. And it's a much better language too, although that isn't saying much. I mean, Pascal + Visual + Proprietary − any chance of cross-platform compatibility = worthless language.

     According to the documentation I have, the original 95 IFSMgr had a bug in that it cached the header of unicode.bin, but its cache was only large enough to contain 18 language entries. The documentation was a bit unclear, but it might be able to work with more languages, provided the OEM and ANSI code pages are in the same 18-language block. Certain localized versions did not have the bug. I'm not sure about the status of the 98 driver, although I imagine it probably fixed the bug already. The format itself should support up to a 4GB file; I didn't feel like pushing it to the limit.

     My original unicode.bin only had 5 code pages, and I've added an extra 4 (all East Asian). Your 30-page bin should probably go on mgdx or something. Could you list the code pages you have there?

     The code page data itself is from the Microsoft contribution to unicode.org. I made a small JavaScript page which converts the text files into the necessary Java commands. Obviously, I didn't sit down and manually input 60000 character codes. So any "blanks" in the data are the official Microsoft stance on the code page. There are no coincidences there, if it matches what you already have.
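For reference, the Microsoft mapping files contributed to unicode.org are plain text with tab-separated hex columns (codepage byte or code, then Unicode code point) and `#`-prefixed comments. A minimal C sketch of parsing one such line — the post's actual converter was a JavaScript page emitting Java commands, so this is only an illustration of the format:

```c
#include <assert.h>
#include <stdio.h>

/* Parse one line of a unicode.org codepage mapping file,
 * e.g. "0x80\t0x20AC\t#EURO SIGN".
 * Returns 1 on a complete mapping, 0 for comment lines and
 * for unmapped entries (empty Unicode column). */
static int parse_mapping_line(const char *line,
                              unsigned *cp_code, unsigned *unicode)
{
    if (line[0] == '#')   /* comment line */
        return 0;
    /* An unmapped byte has no second hex column, so sscanf
     * matches only one field and we report 0. */
    return sscanf(line, "0x%x 0x%x", cp_code, unicode) == 2;
}
```

Unmapped bytes appear as lines with an empty Unicode column, which is why the parser treats them like comments — those are the "blanks" mentioned above.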
  3. I've finally managed to compile and run the VxD equivalent of "Hello world", coded in C++ using 98DDK and VS6's CL. This will give me the ability to experiment a bit, and see if I can get something akin to Unicode file access, one way or another. One issue is that once the VxD is loaded, it cannot be unloaded or reloaded, even if the file is modified. The only solution is to restart the computer for every revision. That gets old really fast. P.S. 8 downloads, 0 comments
  4. I've made a small Java program for creating my unicode.bin. Attached below. It is roughly 400 lines of code, plus 60000 lines of CP data. Using it would be easier than trying to make such a program yourself. The bin uses a tree structure to define lookup ranges, and my program automatically sorts it out as a binary tree. It could be slightly improved by not creating a new range for a gap of fewer than 6 characters, and instead padding the gap with underscores; this would produce a slightly smaller bin. MakeUnicodeBin.rar

     But, like I said, I'm looking for a more comprehensive solution. I am contemplating using the file system hook to convert base64 or hex filenames into Unicode ones. To ensure compatibility, though, I want it to use a special device name which wouldn't collide with real devices, but I'm not sure whether the hook is even called for device names which don't exist. The alternative is to use Ring0 FileIO. This is a bit more complex, since it requires quite some bookkeeping and consideration of edge cases.

     Unfortunately, developing VxDs is VERY complex. I still have a lot to learn before I can properly start testing. If I am successful, though, patching KEx would be incredibly easy. The new architecture is even easier to extend and deploy than the previous one, and making file APIs which attempt to detect a hex-to-unicode helper device is actually quite simple.
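The gap-padding improvement described above can be sketched as follows — a hypothetical C illustration (the actual tool is Java) of merging sorted lookup ranges whenever the gap between them is under a threshold, so those slots would be filled with underscore entries instead of starting a new range record:

```c
#include <assert.h>
#include <stddef.h>

/* Merge sorted, disjoint [start,end] character ranges in place
 * whenever the gap between consecutive ranges is smaller than
 * min_gap (the gap slots would be padded with '_' entries in
 * the bin). Returns the number of ranges remaining.
 * This is a sketch of the idea, not the MakeUnicodeBin code. */
static size_t merge_ranges(unsigned start[], unsigned end[],
                           size_t n, unsigned min_gap)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        if (out > 0 && start[i] - end[out - 1] - 1 < min_gap) {
            end[out - 1] = end[i];      /* small gap: extend previous */
        } else {
            start[out] = start[i];      /* large gap: new range */
            end[out]   = end[i];
            out++;
        }
    }
    return out;
}
```

Fewer range records means a shallower binary tree and a slightly smaller bin, at the cost of a few wasted underscore slots.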
  5. @Tihiy: I was actually wondering more about your experiment process. Did you try to create a Unicode-named file, only to see it created with underscores? Did you verify the underscores were actually in the filename, and not just in the display? (If it is just in the display, the file would not load in any ANSI program.)

     @Joseph_sw: I've already replaced my Kernel32. I've also messed around with the NLS files. They affect the GUI behavior, but not the file system. Having a mismatch between the GUI code page and the FS code page just causes files of either locale to be inaccessible.

     Well, I've done some more research. Apparently, IFSMgr has an ANSI API which is used by Kernel32. (I think it also has a Unicode API, but it's hard to get concrete details on IFSMgr's API.) When it receives an ANSI filename, it makes a conversion using tables located in unicode.bin (one file for all CPs — now there's a bottleneck). I've modified it to add a few extra code pages, and successfully gained access to localized files. It uses the same registry key as the GUI code page, but is more limited, since it uses a single data file (instead of the pluggable NLS files). It does require a restart every time I want to change code pages, and each time I only have access to some of my files. It only supports encodings of up to 2 bytes per character, so creating a UTF-8 code page is out of the question (unless I want to recode IFSMgr itself). Still, there should be a way to access IFSMgr using Unicode calls, either directly from the kernel, or from a dedicated VxD. If I could find more information about its API, maybe I could get true Unicode file access to work.

     One thing I did notice was that Explorer isn't exactly "well behaved" when it comes to double-byte filenames. It displays them okay in the list, but improperly when renaming. It also fails to start localized files when clicked (DnD into an application works, though). I wonder if there's an XP Unicode version of Explorer I could use or something.
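The conversion step described above — IFSMgr mapping each ANSI byte through a unicode.bin table, substituting underscores for unmapped characters — can be sketched for the single-byte-codepage case. The flat 256-entry table layout here is invented for illustration; the real unicode.bin uses range records and also handles double-byte codepages:

```c
#include <assert.h>
#include <stddef.h>

/* Convert a single-byte-codepage (ANSI) filename to UTF-16 by
 * table lookup. Bytes the codepage leaves unmapped (table value
 * 0) come out as '_', mirroring the underscore substitution the
 * driver performs. Returns the output length in characters. */
static size_t ansi_to_utf16(const unsigned char *in,
                            unsigned short *out, size_t out_cap,
                            const unsigned short table[256])
{
    size_t n = 0;
    for (; *in && n + 1 < out_cap; in++) {
        unsigned short u = table[*in];
        out[n++] = u ? u : (unsigned short)'_';
    }
    out[n] = 0;
    return n;
}
```

The bottleneck the post complains about is visible here: one fixed table file serves every process, so only filenames representable in the currently loaded code page survive the round trip.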
  6. What do you mean by "turns them"? Doesn't it call IFSMGR directly (which, AFAIK, takes Unicode device names)? What did you test exactly?

     FAT32 uses UTF-16 long file names, so somewhere between CreateFileA and the IO subsystem, the ANSI filename is converted to UTF-16 (using a distribution-dependent codepage). The only question is where. If I knew the answer to that, I could patch that area to use a more comprehensive codepage, such as UTF-8; then supporting arbitrarily localized filenames would be simple. Since I've recently tested replacing my ifsmgr.vxd with localized versions (those were hard to track down), to no effect, I think it is safe to assume it is locale-independent. This does leave the possibility that the conversion occurs -after- ifsmgr.vxd, inside vfat.vxd. I did try replacing that too, to no effect, but I didn't choose highly varied locales there, so maybe it's just a coincidence. Still, it doesn't seem too likely, considering vfat.vxd gets its input as a Unicode string already. Unfortunately, I couldn't get a good description of the path taken between Kernel32 and ifsmgr. Like I've said, I've tried finding a good DLL-capable disassembler to get a better view of the CreateFileA implementation, with no success.

     I did start poking around my copy of the DDK to try and see if I could write a file-system driver. I figured I could, at the very least, write a driver that takes filenames as hex-encoded or base64-encoded strings and converts them to filenames using some UTF convention. I could then use that to either call another driver (one which supports Unicode filenames), or implement my own version of the FAT32 file system. Unfortunately, I couldn't find any good examples of file system implementations using the DDK. I did find one which uses VToolsD, so maybe that can help. It is also quite disturbing that not one of the DDK examples is written in pure C or C++; all use assembler code.
  7. Do NOT use uTorrent, regardless of your OS. uTorrent has a buggy upload manager: it fails to upload to other clients. The presence of uTorrent clients in a tracker usually results in the torrent being much harder to download, and the increasing proliferation of uTorrent clients is killing the BitTorrent network. For torrents I usually use BitTransmission on Linux; keeping a Linux box or dual-boot especially for your torrents is worth the effort. I've personally observed significant performance differences with the same torrent files. If a second OS is not your thing, try FDM. It has torrent support, isn't too big, and works decently well.
  8. Yet to be tested. I'm afraid it's not that easy. I'm basing this on the FileMon code. I've actually run FileMon on my 9x, so I know it works. If I knew how to load and call VxD functions directly from KernelEx, it wouldn't be too difficult for me to create my own VxD to do the Zw stuff, if necessary. Hmm... In fact, maybe I can create a namespace which takes hexadecimal strings, converts them to Unicode, and then relays to the appropriate sub-driver. Then, the only thing I would have to do is add a bin2hex in CreateFileW, and pass the result to CreateFileA. The question is, would that method be enough to support all basic file operations?
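A sketch of the bin2hex idea, assuming a made-up helper-device prefix (`\\.\UNIHEX\` is purely illustrative): a CreateFileW wrapper would encode the UTF-16 name as ASCII hex so it passes through the CreateFileA path untouched, and the helper namespace would decode it on the other side:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper-device prefix; a real helper VxD would
 * register its own name. */
#define HEXDEV_PREFIX "\\\\.\\UNIHEX\\"

/* Encode a NUL-terminated UTF-16 filename as a pure-ASCII hex
 * string under the helper prefix (4 hex digits per UTF-16 code
 * unit), so an ANSI API cannot mangle it. Returns the output
 * length, or 0 if the buffer is too small for the prefix. */
static size_t utf16_to_hexname(const unsigned short *name,
                               char *out, size_t out_cap)
{
    static const char hex[] = "0123456789ABCDEF";
    size_t n = strlen(HEXDEV_PREFIX);
    if (n >= out_cap)
        return 0;
    memcpy(out, HEXDEV_PREFIX, n);
    for (; *name && n + 5 <= out_cap; name++) {
        unsigned short c = *name;
        out[n++] = hex[(c >> 12) & 0xF];
        out[n++] = hex[(c >> 8) & 0xF];
        out[n++] = hex[(c >> 4) & 0xF];
        out[n++] = hex[c & 0xF];
    }
    out[n] = 0;
    return n;
}
```

The appeal of the scheme is exactly what the post says: the W-side change is trivial, and all the hard work moves into one decoding namespace driver.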
  9. Hmm... I've been looking up information on VxDs recently, and found out something interesting. Apparently, Win9x's file system manager, ifsmgr.vxd, as part of Microsoft's effort to be backwards compatible with NT4, uses Unicode strings to identify resources. The kernel-mode ZwCreateFile takes a Unicode string, which may be generated from a wide-char string, and this apparently applies to 9x. So, the theory is, if these calls are made directly from a KEx API, 9x can be made to support real Unicode versions of CreateFile and similar functions. In other words, 9x can support filenames in any given locale on any given version. And this does not require any VxD patching or rewriting.

     Now, obviously there are a few issues. The first being: can Kernel32 truly call VxD or DDK functions directly? Secondly, what sort of conversion might be required between a handle returned by the kernel-mode ZwCreateFile and the handle used by user applications for calls such as ReadFile? Are they identical? Does Kernel32 keep its own objects and/or handles? Short of decompiling Kernel32 (I've yet to find a half-decent PE-file disassembler), I guess the only way to test is with trial and error. Of course, kernel-level errors can be very risky.

     One thing I was wondering about is whether the per-application KEx configuration can be extended. If true Unicode file access is a possibility, it would be nice to set the code page for ANSI file functions, such as CreateFileA, on a per-program basis. But there is a question of whether that configuration can be read from the overridden CreateFileA itself (or if that would create some sort of infinite loop).

     P.S. Does anyone ever read my posts? I never seem to get a reply, and I can't help but wonder if they are even visible to other people.
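On the string-conversion question: the NT-style calls mentioned above take counted Unicode strings rather than NUL-terminated wide strings. A self-contained sketch of the convention — the struct is redeclared locally so this compiles without the DDK, and note that `Length` is in bytes, not characters, and excludes the terminator, a classic trap when calling in from user-mode code:

```c
#include <assert.h>

/* Local stand-in mirroring the NT UNICODE_STRING layout. */
typedef struct {
    unsigned short Length;         /* bytes in use, no NUL */
    unsigned short MaximumLength;  /* bytes allocated */
    unsigned short *Buffer;        /* UTF-16 code units */
} MY_UNICODE_STRING;

/* Build a counted string from a NUL-terminated UTF-16 string,
 * the way RtlInitUnicodeString does on NT. */
static void init_unicode_string(MY_UNICODE_STRING *us,
                                unsigned short *src)
{
    unsigned short n = 0;
    while (src[n])
        n++;
    us->Buffer = src;
    us->Length = (unsigned short)(n * 2);
    us->MaximumLength = (unsigned short)((n + 1) * 2);
}
```

Any wrapper generating these strings from a wide-char CreateFileW argument would have to get the byte counts right, or the kernel side will truncate or over-read the name.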
  10. Woo! Inkscape finally works! Pango-Cairo finally works! The bad news is, I don't need it anymore. I've found a much better SVG library for Java, and for my browser I've switched to K-Meleon, a native Win32 browser built on the Gecko engine, which works just fine on 9x (actually, I should say "it works faster than any browser I've used to date", but, you know...). Well, apps have never been a good reason to do hard OS work. Someone, somewhere, has already made a more compatible and better-featured version. So I'll be testing out various games soon, and will report if I find anything of interest. By the way, if I'm reading the source correctly, there is no more need for code generation to create a new API in v4; rather, the APIs are compiled directly. Is that correct? I may consider porting my filename extra-compatibility filter to the new architecture. Actually, from the looks of it, it won't require too much porting.
  11. It's been a while since the last update. I hope this is still being worked on. With SeaMonkey setting its official version to 2.0, and recent games requiring GfWL, the need for KernelEx is becoming greater than ever. In its current incarnation, it doesn't actually run any of the real XP-only programs (the ones that need more than a Windows version change and a W-to-A conversion). Perhaps work should proceed in a "pick a program, get it to run" methodology. I mean this in a more public form: users on this board could help test versions of KEx with a specific program, chosen by Xeno86 or tihiy, thus giving a clearer objective and focus, which would help make programs fully usable on a maximal number of configurations.
  12. Two questions: 1. Could the new KEx architecture be conceivably used to stub-out missing imports from non-system dlls? For example, could I choose for a gtkmm dll placed in an application's binary folder to stub-fake a non-critical void-return function? 2. How often does KEx update now? When should we expect RC 3?
  13. Set it to run in XP mode. Well, tried that, got a weird looking over-sized empty dialog, and the program never started up. It didn't exactly work perfectly on KEx 3, but it worked better than that. Maybe I'm missing something...
  14. How do I activate AdvancedGDI for a program? I want to try Inkscape in the new KEX, but it seems like it still doesn't come with AdvancedGDI enabled by default, despite being a GTK2.8 program.
  15. This forum already has improved generic ATA drivers. Generic SATA drivers are coming soon, thanks to the efforts of Xeno in porting their NT counterparts to 9x. Contrary to intuition, generic drivers are usually better than vendor-specific ones, probably because they are desired by a wider audience, creating greater motivation for development, and because they have a far wider testing audience. That, and focusing on standards compliance in hardware use is not that different from applying standards compliance in code paradigms, which commonly adds stability. In other words, "less hackity-hack-hack". That being said, you are somewhat exaggerating in your claim that Intel may have been influenced to discontinue their drivers. Intel were always behind on drivers, even before certain OSs reached EOL. They hardly ever release any worthwhile drivers for any OS, and the ones they do release are usually buggy as hell. If you've ever had the misfortune of having to use Intel hardware, I have one tip for you: third-party drivers. I would have given the exact same tip 8 years ago, too.
  16. Man, who wrote this crap? FAT traditionally uses UTF-7 for filenames, not a system-variant "System Character Set". The VFAT extension adds LFNs, which are stored in UTF-16, meaning true Unicode (well, depending on whether it supports surrogate pairs, though that's really implementation-specific). So, no version of FAT uses a "System Character Set". Most of the rest is misinformation as well...
  17. I've worked on eMule for over 5 years, and I did it on a 98SE box. I did have to use a remote box for compilation, but other than that, I've never had any problems running it. Not one compatibility issue.

     That's an exaggeration. The I64 is a clear flop; the backwards-compatible AMD64 is clearly superior. The obvious reason is that current developers target the existing 32-bit architecture with which AMD64 is compatible, since they don't want to lose that pre-existing market. The "upgrade leap" methodology attempted by the I64 is just not practical, and is therefore destined to fail. The existence of a prominent yet incompatible competitor with fair market domination and better compatibility with current software just puts another nail in that coffin. It is likely, however, that programs requiring AMD64 will begin to emerge with time, but not until I64 is completely deceased, since aiming for that half of the market is no better than aiming strictly for Intel processors. And double-compiling is never a fan favorite among developers. However, even when AMD64 requirements begin to emerge, it won't affect 9x, since the processor will handle the instructions while 9x continues to run on the 32-bit compatibility layer of the AMD64 CPU. The multi-core thing, though, will have an impact, but only on performance; compatibility will remain unaffected.
  18. Considering it needs the VS2005 files, it will probably require KEx, though another topic recently described a method to get those working using a PE editor or something... To really detect missing library files and/or imports, though, you should check out Dependency Walker. It gives you a complete list of static imports used by any executable or library. It won't give you dynamic imports, unfortunately, but those are far rarer, and usually turn up where they are optional, not required.
  19. So the question becomes "why do you still use an 80486?"
  20. As said, 9x does not HLT when there are no ready threads; instead it goes into a busy loop which uses 100% CPU. The solution is to use a tool which implements an idle thread that HLTs. I use "Rain". It has good performance and is fairly simple to use. I think it doesn't support AMD cool-off, though, so it would only be useful on Intel chipsets, or inside a VM.
  21. How is this different/better compared to CoralSoft's task manager?
  22. IIRC the XP recovery console is extremely limited in functionality; it's tough enough just copying a file from folder A to folder B in it. With DOS, I get something that is intended to be a full operating system, along with all the command-line tools and 16-bit software I could possibly need. I always keep Norton Commander on hand, so I hardly miss the absence of the GUI. Log reading, text file editing, copying, zip and cab extraction, etc., etc... I have a complete toolset. With Linux, the root console gives such complete support for non-X programs that you wouldn't believe it hasn't loaded any extra kernel modules. With nano I can easily (well, relatively...) edit the configuration files, fixing any misconfiguration or error, something I can only dream about in Windows. It would have been an awesome feature even had it not been the only way to get the system working (forget X-based, even a curses-based configuration dialog would have been sufficiently awesome). XP's recovery console is no replacement for cascaded kernel responsibility. In XP, if one driver goes, everything goes. This just gets worse with every new release.
  23. Personally, for me, it's the architecture. First, Windows itself included, there are very few programs cluttering my HD's root folder with unsolicited files and folders, and most of those are GNU ports anyway. Having control over the folder structure, starting at the root, is quite important to me. Anyone who says I shouldn't care where my files are stored should just switch to a Mac. That and native Win32 support are probably the only reasons I haven't switched to Linux yet. Though recent experience demonstrates Linux is just now topping 9x in driver support (by "just now" I mean "just this week, with the most recent kernel update"), with 9x's end-of-life bearing much of the responsibility for that (as opposed to Linux kernel development).

     Now, with regard to the above, switching to XP would, indeed, only cost me two extra folders on the root, and vastly increase software support (at least until the next KEx). With some work, I could also remove all the unnecessary components that constitute the downsides of using XP, e.g. genuine, the firewall, SP2, etc... I would still lose something I'm not ready to lose: a console guaranteed to start. That good old DOS prompt tells me that even if the screen settings go haywire, and even if win.com suddenly disappears, I can still restore my system without a boot disk. 9x has it, Linux has it, but the so-called "modern" Windows versions utterly lack it.

     Well, setting all of these aside, even if I chose to upgrade now, I would only be forced to continue upgrading as new versions of Windows appear. Right now I have my good spot, and I can stay in it for the next few years. Besides, all of those "downsides" you've mentioned? I haven't seen any of them for at least a couple of months, if not years. My system is as stable as I can imagine a system being... I've reached the point where I'm searching for features, not fixes.
  24. I used to use NDD because it was more verbose, and seemed more reliable (unlikely to crash itself, at least), but worse performance and consistent issues with localized filenames prompted a return to good old ScanDisk. Overall, NDD isn't so great: it has a plethora of issues, and doesn't apply fixes smoothly enough. At any rate, with my current disk size, I prefer to run a scan only when there is valid suspicion of a file-system error, and to do so manually. After all, it can easily take the better part of an hour with modern disk sizes.
  25. I seem to recall that Win98 (any flavor) didn't support USB mouse in Safe Mode, but WinME did. [Edit: I think it depended on type of USB controller.] True/untrue? Not true. I have a USB mouse, and it works fine in Safe Mode. Works fine all around. My main problem with it is that Windows thinks the legacy driver is a second mouse, so I had to manually disable the PS/2 driver. To me, this indicates Windows uses BIOS calls for the PS/2 driver rather than actually implementing a PS/2 driver. Anyway, I think I've since disabled the legacy mode, since I hardly ever use my mouse in DOS. It is true that USB HIDs are problematic, though. On occasion, certain USB HIDs, or even USB controllers, freeze up on any flavor of Windows (98, XP, etc), forcing either a re-plug of the device, a restart of the driver, or a restart of the whole computer. It's probably some sort of heat or power consumption issue, but it's still annoying when it happens.