SlugFiller
Member · Posts: 127 · Donations: 0.00 USD · Country: Israel
Everything posted by SlugFiller

  1. I thought PCI cards that support it can actually add that support to the BIOS. I've seen cards that seem to add BIOS features not usually available. Setting up the boot server is probably the tough part. "*.*" means "all files in the folder". You don't need to add "C:\", since copying to the current folder (in this case "C:\Win98") is the default behavior. Incidentally, you might also add "sys c:" to make sure your HDD becomes bootable (necessary prior to Win98 installation). Along the same lines, you might also want to make sure the partition you're installing on is active, although that is normally the case if you had any other OS previously installed there.
  2. Doesn't ffdshow support Indeo? And with better quality and speed at that?
  3. Isn't the problem here that it is one of the settings a theme sets? Would the solution be, therefore, to reconfigure this setting into your individual themes? Either that, or switch themes less often.
  4. I have a 250GB single partition. I had the same problem of "Out of memory" for Scandisk and Defrag. After installing a new dskmaint.dll (v4.90.3000) they both worked.
  5. Why? 98 is practically a service pack. I can't imagine a reason to stay on 95. It's not like it has something 98 doesn't have, or any noticeable differences aside from lesser hardware support and stability. It can't really be compared to a move to the NT series. When I first heard about it, I thought that would be the OS I would upgrade to after 98. When I next heard about it, I realized it's not. Actually reduced stability+features I don't need+disabling things it should have = not too inviting. Too bad that after that screw-up MS figured "screw it" and threw away the whole series.
  6. Buffer overflows in user-mode programs cannot cause blue screens. Verified. If you try to write into "no man's land", an exception is raised. If uncaught, a general protection fault closes your program. As for DLLs, if loaded from a program they would not cause that. If loaded from the kernel (e.g. User32.dll), they are drivers, and may be considered a part of the kernel. What a program can load to bring the system down is a vxd, which is essentially a kernel-mode driver. Mind you, this is possible in NT as well, although using different methods. The result is the same. CMD.EXE isn't a DOS emulator. It is a true command line, as pure 32-bit applications can be run from it. I didn't say it is a DOS emulator, I said it is a command-line emulator. A true command line runs in text mode. In 9x, a virtualization layer translates the text into a graphical display, as it does for DOS programs. "cmd.exe", on the other hand, is quite aware of the graphical console and the fact that it is in graphic mode. It is a terminal. The fact that it runs 32-bit applications would only demonstrate that further, although "command.com" also does that thanks to virtualization of int21h. Rather than consider its performance with 32-bit applications, I would consider the sort of trouble it can have with 16-bit ones. It's a terminal, not a command line - a graphical program that knows it's a graphical program and simply allows text input like a command line, but without actually being in text mode. "A second OS"? The recovery console is not a part of NT. It's a part of the installation CD. If it were installed along with Windows (non-optional), it would qualify. Well, somewhat, at least. And what does QEMU run on, smart-a**? I'll try the "Windows port", though I'd have higher expectations from Bochs. Except that none of them work. All of them create some sort of "user-unfriendliness." Again, hello: Java VM. Works 100% last time I used it. 
Only problem is, Java is not an OS, it is an environment that sits on top of other OSs. Perhaps that's why its design is so good: they knew it couldn't be as all-controlling as an OS, since it has to share its space with an existing OS, so instead they came up with a more minimal but effective design. OS developers should learn from that design. One of the biggest problems people have with Vista is its "User Account Control" that demands people approve almost every single action a program takes. I have a similar grief with popular firewalls. Still, how is having to log out and log in as root, and enter a password, better than this? But it can suddenly be overwritten. In fact, when you switch to root, all of the defenses you had when not being root suddenly disappear. Rather than getting access to one folder to install your one program, the installer suddenly has access to everything everywhere. An "all or nothing" strategy differs from "nothing" only in being more annoying. In that case, I might as well just have "nothing". The only "hole" is the one in the idiotic user's head. I was referring to the possibility of having an elevated-rights program. How hard would it be to bundle an installation with some other elevated-rights program that doesn't require any confirmation, let alone a password? After that, random programs can elevate themselves at will, giving you the same condition as an unsecured system. Well, not exactly the same condition, since legit programs will still be a bother to install and use. As for idiotic users, if you're smart enough to get Linux running, you're smart enough not to run worms. Right there you assume Linux consistently does that right. It doesn't; that's not how its sandboxing system works. Only specific scripts know how to sudo for things they need. If you so much as want to copy a program into /usr/bin, assuming you'd rather use a graphic file explorer rather than cp, you'd still need to sudo the file explorer. 
Besides, even if you desire a password, you don't need an account system integrated into every file to do that, you just need an administration password (just like for your BIOS). You know, one that isn't associated with a username, just with the general concept of "having access". Different programs and users can be given different rights. No. Different users can be given different rights. Different programs can be given different users. Making overlapping but not equal rights is downright impossible. You do get the "grace" of 3 levels - root (=all rights, if you can call that a level), group, and user. If all of your overlaps and use-cases are so clearly defined that these would do, then you're in the clear. Generally, though, it won't exactly work like that. There are too many resources that need to be given or denied access to from different programs, and they are not cascaded: one program may require files and internet, another internet and sound, and yet another files and sound, and you don't want one using a resource it shouldn't. And how is Windows any different? Most Linux distributions will warn right off the bat never to log in as root. Windows actively encourages it by making the default user an Administrator, instead of giving him a normal account and having him specify a separate system password. Not saying that Windows is more secure, or encourages not having full rights. Only that if you're going to always be the root anyway, there's no reason to pay the overhead involved, and Windows allows that. Linux does not. And no, a multi-user system does NOT create enough of a performance overhead that can't be fixed by efficient coding. There were multi-user systems long before the average individual could afford their own computers. Okay, stop right there. Breathe. Count to ten, then re-read what you wrote here. Let's try this again: re-read the last 9 words. You just said it outright: Linux is for servers, not individuals. 
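The three-level model described above (owner, group, other) is literally just nine bits per file. A minimal sketch, using Python's stat module, of how a mode like the "root -rwxr--r--" example decodes:

```python
import stat

def describe(mode):
    """Render a Unix file mode as the familiar rwx triplets for
    owner, group, and other - the three fixed levels discussed above."""
    return stat.filemode(mode)

# 0o100744 is a regular file with mode 744: the owner gets read/write/execute,
# group and other get read only - i.e. the "-rwxr--r--" appended to each file.
print(describe(0o100744))  # -rwxr--r--
```

Note that any access pattern that doesn't collapse into those three sets (such as the files/internet/sound overlap example above) simply has no encoding in these bits.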
Multi-user systems in personal computers are left-over debris from when all computers were servers. It's intended for server use only. Incidentally, servers are usually super-computers anyway, and waste a lot more overhead on logs and, you know, client-server access, to care about the cost of having a multi-user system. For a personal computer, having "root -rwxr--r--" appended to each file is not acceptable overhead. It should not be there to begin with. We're not developing an OS here. If we were, there are much better paradigms than cloning Windows. In fact, I've designed my own paradigm that puts all existing OSs to shame. It has the unfortunate flaw of requiring years of development to be accomplished. The title of the thread, if you'll note, is "Open Sourcing Windows 9x". In other words, it's about taking an existing OS, and re-making it as open source. It's sort of an "upgrade", if you will. A "hotfix" to that nasty bug of it being closed-source (well... and crash-prone). Given that, starting from the existing version and altering it to open source step-by-step is a perfectly legitimate way to go about it. Well, as much as "legitimate" applies here, considering copyright laws. Exactly. They needed many DLLs just to get a working start, which took a lot more work just to get an alpha version. And even after that they don't have reasonable compatibility. The ability to have the unofficial DLLs integrate with the official ones would make the process both faster and more compatible. You only have to worry about the compatibility of one DLL at a time, and all programs already work due to the support of the existing official DLLs. Your entire argument here relies on the last sentence, which is false. My current installation is a mix of DLLs with a very wide variety of versions. It's a localized Windows using US drivers and security updates. Most of the VXDs have been replaced with versions that, according to you, should not work. 
The only "intertwining" I've noticed is of "user.exe" and "user32.dll", which cannot be replaced with non-localized versions. I've even made a thread about not being able to update these two for that reason. Quite frankly, these are the two I would start with, since an open-source variety would finally give me the chance to replace them with "universal" ones. Alternately, by just sticking to all other DLLs (Kernel32, GDI*, AdvAPI32) you can avoid any incompatibilities. Of course, the replacement DLL would need to implement all functions as of the alpha release, but that can be alleviated with stubbing (keeping the official DLL for relay calls on yet unimplemented parts of the API). Still, getting the API of a single DLL done right is much easier than trying to get the entire Win32 API right at once. Depending on how much of the program you want to run. If you have a DLL that is fully compatible with the DLL cross-requirement, you know it will also work with any program. The fact is, many programs still don't run in Wine, or only run partially, or run for a while and then crash. Maybe the full export table of a single DLL is larger than the full import table of a single program, but take 5 or 6 programs together and suddenly the balance tips the other way. This may not seem like a fair comparison, but consider this: if you have a partial API that fully runs a program, but still fails to run several of your programs, or even one of your favorite programs, it's just not ready for regular use. On the other hand, if an entire effort only manages to plug out a single DLL, it still has its own worth. And since the plug-out of that single DLL does not prevent even one program from working, you can use it right away with all of your programs, and never have to go back to the original DLL. In other words, one plugged-out DLL has more use than a system supporting 5 programs.
  7. I would imagine it already has better stability, performance, and hardware compatibility than 9x. Software compatibility is the only issue, and that is probably the #1 thing they work on already. Only problem is, it is a difficult task to achieve, so inevitably it will take them long to get there. It is already on their todo list, though, no need for a special mention or fork.
  8. (Code used instead of quote in some places due to forum limitations) You completely missed the point. The point isn't that I can easily look at the Windows folder, but that I can easily not look at the Windows folder, since the rest of the harddrive is my free playground. Well, save for 3-5 boot files in the root directory (e.g. Io.sys). Linux's abysmal failure in this regard is further demonstrated by the following point: In Linux, all user executables are placed into one folder, with rare exceptions. /usr/bin. I very rarely install my programs in "Program Files". I have my own Apps folders, and almost all applications accept being installed into it. In Linux, applications do not accept being installed where I want them. They have the Linux folder system built-in and hard-coded. That's why Cygwin, for example, has to create a matching fake folder structure just for programs to run. Besides: There's 5 already. Way too many. And you also have a mistake there: C:\ is not /boot. Last I checked, you can't create "/boot/Apps", or "/boot/setup". To me, the equivalent of "C:\" is "~/", which is actually a shortcut to a much deeper folder on the system. That Linux insists on controlling the root of the file-system on my drive is my #1 grief with it. An OS is supposed to give me control of the computer, a sort of a middle-step between the MBR and the programs I really want to run, so that I can select the latter, preferably with a comfortable UI. It is not supposed to use my computer for its own needs, forcing all of my files into a sub-folder. It should have as minimal a foot-print as possible. This is why Linux is more suitable for server use. It doesn't need server software, it is server software. On a personal computer, the OS should not be a program on its own. A personal computer is much like a gaming console, with the files being the CDs/cartridges, and the OS should simply be the equivalent of the BIOS (or at most of a swap disk). 
Now that's just completely the wrong way to go about things. It's like "my driver for this hardware is not fast enough, let's switch OSs". Oh wait... Seriously, though, a second partition is quite the massive step. It's not exactly comparable with, say, changing a registry key. It seriously impacts low-level work, in terms of file position on the drive, available space, ability to use other OSs, thrashing potential, etc etc etc. Besides, I'm willing to hack a bit to get extra performance and stability, but not to get basic functionality. Also, the home folder is also used as a "My Documents", and most programs don't allow configuring an alternate default folder (or an alternate folder at all). In other words, this "replacement root" will contain most of your documents, settings for all of your programs, your mail, junk files, and pretty much anything random programs throw in there. Not exactly a place I could put my installed programs in. The only times I have that issue is when using programs originally made for Linux (e.g. GnuCalc placing its history in the root). I have one "user" on my computer. ...snip... All other "accounts"... Um... Oxymoron? Besides, that each individual file has a user associated with it, well, that's a huge foot-print right there. Even if "root" is your only user, it's still there. That's one hell of a foot-print, you must admit. I'll accept giving certain specific folders special permissions, ones saved in a separate file (e.g. System.dat in Windows), but per-file permissions as part of the file-system? Can you say "overkill"? Not that these actually give better security, mind you; usually they create more holes than they plug, with all the workarounds they end up requiring. "X" is not a driver. Linux can start up without loading "X", but it can't start up without loading USB, PPP, sound-blaster drivers, graphics drivers (even if it remains in text mode), and hundreds of other drivers utterly useless in a console. 
The fact of the matter is, Linux has no "Safe mode". It either loads the entire kernel with all attached modules and services, or it loads nothing at all. There's no gray area in between. If you happen to have the wrong module installed, or the wrong component plugged in, the whole kernel will come crashing down, even if "X" isn't loaded. That's exactly what Wine allows you to do. Let me re-emphasize: With no more than a click. Or double-click, depending on configuration. Normally, Wine requires two months to configure. To run a single program, that is. Once that's done, all you need to do is type in a 300-character command line to get it started. Or write a really long shell script. Again, for one silly little program. Alternately, I could run Windows, use WinRAR to unpack the binary into a folder of my choosing (under the root, not stuck in some "home" sub-folder forced upon me by the OS), then open Explorer, and go click the exe. Linux isn't so comfortable even with its own native programs, let alone with ones meant for another OS. The only times installing and running something in Linux didn't require hours of messing with package managers, compilers, and general headache-makers, were when they were actually Java programs. Not that installing the latest VM was that easy. Incidentally, it took me 5 seconds to get Windows to run JArs on a click as well, more than I can say about Linux. Even if you put all of that aside, Wine still can't run the vast majority of my programs. I'm not talking multi-threaded debuggers here, just normal text editors, FTP programs, and the occasional DirectX game. It just doesn't work. It is still a decade too soon to be actively used. And DOSEmu, and BHole, and ... Again, I place a high value on "it working". Linux DOS emulation is slightly better than its Windows emulation. The emphasis being on slightly. 
Quite frankly, until recently DosBox was also insufficient, but since a couple of versions ago it has proven to have all the compatibility I require. I highly dispute the idea that Linux is a "toy" because it can't run Windows programs out-of-the-box, even though there is plenty of equal or better software for Linux. One could challenge that "equal or better" claim. Then again, one could mention games or specialty software - two classes where you can't claim an equivalent in Linux. It either has a port, or doesn't, and more often it doesn't. But one would be better off mentioning how little the term "out-of-the-box" applies to Linux. My experience with Ubuntu tells me Linux programs are either "bundled" or "too much of an effort to bother". That's probably why it can't be installed (in an intuitive manner, at least) without OpenOffice, even if I seriously don't want it. Sort of reminiscent of Windows and IE. And for what reason do you need DOS compatibility, other than games? Well, I've yet to find a Windows or Linux equivalent to Ripper5. Not that "games" isn't an adequate reason. Windows (any version) lacks freedom because they require partitions. In fact, any operating system you install to a hard drive lacks the freedom not to use a partition. Partitions are required by the BIOS, not by the OS. If you are referring to live CDs here, they technically also have partitions, although these are usually called "Tracks". There's a difference between files simply marked with a "hidden" flag (or starting with "."), and hiding actual system information, and hardware features, and basically anything a HAL does. Not that Windows lacks a HAL, just that Linux doesn't settle for abstracting just the hardware, and goes on to abstract most of the file-system. Okay, not that I'm protecting the NT series (which has its own issues, aside from the hype), but name me the difference between "opt-in" and "opt-out". 
I'll give you a hint: File access in Linux is "opt-in", in Windows with protection/encryption it's "opt-out". You might have made that argument 10 years ago. It may then be true if by "Linux" you meant Ubuntu as released 20 years in the future. Today, it is wrong from both ends: "Moron users" or, as you call them, "average", have already heeded the end-of-service and switched to Vista (and are now suffering for it), or had the half-brain required and switched to a Mac, as all non-geeks should. The current 98 community is most likely 100% niche, and cannot settle for Linux, each for different reasons. Linux, on the other hand, is still too much of a "niche" in itself. Even the distros aimed at end-users fall far short of ancient versions of Mac and Windows. It's great for IT professionals. It's crappy for John and Jane Smith, and there are thousands of really long articles explaining why, using real-life use-cases to do so. Can a user play games on Linux? Yes. Can they surf the internet on Linux? Yes. Can it run on my old computer? Probably, yes. Can I listen to music on Linux? Yes. The first is questionable, unless you're not talking about specific games, in which case it's not exactly a correct argument. As for the rest, so can my cell-phone. Or an XBox360. I'm not gonna throw away my computer to use those instead though. When you think about it, and consider your own argument: The majority of people using their computer for just e-mails and surfing can just as easily switch to a cell-phone and not look back. Given that, the real reason to have a PC is the variety of programs - the ability to download, install, and run, any number of specialty or unique or lesser known programs. Unlike their cartridge and CD based siblings, PCs have the freedom of running whatever, not just what large companies made. The ones that use the PC for this, they are the ones who truly need it, and should truly care how it runs. 
Linux, despite being on the good side of the FSF, actually makes it quite difficult to use whatever is not available from the package manager. As mentioned in various "Linux will always be a niche" articles, making a program "compatible with Linux" is near-impossible at best, and ill-defined at worst. What's more, since this discussion is about Linux as an alternative to 9x, consider this: If I have my own favorite programs which I use in my 9x, and Linux can't run them, then it is not an alternative. Replacing an OS is one thing, replacing all of my programs is another. The above explanation of what constitutes "my program" should make that much clear. The OS is there to run programs, the programs are not there to be run on the OS; they are there to run for my use and enjoyment. Therefore, an OS that does not run the programs I enjoy is useless to me. So, to conclude:
-An OS must run the programs of my choosing. If I have happened to choose Windows programs thus far, then that's what my OS must run.
-An OS must have minimal foot-print. It should not spread itself across my drive. It should not take over my system. It is merely a bridge between me and my programs, and just as I would not allow a program to place random files everywhere, nor would I allow the OS to dictate the structure of my harddrive.
  9. You can use any of the large service packs using this method:
1. Copy user32.dll and user.exe from your System folder to a back-up folder.
2. Run the pack.
3. Follow the instructions until it restarts your computer. If it closes without error and without restarting your computer, restart your computer manually.
4. When your computer restarts, use Shift-F5 to exit to DOS mode instead of running Windows.
5. Copy user32.dll and user.exe from your back-up folder back into your System folder.
6. Use Ctrl+Alt+Del and restart Windows.
7. If a blue-screen occurs at startup, use Ctrl+Alt+Del and return to step 4.
8. If Windows starts and the pack installation resumes, return to step 3.
You will still need to locate a Chinese version of Q291362 to get a fully upgraded system, although it's not necessary for the purpose of installing the pack.
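The backup-and-restore half of the procedure (steps 1 and 5) is mechanical enough to sketch. This is only an illustration of the logic; the real steps run from DOS, not from a script, and the directory locations are up to you:

```python
import shutil
from pathlib import Path

# The two localized files the procedure protects.
PROTECTED = ["user32.dll", "user.exe"]

def backup(system_dir, backup_dir):
    """Step 1: copy the protected files out of the System folder."""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    for name in PROTECTED:
        shutil.copy2(Path(system_dir) / name, backup_dir / name)

def restore(system_dir, backup_dir):
    """Step 5: copy them back over whatever the pack installed."""
    for name in PROTECTED:
        shutil.copy2(Path(backup_dir) / name, Path(system_dir) / name)
```

The point of the loop in steps 3-8 is simply that the pack keeps trying to replace these two files, and you keep putting the localized ones back until the installation finishes.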
  10. Isn't it possible to boot from the network directly, instead of using a floppy?
  11. DOSBox may be 2-5 times slower than the real thing, but most programs/games written for DOS targeted CPUs 10 times slower than the cheapest thing you can find today. Sure, that doesn't apply to the computer you dug up in the attic, but then you're most likely to put Linux on that computer and use it as a router or something, not use it as a gaming platform. Or you could just install regular DOS on it and do away with the need for a modern OS altogether.
  12. I have a single SATA HDD with 250GB (or 232, depending on how you count). It has a single 250GB partition allocated. It was created using a linux-on-CD (Ubuntu, I believe). Chkdsk reports: The BIOS detects the drive just fine. DOS loads it up fine also. Running in Safe mode works fine, Windows shows no problem with the device. Same with "Force Compatibility mode disk access". Same if I have the wrong driver installed for the SATA controller (e.g. D343Port), forcing the HDD to work in compatibility mode (while still allowing ASPI-mode CDs and other 32-bit drivers). Thanks to recently-installed new versions of scandisk and defrag (I'm guessing from ME, although with all the unofficial service packs I'm not entirely sure), I can report that so long as the disk is in compatibility mode, it works 100%. When I install the correct driver, though, Windows freezes at boot-up when loading the mpd. Verified with both bootlog and step-by-step confirmation. Since I'm using a rather new SATA driver (not default ESDI), I find it unlikely this is an LBA48 issue with the driver. The problem also occurred with two distinct SATA controllers, with two distinct drivers (both claiming to support 9x). I've also tried various VCache and virtual memory settings, to no effect. Though, I don't see how these would fail on a driver but work in compatibility mode. Currently, I only have two theories as to why this is happening: 1. There is a partition size limit in 9x which none of the LBA48 articles Google returns for "large partition win98 limit" care to document (nor does Microsoft). Somehow this partition size limit does not apply in compatibility mode (Does not check size? Cares only about DMA access and not PIO? etc...) 2. I've recently noticed my chunk size is 16k (see above chkdsk report), while the default for large partitions is 32k. I'm not sure if this could cause an issue, but I do know correcting it would be an issue, as I would have to repartition my drive. 
So before I start reading on how to change the chunk size with minimal back-up requirements from Ubuntu, I would like to know if anyone here knows about Win9x size limits that apply strictly to partitions and not harddrives (that means not LBA48), or has any similar experience, or generally has some information which would shed light on the matter, and help me figure out what would or wouldn't work.
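For what it's worth, some back-of-the-envelope arithmetic on theory 2. Assuming "chunk size" means the FAT32 cluster size, each cluster costs one 4-byte entry in the file allocation table, so 16k clusters double the FAT compared to the 32k default - which could matter if a driver or utility tries to hold FAT-sized structures in a fixed memory budget. A sketch (the 232GiB figure is the partition described above):

```python
# FAT32 bookkeeping cost: one 4-byte FAT entry per cluster.
def fat_size_bytes(partition_bytes, cluster_bytes):
    clusters = partition_bytes // cluster_bytes
    return clusters * 4

GiB = 2 ** 30
partition = 232 * GiB  # the 250GB (232GiB) partition in question
for cluster in (16 * 1024, 32 * 1024):
    fat = fat_size_bytes(partition, cluster)
    print(f"{cluster // 1024}K clusters -> FAT of about {fat // 2**20} MiB")
```

Whether this is actually the mechanism behind the freeze is pure speculation, but it is at least consistent with the earlier "Out of memory" behavior of the old scandisk/defrag on the same partition.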
  13. What makes you think 98 doesn't have that basic concept of protected mode? I can tell you, empirically, that trying anything you shouldn't be doing with a program quickly results in an exception being thrown. If not caught, the program, and the program alone, closes. Blue screens and other crashes in Win98 are the result of faulty kernel functions, or use of kernel-mode drivers. For example, you can get a blue-screen from ejecting a CD while files are open, due to faulty error-handling in the CD drivers. If there is a "blurring flaw" between modes in 98, it's in the fact that the memory address spaces of separate processes are not completely separate. Trying to write into another process's memory still causes a protection fault instead of being allowed. If you ask me, though, I think two modes are too few, as third-party drivers may also have flaws, and those should not be allowed to take down the kernel too easily. "cmd.exe" does not qualify as a "command line". It's a terminal, a program emulating a command line in an existing graphic environment. It depends on the presence of Windows to back it up. It is not fully compatible, either, as some programs which may run in DOS cannot run in it. The only way I can run a full-compatibility command line in NT is with a boot-disk. I'll have to test that. Too bad VMware in 98 is still an issue. Guess I'll look for work-arounds. There are plenty of "sandboxing" paradigms possible. The one used by Linux is by far the worst possible. A kernel, by default, protects all hardware from user-mode access. It may also be expected to protect its driver files and startup information from unauthorized editing. A tighter sandboxing paradigm may entirely limit the creation of executables, or have each process limited to its own sandbox (e.g. within its installation folder). Incidentally, a kernel-sponsored "would you like to access this directory", or "would you give elevated permissions to this program" need not require a password. 
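The containment claim above is easy to demonstrate on any protected-mode OS (shown here on a modern system with Python rather than on 9x itself, but the principle is the same): a wild write kills only the offending process, while the process that launched it keeps running.

```python
import subprocess
import sys

# The child writes through a null pointer. The kernel turns that into a
# fault that terminates the child; the parent merely observes the failure.
child_src = "import ctypes; ctypes.memmove(0, 0, 1)"
result = subprocess.run([sys.executable, "-c", child_src],
                        capture_output=True)
print("child exit status:", result.returncode)  # non-zero: child was killed
print("parent is still alive")
```

On a POSIX system the exit status is negative (killed by a signal); on Windows it is a large non-zero code. Either way, the fault never reaches the kernel or the parent.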
The Java VM is a perfect example of that. Java Applets can run sandboxed, or they can request permissions as appropriate. More widely controllable permission setting interfaces on a per-program basis could also be an advantage. The Linux sandboxing and permission system, on the other hand, is draconian. It has per-file per-user permissions ingrained into the very file system. To install something you must either log out and log in as root (which, mind you, opens up the system to more than just that one installation, so that's really a hole rather than an advantage), or use sudo, which is essentially an elevated-rights program to begin with (a concept which also reeks of holes). Additionally, despite the granularity and partition of permission settings it forces on the file system, it still applies an "all or nothing" paradigm in allowing privileged access. In general, its various design flaws would encourage a general user to just run as root all the time. However, even if you automate as much of the log-on process as possible, the multi-user system still leaves too large a foot-print on your system to ignore. A proper, even full-featured, sandboxing system could usually leave almost no foot-print on the system. All it has to do is not be based on users, but on programs and usage instead. Of course it isn't. But it is a lot farther along than this "project" will be in any reasonable amount of time. As far as compatibility goes, even Microsoft can't guarantee that, and neither could this patchwork of a "project". My current 9x runs programs for 9x just fine. Installing KernelEx doesn't change that. Replacing a single dll and developing it to the point where it does everything the original did would take far less time than creating an entire OS, and once it is done it can be used immediately with full compatibility with what the system had before. 
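The "based on programs, not users" idea can be sketched in a few lines. Everything here is hypothetical - a toy model with no real enforcement - but it shows that per-program grants can express the overlapping files/internet/sound example from earlier, which owner/group/other cannot:

```python
class Sandbox:
    """Toy per-program capability table: grants attach to programs,
    not to user accounts, and may overlap arbitrarily."""

    def __init__(self):
        self.grants = {}  # program name -> set of granted capabilities

    def grant(self, program, *capabilities):
        self.grants.setdefault(program, set()).update(capabilities)

    def allowed(self, program, capability):
        return capability in self.grants.get(program, set())

box = Sandbox()
box.grant("editor", "files", "internet")    # files + internet
box.grant("player", "internet", "sound")    # internet + sound
box.grant("recorder", "files", "sound")     # files + sound

print(box.allowed("editor", "files"))  # True
print(box.allowed("editor", "sound"))  # False
```

The grants could be populated by exactly the kind of kernel-sponsored "would you give elevated permissions to this program" prompt described above, with no password and no user accounts involved.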
So, contrary to your claim, by using the existing OS as a base, and focusing on plugging out one file at a time, usable results with full compatibility can be arrived at much faster than ReactOS possibly could. Okay, maybe compatibility with a few quirky programs would be hard to get at first, but major programs, like Explorer, Office, or common games, would be sure to fully work quite early in the development process. The key word is "clone". ReactOS, from the get-go, tried to be "from the ground up". Open-sourcing Win9x could start from the existing system and work one file at a time. This gives much better short-term results, and the solid basis gives a clear-cut "it can't get any worse" compared to the system currently present. Afterwards, the plugged-out files can have bugs, security issues, and possible improvements investigated by a larger open-source community. Instead of patches that add to existing dlls, code patches to the dll sources can be used. What's more, the same would be possible for vxds and even win.com, so even files which thus far received no upgrades, official or otherwise, despite the fact that they are not necessarily "in perfect working order", would finally be candidates for improvement as well.
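The stubbing idea (an open replacement implementing part of the API itself and relaying the rest to the official file) can be sketched abstractly. The names here are invented for illustration; real DLL forwarding would happen at the export-table level, not in Python:

```python
class OfficialDll:
    """Stand-in for the original, closed-source DLL."""
    def draw_text(self, s):
        return "official:" + s
    def beep(self):
        return "official:beep"

class OpenReplacement:
    """Implements part of the API itself and relays the rest."""
    def __init__(self, official):
        self._official = official

    def draw_text(self, s):       # already reimplemented in the open version
        return "open:" + s

    def __getattr__(self, name):  # anything not yet implemented relays onward
        return getattr(self._official, name)

dll = OpenReplacement(OfficialDll())
print(dll.draw_text("hi"))  # open:hi       - the new code answers
print(dll.beep())           # official:beep - relayed to the original
```

As functions are reimplemented one by one, the relays disappear, while every program keeps working against the full API the whole time - which is the development model the post argues for.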
  14. Kernel-wise and driver-wise Linux is great, perfect even. But calling hardware compatibility "features" is just silly. "Features" are the things that go in the foreground, the things that affect usage, not the things that go in the background. Stability and hardware compatibility range from "convenience" to "expected", but do not qualify as "features". To me, the "features" of Win98 are:
1. System files contained in no more than 2 folders ("C:\Windows", "C:\Program Files", though I wouldn't mind taking it down to just 1).
2. Root folder is otherwise free for my use (e.g. "C:\Docs" or "C:\Apps" instead of "/use/local/home/accessible_folder/computername/username/somerandomserialnumber/~usernameagain/~homefolder"). I can even have a "C:\random_stuff.txt" without issues.
3. No built-in users system, except for SMB usage. I'm one guy, and this is my personal computer; I don't need a 3-user minimum on this thing, and I don't need nor want to set ownership and permissions on each individual file.
4. Built-in mandatory real-mode command line, for all of my fall-back needs. It's the perfect dual boot I don't need to install, nor configure. Linux can't even open a text console without first loading all drivers.
5. Runs Windows programs. (With no more than a click. Or double-click, depending on configuration.)
6. Runs Windows games.
7. Runs DOS games (although with the advent of DosBox this is somewhat less of an issue).
If there was a Linux distro or some other OS that provided all (or at least most) of that, I would go for it right away. But currently, Linux is better suited for dedicated servers, or generic toy-boxes, than to act as my main OS.
  15. What part of the design, exactly, implies instability? Starting up in a console? Try telling that to any Linux distribution. NT's console-less-ness is an issue for me, which is one of the reasons I keep to 9x. From the moment the GUI starts up, though, I have no qualms about it having certain NT-like features, such as extended Win32 API support, etc. I also dislike trying to force multi-user paradigms regardless of actual usage. The P in my PC still stands for Personal, and I don't think it should have the same OS as a public library or a university computer lab. I couldn't track down sufficiently detailed information on ReactOS's file-system, user, and boot-up models. "A cross between Unix and WinNT" leaves a lot to the imagination. Still, last I checked, ReactOS was far from being ready to use, and still doesn't have sufficient compatibility to even run programs that run just fine on 9x. The missing operative word is "yet". The point would be to create them, rather than just gather existing, usually pointless, software. If you would instead spend some effort on gathering developers, such a task would not be impossible.
  16. If the person ends up with exactly the same thing afterward as they had before, what's the point of all the effort to get it working in the first place? Stability and compatibility? At least if you focus on system files rather than useless side-utilities. Also, rather than trying to keep 9x drivers working, it would be nice if the 9x kernel could be replaced with something that supports Linux drivers, but still runs Win32 software (both 9x and NT), without losing the 9x desktop and its file-system model. Still, the more I read, the more it seems a bit OT for this thread. Too bad...
  17. KernelEx 3.3a was this -> <- close to running a Cairo proggy on my 98SE, but it failed to initialize scaled fonts. If KernelEx manages to run Cairo, all GTK+ programs should quickly follow, including Firefox 3. I don't think that day is far away.
  18. Okay, new question: Where in the User* files is the locale-specific information stored? What does it include? Can it be hex-edited to change the localization of one?
  19. That's not too descriptive...
  20. It's not a question of "which", but a question of "what". I, for example, would begin work with "win.com", it being the most obvious place to start, but without knowing exactly what part it plays in starting the OS, what libraries it uses, and how it loads them, it would be impossible to write a replacement. No, it provided code for emulating the Windows OS. The code itself is targeted at Linux and could not be run on Windows. It certainly cannot replace any of the Windows libraries. It has the same frontend (which requires no reverse-engineering, since it's sufficiently well documented in MSDN), but a completely different backend.
  21. As I've already mentioned, I'd be willing to rewrite parts of 9x, but I don't have the time to do both that and the required reverse engineering. I'd also be willing and eager to write a new OS from scratch, one that supports a clean 9x-like filesystem architecture (if not cleaner), Linux-like driver support and stability, and compatibility with Win32 programs including NT-specific ones, provided I get support from at least one or two other developers, at least on the driver and/or MBR end. Having to build roughly 5 different subsystems on my own to turn it into a viable OS would, considering my available free time, take far too long to produce anything half-usable for me to just start on it alone. In other words, if you can recruit one or two other developers to help, preferably ones with experience in kernel/MBR development, I'd be willing to actively work on such a project.
  22. Output from Java programs can't be redirected in 9x; something about the way System.out works. However, rather than trying to hack command.com, just put something like this in any class loaded during your program's startup:

    static {
        try {
            System.setOut(new java.io.PrintStream("out.txt"));
            System.setErr(new java.io.PrintStream("err.txt"));
        } catch(Exception e) {}
    }

Alternately, you can create a small OutputStream or PrintStream subclass that outputs to a list in a Swing window or similar (if you need the results in real time). Maybe even one that adds a timestamp to every line. Either way, it will be infinitely more effective than getting a different console window.
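To illustrate the timestamp idea mentioned above, here is a minimal sketch of such a stream. The class name (TimestampOutputStream) and the timestamp format are my own invention, not from any library; it just wraps whatever OutputStream you give it and prefixes each line with the current time:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;
import java.text.SimpleDateFormat;
import java.util.Date;

// Sketch: an OutputStream filter that prefixes every line with a timestamp.
public class TimestampOutputStream extends FilterOutputStream {
    private boolean atLineStart = true;
    private final SimpleDateFormat fmt = new SimpleDateFormat("HH:mm:ss");

    public TimestampOutputStream(OutputStream target) {
        super(target);
    }

    // FilterOutputStream routes the array-based writes through write(int),
    // so overriding this single method is enough.
    public void write(int b) throws IOException {
        if (atLineStart) {
            atLineStart = false;
            out.write(("[" + fmt.format(new Date()) + "] ").getBytes());
        }
        out.write(b);
        if (b == '\n') atLineStart = true; // next byte begins a new line
    }

    public static void main(String[] args) {
        // Captured into a buffer here so the demo is self-contained; in the
        // scenario above you would wrap a java.io.FileOutputStream instead.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        System.setOut(new PrintStream(new TimestampOutputStream(buf), true));
        System.out.println("hello");
        System.out.println("world");
        System.err.print(buf.toString()); // each line starts with "[HH:mm:ss] "
    }
}
```

The same wrapper works for System.setErr, and the write(int) override is also where you would hook in a Swing list model instead of a file if you want the real-time display.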
  23. And if OSs were all the same, then that would be awesome. Don't get me wrong, I like Linux; I use it on various occasions for various tasks. But I don't use it on my personal computer because it simply does not have the proper architecture for a single-user computer. Even if you take away configurable stuff like the different appearance and control scheme, and future-rectifiable aspects like not running any of my favorite applications and games, you're still left with all sorts of practical differences that start with its insistence on creating 5 folders on (or as) your file system root, and it just goes on from there. Don't get me wrong, the Linux kernel is great. I much prefer watching Ubuntu on CD auto-configure drivers for all of my hardware in 30 seconds on boot-up than waiting three times as long for Windows to load a pre-arranged configuration for most of my hardware. As a whole, I would rather have my Windows run with Linux drivers and the Linux threading model. But I'm not ready to switch to Linux's file system model, nor its user model, nor its boot-up sequence, nor any of the other things that are unique to my 9x. For that reason, I would rather rewrite 9x to be stable and fast than try to migrate. That, and it'd take significantly less time, and have significantly greater short-term rewards, than attempting to create a new OS suitable for all of my needs. It's a nice start, but still a band-aid solution. It replaces kernel32.dll, mostly a stub library. The big stuff is win.com, user.exe, user32.dll, and the vmm32 folder. That's where the real action is, and those are the files that need open-source replacements. Quite frankly, if someone can get me input-output format specs for these, I would recode them myself. Io.sys and command.com are also welcome, since it's about time DOS7 got upgraded to something natively supporting larger files and long/localized filenames.
  24. I can post a screenshot of Windows Update flipping me off if you really need it, but either way attempts to get updated files from the MS site have consistently been a bust in recent years. I've simply come to the conclusion one must search for unofficial mirrors, since the official ones are simply not sufficiently reliable.
  25. I figured as much. The same seems to apply to user.exe, or they are cross-dependent; I have been unable to replace either. If I had localized versions, I'd use those, but I simply don't. If there's a way to patch (i.e. hex-edit) one from another language, I'd love to hear of it. On a completely unrelated note, I do have problems with a large partition not working other than in compatibility mode, but it's LBA48-unrelated. Regardless of whether this or that specific update requires an updated user32.dll, the fact of the matter is that all the large update packs replace it. This is simply because the one provided with Windows is buggy and contains security risks that need to be addressed. The fact of the matter is, my user* files are 4.10.2222, while the official latest is 4.10.2231. Actually, for localized versions, it's been clearly reporting "Your version of Windows is no longer supported" since about a year before 9x officially became deprecated. If there was any way for me to still use Windows Update, I would never have found this forum. I get all my critical updates here.