somewan
Member · 73 posts · Finland
Everything posted by somewan

  1. It takes some time to uncompress compressed data, but my hunch is that in practice, it's unlikely to ever take long enough to be an issue on a 200 MHz or faster PC. Remember that people used to compress their whole hard drives in the heyday of Stacker and DoubleSpace and nevertheless found the performance hit acceptable. The only complaint I can recall is about the slowness of the DR-DOS disk compression on a 486DX-33 MHz.
  2. Yeah, and sharing the printer worked fine - connecting to it from the local computer did not.
  3. That's a clever idea, but not too clever for Win98. I wonder if the other versions are as alert. C:\WINDOWS>net use lpt2 \\c400\clp550n Error 2106: This operation cannot be performed to your own computer; it can be performed only on a server. For more information, contact your network administrator.
  4. I just replaced a 10+ year old NEC laser printer because it supported only PostScript level 1 and was horribly slow by today's standards. The replacement is a Samsung CLP-550N colour laser with 1200 dpi resolution, PS level 3, 10/100 Mbps Ethernet and, as a bonus that wasn't on my list of requirements, built-in duplex printing. I just printed 11 pages of a PDF document faster than the old printer could choke on a single page!
     However, it remains to be resolved how to print from DOS boxes. The old printer was attached with a parallel cable to a Linux machine running Samba, and shared as \\COMPUTER\NEC - the Win98 printer config dialogs have a "Capture Printer Port" option that may be used to associate a DOS "LPTx" device name with a network path such as the one mentioned above. However, the new printer is attached using Samsung's software to a "Samsung network printer port" - no network path is reported - but it ought not to matter, as all Windows needs to do is to maintain the virtual LPT port and merely forward the data to the printer without further processing - just as it would do if it did have an MS-style \\ network path... Does anyone have any clues about this matter?
     Of course I could attach it to a computer with a parallel cable and share it, but that would defeat the purpose of having a network printer in the first place - namely to avoid the need for a machine to serve as a print server. (The printer listens on the Unix LPD port (515), and supports the protocol, so making it work with Linux/BSD should at least be easy...)
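     Since the printer speaks LPD, the Linux/BSD side should indeed be straightforward: a remote-printer entry is probably all it takes. A minimal /etc/printcap sketch - the hostname and the remote queue name "lp" are assumptions that would need checking against the CLP-550N's documentation:
        # print straight to the printer's built-in LPD server
        clp550n|Samsung CLP-550N:\
                :rm=clp550n.lan:rp=lp:\
                :sd=/var/spool/lpd/clp550n:\
                :sh:mx#0:
     After creating the spool directory, lpr -Pclp550n file.ps should then reach the printer directly, with no PC acting as a print server.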
  5. Remember that you can dump the registry to plain text (from Windows or DOS), and rebuild it (DOS only):
     regedit /e reg04dec.txt
     You can keep as many revisions as you please, of course, and using text-based utilities like "diff", you can easily track changes made to the registry by viruses and other software:
     diff -u reg28nov.txt reg04dec.txt
     Rebuild the registry from plain text:
     regedit /c reg04dec.txt
     This has the added benefit of compaction.
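     A small batch wrapper can combine the export and the comparison into one step. The name REGBAK.BAT and the use of a DOS port of diff are my own assumptions - any diff on the PATH will do:
        @echo off
        rem REGBAK.BAT - export the registry to %1 and, if %2 is given,
        rem show the changes relative to that older snapshot.
        rem usage: REGBAK reg04dec.txt [reg28nov.txt]
        if "%1"=="" goto usage
        regedit /e %1
        if not "%2"=="" diff -u %2 %1
        goto end
        :usage
        echo usage: REGBAK newsnapshot.txt [oldsnapshot.txt]
        :end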
  6. It's a very reliable and convenient way to boot Linux (and FreeBSD / NetBSD) too. As all these systems support FAT, installing a new kernel is a simple matter of copying.
     goto %config%
     :Linux
     C:\SYSTEM\LOADLIN\LOADLIN.EXE C:\bzImage root=/dev/sda3 ro parport=0x378,7 parport=0x278,9 noisapnp=1 sb=0x220,5,1,5
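     For anyone unfamiliar with the mechanism that fragment relies on: CONFIG.SYS can define a boot menu, and the name of the chosen block is passed to AUTOEXEC.BAT in the %config% variable. A minimal sketch - the menu names and the LOADLIN path are only examples:
        rem CONFIG.SYS
        [menu]
        menuitem=Win98,Windows 98
        menuitem=Linux,Linux via LOADLIN
        menudefault=Win98,10
        [Win98]
        [Linux]
        [common]
        rem drivers needed by both configurations go in [common]

        rem AUTOEXEC.BAT
        goto %config%
        :Win98
        rem normal Windows startup continues from here
        goto end
        :Linux
        C:\SYSTEM\LOADLIN\LOADLIN.EXE C:\bzImage root=/dev/sda3 ro
        goto end
        :end
     Because LOADLIN runs from AUTOEXEC.BAT, it takes over before IO.SYS would get around to starting the Windows GUI, so no further settings are needed for the Linux entry.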
  7. Are you experimenting to find the upper limit, or do you need a fixed size swap file that large?
  8. That does sound pretty impressive - about as fast as my 98 SE install on a Celeron 400 with 10k rpm SCSI. What kind of hardware are you using?
  9. Alas, other challenges seem more exciting as well as compelling in terms of potential results. A great example of the latter is the annoyingly slow "New" context menu of the Windows Explorer. Do you know what it does before showing you the menu? It searches one of the largest parts of the registry - HKEY_CLASSES_ROOT - for "ShellNew" subkeys! And they didn't even have the common sense of feeling embarrassed enough to leave it undocumented.
     Having searched the web numerous times for quick and easy solutions, a few months ago I finally gathered sufficient motivation to find and fix it myself. After a few well-invested days of setting and clearing breakpoints, disassembling, rebooting and patching, I arrived at the following, nearly perfect set of modifications, generously presented for your hex-editing pleasure:
        C:\WINDOWS\SYSTEM\shdoc>comp shdoc401.dll ..
        Comparing SHDOC401.DLL and ..\SHDOC401.DLL...
        Compare error at OFFSET 1E9FF
        file1 = 53
        file2 = E9
        Compare error at OFFSET 1EA00
        file1 = 53
        file2 = 9F
        Compare error at OFFSET 1EA01
        file1 = 68
        file2 = 1
        Compare error at OFFSET 1EA03
        file1 = 8
        file2 = 0
        Compare error at OFFSET 1EBAA
        file1 = 3
        file2 = 2
        Compare error at OFFSET 1EC18
        file1 = 1
        file2 = 0
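     To get a feel for how much work that menu triggers, here is a small Win32 illustration (not the patch above, and not how Explorer is actually coded - just the same kind of registry walk): it enumerates every subkey of HKEY_CLASSES_ROOT and reports the ones that carry a ShellNew subkey.
        /* shellnew.c - count HKEY_CLASSES_ROOT entries with a ShellNew subkey.
           Illustrative only; build with any Win32 C compiler, link advapi32. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            char name[256], path[300];
            DWORD i, len, found = 0, total = 0;
            HKEY sub;

            for (i = 0; ; i++) {
                len = sizeof(name);
                if (RegEnumKeyEx(HKEY_CLASSES_ROOT, i, name, &len,
                                 NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
                    break;                      /* no more subkeys */
                total++;
                sprintf(path, "%s\\ShellNew", name);
                if (RegOpenKeyEx(HKEY_CLASSES_ROOT, path, 0,
                                 KEY_READ, &sub) == ERROR_SUCCESS) {
                    printf("%s\n", name);       /* a candidate for the New menu */
                    found++;
                    RegCloseKey(sub);
                }
            }
            printf("%lu of %lu HKCR subkeys have a ShellNew entry\n",
                   (unsigned long)found, (unsigned long)total);
            return 0;
        }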
  10. What leads you to believe it's not in a DLL? Aren't MS applications notorious for relying on large numbers of subtly incompatible DLLs? Generally speaking, any software can be hacked into shape, with sufficient time, skill and motivation.
  11. The critical component with regard to Long File Names (LFNs) is the INT 21h, AH=71h API (the DOS system-call interface). There is a lot more to say about that, but for the purposes of this discussion, it's probably sufficient to note that 32-bit file access is required. Remember that the DOS versions* of PKZIP, ARJ, RAR, etc. will use LFNs under the same conditions, and thus might be interesting alternatives to XCOPY.
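     As an illustration of what that API looks like from a program's point of view, here is a DJGPP sketch that calls the LFN FindFirst/FindNext/FindClose functions directly. Register usage and record offsets follow the usual Interrupt List description, so treat it as a sketch to verify rather than a reference:
        /* lfnfind.c - list long filenames via INT 21h AX=714Eh (DJGPP).
           If no LFN provider is present the call fails with carry set. */
        #include <stdio.h>
        #include <string.h>
        #include <dpmi.h>
        #include <go32.h>
        #include <sys/movedata.h>

        int main(void)
        {
            __dpmi_regs r;
            char finddata[318];            /* LFN find-data record */
            const char *spec = "*.*";
            int handle;

            /* copy the filespec into the real-mode transfer buffer */
            dosmemput(spec, strlen(spec) + 1, __tb);

            memset(&r, 0, sizeof(r));
            r.x.ax = 0x714E;               /* LFN FindFirst */
            r.x.cx = 0x0037;               /* allowable attributes: files + dirs */
            r.x.si = 1;                    /* return DOS-format date/time */
            r.x.ds = __tb >> 4;  r.x.dx = __tb & 0x0F;            /* DS:DX -> filespec  */
            r.x.es = __tb >> 4;  r.x.di = (__tb & 0x0F) + 260;    /* ES:DI -> find data */
            __dpmi_int(0x21, &r);
            if (r.x.flags & 1) {           /* carry set: no LFN API or no match */
                printf("LFN FindFirst failed (AX=%04X)\n", r.x.ax);
                return 1;
            }
            handle = r.x.ax;

            do {
                dosmemget(__tb + 260, sizeof(finddata), finddata);
                printf("%s\n", finddata + 0x2C);   /* long name at offset 2Ch */

                memset(&r, 0, sizeof(r));
                r.x.ax = 0x714F;           /* LFN FindNext */
                r.x.bx = handle;
                r.x.si = 1;
                r.x.es = __tb >> 4;  r.x.di = (__tb & 0x0F) + 260;
                __dpmi_int(0x21, &r);
            } while (!(r.x.flags & 1));

            r.x.ax = 0x71A1;               /* LFN FindClose */
            r.x.bx = handle;
            __dpmi_int(0x21, &r);
            return 0;
        }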
  12. The hosts file is not the correct place to block sites; your firewall should handle that. Regardless, the most likely reason for the delay is not parsing the large hosts file, but attempting to contact some of the sites you have null-routed. You should get an immediate "connection refused" or "network/host unreachable" error for such attempts, as on the other systems.
     Besides, why would waiting for a reply from unreachable sites consume 100% CPU? My tests don't seem to confirm that. However, there are numerous variables involved (the hardware itself, the drivers, TCP/IP configuration parameters, choice of application to perform the transfer, etc.), so all in all, not much (if anything) is evident at all about Win9x network transfer priorities or other details from such limited data.
     On the other hand, if you must have XP, I think it would be a better idea to put it on a separate partition (or disk) rather than overwriting your Win9x install.
     It's a very good idea if you have 128 MB or more, as it will drastically reduce unnecessary swap usage. Basically, Win98 (not 95) tries to be smart and allocate swap space ahead of time, during times of low system load, which may improve performance on low-memory systems.
  13. But on the other hand, it's fast enough with just one CPU, even if it's only a Pentium 133... The original Win95 was designed to run in 4 MB, and with 8 MB it performed well. 98 SE runs fine with 32 MB, and pretty much perfectly with 64... That compares well to FreeBSD and Linux, let alone XP. Sure, all else being equal, the flexibility of putting such breathtaking amounts of memory as 1 GB to better use would be a bonus, but the capability of doing more with less is a lot more important than the capability of consuming more.
     That is a more valid point, and of course inevitable in the long run, considering that it's no longer maintained.
     For mass installation, it can't get much easier, can it? And it takes less time to ghost a more compact system than a more bloated one.
     DOS VMs have been pre-emptively multi-tasked since Windows/386, first released in the late 80s. Win95 introduced threads and another category of pre-emptively multitasked applications - Win32 - a feature that essentially worked well. Most "crashes" that Win9x users experience are probably the result of exhausting the so-called "resources" of the Win16 subsystem, and that rarely brings down the whole machine. Usually the kernel and interrupts are still operable, and more often than not, the DOS VMs as well. That said, the NT-based Windows series probably *is* more stable, and it ought to be, if only for the simple reason that stability and security have had far greater priority in its design and development. It was also designed more or less from scratch, with few constraints. The 3.x and 9x series, however, had to be marketable, and thus had to run on affordable hardware with acceptable performance and maintain compatibility with existing (DOS and later Win16) applications. It's certainly no coincidence that it was far more successful than NT until Microsoft pulled the plug on it, after hardware price and power finally caught up with their pet project. It took tireless and skilled marketing efforts and clever introduction of technologies such as Win32 and the WDM (Windows Driver Model) on Win9x to prepare the ground, funded by the cash flow from DOS, Win3.x and 9x. That's why NT made it and OS/2 failed, although the latter had a head start and many other advantages.
     Security updates are not an issue. However, updates in terms of hardware support and various kinds of functionality would be desirable, and in some cases essential.
     You would not want to create a FAT32 partition that large. In fact, anything over 4 GB or so comes with an unreasonably large cluster size. This is an example of where updates are essential, ideally XFS (Linux/SGI Irix) or ReiserFS (Linux only).
     That has more to do with self-control than OS features. Strictly speaking, you should never run untrusted software with sufficient privilege to do damage, but that tends to be impractical except with the help of a PC emulator/virtual machine. After all, most software comes in binary form only and/or is so bloated that no-one has the time to check it for privacy/security violations.
     I haven't noticed, with a single drive and a mere Celeron 400. That is a bit surprising, considering that both the NIC and SCSI driver are so-called "NT miniport" drivers, as opposed to proper VxDs. A perfect demonstration of the flexibility of the Win9x architecture - we have yet to see VxDs on XP! At the same time, it's a great example of how MS have been preparing the ground for NT/XP.
  14. That may have been true for Win95, but the Win98 version of VCACHE is a lot smarter about adjusting the size of its cache dynamically, and will not hold on to RAM needed for other purposes. An additional improvement is the capability of running programs directly from the cache (without first "reading" the program into some other place in memory). The problem is just that neither VCACHE nor other parts of the system were designed for the quantities of RAM in question. As I suggested in my previous message, it may have to do with address space limitations - somewhat similar to the 640 K limit of DOS, if you remember.
     Also, there are other issues than system stability, such as performance: bigger is not always faster - especially not if the data structures and algorithms are optimised for a much smaller cache. Imagine how long it might take to search half a GB of data for a random disk block if done inefficiently - imagine searching the whole cache only to find that you have to read the block(s) from disk - imagine how fragmented a typical file system is... How the data is organised and accessed can make a great difference. But I'm only speculating. Has anyone actually tested?
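     For anyone who wants to test, the knob usually mentioned in this context is the [vcache] section of SYSTEM.INI, which caps the cache instead of letting it grow with installed RAM. The figures below are arbitrary examples, not recommendations:
        [vcache]
        ; values are in kilobytes
        MinFileCache=4096
        ; cap the cache at 64 MB
        MaxFileCache=65536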
  15. A 32-bit or 64-bit MS-DOS is closer to rewriting than recompiling, but that is far from impossible, as demonstrated by Microsoft: most of DOS is already re-implemented in 32 bits in the form of the core Win9x VxDs. As an example, in order to provide file access to Windows applications, KERNEL32 puts the MS-DOS function number in a register and calls the VWIN32 VxD through an undocumented API. VWIN32 forwards the request down the chain of installed INT 21 handlers...
     There are inherent problems with multi-gigabyte amounts of RAM on the 32-bit 386+ architecture. A maximum of 4 GB is directly addressable, and at the time that architecture was designed, no-one ever imagined that more than a tiny fraction would be actual, physical RAM. It was expected that advanced operating systems would use it for providing virtual memory - that is, paging/swapping. Windows 9x reserves the top 1 GB of the address space for the kernel (VxDs), at least 1 GB for DLLs, VMs, XMS, DOS-extended and Win16 applications, and at least 1 GB for the "private" arena of the current Win32 process. Also, hardware devices - especially modern graphics adapters - may require hundreds of megabytes. I suspect that VCACHE tries to map those huge quantities of unused (and unneeded) physical RAM into the 1 GB kernel address space (or possibly the shared DLL/DOS/Win16 area) - eventually running out, not of memory, but of space to map it into - which can be reasonably expected to render the system unstable.
     The good news is that even 384 MB, which I have, is more than enough, and I don't recall running out of memory while I had 128 MB either. If you have too much money on your hands and want to spend it on hardware, go for something more useful. Do you have a 15,000 rpm SCSI disk yet? How about an SDLT drive for backups? (I have a 73 GB 10,000 rpm disk and haven't partitioned it all...)
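     The reverse direction is at least semi-documented too: on 9x, a Win32 program can hand an INT 21h register image to VWIN32 through DeviceIoControl on the \\.\vwin32 device. A sketch along the lines of the old KB samples - the structure layout and control code should be double-checked against vwin32.h before relying on them:
        /* vwin32.c - ask VWIN32 to perform a DOS INT 21h call (Win9x only).
           Here AX=4400h, "get device information" for handle 0 (stdin). */
        #include <windows.h>
        #include <stdio.h>

        typedef struct {            /* register image passed to/from VWIN32 */
            DWORD reg_EBX, reg_EDX, reg_ECX, reg_EAX, reg_EDI, reg_ESI, reg_Flags;
        } DIOC_REGISTERS;

        #define VWIN32_DIOC_DOS_IOCTL 1

        int main(void)
        {
            DIOC_REGISTERS regs = {0};
            DWORD cb;
            HANDLE h = CreateFile("\\\\.\\vwin32", 0, 0, NULL, 0,
                                  FILE_FLAG_DELETE_ON_CLOSE, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("cannot open \\\\.\\vwin32 (not Win9x?)\n");
                return 1;
            }
            regs.reg_EAX = 0x4400;          /* AX=4400h: get device information */
            regs.reg_EBX = 0;               /* handle 0 = standard input         */
            if (DeviceIoControl(h, VWIN32_DIOC_DOS_IOCTL,
                                &regs, sizeof(regs), &regs, sizeof(regs),
                                &cb, NULL)
                && !(regs.reg_Flags & 1))   /* carry clear means success */
                printf("device information word: %04lX\n", regs.reg_EDX & 0xFFFF);
            else
                printf("VWIN32 call failed\n");
            CloseHandle(h);
            return 0;
        }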
  16. The answer to the question of whether it is a "Windows program" depends on what you mean - it's definitely not part of any normal Windows installation. It might be a virus or piece of spyware, but it may as well be part of some poorly written application. That is very common, unfortunately. A problem that seems to affect Windows to a much greater degree than other operating systems is that most software suppliers, all the way from the Symantecs and Adobes to anonymous free-time programmers, appear to think that their software is so critical and so important that it must be part of the Windows startup procedure, and must be running 24/7. It might be worthwhile to check whether that is the case with the program in question, by searching for it using REGEDIT, and by taking a look in the Start Menu > Programs > Startup folder as well.
     While it's always a good idea to have a few boot disks with useful utilities at hand, unless you have a boot sector virus, holding down Shift (or is it Ctrl?) as the system boots will probably suffice for the purpose under discussion. I imagine that a bootable Linux CD with NTFS support would come in very handy.
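     For reference, these are the usual Win9x auto-start locations worth checking for such a program, in addition to the Startup folder mentioned above (on 98/Me, MSCONFIG's Startup tab shows most of them in one place):
        HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run
        HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce
        HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServices
        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
        WIN.INI: the load= and run= lines in the [windows] section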
  17. It may not be as OT as you think... Lately, I've found that assembly language is often much easier to factor into suitable units of functionality, partly due to the verbosity of assembly source code making code reuse attractive, and partly due to the unbeatable flexibility at hand. Perhaps even more importantly, it's much easier to debug. Whenever something goes wrong, it's usually obvious from the register and memory dump. No need to insert printf() statements, and rarely does one have to code special routines for debugging data structures that printf() can't handle directly. For third-party code written in assembly language, you almost have the source code, even if it was never released. Most of the Win9x kernel modules, such as the VMM, are written in assembly language, so it's possible to step through them with a debugger in a finite amount of time, and understand what they're doing. Try that with XP some day.
     While the speed of assembly language programs tends to be impressive, their compactness is even more so. For example, a trivial implementation of multi-threading for DOS took about 800 bytes. A simple "hello world" program from even the most concise of the 16-bit DOS compilers I've tested - Microsoft C 6.00 - takes over 4000! Imagine what that does for a whole operating system kernel. As far as I'm aware, the Win9x series includes the only non-experimental and reasonably capable 32-bit kernel written in assembly language that the world has seen up to this point. Thus, when both completeness and conciseness are taken into account, it has no competition. It is also the end of the line, so it should hardly come as a surprise if Win98 seems to be in relatively wide use. There is simply nowhere to move on to, as there was with Win95 and others.
     Moving on is inevitable, however. Considering the speed of CPUs today, it's not unreasonable to run older operating systems in environments such as VMware, and the obvious host platform would be an operating system like FreeBSD or Linux. Although rather bloated, they are far better than XP, and they come with full source code. Although contemporary operating systems, they are still based on over 30 years of Unix evolution - to be compared with an experiment that has only barely gained market acceptance despite the perseverance and marketing power of Microsoft. The Alpha, PowerPC and MIPS versions of it went the way of the dinosaurs years before Win9x. Alas, much like XP, the design philosophy of Unix is not one of compatibility, and its goal of portability, although reasonable, makes optimisation for size or performance more or less impossible.
     Thus, my long-term plan is a new platform, and it should be easy enough to beat even Win9x in compatibility and compactness. The only reasonable approach to that, of course, is not to attempt to write an operating system from scratch, because such a project is doomed. Just like Windows started as a little DOS shell, a new operating system that aims for any kind of usability should start from a similarly stable platform. A simple approach is to begin with a DOS extender and to build debugging and performance-monitoring tools into the kernel from the very start. A reasonable next step is to begin implementing environments required by various kinds of useful software - possibly including Win9x VxD kernel modules, perhaps graphics drivers from the X Window System, and network drivers from XP, and so on. DOS-based drivers should remain fully supported and there should be no hurry to replace them, in the absence of better alternatives - quite the opposite of the NT approach.
  18. Cool... this is good to know (although I currently only have 512 MB of RAM on my boxen, who knows what tomorrow will bring?) What I'd like to know is what the feasibility is of telling the OS to load just about everything into a RAM disk and running from there... this is a question especially aimed at those guys running one of the miniwinis... Any one have personal experience with this? --iWindoze
     I haven't heard of ram-disk software for the Win9x series, although I'd be surprised if none exist. In any case, you could try one of the numerous DOS-based ram-disk drivers - a great example of the flexibility available to users of operating systems that take compatibility seriously.
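     As a concrete example of the DOS route: RAMDRIVE.SYS ships with Win9x and is loaded from CONFIG.SYS. The path and the 16 MB size below are examples, and if memory serves the driver tops out at around 32 MB, so one of the third-party DOS drivers would be needed for anything more ambitious:
        rem HIMEM.SYS must be loaded before RAMDRIVE's /E (use XMS memory) switch will work
        DEVICE=C:\WINDOWS\HIMEM.SYS
        DEVICE=C:\WINDOWS\RAMDRIVE.SYS 16384 /E
     The new drive gets the next free letter; anything you want "loaded into RAM" would then have to be copied there from AUTOEXEC.BAT at each boot, since the contents vanish at power-off.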
  19. 98lite
     "As far as I know, removing IE does not secure a 9x box from vulnerabilities addressed by a firewall." Quite right. The main risk associated with IE is using it to browse web (or other) content designed to exploit its many bugs, whereas firewalls can be useful to intercept incoming and outgoing connection attempts. For most Win9x users they're probably more useful for the latter (such as trojans/viruses/spyware attempting to transmit sensitive information).
     It's a network protocol originally for DOS, and now part of Windows file and printer sharing. NetBIOS over IP is described in Internet RFC documents, such as: ftp://ftp.rfc-editor.org/in-notes/rfc1001.txt
  20. Win98 exit is not a reboot, unless you chose the restart option, or unless there's something wrong. Problems with the shutdown sequence are common. Typically Win98 will try to turn off the computer on shutdown. However, it can be made to return to the DOS prompt pretty easily. Right on the money. And here is how to do it, in case anybody is interested: http://www.mdgx.com/98-4.htm#98ATX http://www.mdgx.com/newtip1.htm#DOS http://www.mdgx.com/last3.htm#DOS2DOS
     You don't seem to mention this registry setting:
     [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Shutdown]
     "FastReboot"="0"
     Or, at least I think that's the one - it can be changed with the MSCONFIG utility too, and will make the system return to WIN.COM, which - if patched - will terminate and return to DOS. There may be one or two alternative approaches, which I haven't tested well enough to say anything definite about:
     1. According to Andrew Schulman's classic "Unauthorized Windows 95", WIN.COM is not really needed, and you can launch VMM32.VXD directly by renaming it to *.EXE. However, Schulman tested an early version of Win95 and this may no longer be entirely accurate.
     1.1. The big picture is still true, however, and it shouldn't be hard to cut WIN.COM out of the boot sequence if you know where to hack, because the DOS portion of VMM32.VXD is the true bootstrap loader of the 32-bit ring-0 kernel (VMM + the other VxDs).
     2. Look in Control Panel >> System >> Device Manager
     2.1. Select "View Resources by Connection"
     2.2. Open the properties for "Advanced Power Management support" and disable it.
     2.3. Back in the device tree, expand the "Plug and Play BIOS" branch.
     2.4. Expand the "PCI Bus" tree.
     2.5. Open the properties for "Intel Power Management Controller" and disable it.
     Approach 2 probably wouldn't do anything about WIN.COM, however, so you'd still have to patch it.
     I assume you're referring to the fact (if my recollection is correct) that MS-DOS did start its life as a "quick-and-dirty" clone (QDOS, by Seattle Computer Products) of the 8-bit CP/M operating system. What I find more amazing, however, is the amount of Unix concepts and compatibility introduced into versions 2.x/3.x - for example:
     * file descriptors/handles
     * standard input, output and error descriptors
     * sample system calls that map directly to Unix equivalents: unlink(), ioctl(), lseek(), dup2()
     * user-installable character and block devices, which can be opened as files
     * forward slash recognised as a path separator
     * the CONFIG.SYS switchar= setting, typically used for setting "-" rather than "/" as the command-line switch indicator
     * a setting for requiring the \dev\ (or /dev/) path prefix for opening devices
     Amusing how MS later tried to distance itself from Unix, and how some of the above functionality was subsequently reduced - the last two, specifically. Almost equally amusing is how rarely this Unix heritage is one of the things Unix advocates have to say on the topic of MS-DOS.
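     A tiny example of that Unix heritage in action - the following builds with any 16-bit DOS C compiler and leans on handle-based I/O, the standard descriptors and forward slashes in paths, all of which arrived with DOS 2.x (the file name is arbitrary):
        /* unixdos.c - DOS 2.x+ Unix-isms: handles, stdout/stderr, '/' in paths */
        #include <io.h>
        #include <fcntl.h>

        int main(void)
        {
            /* INT 21h accepts '/' as a path separator just as well as '\' */
            int fd = open("c:/autoexec.bat", O_RDONLY);
            char buf[128];
            int n;

            if (fd == -1) {
                write(2, "open failed\r\n", 13);   /* handle 2 = standard error */
                return 1;
            }
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                write(1, buf, n);                  /* handle 1 = standard output */
            close(fd);
            return 0;
        }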
  21. Win98 exit is not a reboot, unless you chose the restart option, or unless there's something wrong. Problems with the shutdown sequence are common. Typically Win98 will try to turn off the computer on shutdown. However, it can be made to return to the DOS prompt pretty easily.
  22. odd, The only application people have told me that has not worked is win3x, what other dos programs don't work in dos7.1.??? Every dos game and applications I have used since 1991 on Dos 5 all work 100% on realmode Dos7.1. That is my experience also. It's preferable over 7.0 for most purposes due to its FAT32 support. And I'd like to hack the logo out of the 7.1 IO.SYS. Does anyone have more info on that?
  23. Actually QEMM 8.0x and 9.0 [the last one] releases are [or should be] compatible with all Win 9x builds, according to the Quarterdeck docs. This also depends on which of QEMM's extra "features" one uses. If used strictly as a plain EMS/XMS memory provider, without using any of the "stealth" modes or "advanced" switches, QEMM 9.0 should be almost 100% compatible with all Win 9x builds. I seem to recall that I used the stealth feature successfully with Win98, as well as quickboot. However, after finding UMBPCI.SYS and HIRAM.EXE I've been using those. If I recall correctly, I found them via links from your site.
  24. In the first place, I'm rarely ever at the XP forum to begin with. In the second, I DID add something of value, with a brief overview of WHY they don't make VMWare for non-NT or Linux systems. Those are good reasons, and they're worth knowing. Jason It would be possible to implement a virtual machine with much better performance and dependable real-time characteristics on top of DOS or from scratch - including boot sector program and all. However, it would require a lot more work, and anyone who is more interested in quick results and commercial success than in building the best possible VM would certainly prefer to do so on top of NT or Linux. I'm not saying that's unreasonable, but let's be honest about it.
  25. I don't have the full story, but some interesting hints regarding the file system: long filenames, whether on NT/2K/XP or 95/98/Me, are stored in 16-bit Unicode format. The IFSMGR VxD (kernel module) that implements 32-bit file access knows how to read them and how to convert them as needed to the format expected by DOS and Windows applications. Implementing APIs that omit the conversions ought to be easy enough. In fact, I plan to experiment with that from a DOS and/or VxD point of view. The Win16/32 APIs are currently beyond me, I'm afraid. Also, while the VMM (the true kernel), which implements registry access, uses a regular character set for the purpose, it should be easy enough to implement hooks that provide Unicode to applications asking for it, converting on the fly to the normal character set - perhaps URL %xx encoding might be useful, or possibly UTF-8.
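     As a sketch of that last idea: converting the 16-bit values stored on disk to UTF-8 is mechanical for the common case (characters in the Basic Multilingual Plane, no surrogate pairs). The function below is my own naming, not anything from the Win9x sources:
        /* utf16to8.c - minimal UTF-16 (BMP only) to UTF-8 conversion sketch */
        #include <stdio.h>

        /* encode one 16-bit code unit as UTF-8; returns bytes written (1-3) */
        static int utf8_put(unsigned int c, unsigned char *out)
        {
            if (c < 0x80) {                              /* plain ASCII */
                out[0] = (unsigned char)c;
                return 1;
            }
            if (c < 0x800) {                             /* two bytes */
                out[0] = (unsigned char)(0xC0 | (c >> 6));
                out[1] = (unsigned char)(0x80 | (c & 0x3F));
                return 2;
            }
            out[0] = (unsigned char)(0xE0 | (c >> 12));  /* three bytes */
            out[1] = (unsigned char)(0x80 | ((c >> 6) & 0x3F));
            out[2] = (unsigned char)(0x80 | (c & 0x3F));
            return 3;
        }

        int main(void)
        {
            /* "Täältä" as UTF-16 code units, roughly as an LFN entry stores it */
            unsigned short name[] = { 'T', 0xE4, 0xE4, 'l', 't', 0xE4, 0 };
            unsigned char utf8[64];
            int i, len = 0;

            for (i = 0; name[i] != 0; i++)
                len += utf8_put(name[i], utf8 + len);
            utf8[len] = 0;
            printf("%s\n", (char *)utf8);   /* prints correctly on a UTF-8 terminal */
            return 0;
        }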