Ascii2 Posted Wednesday at 02:37 AM 7 hours ago, Dave-H said: OK, I decided to have a try with daniel_k's patch to see if it produced any different results to the one I was already using, which was linked to here. ... It's certainly a much more complicated install routine than the patch I was using first. How the other patch does it by just using two extra files added in System32 and a simple modification to boot.ini is a mystery to me. In the daniel_k patch, you have to specify what sort of machine it is (ACPI Multiprocessor in my case) to get the right HAL files. How the other patch does it without having that specified is impressive!
Yes, with daniel_k's solution he is not providing a pre-patched binary of a single kernel variant and a single HAL variant, as the "other" patch you referenced does; rather, he provides a patcher that can be applied to any compatible kernel or HAL variant, in any version. daniel_k's solution is generally better and provides more compatibility options.
7 hours ago, reboot12 said: Modern computers have multi-core processors, and using a RAM patch on an old PC does not make sense, because old computers with a single-core processor do not support a large amount of RAM and the patch is not needed. Therefore, the 64G patch does not contain other kernel and HAL files.
7 hours ago, Dave-H said: So it would be a waste of time installing daniel_k's patch on a uniprocessor machine anyway? I wonder why he bothered to include the option!
I would disagree. Older PCs could have relatively large amounts of RAM. It is also important to note that it is not just RAM that needs to be considered, but addressable memory (which includes RAM plus other things, such as the address space reserved for video card memory). An old PC with only 4 GB of RAM would still likely benefit from the PAE fix patch. Beyond that, there were computers with much larger amounts of RAM and addressable memory; they were often rather expensive at the time. It is also best to think of the patch as a PAE (Physical Address Extension) fix patch rather than a RAM patch; the intentions and the resulting products are different. There also seems to be a presumption that multi-core processors must use a Multiprocessor HAL; that presumption is not correct. A multi-core processor can still use a Uniprocessor HAL. It is a configuration I often use (one may also take the opportunity to scale a single core well beyond where it would tend to scale in a multi-core configuration, with the other cores turned off). Many CPU architectures of roughly the last 13 years have had this capability and design consideration.
7 hours ago, Dave-H said: So, do you think the two patches I've tried are actually doing exactly the same thing, so there's no advantage of one over the other? One isn't making the extra RAM any more accessible to programs or the system than the other, for instance?
The daniel_k patcher provides a patch tool, while the "other" patch you linked to is someone's finished product of such patching, covering only one limited case.
Ascii2 Posted Wednesday at 02:44 AM 8 hours ago, Dave-H said: OK, I decided to have a try with daniel_k's patch to see if it produced any different results to the one I was already using, which was linked to here. It took me a while to get it installed. As I have a multi-boot machine, my file paths are not standard, which caused the batch file to fail. For instance, the batch file contains many instances of the path "%SystemDrive%\Windows\". As my Windows XP installation is in D:\WIN-NT this did not of course work! The author should really have used "%WinDir%" I would have thought, unless I'm missing something. Do be aware of this if you try to use this patch on a system which doesn't use the standard paths.
Yes, the batch script is rather defective in that it makes unreasonable presumptions about where resources and files are. It can fail to find something it should find, find the wrong thing (which can become a data corruption problem once write operations are performed on originals such as boot.ini), presume that one wants to use the kernel or HAL from sp3.cab, and so on. However, the script does provide some level of instruction on a possible method of patching the files and deploying the patched files.
user57 Posted Wednesday at 03:26 AM Well, "4 GB is the RAM limit for a 32-bit operating system" is finally busted. I wonder why even big companies still make that mistake: https://www.asrock.com/mb/Intel/B560M-C/index.us.asp "**Due to the operating system limitation, the actual memory size may be less than 4GB for the reservation for system usage under Windows® 32-bit OS. For Windows® 64-bit OS with 64-bit CPU, there is no such limitation."
To explain the selector/segment, or segment selector: in the past, with 8-bit or 16-bit code (sometimes both, i.e. 8-bit instructions and 16-bit data), the combination of segment + offset formed the "real address". The instruction parser that reads the next opcode automatically continues into the next segment selector (for a continuous flow, without chunks) - why would this not be the case for 32 bits? (It is a CS:EIP combination, CS being the code segment.) You could write code either with a selector+offset or with just the offset; for "just the offset" the CPU automatically chose the next one (where it then just executes the next instruction). If that doesn't do it, there are still the instruction encodings that concretely use a selector:offset combination. Either way, if the CPU did not advance to the next segment selector automatically, you could still place a jump at the end of the 4 GB address range (or just change the selector to the next one). https://wiki.osdev.org/Segmentation
For a data allocation this would also be of use: you could get a DS (data segment) + offset, and you would directly have a 4 GB chunk. That is why I always wanted (back then) a VirtualAlloc2-style function that also gives out the selector - but my posts about that on forums (including Microsoft ones) were simply ignored. The other mechanisms have already been explained: 4 MB pages (instead of 4 KB), PSE, PAE, PDEs and PTEs, why there is a chunk-wise solution, and why that chunk-wise solution is usable for apps/executables/modules.
Ascii2 Posted Wednesday at 03:52 AM (edited) 23 hours ago, reboot12 said: @Ascii2 I made big tests: using Macrium Reflect to clone the OS to another HDD - still the problem; replacing the RAM modules - still the problem; using only 2 GB of RAM (1+1) - still the problem; using Macrium Reflect to make a disk image, then restoring it on another PC - a Haswell with 8 GB RAM (4+4 Dual Channel): the OS boots, finds new devices, I install drivers. I test and it looks like there is no problem and WinXPPAE 3.5 works well :-) I use exactly the same patched files and settings: ntkrnl2.exe SP2 5.1.2600.2180 + hal2.dll SP3 5.1.2600.5512, noexecute=optin. I have to use the system for some time to make sure 100% that there is no problem. It seems that the problem is only on SandyBridge machines.
My impression is that there might be HAL issues or CPU errata/microcode issues on at least the SandyBridge machines. I have tested the WinXPPAE.exe 3.5.0.0 patch tool on HAL files newer than the initial Service Pack 2 version, and they are also rejected as invalid by the patcher. Perhaps the SP2 versions do not need the patch; it is possible that whatever is patched in the Service Pack 3 HALs is not an issue in the SP2 HALs. Have you tried using the SP2 HAL together with the SP2 kernel patched by the WinXPPAE.exe 3.5.0.0 patch tool? Regardless, you might wish to try a newer SP2 HAL. The update packages that I have are all intended as the English language (United States) versions; however, I would expect the HALs to still work in another language distribution if the HAL file is specified explicitly in boot.ini. I have uploaded the HAL updates for English Windows XP with Service Pack 2 to: https://www.mediafire.com/file/5ibjk0jsnltisxa/SP2_HAL_Updates.zip/file File Name: SP2_HAL_Updates.zip Size: 7,597,675 bytes CRC32: BA5633F7 MD5: 170844F1EDE2CE551C9E5F213A283059 SHA-1: E29ADE7FE24B8C7D10CDCF1EB30D197733B57DFA SHA-256: B911093A86B272BFDADCEF9F27335DD6E677E0DE60E4D00BBFEFD83C3AC3AB54 SHA-512: E7F08887D5E1DC432C269A40CD5E977F2806FF9D14DDFBBE86B6C58BFB519A3D738F9E98665F3D110619D314A8D6E6385BED5C5520CD97559A7058F62DB55A68 Edited Wednesday at 04:03 AM by Ascii2
Dave-H Posted Wednesday at 10:38 AM In case anyone is interested, attached are the two pairs of files used on the system by daniel_k's patch (hal2.dll and ntkrnl32.exe) and the '64G' patch (hal64g.dll and ntkl64.exe). I'm afraid I don't know how to analyse them to see just how similar they are. Looking with CFF Explorer, the DLLs certainly do look very similar (they are exactly the same size), although of course not identical. If anyone with the necessary skills can compare them properly, I would be very interested to know the results! RAM Patches Files.zip
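For anyone who wants to attempt a comparison, a byte-level diff is usually enough to see whether two binaries differ only in a handful of patched locations or were built differently throughout. Below is an illustrative C sketch (not taken from either patch; the file name is arbitrary). On Windows, the built-in command fc /b file1 file2 does much the same thing.

/* bindiff.c - report differing bytes between two files (illustrative sketch) */
#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *a, *b;
    long offset = 0, diffs = 0;
    int ca, cb;

    if (argc != 3) {
        fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
        return 1;
    }
    a = fopen(argv[1], "rb");
    b = fopen(argv[2], "rb");
    if (!a || !b) {
        fprintf(stderr, "could not open input files\n");
        return 1;
    }
    for (;;) {
        ca = fgetc(a);
        cb = fgetc(b);
        if (ca == EOF || cb == EOF) {
            if (ca != cb)              /* one file ended before the other */
                printf("files differ in length after offset 0x%08lX\n",
                       (unsigned long)offset);
            break;
        }
        if (ca != cb) {                /* same position, different byte */
            printf("offset 0x%08lX: %02X -> %02X\n",
                   (unsigned long)offset, (unsigned)ca, (unsigned)cb);
            diffs++;
        }
        offset++;
    }
    printf("%ld differing byte(s)\n", diffs);
    fclose(a);
    fclose(b);
    return 0;
}

A kernel or HAL patched only in a few code paths would typically show a small number of changed bytes plus a different checksum/timestamp in the PE header, whereas two separately built files would differ almost everywhere.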
reboot12 Posted Wednesday at 11:51 AM 7 hours ago, Ascii2 said: Have you tried using the SP2 HAL together with the SP2 kernel patched by the WinXPPAE.exe 3.5.0.0 patch tool? WinXPPAE 3.5 can't patch any SP2 HAL version: Patching halmacpi.dll (ACPI Multiprocessor PC HAL)... Invalid file!
Ascii2 Posted Wednesday at 05:54 PM 5 hours ago, reboot12 said: WinXPPAE 3.5 can't patch any SP2 HAL version: Patching halmacpi.dll (ACPI Multiprocessor PC HAL)... Invalid file!
Yes. I was getting rather tired when I wrote that post to you and did not proofread it well enough. I had originally started writing around that quoted question, but then tried to patch the SP2 HALs myself to check. Something I wanted to ask, but forgot: is it important for you to stay on an old Windows XP Service Pack 2 level kernel? If not, you might try a newer kernel version (and possibly other updates).
j7n Posted Thursday at 01:45 AM Does this promise anything more than has been possible in Server 2003 without patching?
user57 Posted Thursday at 05:55 AM Is someone actually making the test spoken of, like j7n did with Server 2003 32-bit (it is an XP-based OS)? Someone has to disable the pagefile (because in theory Windows can also get past 4 GB by using a pagefile; a pagefile is just data on a hard drive/HDD/SSD), then use up more than 4 GB of RAM. That would be a good indicator, though it would be even better to verify that the pages really are in RAM above 4 GB. (A minimal sketch of such a test program follows below.) I have tried to turn off the pagefile quite often, but it did not always take - the pagefile was still not offline.
Maybe daniel_k should say something about this. He wrote in another post that, in his opinion, both ntoskrnl and the HAL have to be changed to make XP able to handle the 64 GB (dibya said the opposite, for example) - who knows who is right. daniel_k writes that he modified the HAL and ntoskrnl (which suggests that both ntoskrnl and hal.dll are patched), and he also mentions symbol files, which suggests that a specific ntoskrnl version and HAL version were used. You cannot just feed the patched ntoskrnl.exe and hal.dll to a symbol file searcher; you have to figure out which versions that ntoskrnl and HAL are, and then force the disassembler to use the symbols for the unpatched version. A file-compare tool will then find the changes if you have the right ntoskrnl and HAL.
daniel_k also wrote that he used the free-space method for the patch. That can be risky, because some code may still access that supposedly free space occasionally, or under certain conditions (which often produces random crashes). A better solution is to add a section or enlarge a section - you have to know what you are doing there, but you can also quickly kill the files that way if you don't. Are there more than three patches doing this? It is a gamble between dibya's, "the Russian one" and the daniel_k one.
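An illustrative sketch of such a memory-eater in C (not related to any of the patches; the 64 MB chunk size is arbitrary). It commits memory in chunks and touches every page so the pages must actually be backed; running several instances at once, with the pagefile disabled, shows whether the machine can really hand out more than 4 GB in total:

/* eatmem.c - commit and touch memory in 64 MB chunks until allocation fails */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T chunk = 64 * 1024 * 1024;     /* 64 MB per allocation */
    SIZE_T total = 0, i;
    unsigned char *p;

    for (;;) {
        p = (unsigned char *)VirtualAlloc(NULL, chunk,
                                          MEM_COMMIT | MEM_RESERVE,
                                          PAGE_READWRITE);
        if (p == NULL)
            break;                              /* address space or memory exhausted */
        for (i = 0; i < chunk; i += 4096)       /* touch every page so it is really backed */
            p[i] = (unsigned char)i;
        total += chunk;
        printf("committed %u MB\n", (unsigned)(total / (1024 * 1024)));
        Sleep(100);                             /* slow down so the counters can be watched */
    }
    printf("stopped at %u MB; press Enter to release\n",
           (unsigned)(total / (1024 * 1024)));
    getchar();
    return 0;
}

Each 32-bit instance will stop at roughly 2 GB (its own user address space), so it takes three or more instances to push total physical memory use past 4 GB; the interesting question is then whether the kernel actually places those pages above the 4 GB physical boundary.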
Ascii2 Posted Thursday at 05:40 PM 15 hours ago, j7n said: Does this promise anything more than has been possible in Server 2003 without patching? It does not.
Dave-H Posted 16 hours ago Having now reviewed things, I've come to the conclusion that increasing the apparently available RAM won't do anything at all to help with my crashing browser. Correct me if I'm wrong, but from what I've researched, the problem is fundamental, and it's because it's a 32-bit browser on a 32-bit system. My understanding now is that 32-bit processes are limited to accessing 2GB of RAM, and that remains the case regardless of how much RAM is on the system. All the RAM patch will help with is running more processes simultaneously; it won't help at all with giving more memory to individual processes. The Facebook browser tab is constantly running out of memory because of the FB Purity add-on, which is eating huge amounts of it. It appears that there's nothing I can do about that. Although it caused other problems, unfortunately, the only thing which helped was using the /3GB switch in boot.ini, which I gather allows a process to access 3GB of memory instead of 2GB. I've tried using the /3GB switch with the extended RAM patch, but the system then just BSODs on startup, so I assume that configuration isn't an option. Am I right here, and there's actually nothing I can do about the browser problem? Thanks, Dave.
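For reference, a 32-bit process can query both its own user address-space ceiling and the physical memory the OS reports with a few Win32 calls; an illustrative sketch (nothing to do with the patches themselves):

/* vaspace.c - show the user-mode address limit and OS-visible memory */
#define _WIN32_WINNT 0x0501
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    MEMORYSTATUSEX ms;

    GetSystemInfo(&si);
    ms.dwLength = sizeof(ms);
    GlobalMemoryStatusEx(&ms);

    /* Top of user-mode space as reported by the system: about 0x7FFEFFFF with
     * the default 2 GB / 2 GB split, higher when /3GB or /USERVA is set in boot.ini. */
    printf("max application address: %p\n", si.lpMaximumApplicationAddress);

    /* Virtual address space this particular process gets: 2 GB unless the EXE
     * is linked /LARGEADDRESSAWARE *and* the boundary has been moved. */
    printf("user virtual space: %I64u MB\n", ms.ullTotalVirtual / (1024 * 1024));

    /* Physical memory the OS reports as usable. */
    printf("physical memory:    %I64u MB\n", ms.ullTotalPhys / (1024 * 1024));
    return 0;
}

This is also why /3GB alone does not help most programs: the browser EXE itself has to be marked large-address-aware before it is given more than 2 GB, and even then a single 32-bit process can never see more than about 3 GB at once.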
user57 Posted 15 hours ago (edited) Hmmm, in the past we often had the discussion about "32 bit is the 4 GB limit", i.e. 32 address wires. 32 bits can form 2^32 combinations, therefore 4,294,967,296 bytes, which represents 4 GB. The problem with that is that even in the 8-bit and 16-bit days there were so-called selectors, and those actually still exist in 32-bit mode too. Also very important: those 4 GB of possible "addresses" are not used directly; rather, the PDEs and PTEs that represent them point to 4 KB pages (in the classic 32-bit scheme). Going with that, if you have 4 GB of 4 KB pages, you only end up using 20 bits (2^20 pages * 4 KB = 4 GB). One idea (very similar to the discussion about 4 KB versus 4 MB sectors on hard drives/SSDs) would be simply to increase the 4 KB pages to 4 MB pages; that would cover 1024 times as much RAM. These discussions are old; that is why that bit, PSE (Page Size Extension), was one candidate for passing the 32-bit limit. The other candidate was the PAE (Physical Address Extension) bit. Here is one catch: PSE can extend beyond 4 GB by increasing the page size (even with 32-bit PDEs and PTEs), while in PAE mode the PDEs and PTEs are 64 bits wide (yes, even in 32-bit mode) - that is one of the reasons PAE can address more than 4 GB of RAM. (To be more precise, paging is a set of mechanisms and other things play a role too, such as the CR3 register (also called the page directory base register) and the GDT, but it would take too long to explain it all, so let us leave it at that for now.)
So let us take the next important step. In the old 32-bit scheme, without PSE, PAE or concrete use of segments, an app/executable had to fit within the 4 GB. Because there is a user mode (ring 3) and a kernel mode (ring 0), there had to be a kernel-mode part and a user-mode part. The offsets themselves are not a problem (you could map an address like 0xFFFFFFFF to page 0 if you wanted). Dividing kernel mode from user mode is important; if that were not the case, every crash in a normal app/executable would be an entire OS crash (BSOD or worse). So the solution was simple: user mode 0x00000000-0x7FFFFFFF, kernel mode 0x80000000-0xFFFFFFFF. That is how the oldest 32-bit convention works. It is not too bad to have kernel-mode memory, because many modules actually need space there (an app does not consist only of itself; it consists of many modules that also use memory internally), and the video frame buffer, for example, often has a kernel-mode area that takes memory. So here we finally have our split: 0x00000000-0x7FFFFFFF (2 GB) + 0x80000000-0xFFFFFFFF (2 GB) = 4 GB. The app/executable now has the problem that it cannot directly use the kernel-mode addresses (it certainly relies on them, but rather passively - it is memory the app needs less directly). For this Microsoft then made the /3GB option, in which user mode gets 3 GB and kernel mode 1 GB: (0x00000000-0xBFFFFFFF) + (0xC0000000-0xFFFFFFFF) = 4 GB. But now the important part: those offsets are not physical addresses!
In fact, you can map every one of those "offsets" to a different place in physical memory (including places above 4 GB of RAM). That is why multiple applications/executables together can use up the entire 64 GB of RAM (in j7n's example, 20 GB of a possible 64 GB were in use); that is more than 4 GB of RAM, and it proves XP 32-bit can do it that way. Now we also have to mention the ramdisk solution: if you have, say, 20 GB of data and map it into physical RAM, the ramdisk uses a chunk-wise approach. It maps, for example, 1 GB of that 20 GB of data into an address window (which is fast), and when it wants the next part it just maps the next part - but it is chunk-wise. Now we have to mention the pagefile: in the pagefile, Windows stores memory contents on the hard drive (HDD/SSD), and when Windows wants them back it uses exactly that trick (it loads the data from the drive into that virtual address). But this time there is an important difference - the data is not coming from RAM; it has to be loaded from the drive, and that is slow. An SSD is faster, but still not as fast as it would be in memory. I guess some have noticed the increase in speed with SSDs? Now you have a little more precision as to why that is the case.
So now to the part about why a "normal" app/executable does not actually pass the 4 GB RAM limit. I already explained why it is rather 2-3 GB of user memory plus some kernel-mode memory that is used rather passively. But the reason it does not go beyond 4 GB in "just one application/executable" is this: no compiler, from the past until now, uses segment selectors to pass that 4 GB. I have never seen a compiler that would be able to do so, and Windows XP (even up to Windows 10) probably does not have code for this either. Here is the thing: normally, in the code flow shown by a debugger, you see an offset (better called a virtual address this time). For the code flow the CPU uses a register called EIP (instruction pointer) - the E stands for 32 bits; it is just IP for 16 bits and RIP for 64 bits (R meaning 64-bit). But here is the next catch: most common debuggers do not show the selector, yet for the code flow the CPU uses a combination of a segment selector and the virtual offset (the one you see in the debugger). This is called the CS:EIP combination, where CS is the segment selector (the code segment selector). Now might be a good time to call out a wiki page about that: https://wiki.osdev.org/Segmentation
That CS selector is 16 bits wide (in 32-bit mode), and 16 bits means 65536 possible selectors, so the 16:32 (CS:EIP) combination could in theory select 65536 * 4 GB, i.e. 262144 GB (and there are six segment registers, each holding a selector in that range). But here is the catch: Windows (XP through 10) does not use this to pass the 4 GB limit. The only thing Windows actually does with these registers is point them at small pieces of memory (such as small per-thread and per-processor structures). That is possible, but it leaves out using them to pass the 4 GB RAM limit - only a few kilobytes are accessed with that method. Here is an example where Microsoft accesses a small piece of memory using a selector: https://devblogs.microsoft.com/oldnewthing/20220919-00/?p=107195 (the problem with the way this is used is that it is not used to pass the 4 GB RAM limit; rather it is used as small memory storage). So, the next step, back to the app/executable: as mentioned, there is no compiler that actually generates the use of segmentation ...
Therefore you would need a specific app/executable that can do so - which means Chrome still would not pass the 4 GB RAM limit (because it cannot use segmentation), and you would also need the Windows operating system code to manage that kind of memory. Also worth mentioning is the FPU/SIMD unit: even the older ones are 64 bits (or bigger), and in modern CPUs they are 512 bits wide (AVX-512). So you can actually do 64-bit moves and calculations in 32-bit code; compressions like H.265 do so, and some routines, for example CPU-intensive sort mechanisms, are written for these units - you might have heard of MMX, SSE, etc.
So, in short: 32-bit code has 64-bit instructions that it can use through the FPU/SIMD unit (which is actually done sometimes); it can use more than 4 GB of RAM across multiple apps/executables; it cannot use more than 4 GB of RAM within just one app/executable unless it uses a chunk-wise method (like the ramdisk) - and the segment registers are not actually used to pass the 4 GB RAM limit (that also goes for Windows 10 32-bit). A sketch of the chunk-wise method follows below. I tried my best to explain this as simply as I could. Edited 14 hours ago by user57
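To make the chunk-wise method concrete: Windows exposes it through Address Windowing Extensions (AWE), where a process allocates physical pages (which can sit above 4 GB when PAE is active) and maps them into a small virtual window as needed. The following is an illustrative sketch only, not code from any of the patches; it assumes the account already has the "Lock pages in memory" privilege, and most error handling is omitted.

/* awe_window.c - chunk-wise access to physical pages via AWE (sketch) */
#define _WIN32_WINNT 0x0501
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    SYSTEM_INFO si;
    ULONG_PTR pages, got;
    ULONG_PTR *pfns;                            /* frame numbers of the physical pages */
    void *window;                               /* small virtual "window" to map them into */
    SIZE_T window_bytes = 64 * 1024 * 1024;     /* 64 MB window */

    GetSystemInfo(&si);
    pages = got = window_bytes / si.dwPageSize;
    pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0, pages * sizeof(ULONG_PTR));

    /* Ask the OS for physical pages; with PAE these may live above 4 GB. */
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &got, pfns)) {
        printf("AllocateUserPhysicalPages failed (missing privilege?): %lu\n",
               GetLastError());
        return 1;
    }

    /* Reserve a virtual region that physical pages can be mapped into. */
    window = VirtualAlloc(NULL, window_bytes, MEM_RESERVE | MEM_PHYSICAL,
                          PAGE_READWRITE);

    /* Map the pages into the window, use them, then unmap. A real program would
     * keep many sets of physical pages and rotate them through the same window -
     * that is the chunk-wise method. */
    MapUserPhysicalPages(window, got, pfns);
    memset(window, 0xAA, (SIZE_T)got * si.dwPageSize);
    MapUserPhysicalPages(window, got, NULL);    /* passing NULL unmaps */

    FreeUserPhysicalPages(GetCurrentProcess(), &got, pfns);
    VirtualFree(window, 0, MEM_RELEASE);
    printf("used %lu physical pages through a %u MB window\n",
           (unsigned long)got, (unsigned)(window_bytes / (1024 * 1024)));
    return 0;
}

This is essentially how 32-bit database servers used more than 4 GB of RAM on PAE systems; it only works for data, not code, and each application has to be written for it, which matches the point above that an ordinary program such as a browser cannot simply be handed more than its 2-3 GB of address space.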
reboot12 Posted 15 hours ago (edited) On 7/1/2025 at 6:47 AM, reboot12 said: I have to use the system for some time to make sure 100% that there is no problem. Unfortunately, after a few days of testing there is still a problem with lag on the Haswell machine. I changed the OS to my favorite - WinXP 64-bit - and everything works OK and I have full access to the RAM. P.S. I wanted to add a RAM patch to my project on MDL - WinXP 32-bit on a modern PC (ISO boot.wim + install.wim) - but since it works unstably, I will probably give up. Maybe I will add it as an alternative in the boot.ini and BCD file, but not as the default. Edited 14 hours ago by reboot12
Dave-H Posted 14 hours ago 1 hour ago, user57 said: Hmmm, in the past we often had the discussion about "32 bit is the 4 GB limit" ... [full post quoted above]
Thank you so much, but I'm afraid that is pretty much all completely over my head! Do you know why the /3GB switch cannot be used with the RAM patch applied?
user57 Posted 12 hours ago On 6/13/2025 at 2:53 AM, j7n said: If you have a lot of RAM memory (to make use of PAE) that in itself consumes space in the kernel memory where a translation table is kept. Drivers allocate kernel memory for their own needs. For this reason /3GB might lead to instability with only 1 GB left over. /USERVA allows to tune the boundary to give a little more user memory, such as 2.5 GB.
I'm not certain; it would require reading through everything that has been done in relation to these patches. The /3GB switch is known to have some problems of its own; I don't know exactly which - maybe j7n has some knowledge here? If the problem is with the /3GB switch itself, there is also the /USERVA switch to set a different value (between 2048 and 3072, i.e. 3 GB): https://learn.microsoft.com/en-us/windows/win32/memory/4-gigabyte-tuning
The one from dibya only uses a small patch - maybe that would be worth a shot. daniel_k's patch makes more changes, maybe with static offsets - that could be a reason why they don't work together, and the method daniel_k described could also trigger a crash. I am not certain what "the Russian patch" does to solve this problem. There is too much going on at the moment; I can talk about the norms, the discussions and the ideas, but a debug analysis, for example, would take some time, and when it comes to new work I'm busy at the moment, sorry. I also don't have a functional test for dibya's patch (i.e. whether it passes the 4 GB limit). Someone should try these patches with the method j7n used. Doing so is simple: just disable the pagefile, then start up apps/executables that eat up more than 4 GB of RAM (a small memory-eater like the one sketched earlier in this thread would do). If that is possible, the patch is likely to work.