
user57
Everything posted by user57
-
is someone actually running the test that was spoken of? like j7n did with Server 2003 x86 (which is an XP-based OS). someone has to disable the pagefile first, because in theory Windows can also pass 4 GB using a pagefile (a pagefile is just data on a hard drive/HDD/SSD), then use up more than 4 GB of RAM. that would be a good indicator, but the perfect check would be to verify the pages really land in physical RAM above 4 GB.

i have often tried to turn off the pagefile, but it didn't always take effect; the pagefile was still not offline.

maybe Daniel K. should say something about this. he wrote in another post that in his opinion both ntoskrnl and the HAL have to be changed to make XP address the 64 GB (dibya said the opposite, for example), so who knows who is right. Daniel K. writes that he modified the HAL and ntoskrnl (which suggests both ntoskrnl and hal.dll are patched). he also mentions symbol files, which suggests a specific ntoskrnl version and HAL version were used. you can't just pick the patched ntoskrnl.exe and hal.dll with a symbol file searcher; you have to figure out which versions those ntoskrnl and HAL files are, then force the disassembler to use the unpatched files of the same version. a file-compare tool will then find the changes, if you have the right ntoskrnl and HAL.

Daniel K. also wrote that he used the free-space method for the patch. that can be risky, because some code only touches that free space occasionally, or under certain conditions (which often causes random crashes). a better solution is to add a section or enlarge one, but you have to know what you are doing here; you can quickly kill the files like that if you don't.

are there more than 3 versions of this? it is gambling around between "dibya's", "the russian one" and "the Daniel K. one".
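the file-compare step described above can be sketched very simply: walk two builds of the same file byte by byte and report every differing offset. this is just a minimal illustration in the spirit of the fc tool, with made-up sample bytes, not the real patched kernels.

```python
# Minimal byte-level file compare, similar in spirit to an fc-style tool:
# given two same-version binaries (unpatched vs patched), list where they differ.
def diff_files(data_a: bytes, data_b: bytes):
    """Return a list of (offset, byte_a, byte_b) where the inputs differ."""
    diffs = []
    for off, (a, b) in enumerate(zip(data_a, data_b)):
        if a != b:
            diffs.append((off, a, b))
    if len(data_a) != len(data_b):
        # mark the point where one file simply ends (size mismatch)
        diffs.append((min(len(data_a), len(data_b)), None, None))
    return diffs

if __name__ == "__main__":
    original = bytes.fromhex("4d5a90000300")  # made-up sample data
    patched  = bytes.fromhex("4d5a90010300")
    for off, a, b in diff_files(original, patched):
        print(f"offset {off:#06x}: {a:02x} -> {b:02x}")
```

the differing offsets are then the places to look at in the disassembler, using the unpatched file of the exact same version as the reference.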
-
well, "4 GB is the RAM limit for a 32-bit operating system" is finally busted. i wonder why even big companies still make that mistake: https://www.asrock.com/mb/Intel/B560M-C/index.us.asp "**Due to the operating system limitation, the actual memory size may be less than 4GB for the reservation for system usage under Windows® 32-bit OS. For Windows® 64-bit OS with 64-bit CPU, there is no such limitation."

to explain the selector/segment (or segment selector): in the past, on 8-bit or 16-bit CPUs (sometimes both, i.e. 8-bit instructions and 16-bit data), the combination of segment + offset gave the "real address". also, the instruction parser that reads the next opcode automatically uses the next segment (for a continuous flow, without chunks). why would this not be the case for 32 bits? code execution is a CS:EIP combination (CS is the code segment). you could write code either with a selector+offset or just the offset; for "just the offset" the CPU automatically picks the next location and simply executes the next instruction. if that didn't do it, there are still the instruction encodings that explicitly use a selector:offset combination. and even if the CPU didn't advance the segment selector automatically, you could still write a jump at the end of the 4 GB address space (or just change the selector to the next one). https://wiki.osdev.org/Segmentation

for a data allocation this would also be useful: you could get a DS (data segment) + offset, and directly you would have a 4 GB chunk. that's why in the past i always wanted a VirtualAlloc2-style function that also hands out the selector, but my posts about that on forums (including Microsoft ones) were just ignored. the other mechanisms were already explained: 4 MB pages (instead of 4K), PSE, PAE, PDEs and PTEs, why the solution is chunk-wise, and why that chunk-wise solution is usable for apps/executables/modules.
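the segment + offset = "real address" arithmetic from the 16-bit days can be written down in a few lines. this is the classic real-mode formula (segment shifted left by 4, plus offset), shown only to make the combination concrete:

```python
# Real-mode address arithmetic as described above: a 16-bit segment and a
# 16-bit offset combine into a 20-bit "real address" = (segment << 4) + offset.
def real_address(segment: int, offset: int) -> int:
    return ((segment & 0xFFFF) << 4) + (offset & 0xFFFF)

# classic example: the text-mode video buffer at segment B800h, offset 0
print(hex(real_address(0xB800, 0x0000)))
```

because segments overlap every 16 bytes, many segment:offset pairs name the same real address; protected-mode selectors (the CS:EIP case above) work differently, indexing descriptor tables instead of being shifted.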
-
i don't know, but the name "ntkrln64g" sounds like a wrapper, maybe a kernel extender? if not, there are many versions of ntoskrnl, and you have to find the right one. in that case the found patches show 7 changes. 138-13A is probably the checksum; that checksum is an "older driver signature", and if it isn't correct the system driver (in this case the important ntoskrnl) is not loaded. as you may have recognized, the patch did not touch ntkrpamp.exe; you should check which ntoskrnl was actually loaded (there are only 3: ntkrnlpa, ntoskrnl and ntkrpamp). to check whether it uses more than 4 GB of RAM you should repeat the test from page 7; there is also the pagefile, which can pass that limit by itself, so you might turn it off while doing the test.
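the 138-13A range mentioned above fits the PE header CheckSum field: e_lfanew sits at file offset 0x3C, and CheckSum lies 4 (PE signature) + 20 (file header) + 64 bytes into the NT headers, so with the common e_lfanew value 0xE0 that lands exactly at 0x138. a small sketch for locating and reading it (this only reads the stored value, it does not recompute the checksum):

```python
import struct

# Locate and read the PE header CheckSum field -- the "older driver
# signature" that must be correct or a patched kernel will not be loaded.
def pe_checksum_offset(data: bytes) -> int:
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    # 4-byte PE signature + 20-byte file header + 64 bytes into optional header
    return e_lfanew + 4 + 20 + 64

def read_pe_checksum(data: bytes) -> int:
    return struct.unpack_from("<I", data, pe_checksum_offset(data))[0]
```

after patching bytes in the file, the stored checksum no longer matches, so it has to be recomputed and written back or the loader refuses the driver.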
-
something like this
-
if /PAE is on, Windows doesn't load NTOSKRNL.exe; it loads NTKRNLPA.exe instead.

it often sounds like these things are very simple, like they can be done within 5 minutes, but these projects are usually big. you can't just do many of them at once. i would be happy if that were the case, but teamwork is required; one person doing all of this isn't possible. if someone tried, he would simply have too much to do (and in the end probably no problem would be solved). that's why i didn't want to join any project other than XP, for example; we still have a lot to do with XP. this one is also not a small one if you dig into the paging system: there are books that describe these mechanisms and norms (assembly books, for example), plus getting all of this into ntoskrnl before boot.

the BIOS also prints out "i have 8 GB of RAM". the message in the first post likewise only shows Windows saying it has that much; there is no confirmation that it is actually using it.

to look at what the patch actually changed there are file-compare apps; this is a free one: https://www.file-upload.net/download-15511527/fc.exe.html. a small patch could in theory do it just like that; that is within the range of possibilities. with a file-compare tool you can then look at the disassembled code, so you can see what the patch(es) interfere with.

so thank you to all who help us with this. for a test, loading multiple apps/executables would be a good idea; the running apps should use up more than 4 GB in total, maybe some chrome instances. that would tell us a bit more about the functionality of this patch. a single app/executable can't pass the 32-bit limit without a selector/segment selector, except maybe via the switch option (like a ramdisk does). what the patch could do, however, is map different apps to RAM above 4 GB; that only requires putting the right pages representing the area above 4 GB into different executables/apps. to be clear: something like 99.99 % of apps/executables don't use selectors, and i have never seen them used (even on win7-10) for the purpose of passing 4 GB of RAM.

the switch solution (as a ramdisk does it) works a bit differently: you can store 16 GB of RAM, that's not the problem, but each time you want to access a certain piece of RAM you have to remap that data to a virtual offset (that's what a ramdisk does; it is very fast, but it is a piece-wise solution). that solution doesn't use a selector, and i don't think Windows can do it transparently yet. but again, across multiple executables we don't have that problem; 4 GB for each running application doesn't sound bad.
-
that video from reboot12 just shows very slow window interaction with the "WinXPPAE 3.5 patch"; the other video shows fast window interaction without it. it actually doesn't show an "out of memory" problem. is that patch doing something else? the patch's changes can be seen with a file comparer. i can't dig into a new project because i'm already doing one, but i would look at what this patch actually does.

i remember an older discussion; it was either with the first release of XP or the second, where the "LARGE PAGE" discussion appeared, together with a "LONG MODE". in the past the norm only had 4K pages, and that XP discussion was about using large pages: instead of 4K pages, 4 MB pages, which they called "the long mode". so here might be a possible catch: those 4 MB pages were said in that discussion to be buggy and slow in interaction, while AMD claimed to be able to use 4 MB pages without any speed loss. does someone remember that discussion? i only remember that this version quickly got removed; later /PAE appeared (and claimed to use 4K pages under PAE, with better speed). what i wrote about the long mode and large pages should be taken as discussion (it might not be correct, or need fixes; i'm just trying to point out an older discussion. for example, if i google "long mode" now it just says "that's the x64 mode", but that is not what was discussed back then).
-
one way i could think of would be to check the PTEs and PDEs, whether the physical entries actually point into the higher physical RAM. for this you need to write a tool that reads out the PTEs and PDEs for each running app/executable. the physical-RAM part is simple to understand: apps/executables have "offsets", these offsets have a "virtual offset", and that offset can be "mapped" anywhere in RAM, say into the 60 GB region of a 64 GB machine. to see whether the parts above 4 GB are used, you probably have to start enough apps/executables to use up the first 4 GB of RAM; if so, some of them should have PTEs and PDEs pointing to physical RAM above 4 GB.

another way would be to look at the source code. ntoskrnl is what likely controls this, and the WRK is more or less open source (once published by Microsoft for students to study). it would require going through those codes to see whether they do it; it's some work to dig into, and we don't know exactly what the ntoskrnl version in the WRK can do. but in the past there was the LARGE PAGE discussion and the so-called "LONG MODE": in "LONG MODE" pages are not 4K, they are 4 MB (that was one possible way to pass the 4 GB limit, because the bits representing the physical address then cover 1024 times more).

to explain everything about paging i would have to write far too much now; it's like an entire book. as mentioned there are 3 mechanisms: PSE, PAE, and plain PAGING. let's just take PAE: PAE (in 32-bit mode) consists of 64-bit PTEs and PDEs. not all of the bits hold the page address; for a better explanation i suggest looking here: https://wiki.osdev.org/Paging. not having all 64 bits available for the address is not a problem, because the page number is multiplied by the page size, as mentioned from 4K up to 4 MB. it is also a combination of PDEs and PTEs depending on the mode that was set, and the layout of the PTEs and PDEs changes with it. so on the osdev wiki look for the 64-bit entries (above them is a 32-bit entry explanation). 64-bit entries are not to be confused with 64-bit mode (what an x64 OS has); it just describes the size of the PTEs and PDEs. they have been 64 bits in size on 32-bit OSes since PAE came around (PAE arrived together with the NX flag, which needed the 64-bit entry for the first time). here you read out where the physical pages point to, and then you have the information whether RAM above 4 GB was used.
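the "read out where the physical pages point to" step boils down to masking the right bits out of a 64-bit entry. a sketch using the commonly documented PAE layout (present bit 0, physical frame in bits 12..51, NX in bit 63); a real tool would of course have to read the entries from kernel memory, which this does not do:

```python
# Decode a 64-bit PAE page-table entry: only bits 12..51 hold the physical
# frame address; the low and high bits are flags. An entry can therefore
# point above 4 GB even though the OS itself runs in 32-bit mode.
PTE_PRESENT = 1 << 0
PTE_RW      = 1 << 1
PTE_NX      = 1 << 63
FRAME_MASK  = ((1 << 52) - 1) & ~0xFFF  # bits 12..51

def pae_pte_physical(pte: int) -> int:
    """Return the page-aligned physical address a PAE PTE points to."""
    if not pte & PTE_PRESENT:
        raise ValueError("page not present")
    return pte & FRAME_MASK

# a PTE pointing at physical address 5 GB -- above the 32-bit boundary
pte = 0x1_4000_0000 | PTE_PRESENT | PTE_RW
assert pae_pte_physical(pte) > 0xFFFF_FFFF
```

a diagnostic tool along these lines would walk every process's page tables and flag any present entry whose decoded frame lies above 0xFFFFFFFF; finding such entries would confirm RAM above 4 GB is really in use.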
-
i forgot about the RAMDISC: AWE was already used in the past with ramdisks (and used more than 4 GB of RAM in XP that way). but it has the problem i described before: it can fill more than 4 GB of RAM (the physical addresses past 4 GB), but then you have to "remap" those pages into a virtual offset. that goes fast, but it is an extra small step each time (you have to do it piece-wise). for an app/executable that might be different, because multiple apps/executables mean more virtual address spaces, one per app/executable. the apps/executables would then each be able to use 4 GB of RAM (every app can be seen as an extra virtual address space); in this case Windows just maps the higher physical RAM into them (for example physical addresses from 5-8 GB). that is possible because usually you only see the "virtual offset", which is not a physical address. so you would have 4 GB for each app/executable.

to mention another problem: chrome and applications are not flawless either; they actually have bugs. for example i have a win7 machine where a memory crash always appears after some time. bigger apps usually don't have their own memory handling; they use an engine, in this case probably a memory-management engine. that win7 chrome is buggy for me with an untouched download from chrome, so it doesn't necessarily always have to be an error of the backport to Windows XP.

well, about that patch: i haven't worked on it, so it's hard for me to say whether it really does its job. what i can see is that this patch doesn't show any connection to the PTEs and PDEs, which is something it would necessarily have to handle (the patch is incredibly small). let's say it can't; then it might print "i have 16 GB of RAM" while the actual work is not done. maybe the patch just works; that would be possible. the patch should be tested, or maybe someone comments if he has already tested it. what dave wrote sounds a bit like it actually doesn't work.

chrome spawns multiple processes, and if it does that, it has at least 2 GB of address space for each one; 2 GB should be enough for one tab by far. if that out-of-memory error appears when a lot of processes are running, it would be an indicator that the patch doesn't work, because it can't handle the PTEs/PDEs or other parts of the process (the parts that handle physical pages above 4 GB). then everything would try to use the first 4 GB of RAM and that error would appear. so, hard to say, but i would try multiple apps/executables that together use up more than 4 GB of RAM.
-
the website still exists: https://catalog.update.microsoft.com/home.aspx — the keyword is "Microsoft .NET Framework Version 1.1". the others are upgrades; you first have to run the main installer, which is around 24 MB.
-
most of the upgrades use an installer. the older ones are made with InstallShield (the suspicion is that these were built with VC6/VS6); the newer ones seem to be made with the InstallShield from VS2010. they have 1 important thing in common: both create a .inf file. for .inf files Microsoft has an "installer engine" with directives like "CopyFiles" or "AddReg", so you can combine all the upgrades using that "installer engine"/"inf installer". the engine can do multiple things, but the main 2 are registry and file handling. this has one problem: the engine is not very fast; it was probably made for smaller installers. the .inf engine uses very common Windows functions to control the registry and the files, like MoveFileEx, RegOpenKey etc.

there is also a famous bug with some of the registry entries: office 2010 creates context-menu entries, and that context menu is handled by explorer.exe (where office made the entries). at some point the deeper function can't handle the corrupted heap: RtlAllocateHeap then fails, and it fails because the heap got corrupted. RtlAllocateHeap is called by every alloc function. that bug can be considered a Windows error, because RtlAllocateHeap should not simply fail just because the heap got corrupted.
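the CopyFiles/AddReg directives mentioned above look like this in practice. a minimal sketch; all section, file and registry names here are made up for illustration, not taken from any real update:

```inf
; Minimal sketch of the .inf "installer engine" directives discussed above.
[Version]
Signature = "$Windows NT$"

[DefaultInstall]
CopyFiles = MyApp.Files
AddReg    = MyApp.Reg

[DestinationDirs]
MyApp.Files = 11        ; 11 = the system32 directory

[MyApp.Files]
example.dll

[MyApp.Reg]
HKLM,"SOFTWARE\Example","InstalledVersion",0x00000000,"1.1"
```

such a file can be run through the stock inf installer, e.g. `rundll32 setupapi,InstallHinfSection DefaultInstall 132 path\to\file.inf`, which is exactly the slow-but-generic engine described above.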
-
maybe it would be possible via what Microsoft calls AWE: https://learn.microsoft.com/en-us/windows/win32/memory/address-windowing-extensions. it seems to give control over physical pages, but access still goes through a virtual address; it is not using a selector. with only 1 executable, that means you may be able to map more than 4 GB of data piece-wise, but whenever you want to access that RAM/data you have to remap it to a virtual offset first.

just thinking out loud: if AWE is used by internal Windows functions, that is maybe how Windows XP passes the 4 GB limit. in a post before i wrote that it's possible to map the other RAM (above 4 GB, or with a different name, "pages") in a 32-bit system; thus XP could map RAM above 4 GB into multiple applications (which sounds very similar to what i wanted to point out). the "multiple app/executable" solution is not that bad, since you actually use up the 8 GB, 16 GB or whatever you have installed in your computer. chrome is the best example: right now i have 24 open chrome executables (with google started up 3 times).
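the remap-before-access pattern that AWE imposes can be modeled in a few lines. this is a toy simulation of the concept only, not the real Win32 AWE API (which works with AllocateUserPhysicalPages / MapUserPhysicalPages on real physical page frames):

```python
# Toy model of the AWE pattern: a large "physical" store that a small
# virtual window must be remapped onto before every access.
class AweWindow:
    def __init__(self, physical_pages: int, page_size: int = 4096):
        self.page_size = page_size
        self.physical = [bytearray(page_size) for _ in range(physical_pages)]
        self.mapped = None  # which physical page the window currently shows

    def map_page(self, page_index: int):
        """Remap the window -- the step that must precede every access."""
        self.mapped = page_index

    def write(self, offset: int, data: bytes):
        if self.mapped is None:
            raise RuntimeError("window not mapped")
        self.physical[self.mapped][offset:offset + len(data)] = data

    def read(self, offset: int, size: int) -> bytes:
        if self.mapped is None:
            raise RuntimeError("window not mapped")
        return bytes(self.physical[self.mapped][offset:offset + size])

# store data in two different "physical" pages through one small window
w = AweWindow(physical_pages=8)
w.map_page(0); w.write(0, b"low page")
w.map_page(7); w.write(0, b"high page")
w.map_page(0)
assert w.read(0, 8) == b"low page"
```

the model makes the drawback visible: the backing store can be far larger than the window, but every access to a different page costs an explicit remap, which is exactly the piece-wise overhead described above.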
-
there are 3 mechanisms that can exceed the 4 GB limit: PAE, PSE, PAGING. let's say this patch works; it doesn't exceed the normal per-process limit, which is more like 2 GB per running executable/app. this is because user mode usually takes the addresses 0-7FFFFFFF and the rest is kernel mode (80000000-FFFFFFFF). so each app somewhat gets 4 GB of address space, but it is split apart (2 GB user-mode and 2 GB kernel-mode). to get a little more user space there is a smaller solution that raises the 2 GB limit to 3 GB: https://learn.microsoft.com/en-us/windows/win32/memory/4-gigabyte-tuning. then the user-mode range is 0-BFFFFFFF and kernel mode is C0000000-FFFFFFFF (3 GB user mode, 1 GB kernel mode).

but back to the important answer: normal apps don't use, for example, segments to exceed that 2 GB (or 4 GB) limit. there is actually a different reason why the 64 GB get used anyway: you don't have just 1 app/executable running on XP. the different apps can each point to a different spot in physical memory (thus passing the 4 GB limit). so having multiple applications (google, for example, uses a lot of extra or restarting processes) is also a solution to use the 64 GB of RAM. has the patch ever been tested for functionality?

norm-wise it is certainly possible; someone could actually run a lot of big applications and check which physical memory pages are mapped out. to make a paging example there are segments: https://en.wikipedia.org/wiki/X86_memory_segmentation. but XP barely uses these segments (nor do 7, 8 or 10, and where they do, they are not used to pass the 4 GB limit). for example, in an older norm-wise app there is the segment (often called a selector): the CS:EIP combination executes the code flow, while data access via the DS selector could point at a different part of memory. but that wasn't happening; both DS and CS were made to point to the same memory in the win98-win10 OSes. the ES segment, for example, could be a choice to pass the 4 GB limit via the segment solution; an even simpler idea would be to just use the next selector for CS. but if you disassemble the applications you can see that executables/apps are not using that; that goes for 99 %+ of apps/executables. also they can't, if the OS/kernel mode has no handling for this. most applications/executables don't use selectors, but you could write an access using "cs:offset" instead of just "offset". besides the software, the hardware also plays a role in whether it can be done: some boards still have only 32 address lines; the CPU might be able to do it, but the motherboard doesn't have enough wires.
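the 2/2 vs 3/1 address split above is easy to verify as plain numbers. a small sketch computing both layouts from their user-mode top address:

```python
# The user/kernel address split described above: default 2 GB / 2 GB,
# versus the /3GB (4-gigabyte tuning) layout of 3 GB / 1 GB.
def split(user_top: int):
    """Return ((user_lo, user_hi), (kernel_lo, kernel_hi), user_gb, kernel_gb)."""
    user   = (0x00000000, user_top)
    kernel = (user_top + 1, 0xFFFFFFFF)
    user_gb   = (user[1] - user[0] + 1) / 2**30
    kernel_gb = (kernel[1] - kernel[0] + 1) / 2**30
    return user, kernel, user_gb, kernel_gb

default = split(0x7FFFFFFF)   # 0-7FFFFFFF user, 80000000-FFFFFFFF kernel
tuned   = split(0xBFFFFFFF)   # 0-BFFFFFFF user, C0000000-FFFFFFFF kernel
assert default[2] == 2.0 and default[3] == 2.0
assert tuned[2] == 3.0 and tuned[3] == 1.0
```

either way the two halves always sum to 4 GB of virtual address space per process, which is why the real gain on a 64 GB machine has to come from many processes mapped to different physical pages, not from a single address space.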
-
have you tried it with XP? it is said not to have this problem, but i'm not certain either.
-
that killed it: "They are handled by SeaBIOS." so it relies on a specific BIOS that provides this, which is bad for retro stuff like XP.
-
that they are doing it now i find interesting; maybe that would be something. what i wanted to say is that all graphics cards up to today won't be able to en/decode h.266, because it's a hardware imprint: https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new. you might have bought even the latest graphics card available today, and it simply can't encode h.266 either. the other way would be to have opcodes that can be changed via a BIOS/firmware, but that doesn't differ much from just having 1 more core and giving it the de/encode routines. doing it with a pure hardware unit is a lot faster: it doesn't need to parse the firmware or the opcodes; it is processed directly in the hardware unit itself. maybe the market lacks an alternative; why not a pci-e card? the hardware unit doesn't necessarily have to be a graphics card. and maybe have both (hardware imprint and cpu-based). but i think avx should be fast enough if they can write it for those X(Y/Z)MM registers. for hevc there is a check of which opcodes can be used: if it's mmx it uses mmx, if it's sse it uses sse, and if it's avx it uses avx.
-
at some point it will reach a cpu-limit question, but that is not the case yet. chrome, or let's say the browser language, is a script-ish language, and as everybody can see the browser is getting bigger and bigger and has to parse more and more code. the high-level-language question therefore also appears... it would be at least 20 times the normal work to write the routines in pure assembly. the cpus are already doing this: https://www.youtube.com/watch?v=RQWpF2Gb-gU — the cpu is skipping commands/opcodes in exactly that way. what is not mentioned in that video is that the cpu can also look at a future command that is not the immediate next one at all: it can see what memory it accesses, read it from the cache, and make a parallel calculation. (mmx-avx do that even better.) the cpu discussed there is closer to a 486. most compilers also do something like that: modern compilers shorten the code when they can recognize the pattern. then there is also the MMX-AVX+ question: these are not normal opcodes, they are opcodes for speed, a lot faster, sometimes 150 times as fast. i say this because we often get the SSE question (like "we have to have a non-SSE version"). at some point, as said, it will reach the cpu question, and SSE is like a hardware acceleration, similar to a graphics card: sse (i.e. mmx-avx+) can speed up the process a lot. to put it the right way: having a graphics card can speed it up, but you might not have the right driver, or your card is not supported by the OS, or has no driver for that OS. 2 other ways would be: first, not using a high-level language (which is not possible in this case; chrome is something like 300k pages of code, or at least the graphics part would have to be entirely rewritten), and second, faster opcodes (mmx-sse-avx etc.).

also i want to mention that smartphones, especially older ones, run google chrome and don't support some of these functions either. a simple way would be to buy a supported graphics card, or a faster cpu that also supports xp; if you don't want trouble with the chipset or other hardware components, i would rather buy hardware that is supported for xp.

in another direction: from what i remember, xp has some hidden functions that are written out in win10, for example; xp has them but doesn't expose a way to read/write/call them. very good solutions existed; to give just 1 of many examples, a direct access to the EPROCESS structure works very well, as it is then the same solution win10 has... but for something like 7 different operating systems (vista, xp, 7, 8, 8.1, plus 64-bit) that's a lot; you need something like 14 different code paths... or you might end up with a solution that is not very good: buggy, less functional, flawed.

about amd: even though they sell hardware, they do not offer us an open-source driver... it would actually bring amd money, because they don't make money with that driver anyway, and it would definitely bring amd money because xp users would then buy their new hardware... is it a cartel? because doing it their way gives another company money (like buying win10 instead, or new hardware to run win10). it would be illegal to arrange it that way... but that's another long discussion that doesn't belong here.

here we already talked about it: hardware is an imprint. the mentioned VP9 codec is supported by some graphics cards, but they don't support the h.266 codec, for example (and probably never will, because hardware is like a print: once printed, there is no upgrade). if it really went the other way (just a cpu plus a bios holding the opcodes, which can be changed), it's just another core... (that would raise questions, and it is also a lot slower).

as mentioned i'm very busy; i can't join in to make a VP9 codec while i'm 100 % busy with a different project. there is no room to jump into 5 other projects; actually i don't have even one i could join. that's why i must say it again: am i really the only person who could make an h.265 en/decoder? i'm not the freshest guy either; maybe someone else should do it instead. also to mention: if you use an older graphics card, then even with win10 and the right driver the card can't do it either.

to mention dx (directx video) again: directx and opengl can be used either as a "frame/1 picture" engine or as a "24 frames/video" engine. actually, after directx 8 there were no real improvements. to clarify this we have to put it the right way: directx and opengl animate a 3d model, and a 3d model is just a texture in 3d; it depends maybe 99 % not on the engine but on the texture of the 3d model. gdi might not have a 3d engine, but it can repeat frames. so now we have to talk about cuda (i.e. the video/frame en/decoder), or just the graphics card: the cuda engine or the graphics card just encodes a frame (it is not a 3d engine). that's why it works with opengl, dx and a normal frame buffer (like gdi): cuda/the graphics card hands the encoded frame to a buffer; it can be opengl or directx, and if we ask it right it could certainly do that for a normal frame buffer too. for video there is only 1 difference: it is not 1 frame, it is usually 24 frames (the 50/60 fps question is another discussion that doesn't belong here now). let's just say you can have 1 frame shot at 1/60 or even 1/2000; that is possible, you don't need 50/60 frames to do so. i hope that clarifies the situation for all the others a bit.
-
i don't want to disappoint anybody, but that's why i always said the focus must be on the main operating system of the browser. then it was already raining "windows 2000 support", "win10, win8 and win7 must work too", while making a new sp4 iso and 4 other projects, meanwhile firefox too, and fixing issues xp doesn't even have, like that sandbox discussion. it rather proved my old logic right for saying these things. i hope he can fix all of that, but it's a lot... there are alternatives for win7 like the chinese version or redfox and maybe some others.
-
maybe it is time that someone looks at the published codes from microsoft, for example the WRK, so we could tell why. i had left my mistake open, saying 4 times 512 = 4K, but that isn't right, it is 8 times; however, only cixert fixed that one up. if FAT32ex on xp can exceed that 2 TB limit and set a sector size, it might be possible; if not, it's not hard to write a loop that actually processes 512 bytes 8 times.

on the other hand it would raise a question about that classic saying "32 bits are limited to 4 GB": according to that logic 32 bits couldn't address a HDD bigger than 4 GB. the OVERLAPPED structure just uses two DWORDs (i.e. 32 bits * 2 = 64 bits). (it has to be mentioned because that is the structure windows uses for file offsets.) if i continue to talk like this i can only speculate, but let's say it can pass the size of a DWORD (32 bits); that would not mean it can pass the 512-byte sector size. as we know, it can make 512-byte sectors with 4G of them (i.e. 2 TB of data). but such things you can read out of the microsoft code, either disassembled/debugged or from the published code; it is certainly some work. if someone actually knows what the problem is, it would be faster than just gambling around.

i actually wrote data to the disk at I/O level once, but that memory is far too old to get back; i lack most of it, i just remember a few I/O ports like 1F4. the writes are either dword (32 bit), word (16 bit) or byte (8 bit). this is not a problem, as you just write in a loop: if you have, let's say, 1080 bits to write, you use 32-bit writes for (1080/32 = rounded down, 33 times), 33 * 32 = 1056; then you still have to write 24 bits, which you can do either with 8-bit writes or with one 16-bit write and one 8-bit write: 24 / 16 = 1, and after that the last byte, 8 / 8 = 1, and all 1080 bits have been written. the hardware then transfers that into a "next code"; this next code doesn't really care whether it was 33 + 1 + 1 writes, it just finds the data that is to be processed (cache would be the right word here).

to get this information out you need time: it took 2 weeks just to compile chrome, another 2 weeks for getting the things around that; that makes at least 4 weeks to dig into this, time i don't have at the moment, sorry.

maybe that's the right spot? it definitely has low and high parts: https://wiki.osdev.org/ATA_PIO_Mode#Registers — quote: ";ATA PI0 33bit singletasking disk read function (up to 64K sectors, using 48bit mode)"; quote 2: "Note on the "magic bits" sent to port 0x1f6: Bit 6 (value = 0x40) is the LBA bit. This must be set for either LBA28 or LBA48 transfers."; quote 3: "An example: outb (0x1F2, bytecount/512 = sectorcount) outb (0x1F3, Sector Number -- the S in CHS) outb (0x1F4, Cylinder Low Byte) outb (0x1F5, Cylinder High Byte)". it is written in assembly. to me it seems to have 3 words (word = 16 bit) that address a 48-bit offset (LBA48 / 16+16+16 = 48; it seems to be low, mid and high: LBAlo, LBAmid and LBAhi). the logic says it begins with a base port that then counts up; it also says this port usually is 0x1F0 (if not, it's just that base port + X): (+3 / 1F3) LBAlo, 8-bit for LBA28 / 16-bit for LBA48; (+4 / 1F4) LBAmid, the same; (+5 / 1F5) LBAhi, the same. that again makes 48 bits; it is not a wire/address or 64-bit question. you tell the device the 48 bits of where you want to write in 3 steps, so it doesn't need a 64-bit address; together they are 48 bits, or just LBA48. after that you just write at that spot... there is no offset in the write itself; the "offset" has been set before.

if it is like that, it isn't hard either: you probably just have to set the right settings and send the 48 bits. it's different from the paging mechanism (for RAM), which actually has 4K pages, 4 MB pages, maybe segments, and 64-bit PTE/PDE entries. those can be handled on a 32-bit OS: one example is the CMPXCHG8B instruction, which in 32-bit mode can set 64 bits at once (atomically). another atomic way to store 64 bits in 32-bit mode would be to use the FPU: the FPU can store 64 bits at an offset. to do so you put the two 32-bit values at an offset, load them into the FPU, and from the FPU store that value to the required offset (i.e. where the PTEs and PDEs are). in short, you can use the FPU as an integer unit if you do it right, or even "just as memory storage for more than 32 bits". via the FPU, FST / FSTP would be examples; the opcode forms with a memory operand, DD /2 (FST) or DD /3 (FSTP), can write 64 bits to an offset: https://tizee.github.io/x86_ref_book_web/instruction/fst_fstp.html. cmpxchg8b: https://www.felixcloutier.com/x86/cmpxchg8b:cmpxchg8b. but jumping around from one project to another kills not only 1 project, it kills both projects; you guys are on the point.
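two of the calculations from this post can be written down as a quick sketch: splitting a 48-bit LBA into the three 16-bit lo/mid/hi values sent to the LBA ports, and planning how many dword/word/byte writes cover a given bit count (e.g. 1080 bits = 33 dword writes + 1 word + 1 byte). this only does the arithmetic; it does not touch any real I/O ports.

```python
# The LBA48 split and the write-width planning described above, as code.
def lba48_words(lba: int):
    """Split a 48-bit LBA into (lo, mid, hi) 16-bit register values."""
    assert 0 <= lba < 1 << 48
    return (lba & 0xFFFF, (lba >> 16) & 0xFFFF, (lba >> 32) & 0xFFFF)

def write_plan(bits: int):
    """How many 32-, 16- and 8-bit writes cover `bits` bits (multiple of 8)."""
    assert bits % 8 == 0
    dwords, rest = divmod(bits, 32)
    words, rest = divmod(rest, 16)
    bytes_, rest = divmod(rest, 8)
    return dwords, words, bytes_

# 1080 bits = 33 dword writes + 1 word write + 1 byte write
assert write_plan(1080) == (33, 1, 1)
# lo/mid/hi recombine to the original 48-bit LBA
lo, mid, hi = lba48_words(0x0000_1234_5678_9ABC)
assert (hi << 32) | (mid << 16) | lo == 0x0000_1234_5678_9ABC
```

this shows why no 64-bit address is needed for the disk side: the 48-bit position is delivered in three 16-bit pieces before the data transfer even starts.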
-
there was DOS, then win3.11, then win95, then win98, then win98 SE, then windows me. hard to say, but 98 SE was the best if i had to rate them. then a change happened, a change to a pure 32-bit OS: windows nt 3.1, windows nt 4.0, windows nt 5.0 aka windows 2000, windows xp aka 5.1, windows server 2003 aka 5.2. by then there were already 64-bit versions, but they did not provide advantages over 32-bit at that time; there were already 32-bit extensions that could hold more than 4 GB of RAM... https://winworldpc.com/product/windows-nt-3x/31 — so it wasn't that hard to see the successor among these: it was windows xp 32-bit. then there was a change, but it contained many problems at the beginning (as many first versions of an operating system do): vista aka nt 6.0 was born. but there was nothing it really provided over xp, so xp somehow survived. https://youtu.be/29qnXTw0qr0?t=199 — it was an xp vs vista battle, which xp actually won: https://youtu.be/29qnXTw0qr0?t=231. to fix that problem ms came up with a bug-fixed vista, an overworked nt 6.0; now the name changed: nt 6.1, windows 7. windows 7 was a successor that is still very capable even today. so now we might think we needed a new version; the next version was not a successor but the so-called windows 8.0 (aka nt 6.2), which later got a problem-fixed version, 8.1 (here i would still say it is nt 6.2, not 6.3, because when a few upgrades kicked in they didn't call it windows 5.3 either). 8.1 wasn't that bad, but it lacked the new things microsoft used to add. then came the spyware called win10. that was because a next operating system had yet to come; if they had chosen 8.1 instead of 10, there would not have been a chance to add the spyware components... so that's the real story. then let's remember what microsoft said: there will be no next windows, there will just be win10. so why would there be a windows 11? it's just win10 with a few upgrades, maybe a new GUI.
-
since i don't know this guy i would say you are doing this - you really should
-
"vista"? supermium was for XP. i have to point out again: it's 15 times as much work to go backwards to XP as to go back from win7 to vista. most people liked just the idea that supermium also works on the other operating systems
-
this is too much getting compared with a nt6.x engine

i once told dibya that i have no interest in creating a vista version, still he pushed it. so dibya came up with making a redfox for "vista" - but it was over 150 internal functions to be changed for xp, versus like 10 changes from win7 to vista, from what i remember. dibya then made some code and the vista version was already working

vista has a problem vs 7: 7 is just an upgraded vista, so 7 is at an advantage - sure, you can make it backwards to vista

the method dibya used was to change the code itself, but not the redirections. in a project with 150-300 functions that's a lot. a better method would be to overrule some redirection, or use different linking in the c-runtime, or a different c-runtime - in llvm and maybe others that is easier than in vs2019. vs2019 has a hidden c-runtime .obj file - you can make some changes, but hmm, i don't know ...

it almost sounds like a chinese student made a better version, but no - you can't compare nt6.x with xp. in our comparison, a "vista vs xp" port would be 15 times as much work, which the chinese guy doesn't have. he might have a 7 version - that certainly can be changed to vista somehow, there are already codes that can do that

the sandbox is also such an example. it doesn't have a real use - the sandbox is rather a mitigation question, and mitigations are OS based, not function based - the code runs fine without it (and no bugs). reality is that the sandbox in win10 has a lot more than vista and 7 actually have, so having some flags popping up saying you have a sandbox won't do much either

when the chinese guy wants to challenge supermium he has to challenge it on nt5.1 (aka xp), not 7 vs xp

one more reason i don't want to make a vista/7 version is exactly this discussion - once done, they say they got a better solution, which is not true. another reason you don't make vista/7 stuff is that you lose focus on your normal common OS - don't get me wrong, i have nothing against people who like vista or 7, but they have it simpler than we have. if not, the chinese guy has to come up with an XP version and prove us wrong - i would not be unhappy to be proven wrong in this case

what i also have to say: when i added that redfox code (that also works for vista) i saw redfox works with vista - so the code went there. i'm glad that he has the code ... but then you get something like "the chinese guy made a better vista version" - sounds weird to me
-
this might be a good moment to mention the "engine problem" again. first, the one-core-api is giving nice support for some win6+ apis

since we talk about python stopping the xp support, we can point out the engine question again. an engine often uses functions of a certain OS, is written for that certain OS. older programming languages usually never had such a point, whether it was c, c++, assembly, basic, or delphi - because those are programming languages ... they don't need a certain "windows, linux, mac" function

today that is changing. the new c++ styles often get translated into different code (which then uses an OS function) -> and then you have it, your nt6+ is involved. such an example would be the c-runtime - even though you wrote normal c++ code, the c++ code now involves that c-runtime, and that c-runtime uses nt6+ functions

for c++, mutex would be such an example https://en.cppreference.com/w/cpp/thread/mutex - however there is not only an nt6+ interpretation of this (aka srw locks), you also could use a thread-based atomic style to solve this problem. there are some more: keyed_events, windows mutex functions such as CreateMutexA, CreateThread styles, or critical-section styles

when i saw a new project i saw the following problem:
it uses DX11
it uses python
it uses cmake
it needs VS2019 (aka new c++ styles + the c-runtime)
the project itself was already written with windows 10 functions

often you don't have insight into the things these use (i often call them engines). let's say python breaks - then you can't compile it, because python decided to no longer like xp. if cmake uses nt6 then you also can't compile. if visual studio wants a newer version you can't compile. if directx wants a newer version of directx you also can't compile. that makes it a lot to go through before you can even do anything

the new trends do exactly so - even ffmpeg is going into that direction (for example ffmpeg's cuda engine). in this discussion it seems to be bound to python

a possible solution would be a code translation from python to c++ (the normal-style ones). a good thing with c is that you can always have a c interpretation, in comparison to another language, without having a hard time with a lot of math like in assembly. assembly for example can represent any language - it's because all languages create assembly code in the end. c++ made a good compromise (but new c++ styles are going into a direction of being something like a java script)

i'm trying to point out that all of these try not to be just a programming language - they are going into a different direction, towards scripts and engines. so if python is not possible anymore, i would suggest a translation to c++
-
Intel 8th-9th Gen processors will reach ESU on June 30, 2025
user57 replied to halohalo's topic in Windows 11
"Windows 10 was offered for free from its release on July 29, 2015, until July 29, 2016, for users of eligible previous versions of Windows. However, Microsoft continued to allow free upgrades for several years after that, officially ending the offer on September 20, 2023. Q: Is the upgrade really free? Do I need to purchase Windows 10 after 1 year? A: With Windows 10, we will offer a free upgrade to Windows 10 for qualified Windows 7 and Windows 8.1 devices that upgrade in the first year."
-
looks simple to me - there was a reason win10 was free, and win11 also

the article rather writes about security reasons. the security, how they define it, is security from a person - the plan is probably to get rid of the person who owns the computer, so he can only do what they want. the TPM chip was also such a direction: you getting "trusted" means microsoft being secure from you. we knew there had to be something wrong with it when it was offered for free

after the establishment they might want money, but before that, only spreading is important