Everything posted by user57

  1. I remember that effect quite often: https://fixthephoto.com/images/content/banding-in-photography.jpg — it also shows up in raw formats, which raises the question of why it happens. Stream encoders have to run in real time, so they probably can't push the encoder/decoder to its maximum; if the bitrate then drops, bands are more likely to appear.

The next picture is misleading: https://fixthephoto.com/images/content/banding-in-photography-bit-mode.png — it says 16 million colors, but the color palette on the left actually contains only about 22 colors, not 16 million. Possible reasons: the scene simply doesn't contain enough color differences (effectively only a handful of colors); the picture or pixels had to be repeated; something earlier in the processing chain arranged the data imperfectly; the ISO was pushed above base (a pushed ISO means less information, including in color); or a lack of dynamic range or a strong lighting difference. It is hard to say what exactly kept the RGB values — or the YUV values — from differing by even one step of contrast. I think we need real pictures to see whether that is the case and, if so, how much the file size actually increases; if it's 20%, we could just use a higher bitrate and might get the same result. You might have some data for us?

On the other part: if driver version 442.74 can do this, it raises the question of whether older versions can set this mode too. We don't have insight into the NVIDIA driver source code, so if people here just try to find where the problem lies, it will be hard to figure out.

Regarding "1 CPU" — that's too simple. We don't have insight into the whole chain: the entire OS is running in the background, then somewhere there is the player, the LAV engine, the D3D video engine and more (maybe the driver? more engines?). It doesn't just play the H.265 codec. Someone would actually have to study the player and the engines — and might still fail because he can't see into the NVIDIA driver (since the LAV engine or the D3D video engine calls into it somewhere, somehow). That's why I pointed out in the past that it might be better to have our own engine. The full source code is available at x265.com, but starting from there it's real work to bring it into the MPC-HC player. I might point out that the XMM registers are a lot faster here, something like 5-150x (sometimes people say SSE; MMX, SSE and AVX share the same origin and the registers are still called XMM, with YMM/ZMM added in the wider versions). A pure player/decoder should not need much CPU power — the encoder needs a lot more. It would certainly be possible to bring the x265 decoder into MPC-HC; the question is rather who does the work.
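If someone wants to check a sample frame for this, here is a minimal C++ sketch (the pixel loading is left out, and the strip/threshold numbers are just my assumptions) that counts distinct colors in a strip of sky — a smooth 8-bit gradient should give far more than the ~22 colors seen in that palette:

    #include <cstdio>
    #include <cstdint>
    #include <set>
    #include <vector>

    int main() {
        // fill this with R,G,B byte triplets from a strip of the gradient/sky
        // (image decoding is out of scope for this sketch)
        std::vector<uint8_t> strip;
        std::set<uint32_t> colors;
        for (size_t i = 0; i + 2 < strip.size(); i += 3)
            colors.insert((uint32_t)strip[i] << 16 | strip[i + 1] << 8 | strip[i + 2]);
        // ~22 distinct values over a wide gradient = visible banding
        printf("distinct colors in strip: %zu\n", colors.size());
        return 0;
    }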
  2. So is there progress on this one now? What is the status?

The bit-depth question is a longer discussion. RGB with 8 bits per channel (RGB24) doesn't limit any pixels in practice: it already supports over 16 million possible values per pixel, and even a 4K frame is only around 8.3 million pixels (8.8 million for DCI 4K) — far fewer than that. So if you copy a picture in RGB you get a lossless picture; it just gets big on disk.

Many cameras don't actually resolve 4K either (yet); usually the output is upscaled, and those that can do native 4K are rare. I don't want to advertise them, but here are the only two I know of that resolve native 4K:
https://www.red.com/red-tech/red-tech-red-ranger-with-monstro-8k-vv
https://www.arri.com/en/camera-systems/cameras/alexa-lf
Note that the smaller versions of those don't resolve that resolution. Sometimes even these don't resolve 4K, for example when low-light performance comes into play or the lens isn't at its sweet spot. How much light the subject emits also matters (with a fast shutter speed the exposure suffers) — it's a very long discussion. Many lenses don't resolve those resolutions either, and for native performance you need a large sensor and/or very good light: https://www.dxomark.com/Lenses/

To me, going higher raises questions because it gives you less compression. JPEG is a YUV 4:2:0 compression; there are other modes (Main 10, Main 12) but they are not very common, and only 4:4:4 can resolve the maximum — for which usually 8-bit RGB (RGB24) is used anyway. Going to 10 bit raises exactly that question: the file size goes up. From 8 to 10 bit that's 25% in raw terms, so a 750 MB video would become roughly a 937 MB video. What I want, for example, is better quality at the same file size, not a bitrate increase that buys a few more pixels but a bigger file.

The hardware encoders used for streaming usually don't produce that much quality either; I think they have to use an older compression profile and a medium preset to keep the frames real-time. https://youtu.be/5rgteZRNb-A?t=72 — we can see that the software encoder performed a lot better (both P3 and P6; look at the white parts of the mountains). We might start this pixel discussion that way, but with the encoder set to peak quality — otherwise 10 bit might yield a few more pixels only because the encoder had to stay real-time. Hard to say whether 10 bit is justified.
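The raw-size arithmetic behind that 25% figure, as a quick check (a sketch assuming plain UHD 3840x2160 RGB frames):

    #include <cstdio>

    int main() {
        const double px   = 3840.0 * 2160.0;           // ~8.3 million pixels
        const double mb8  = px * 3 * 8.0  / 8 / 1e6;   // 3 channels, 8 bit each
        const double mb10 = px * 3 * 10.0 / 8 / 1e6;   // 3 channels, 10 bit each
        printf("8 bit: %.1f MB/frame, 10 bit: %.1f MB/frame (+%.0f%%)\n",
               mb8, mb10, (mb10 / mb8 - 1) * 100);     // -> +25%, so 750 MB -> ~937 MB
        return 0;
    }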
  3. I'm not certain whether he thinks assembly is DOS programming, so let's try to paint a clearer picture. Assembly is not a DOS language — and it's not a high-level language either. In a high-level language you write something like "for (i = 0; i < 10; i++)"; in assembly that looks different: you manage the variables yourself, and you write where the routine/loop starts and ends and how often it repeats (see the sketch below).

Assembly has a big advantage — it's a lot faster — but it is a lot more work to program. Today there are many "engines", and things like the C++ Standard Library (std::array, for example) are also very script-ish and engine-ish. Those engines always pull in a certain engine, API or body of code, and it all adds up, so the file gets very big and slow (now some people might understand why Win10 is not the fastest).

Assembly is still used today; for example, the XMM routines in the HEIC encoder are written in assembly: https://msfn.org/board/topic/185879-winxp-hevcheifheic-image-encoderdecoder/ Sure, you can use normal instructions, but just try encoding a big JPG file (20 MB+) without assembly — i.e. no hardware acceleration and no XMM registers — and the encode time is 5-50x as long.

Now to the good part of your question: "the book is old". The good news is that in assembly there isn't much new stuff, unlike today's engines, APIs and standard libraries; the few new instructions you can learn almost instantly if you know the old ones. There is not much difference between 32-bit and x64 either: the registers are named with an R (64-bit) instead of an E (32-bit) but do the same job. You can write assembly in the VS2019 or VS2022 compiler (or older ones); you just write __asm { /* assembly code here */ }.

Assembly = faster, smaller. Downsides: you need a lot more time to code and a lot more understanding (e.g. more math), and there is less code you can copy-paste — where you could otherwise just use an engine or something like it, you have to write the right assembly routine yourself.

One more thing to know: every current compiler or language ultimately has to produce machine code, because that's the only code the CPU can actually execute. So Java, C or Python code ends up as machine instructions too — which makes it hard to call assembly "a language for DOS". Back in the day there were other assembly instructions as well; by writing an emulator you can translate those old instructions to new compatible ones (that's what people sometimes still do). Normal C++ found a good compromise, in my opinion (the newer C++ styles rather go in a script direction, like Java, Python and others).
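To make the loop point concrete, a minimal sketch of the same sum written both ways for the MSVC compiler (note __asm is only accepted by the 32-bit compiler, which is why the x64 fallback is guarded):

    #include <stdio.h>

    int main() {
        int sum = 0;
    #ifdef _M_IX86
        __asm {
            xor eax, eax        ; sum = 0
            xor ecx, ecx        ; i = 0
        again:
            add eax, ecx        ; sum += i
            inc ecx             ; i++
            cmp ecx, 10
            jl  again           ; repeat while i < 10
            mov sum, eax
        }
    #else
        for (int i = 0; i < 10; i++)   // the high-level version of the same loop
            sum += i;
    #endif
        printf("sum = %d\n", sum);     // 45 either way
        return 0;
    }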
  4. I always find this discussion kind of interesting, but I can't really argue with the guys here because I simply don't know the details around it and lack the tools to look into this specific area. It keeps sounding like there has to be an extra driver that fits exactly this one purpose. Shouldn't a solution be possible when the other operating systems, like 7, 10 etc., manage it? What exactly is the problem?
  5. I probably didn't explain the segments well. I wrote it as if Microsoft shows an example of how to use a "4 GB extension" over segments, but I only wanted to show that Microsoft makes use of other segments — what Microsoft does not do is use the other segments for data extension, i.e. for breaking the 4 GB limit.

The 32-bit protected mode still has the segment registers: CS (code segment), DS (data segment), ES (extra segment), FS (the one Microsoft uses a little, in that earlier example), GS, and SS (stack segment). https://en.wikibooks.org/wiki/X86_Assembly/X86_Architecture

In 32-bit protected mode, CS pairs with the instruction pointer (IP). You might have seen it somewhere: in 32-bit it gets an E in front of its name and is called EIP (64-bit usually gets an R: AX = 16-bit, EAX = 32-bit, RAX = 64-bit). Instruction addresses are encoded as the combination CS:EIP (segment and offset). The data has a segment too (DS), but Microsoft made it point to the same memory as CS. So the idea would be to change either the selectors (Microsoft only uses something like two values, 0008 and 0032, out of 65536 possibilities) or to use a different segment such as ES or GS.

Common debuggers like OllyDbg or IDA don't show the segment selector in front of the EIP. I'm sorry that I could not find a better picture: https://stackoverflow.com/questions/43300040/understanding-8086-assembler-debugger — there you can see an example of CS:IP (16-bit); in 32-bit it is called CS:EIP, and what he marked red would be something like CS:0000010D.

Here is the alternative to PAE, called PSE; it shows how PSE works with 36 bits, and later with 40 bits: https://en.wikipedia.org/wiki/PSE-36
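For anyone who wants to see those selectors on their own machine, a small sketch (32-bit MSVC only, since __asm is x86-only; the exact values you get depend on the Windows version):

    #include <stdio.h>

    int main() {
        unsigned short cs_sel, ds_sel, fs_sel;
        __asm {
            mov ax, cs
            mov cs_sel, ax      ; code segment selector
            mov ax, ds
            mov ds_sel, ax      ; data segment selector
            mov ax, fs
            mov fs_sel, ax      ; the selector windows points at per-thread data
        }
        printf("CS=%04X DS=%04X FS=%04X\n", cs_sel, ds_sel, fs_sel);
        return 0;
    }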
  6. Maybe it doesn't always have to be the full 128 GB from the "Russian patch"; maybe 8, 16, 32 or 64 GB would also be something. I read up on the information here on MSFN, but they don't say what they actually used. If PAE is used, there are still open questions.

Let's say it's PAE. In PAE, page table entries (PTEs) and page directory entries (PDEs) are not 32 bits wide; they are 64 bits wide: https://en.wikipedia.org/wiki/Page_table#/media/File:X86_Paging_PAE_4K.svg However, this doesn't necessarily mean 32-bit PTEs and PDEs can't point above 4 GB of RAM. That's because of the pages: common 32-bit PTEs and PDEs are bound to 4 K pages (4096 bytes), and PSE can extend that to 4 MB pages — 1024 times as much per page as with 4 K pages.
(4 MB) https://en.wikipedia.org/wiki/Page_Size_Extension#/media/File:X86_Paging_4M.svg
(4 K) https://en.wikipedia.org/wiki/Page_Size_Extension#/media/File:X86_Paging_4K.svg

But here is a catch I once noticed: Windows reserves space for exactly a 4 GB address room, so the PTE and PDE tables are sized for exactly 4 GB of entries. If you extended those entries, a PTE would overwrite a PDE entry (the space for PDEs/PTEs is exactly calculated, and the next slot in memory after the PTEs is a PDE slot — so you overwrite the PDEs). And here is the other side of it: the PDBR (page directory base register), set per process, would not have such a PTE/PDE overwrite problem — it can simply use new PTEs and PDEs that point to different memory (above the 4 GB room). So if the Russian patch does that, it is theoretically 4 GB per process, which is "OK" in my opinion (maybe they also did something with the PTEs and PDEs?).

One more word about the /3GB option. Normally user mode spans 0x00000000-0x7FFFFFFF (you might have seen this somewhere); that's only a 2 GB room. However, the kernel space can also hold usable memory (for example an RGB buffer); kernel mode begins at 0x80000000 in that normal layout. Some things there are static, like ntoskrnl and some drivers, but the rest can be used (likewise kernel32.dll and some others are always loaded and take some of the normal user space between 0 and 0x7FFFFFFF). That's why the /3GB option increases user space to a 3 GB room, 0x00000000-0xBFFFFFFF; kernel mode then has 1 GB of room while user space has 3 GB.

So it raises the question of what the "Russian patch" used to reach this. It would be something new — it seems there is actually room for real XP inventions — but we know neither which method was used nor whether the entry problem was extended and fixed inside the Windows drivers. If it's not PAE, then with PSE we could have 4 MB entries instead of 4 K entries.

Less talked about is segmentation: a combination of a selector (the segment selector) and an offset. https://en.wikipedia.org/wiki/Memory_segmentation I do not know of normal programs that use a different selector just for extra memory. A common selector is FS:, often seen as FS:[00000018]. In 32-bit, segment selectors are 16 bits wide (65536 values), followed by the 32-bit offset (4 GB) = 65536 × 4 GB. Segmentation might also have potential; to give an example from Microsoft: https://devblogs.microsoft.com/oldnewthing/20220919-00/?p=107195

Swap files are good too: you use the hard drive — which today is an SSD — as memory storage. You simply put the data you want to process on the drive and, as needed, either work on it there or pull it back into memory. This lets you pipe a lot more than 4 GB of data (for example a Blu-ray movie above 25 GB — you don't need the entire 25 GB at once). In the past, if you didn't have the RAM (say 16 MB), you stored the data on the (slow) hard drive, then processed it either on disk or in parts loaded into memory; you usually don't need 4 GB at a single instant. The Windows pagefile does exactly that, by the way. Someone also called out the RAM disk — that is also very nice, and very fast.
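The FS:[00000018] access mentioned above can be tried directly — on 32-bit Windows that slot holds the linear address of the current thread's TEB, so it's a real example of addressing through a non-default segment (a sketch, 32-bit MSVC only):

    #include <windows.h>
    #include <stdio.h>

    int main() {
        void* teb;
        __asm {
            mov eax, fs:[18h]   ; TEB self-pointer, read through the FS segment
            mov teb, eax
        }
        // cross-check against the official accessor
        printf("TEB via FS:[18h] = %p, NtCurrentTeb() = %p\n", teb, NtCurrentTeb());
        return 0;
    }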
  7. https://msfn.org/board/topic/180669-two-questions-about-diyba-128gb-pae-patch/ — Dibya says it works; it might be a good thing to test.
  8. Hmmm, maybe we should ask Dibya about the 128 GB RAM patch for XP. There is such a thing — do I know whether it works? No, I do not know whether that 128 GB RAM patch works, but I know 32-bit can pass the 32-bit address room. From what I remember, he said he tested it and it works; when I first saw the 128 GB patch I had doubts. Dibya should still be around, and he may know more about that 128 GB patch for XP; others may also know some details — geoffchappell has mentioned that 128 GB of RAM somewhere too. If nobody knows, at least it seems we have people who want to test it.

A problem could be that it works per application/executable, but that's still OK: then every app gets 4 GB of RAM, and just two apps would already use 8 GB — which sounds fine to me. I say this because I remember a problem with the PDEs/PTEs: Windows calculated them for exactly 4 GB of RAM, so going past that might overwrite one of the entries (they follow each other directly in memory). I do not know whether Windows handles this differently when more than 4 GB is present; it is not a problem if the PDBR I mentioned is involved, because that leads exactly to 4 GB per app.
  9. There are basically four things that can pass the 4 GB limit:
1. PAE (Physical Address Extension)
2. PSE (Page Size Extension)
3. the PDBR (page directory base register)
4. segments

Also worth mentioning is setting the user space to 3 GB instead of 2 GB. With normal settings, Windows XP uses 0x00000000-0x7FFFFFFF (a 2 GB user space) and the rest for kernel mode. This can be changed with /3GB; the user space then goes up to 0xBFFFFFFF (3 GB). It's only 1 GB more, but it is useful to some extent.

To keep the story short: PSE extends the page size — a page is a unit of memory, normally 4 KB, and PSE increases that to 2 MB or 4 MB per page. PAE uses more address bits and can therefore address beyond the 32-bit limit. The PDBR points to a process's page directory; that is filled with PDEs (page directory entries) and PTEs (page table entries), and each process's entries can address a different spot in physical memory, so every app can have its own 4 GB address room. This doesn't extend one app beyond 4 GB of RAM, but it allows many apps, each with a 4 GB address room. (See the sketch after this post for the page math.)

Segments: a segment was a known thing in 8- and 16-bit modes. A segment points to a fraction of memory — for 16-bit that would be 64 KB per segment, and the next segment is the next 64 KB (so 0-65535 is segment 0 and 65536-131071 is segment 1, i.e. two segments).

Also worth mentioning, as you called out: the normal hard drive, which today is an SSD (flash memory). XP can use that as memory too (yes, it can use an HDD or SSD as RAM). Now that we have SSDs this isn't a bad thing — SSDs are fast. In the past you used a slow hard drive and the load times were terrible. This is called the PAGEFILE, so storing data to disk is also an option worth mentioning.

Also worth mentioning: 32-bit can calculate with at least 64 bits (the FPU, or the XMM registers). If you have AVX, 32-bit code can handle 256-bit registers (YMM) — and with AVX-512 even 512-bit (ZMM) — since these registers are built for speed. Even on x64, MMX/SSE/AVX are faster than normal opcodes, so anyone challenging SSE or AVX with plain 64-bit opcodes will still lose very hard against these hardware speed registers; they're not just a bit faster, often 50x or more. The short history: first came MMX, then it became SSE, then SSE versions 2-4, then AVX — basically the same lineage. I could write more detail, but it's complicated stuff and would fill too much text.

Two more things are important to remember: the hardware has to be capable of it (there may be only 32 address wires, or the CPU may lack the PAE or PSE mode — so CPU and bus/RAM both have to be capable), and the software (Windows, in our case) has to support it.
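A quick sketch of the page math from points 1-2 (pure numbers, nothing OS-specific): one 32-bit page directory covers 4 GB of address room either way; PSE just gets there with one table level less, and PSE-36/PAE widen the physical side:

    #include <cstdio>
    #include <cstdint>

    int main() {
        // classic 32-bit paging: 1024 PDEs x 1024 PTEs x 4 KB pages
        uint64_t with4k = 1024ULL * 1024ULL * 4096ULL;
        // PSE: 1024 PDEs, each mapping a 4 MB page directly (no PTE level)
        uint64_t with4m = 1024ULL * (4ULL << 20);
        printf("4 KB pages: %llu GB, 4 MB pages: %llu GB of address room\n",
               (unsigned long long)(with4k >> 30),
               (unsigned long long)(with4m >> 30));   // both print 4
        return 0;
    }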
  10. Maybe joaquim is a good candidate to reprogram VDMSound, and the right place to ask would be the DOSBox guys. It only has a few people working on it: https://github.com/volkertb/vdmsound
  11. It might be an application that starts this up. Try msconfig and look around.
  12. Well, from a programming perspective it is most likely the buffer. It can also be that the CPU can't read it out smoothly in time, so knowing whether this happens on a faster CPU/RAM would also be a useful data point. A good way forward would be to have: 1) the game (I suspect it's a game), 2) the source code of the emulator, 3) an open-source game that shows the same problem. Having those gives you insight into the input (the open-source game) and where it lands (the emulator). One way was already mentioned: just use a different emulator.

One programming approach would be, instead of just playing the sound from disk, to load the sounds into a memory location first — memory accesses are a lot faster (an SSD would also help). A too-small buffer can also be the cause, or a wrong ordering (a misinterpretation); and if the source is slow, the buffer fills with a delay. A very common Windows function for playing a wave sound is PlaySound (see the sketch below): https://learn.microsoft.com/en-us/previous-versions/dd743680(v=vs.85)

A codec such as MP3 uses compression tricks that range from lossless techniques to psychoacoustics ("which sound waves the ear hears better"). MP3 does not reconstruct the wave exactly as it was; for example, it emphasizes certain wave forms you might not hear so well — you may not notice, but a wave-comparison program shows the difference. Then there are the longer debates too, for example the 18-24 kHz discussion ("can you hear the 20 kHz tone or not"), or the sample rate: https://en.wikipedia.org/wiki/Sampling_(signal_processing) Often in a computer it's just a small piece of data like "make a 10 kHz wave", "make a 2 kHz wave" etc. — in the end a sound is nothing but a wave form.

I think it is certainly possible to find the issue, but it's work to do all the steps: you have to find the right code, the right OS, the right compiler, the right debug tools, the right places in the code, and you have to run tests.
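A minimal sketch of the PlaySound call linked above ("step.wav" is just a placeholder name); the SND_MEMORY variant is the load-into-RAM approach suggested here:

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    int main() {
        // simplest case: stream the file from disk, block until playback ends
        PlaySound(TEXT("step.wav"), NULL, SND_FILENAME | SND_SYNC);

        // lower-latency variant: keep the complete RIFF/WAVE image in memory
        // (loading the buffer is left out of this sketch)
        // PlaySound((LPCTSTR)wavBuffer, NULL, SND_MEMORY | SND_ASYNC);
        return 0;
    }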
  13. 32 MB/s is a very common speed for old flash memory — check what speed your USB flash stick has. Flash memory is in USB sticks, SD cards and SSDs; as an older example, some Super Nintendo cartridges used flash memory. https://www.kingston.com/en/blog/personal-storage/memory-card-speed-classes

Flash memory is a lot faster than reading from optical drives (CD-ROM through DVD/Blu-ray) — optical drives use light to read out the signal. The next storage type is the common HDD, which stores data magnetically, as VHS did (yes, both HDDs and VCRs have a "head" for reading the signal); those are cheaper to manufacture. Optical media lost the battle long ago, partly because the features are bigger: visible light is around 380-800 nanometers, while a modern CPU is made at around 5 nanometers — a lot smaller than the light.

A common myth is that light is inherently faster than "electrons"; that is not the case. What matters most is the frequency that can be used, not only the raw "drive speed". A fiber-glass wire is also never straight — the light bounces inside the wire, so it never goes perfectly straight, and you never reach the 300,000 km/s (travel speed, as opposed to tick/frequency speed). https://en.wikipedia.org/wiki/Basel_problem A further problem is that the photons then have different momentums, so some overlap: some that weren't supposed to reach the target yet arrive before others, and then you have to give the photons more time, which costs speed. Now the catchy part: a "photon" is just a piece of the electromagnetic spectrum (here 380-800 nm); there are smaller wavelengths (gamma rays, for example) that also travel at that 300,000 km/s speed.
  14. Do you have an example of where and how exactly it starts to lag? From what I know, the MS-DOS mode is not a real MS-DOS mode anymore; on XP it is an "emulated DOS", and some old MS-DOS stuff no longer works there. I criticized that trend back in Win98 already: in Win95, returning to DOS mode usually worked; in 98 it was already buggy and you had to boot into DOS mode first. Windows 98 and the last release of that line (Windows Me) still had 16-bit parts inside; those disappeared with the 32-bit-only NT line — NT 3.1, 4.0 and 5.0 (which was later called Windows 2000). (And in my opinion Windows Me was a bit buggier than Win98 SE.)

Is it an MS-DOS game? It's been a really long time since I programmed a DOS sound device (MIDI and wave, for example), so it's hard to say where the problem is if we don't know the CPU and the MS-DOS app. I do think XP still has open possibilities regarding backwards compatibility with MS-DOS; XP found a nice compromise between still supporting the past and, somewhere in the XP era, the future.
  15. SSE doesn't sound too bad to me, to be honest; at some point a CPU would simply be too slow to handle the job, and here it might be solvable. But is it the SSL code that was written in SSE? If so, it is certainly possible to rewrite it with normal opcodes: you'd have to understand the XMM logic and transfer it to normal opcodes (emulators of old retro games do this, for example). Since it would be a hash, nobody would notice any difference.
  16. "ImportError: DLL load failed while importing win32api: The specified module could not be found." that one sounds of interests because if its right what it says it has searched for a module (aka a dll or similiar) not a missing function (that may resides in that youtube downloader code) as i said i dont know python so i cant say exactly - but if its the other way around (for example it did not find the function) that description message is wrong but as i know python is a challenger to java - and java is rather a script/engine like language that means if a certain version build in some new functions or changed things up you bond to that engine - therefore it breaks up it got an advantage however since its a script "that should solve a piece of code" that code can get a different interpretation then that function can just be replaced with one function that works with the backwards compatibility i have a bit of a different meaning to vistalover you can write a code that works for all os (in this case just normal opcodes) then both works - and its "like" a end of story the way this gotten is a bit weird it sounds like 1 guy tryed to go against many people code (what he probaly has hard to win over) while if all just made normal opcodes that would not even have been a problem it makes it sound like its hard to make it backwards compatible - but rather its oposite in this case it might be right if you make the right "if´s" but if it gone the other way around (just making normal opcodes) neither future nor the backwardsguy "diskf" would have any problems and that way it would not be harder for anyone either in sence of a company backwards compatibility is a good thing - because it means more buyers, as it has a wide range of users that can buy that product
  17. I do not know Python, but if I interpret it right, it says it can't install that package called "pypiwin32", version 219? If so, it wasn't the compiler that reported a problem; the installer of that pypiwin32 failed — whatever name you give it: "package", "plugin", "addon", "extra piece of Python software", "pypiwin32 v219". https://pypi.org/project/pywin32/ Only in case I guessed right — if not, I'm happy to see the comments and be corrected.
  18. A fingerprint? It's just an infrared light (flash): that red light is used as a flash on your finger, then a sensor records an image, and that image is compared against all the "finger pictures" it has in the database — like being a criminal :-) A few readers also check the finger temperature with a second sensor (but why? Then you hold the guy's cut-off finger under your arm, and that finger has the right temperature). The iris scanner works similarly — the iris is even a bit more unique. I'm not certain what light is used, but it is also just a picture.
  19. I get it — it's the Python compiler that uses these SSE commands, and those VC14+ builds are just different spellings that the compiler has to translate. The compiler probably didn't bother using normal opcodes and just emitted SSE instructions; it sees what you wrote, translates it into opcodes, links them together, etc.

That C runtime has caused us problems before. In the past we could just edit the C++ files that generate the C runtime. The C runtime is an .obj file (the one containing the functions the compiler links against). Microsoft then hid that .obj file somewhere — we actually don't know where, maybe as a resource in the VS2019 executable or as a hidden .obj in its temp files — but no matter, we know it's an .obj file. People wondered why they see functions in VS2019 binaries they never used anywhere in their own code: that's the C runtime .obj. That's also why it executes before your executable's code runs — first the runtime, then your code.

I did not find a good method to give VS2019 a build chain where our functions take priority; somehow you can't override those functions, and if you try to trick the C runtime it throws an error. I still wonder why a modern VS2019 compiler doesn't offer such a thing — there seems to be no real override method in VS2019 (see the sketch below for the one workaround I'd try). Otherwise it comes down to editing the Python compiler and replacing the instructions — Python is open source — so you are right to offer two executables; the other way around is too much work.
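One escape hatch I'd try here — not an override, but sidestepping the hidden CRT .obj entirely. A sketch: myMain is just a name I picked, and the build line assumes a VS developer prompt:

    #include <windows.h>

    // build (32-bit example):
    //   cl tiny.cpp /link /NODEFAULTLIB /ENTRY:myMain kernel32.lib user32.lib
    // /NODEFAULTLIB drops the default CRT libs, /ENTRY replaces mainCRTStartup

    extern "C" void __stdcall myMain() {
        MessageBoxA(NULL, "running without the c-runtime", "tiny", MB_OK);
        ExitProcess(0);   // no CRT to return to, so exit explicitly
    }

Of course, without the CRT you lose printf, malloc and friends and have to stay on the Win32 API — which is exactly why it proves the runtime really is just a linked-in .obj.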
  20. The same downloader has two versions, for SSE2 and SSE? Why didn't it go the common way: check for MMX → use MMX, check for SSE → use SSE, check for SSE2 → use SSE2, and so on? You can definitely build it that way (see the sketch below). Compilers like VS2019 use SSE unless it's turned off, but if you have a routine that uses SSE you can skip it when needed, so those opcodes are never executed; if you don't have SSE you use normal opcodes or a different MMX-AVX routine, so there is always a working code path.

One way to emulate a 64-bit operation with normal opcodes is to do it in multiple steps; instead of a register you can use the stack or some memory space and treat that piece as 64 bits. I think Dietmar did such a thing when replacing the CMPXCHG8B instruction — an instruction that can swap 64 bits on 32-bit operating systems using two 32-bit registers. If it's the same downloader, that might solve this issue.
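A sketch of that check-then-dispatch idea using a Windows API that already exists on XP (the printfs stand in for the real MMX/SSE/SSE2 routines):

    #include <windows.h>
    #include <stdio.h>

    int main() {
        // pick the best routine at runtime instead of shipping separate builds
        if (IsProcessorFeaturePresent(PF_XMMI64_INSTRUCTIONS_AVAILABLE))
            printf("using the SSE2 routine\n");
        else if (IsProcessorFeaturePresent(PF_XMMI_INSTRUCTIONS_AVAILABLE))
            printf("using the SSE routine\n");
        else if (IsProcessorFeaturePresent(PF_MMX_INSTRUCTIONS_AVAILABLE))
            printf("using the MMX routine\n");
        else
            printf("using plain x86 opcodes\n");
        return 0;
    }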
  21. I think I have to understand the problem regarding OAuth. He wanted to get his emails with Outlook — which is normal and common — then Microsoft (hotmail.com) added that OAuth. Now it raises questions: (a) does it need a second email for OAuth, or a login and password for that OAuth server? (b) Or is it just a mechanism that connects to that OAuth server — where Outlook doesn't have the OAuth code, so it can't do that part, and the Microsoft email simply stops doing its job?

That thing they call a "token" seems to be some kind of hash; with this hash the client can then call the server of interest and gather the data of interest (see the sketch below). It rather sounds like a login to some kind of server that says "this IP sent me the right code — let this guy onto your server". That all sounds very old — like a TCP or SSL handshake, or a server that hands out something like "let this guy in". Then your email account, say a Hotmail account, lets you into Facebook, PayPal, YouTube — without you having entered the passwords for Facebook, YouTube or PayPal. It raises the question of where this OAuth code lives, but the part that asks the question to the OAuth server has to be on the user's computer — a module, an internal function, a hash maker in Firefox, some piece of code that gets executed.

That sounds insecure to me: someone with the right conditions can probably just enter your Facebook, your YouTube or, worse, your PayPal account. And that doesn't only apply to a hacker — it also means people of interest can do this with your account (the right people with that trusted status, and that will not only be the police; and if so, it raises the question of why the police could just enter and look around in your Facebook, YouTube or PayPal without anything going on). So it's a spy mechanism for the state — the more they know about the people, the better they can control them (because guess what, those people will have these mechanisms; one might claim "oh no, I would never do that" — but he will: he gets a letter from lawyers and at some point he collapses, or it's "we don't do it yet" → "oh look, in 2025 the terms of use changed, now I can give this to whoever I want"). So we've cleared up one question — why they do this. It doesn't answer any security questions; rather, it opens them. And we know how this ends: with a change to the so-called "terms of use", and then it's done — you can be spied on.
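For what that "token" part looks like on the wire (host, path and token value here are made up; the Authorization header format is the standard bearer scheme): once the OAuth exchange is done, the client just attaches the token to ordinary requests:

    GET /v1/me/messages HTTP/1.1
    Host: mail.example.com
    Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOi...

So the server of interest never sees a password, only that opaque string — which is exactly why a client like an old Outlook that can't obtain the string gets locked out.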
  22. To me, OAuth looks like just an extra mechanism, something like a handshake — which XP could certainly do if the mechanism were known, where and how. The OAuth picture only shows your computer asking their OAuth server whether everything is OK; that OAuth server then communicates with the target (YouTube, Microsoft and others), and once the OAuth exchange with your computer and with the others (YT, MS etc.) is OK, it grants access to the wanted resources like pictures and video. That doesn't sound so special to me — SSL or the TCP handshake does a similar thing. It sounds like they just added another layer doing the same job, the only difference being that such a mechanism now runs twice.
  23. I have made a nice visual for PSE (Page Size Extension). The memory limits in the right part of the picture are from an older list of memory limits; that list may or may not also include the other method (PAE, Physical Address Extension) or both PSE and PAE combined. The calculators show the related bits in binary (1010) format and in decimal, for both 36 bits and 40 bits (that started around 1999, maybe with the AMD Athlon — and now we have 2024). As you can guess, PSE is one of the methods to reach more than 4 GB of RAM; the other is PAE, and a third way would be a second/third/fourth application (each of which can address or point to different physical memory). I find it interesting that Intel lagged behind at 36 bits (64 GB) while AMD already had 40 bits (1024 GB). We can also see that Windows 2000 could have more than 4 GB of RAM — in that list it has 32 GB (which also fits the release dates of the CPUs).
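The same numbers the calculators in the visual show, as a quick arithmetic check (nothing platform-specific):

    #include <cstdio>
    #include <cstdint>

    int main() {
        int widths[] = { 32, 36, 40 };
        for (int bits : widths)   // 32 -> 4 GB, 36 -> 64 GB (PSE-36), 40 -> 1024 GB
            printf("%d address bits -> %llu GB\n", bits,
                   (unsigned long long)((1ULL << bits) >> 30));
        return 0;
    }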
  24. That might be interesting: most CPUs had 64-bit support while everybody kept using 32-bit. The speed, maybe? Hmm, no — mainly for two reasons. First, normal opcodes are not that fast: you lose against 32-bit code even on x64 if the 32-bit code uses MMX-AVX (we talked about fast and compatible CPUs recently), and those MMX/AVX registers already give you more than 32 bits even in 32-bit mode.

Then the next question kicks in: high-level languages are made to keep things simple, but they are not fast. C++ found a good compromise, but it still loses to an assembly implementation. So if somebody says "I want 64 bits because that is faster", I must say no — changing your programming language would speed up your code far more (and also significantly lower the file size). There is a big downside to assembly (and maybe C++): you need to know a lot more math and logic, and you have to write the entire code yourself (no "for (x = 1; x < 3; x++)" for free) — that makes it a lot more work, definitely a downside. Another problem is engines: engines are usually simpler to program with, but they are not very fast (so they're out if speed is the question). I don't want to go too far off topic, but we had such a discussion recently (the LAV engine) — and it's already two engines (the LAV engine + the D3D9 engine), and those are only the ones we know of for certain; maybe there are more. So now everything goes through two engines before it even reaches anything, while we figured out that we don't even need an engine for the job.

Another discussion is the memory limit, and here it can be a little harder. I wrote about this already, so I'll try to keep it short this time. In the past, segments were the word (in 8- and 16-bit land: 64 KB; or the 20-bit space: 1 MB). The idea was a segment pointing into 16-bit memory (64 KB); a 16-bit segment shifted left by 4 bits, plus a 16-bit offset, gives a 20-bit address space — about 1 MB (I think some people have heard of that 1 MB thing somewhere). Here's one from Wikipedia for reference: https://en.wikipedia.org/wiki/X86_memory_segmentation A segment is like an arrow: segment 0 → 0-64 KB, segment 1 → 64-128 KB. If one arrow points at a house and that arrow can be re-aimed at a different house, you understand segmenting (see the sketch below). To call out this part: 32-bit has segment registers too, but that would be a longer story in detail (more about 4 KB or 4 MB pages, long mode, etc.).

Most applications don't need 4 GB either. For Sumatra PDF this is the case — you can set the compiler to x64 (Chrome, for example, starts up additional processes, always called chrome.exe, each with a fresh address room), but beyond the file being bigger and no longer starting on 32-bit systems, nothing is different for Sumatra PDF. And guess what — you can run that 32-bit Sumatra PDF on x64 too. To me that raises the question of why I would even compile an x64 version.
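The arrow analogy in numbers — a sketch of the classic real-mode translation (note that on the real hardware consecutive selector values overlap every 16 bytes; the non-overlapping 64 KB steps above are the simplified view):

    #include <cstdio>
    #include <cstdint>

    // real mode: linear address = segment * 16 + offset (20-bit result, ~1 MB)
    uint32_t linear(uint16_t seg, uint16_t off) {
        return ((uint32_t)seg << 4) + off;
    }

    int main() {
        printf("F000:FFF0 -> %05X\n", linear(0xF000, 0xFFF0));  // FFFF0, the reset vector
        printf("FFFF:FFFF -> %06X\n", linear(0xFFFF, 0xFFFF));  // 10FFEF, just past 1 MB
        return 0;
    }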
  25. Can I ask why having x64 is important?