
user57
Member - 271 posts - Germany
Everything posted by user57

  1. Well, there are some websites that offer a decoder for that HEIF format. Is https://strukturag.github.io/libheif/ working for you? Encoders are rare for now; in particular, an encoder for WinXP doesn't exist, let alone one with hardware acceleration that uses the best settings and the best code and doesn't cut corners when creating a .heic file. As everybody can see, normal JPG, JPEG 2000 and JPEG XR are beaten; even the HEIF file from the wiki is beaten.
  2. A dark background could actually be for energy saving, which again raises the question of whether Win10/11 is smartphone software. Screensavers do use dark parts, because dark parts don't consume power; they simply don't emit light. (An exception would be a display that has to actively block the emitted light, but that is normally not done.) Lots of dark parts = less emitted light; lots of bright parts = more emitted light. A graphical interface that is dark could actually be remade for XP or even back to Win95. Here is an example of how a different GUI style looked in Windows 95.
  3. Well, it inherits from HEVC (H.265). The idea seems to be that .HEIF (High Efficiency Image File Format) is a container that can store multiple formats (such as JPEG 2000). But JPEG 2000 is not a new file format, so the files that are actually encoded with the new encoder are called .heic. So .heic is what we want (High Efficiency Image Coding).
  4. As promised, .heic is brought to Windows XP, closing the gap so XP has a very new image encoder and decoder (using no other modules, engines or weird operating-system dependencies). The resulting image is even better than the one on Wikipedia: https://en.wikipedia.org/wiki/High_Efficiency_Image_File_Format The reason behind this: I only chose the best options, better internal code, better decisions, and disregarded code that decreases image quality.

     ----- APP "WinXP HEVC/HEIF/H265 Image En/Decoder explained"

     The Encoder:

     Encode By Filename: lets you select a file. This HEIC encoder supports .jpg, .png, .tif and .y4m (raw YUV format); the encoder makes a .heic file from the chosen file.

     Encode By Folder: this reads "Encode By Folder Searched Format Ending". If you have chosen jpg, it will search all .jpg files in the chosen folder. Valid choices: png, jpg, tif or y4m. If the entry was jpg, the encoder will encode all .jpg files in that folder to .heic. The by-folder mode was made so you can encode many files instead of always one file, while you go drink some coffee.

     Encoder Complexity: "placebo" is the best setting here. I actually don't see any reason to choose a different setting, since that only results in less image quality (best to worst: 1: placebo, 2: veryslow, 3: slower, 4: slow, 5: medium, 6: fast, 7: faster, 8: veryfast, 9: superfast, 10: ultrafast). Why would we set a setting that decreases image quality? (Basically I didn't even want to have this box.) One word about this: placebo uses the best compression tricks, therefore it takes longer (more code = more time). The others try to speed this up (they leave out some functions and tricks, or try to end the encoder before it is really done). It might not always improve the image, but you can be certain you got the best option. It can also happen that your image doesn't challenge the encoder to its maximum; then a "faster" setting doesn't make much difference, but it can still result in lower quality. With placebo you are certain to get the maximum. It really raises questions to use the others: you can make a big JPG file and you may not see the difference that much, but why? Are we making a JPG, or are we making a high-efficiency image encoder?

     Hardware Acceleration: makes use of hardware registers such as MMX, SSE and AVX. This speeds up encoding a lot. Since the encoder is very complex, image encoding can take time; hardware acceleration makes encoding much faster. Notice: it depends on your CPU power, and since the encoder is complex it can take some time (if so, keep an eye on "Encoded Image Files"). (MMX, SSE and AVX are SIMD hardware registers between 64 and 512 bits wide, depending on which is available; yes, in 32 bit.)

     Quality: controls the file size of your .heic file; the lower this number, the smaller your .heic file.

     lossless: this option is not very useful, as the real question is how well the pixels were preserved when making a compression (we are not making a raw format, we are making a compression), so better set this option to 0 (it doesn't make a real compression).

     Tuner: this increases the image quality even further. Good settings are psnr and ssim; the other settings only decrease image quality. The tuner improves fidelity; as said before, it is an extra function. More code = more time. This makes a good example: if you leave out many good possible tricks, you might end up with a less fancy picture.

     ----- The Decoder:

     You have to choose an output image format for your .heic file (we have png, tif, jpg and y4m).

     png compression level (0-9): level -1 actually represents level 6. I don't see a good reason to have -1 as an option, since -1 just represents compression 6 (you can try this out by looking at the file size of the resulting .png file; try -1 and 6, they are the same). -1 is actually Z_DEFAULT_COMPRESSION, which is defined as level 6. 0 means no compression (this is good for comparing how well your .heic file was preserved, since it stores the decoded pixels as-is). Increasing values give higher compression, i.e. smaller file sizes at the cost of encoding time. Note that PNG is lossless at every level; the level only trades speed for file size, it does not drop pixels. Level 0 just stores the decoded .heic output uncompressed, 1:1.

     jpg compression (1-100): nothing much to say here; the higher this value, the better the resulting JPG image. Notice that higher values also cause bigger file sizes; 90 seems to be a good choice.

     Decode By Filename: this button actually reads "Decode In Format". Why? Because if you select a .heic file, the decoder has to know the output format. Valid formats are: png, jpg, y4m, tif.

     Decode By Folder: reads "Decode In Format", then the selected folder is searched for .heic files, and the decoder decodes all .heic files into the image format you set in "Decode In Format".

     Multi-pass: this makes a second pass and compares the result with the first one; according to available information this also improves image quality a bit (the H.266 material cites, for example, 1-3 % on average, sometimes more).

     Create A Subfolder: this lets you set a folder where the WinXP HEIC en/decoder puts its files. It tries to create that folder, but you can also create it yourself. This also avoids the naming problem when converting with "By Folder".

     ------ Rumors say .heic is the best image encoder at the moment. As we know, .heic beats JPG, JPEG 2000 and JXR (JPEG XR), as we can see on the Wikipedia page for HEIC. Maybe .heic also beats other JPEG formats like JXS (which is speed-oriented rather than quality-oriented). There are several of them (JPEG XT, JPEG XS, JPEG LS, JPEG XE, JPEG XL) (https://jpeg.org). If someone wants to make the comparison, H.266 by Fraunhofer or JXL would be candidates to try, or even the others... I have actually never seen a Fraunhofer H.266 image yet.

     (updated the links): https://www.file-upload.net/download-15405155/WinXP_HEIC.zip.html https://www.mediafire.com/file/g9t94vi3dr4gycl/WinXP_HEIC.zip/file
  5. When VC++ 2022 and the Windows 11 SDK are reached, and even Win10 stuff isn't working anymore, that means Windows XP has caught up to the Win11 era in terms of compiler. However, I have doubts whether it was worth spending the time on this project; according to dibya, no one actually used it. It took a lot of time to fix all of the problems, and I also lost a lot of time on the HEVC encoder. By now everybody wants something from me, but there is not much room for different code on such very big projects; I really hope it was worth it. dibya had a different idea of how this works, something "super simple", "something super fast". I always told him no, it is not that simple to add all these codes; it is 173 projects and such, and we need more compiles etc. Still, I have fully explained how to add the code now; I think he should be able to add it by himself. At least he brought the required patience once he realized what has to be done.
  6. francebb is back? I thought last time he left us and said he wouldn't look back anymore. Well, if there is a limit set, it should be changeable. By the way, we recently talked about francebb and that this might solve his problem; he wasn't there to see what's going on: https://github.com/reactos/reactos/commit/66dead68ec780a4a40c5b7d31f57e3646979a402 It's from ReactOS; just look at this line: /* The forced speed, 10Mb, 100Mb, gigabit, 2.5Gb, 10GbE. */
  7. I cannot join the project fully yet; I still have a long way to go. In the past I set the .heic picture encoder as the main next target. According to available information it is probably the best image encoder available at the moment; the only other candidates are Fraunhofer's H.266 and JXL. The other JPEG-like formats (JPEG XT, JPEG XS) don't have maximum image quality as their target goal. But believe me, the settings and how I made the code should be very competitive even with those. (edit: deleting old links) This is a pre-release version, but it can convert a JPG or PNG to .heic (don't use other formats for now). The best input is a lossless .png file (made from a raw file or lossless bitmap). PNG compresses an image losslessly (without losing any pixels) at every compression level; compression_level = -1 means the default level (Z_DEFAULT_COMPRESSION), which is the same as level 6. The levels go from 0-9 (0 means stored without compression); increasing the value gives higher compression, i.e. smaller files for more encoding time. -1 doesn't really exist as its own level, it is just equal to 6; -1 does not mean a better compression than 0. https://refspecs.linuxbase.org/LSB_3.1.0/LSB-Desktop-generic/LSB-Desktop-generic/libpng12.png.set.compression.level.1.html A picture of setting PNG compression levels in other programs (here 9): https://i.stack.imgur.com/NLfvP.png 0 being no compression and 9 being the highest PNG compression, i.e. the smallest file size.

     The HEIC encoder settings: placebo = for best image quality (faster settings mean less image quality); tuner = something to tune quality even more, PSNR and SSIM are good; quality = controls the file size of your .heic file; lossless = lossless but not very useful, don't use this one (we actually want a good compression of a raw file, not a raw file itself, and it's about how many pixels remain after compression); hardware acceleration = makes use of hardware registers such as MMX, SSE and AVX. (Choose by filename) select a .jpg or .png file and the encoder encodes that single file to .heic. (Choose by folder) checks the ending in (format ending) in the selected folder and compresses all files with that ending. For .jpg it is a good idea to use a big, high-quality JPG file; a bigger JPG file means fewer artifacts in the source, and we like that.
  8. Oh yes, the idea of Intel removing the SSE instructions is such a thing. Intel probably knows that SSE is a good competitor that can actually handle compression fast. By getting rid of it and pushing their AVX-512 they can force you onto a new computer/CPU. And those Intel 12th-gen+ chips are spyware, and they benefit systems or mechanisms that support such ideas; they just make certain the computer is safe against you, for Microsoft. That's the idea: Intel and Microsoft seem to be brothers in arms on this question. If SSE stayed, there would be a valid, good solution for intense compression; it is also backwards compatible, which AVX-512 is not. Microsoft creating a new monopoly? Microsoft is well known for lawsuits in that direction. Only one of many, many cases: https://www.jurist.org/news/2022/12/microsoft-faces-private-antitrust-lawsuit-over-68-7b-purchase-of-activision-blizzard/ There had to be something wrong with that "deal".
  9. Exactly, and people always said: that will not happen, Intel will never support such a mechanism, they don't just change their policies / terms of use for the worse. Even more is possible now: Intel even removes SSE commands, and they removed the 16- and 32-bit modes too. What is wrong with using old hardware with a new CPU? Or also "security is always good"? Yes, against yourself: making the software secure against you, for Microsoft. Security against you, security for Microsoft. And that's also why they stopped the support for this version; there was nothing wrong with it.
  10. Well, here is v16.0.5: https://www.file-upload.net/download-15158304/LLVM_XP_v16.zip.html It also contains some screenshots.
  11. There is often a misconception that 32-bit means a 4 GB RAM limit. That's not directly right. First, there are the segment registers, which can select one of 65536 (FFFF + 1) segments, each with a full 32-bit offset (FFFFFFFF = 4 GB); in theory that is 65536 * 4 GB = 256 TB of address room (https://wiki.osdev.org/Segmentation; gs would be a good candidate). But Windows isn't doing that, at least not Windows XP. I told them to make a VirtualAlloc2 function that also returns the segment register; that could then pass the 4 GB limit. A second way: when a process switch happens, the PDBR (page directory base register) changes, which means each process can address its own 4 GB. That's because memory is addressed through page-table entries; if the entries point to different RAM, the process could use that other RAM. Another problem is the user-mode limit of 7FFFFFFF (2 GB); the rest is kernel mode. Both together make 4 GB (2 * 2 GB): 80000000-FFFFFFFF is used by the kernel (2 GB). There was a time when user mode was extended up to BFFFFFFF (the /3GB boot option), because kernel mode does not need that much. The next problem is that Windows usually maps those entries for a maximum of 4 GB; going beyond that without changing the scheme overwrites the PDEs, which causes an OS crash.
  12. I don't have knowledge about this, but for the structure access si->si_band it is saying there is no such "si_band" member. Was it renamed or deleted?
  13. Greetings all. We found the information related to that problem; even though we have no inside information, the information I posted is incredibly accurate. So here is what Microsoft did; it is probably useful information to know: https://github.com/microsoft/STL/pull/1194/commits/faa3434d7e748fcfdc253ad2788a0e4fddfea105 It explains the change from __crtInitializeCriticalSectionEx(&_M_critical_section, 4000, 0); to InitializeCriticalSectionEx(&_M_critical_section, 4000, 0); It also explains why Dependency Walker, for versions up to 16.7, shows an attempt to search for these functions (meaning the functions were found by Dependency Walker but were not in the import list).
  14. Well, I went after that annotations message. Here is the next test: https://www.file-upload.net/download-14953656/SumatraPDF3.4.6.zip.html (edit: now has its own JXR decoder). The dialog should appear now and ask "save to existing pdf, etc." With the JXL file I think you have a version problem; JXL should open on a new version of windowscodecs.dll. A programmer has the choice to let an engine do the encode/decode, or to do it manually; this time Windows was chosen to do it, and if it can't find the decode routine, it won't work. Here is a list of DLLs that seem to be involved (from a programming standpoint it is just windowscodecs.dll, the one we give the JXL file):

      WINDOWSCODECS.DLL load list: WINTRUST.dll IMAGEHLP.DLL
      WINTRUST.dll loads and connections: RSAENH.DLL CRYPT32.DLL ADVAPI32.DLL xpsp2res.dll
      crypt32.dll loads and connections: userenv.dll VERSION.dll CRYPTNET.DLL
      userenv.dll loads and connections: SECUR32.DLL netapi32.dll
      CRYPTNET.DLL loads and connections: PSAPI.DLL SENSAPI.DLL WINHTTP.DLL WLDAP32.DLL
      OLE32.DLL loads and connections: wmphoto.dll sendmail.dll

      Maybe not all of them are required.
  15. Well, I have made a test version, but notice this one is only experimental. del
  16. I read it today; the first time I looked it still had the 3.3.3 version. I'm also working on another project, but what's new with it? It might be of use.
  17. There is already a problem: one function isn't found by Visual Studio, RtlRaiseStatus. So I looked at ntdll.lib, and the .lib file does contain the function; RtlRaiseStatus is present in ntdll.dll on Windows XP, and ntdll.lib is in the linker list. So I added the declaration, at first in my cpp file, then in winnt.h:

      NTSYSAPI VOID NTAPI RtlRaiseStatus(DWORD Status);

      but the Visual Studio compiler keeps telling me it can't find it. I also tried NTSTATUS instead of DWORD, but that doesn't change anything. What is causing that problem? I know I could go for LoadLibrary and GetProcAddress, or maybe use the definition from ReactOS. Well, I chose a solution by just doing the same thing Windows is doing:

      VOID NTAPI RtlRaiseStatus(IN NTSTATUS Status)
      {
          EXCEPTION_RECORD ExceptionRecord;

          ExceptionRecord.ExceptionAddress = _ReturnAddress();
          ExceptionRecord.ExceptionCode = Status;
          ExceptionRecord.ExceptionRecord = NULL;
          ExceptionRecord.NumberParameters = 0;
          ExceptionRecord.ExceptionFlags = EXCEPTION_NONCONTINUABLE;
          RtlRaiseException(&ExceptionRecord);
      }
  18. Microsoft released the source code of Windows XP/2003. Having the source code gives a big advantage in speed: analysis speed, quicker understanding, code can be copy-pasted, assembly analysis is much reduced. The source code release from Microsoft was for students, but then it quickly went viral in public. I do not really understand why the Vista code doesn't get released; students have to work with the old XP code. Vista would be old too, but at least some of its code could be made use of, so why not? It would be possible for Microsoft to do that. The Windows 2000 vs. Windows XP situation rather resembles the fight between Vista and Windows 7, with one difference: Win2000 didn't have such directly bad things, so you could use it the way some people used Vista instead of 7. That could be done, but I would rather use Windows 7 over Vista. Many used Windows 98 SE, because when Windows NT appeared a lot of compatibility was lost and a lot of older apps didn't work anymore. In other directions, to have some fun with modding or creating code, you could choose any of the operating systems.
  19. I don't see a reason not to add functions that can be added to XP. It's a longer story: there was DOS -> Win3.11 -> Win95 -> 98 -> WinME; those were also DOS-based and in part 16-bit. That was supposed to end with Windows NT. Then the NT versions appeared. NT 4.0 wasn't good, so most people accepted NT 5.0. NT 5.0 was then updated and called Windows 2000. Then there was XP, aka 5.1, and then Server 2003, which is essentially Windows XP but was called NT 5.2. But then XP received a lot of upgrades; it surpassed Server 2003 and even got the updates from Windows POSReady, which was also based on XP and updated up to 2019. So XP is most likely the successor of all the NT 5.x versions. Then Vista appeared, v6.0, but it had many bugs and a lot of things didn't even work, so people actually avoided it; XP was still the better choice. Then Microsoft made a good decision: they brought back compatibility and tried to fix the errors they had made with Vista. Thus Windows 7 appeared; by version it was NT 6.1. Windows 8 was supposed to be NT 6.2, but it didn't bring many new things. DirectX 12? That was moved to Windows 7 too (World of Warcraft, for example, uses DirectX 12 on Windows 7). The problem with 8, in my opinion, was that it brought nothing new, rather incompatibilities, which are not wanted. It was a bad seller, and it began the spyware component, another reason to avoid it. Windows 10, which is at best NT 6.3, was called 10, or internally NT 10.0, for no reason. Windows 10 was also free to have: just upgrade and use it. A lot of people then said there must be something wrong with this deal, and yes, there is: it tries to get rid of the user as owner of the computer and software, it has spyware components, it can stream updates if it wants to, it takes more CPU power at will, it can patch whatever it wants against the user's will. Windows 11 does the same thing, but now the TPM chip also takes its place. The user didn't have many options, because time had passed and getting new software came with getting new hardware; a lot of users still use Windows 7 for that reason. But back to XP: why would you go to Windows 2000 when you get everything you actually need in Windows XP (as successor of all the NT ~5 versions)? It is very compatible backwards and even upwards. Another problem with Win2000 vs. XP is that XP received a lot of upgrades that Win2000 doesn't directly have, some even from Vista. This leaves open questions; you don't have a real advantage going back to Win2000 from XP. You might have an advantage if you go back from XP to Windows 98.
  20. According to Microsoft, Windows 7 at least supports DirectX 12 in part: https://devblogs.microsoft.com/directx/porting-directx-12-games-to-windows-7/
  21. OK, but since we seem to have found the problem, why is this not progressing? The idea of making a jpegxl.dll and just replacing jpegxl.dll probably does not work, for the reason that IrfanView's author wrote plugin functions into jpegxl.dll (instead of using jpegxl.dll directly). His added functions in jpegxl.dll are: "ReadAnimatedJXL_W ReadJXL_W SaveJXL_W ScanJXL_W ShowPlugInOptions_W ShowPlugInSaveOptions_W GetPlugInInfo". I wrote a post about that problem; the same goes for the WEBP.dll format (which is probably linked with 16.8 or newer). VS 16.8+ creates a reference to InitializeCriticalSectionEx; 16.7 has a C runtime that works on XP: it tries to use InitializeCriticalSectionEx, but if it can't, it uses other working code. You either need to fix the CRT as described in the other post, or you go back to Visual Studio 2019 version 16.7 (maximum). The Microsoft website also says something about Visual C++ Redistributable version 14.27.29114.0 (maximum). Microsoft source: https://docs.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-170
  22. Use a free VPN? Login: de4.vpnbook.com, password: ct36a3k. Does that CUDA GPU engine do something software can't do? Sounds weird to me. You can emulate everything a CPU/GPU/semiconductor can do. Of course there will be a lot of performance lost, but with bigger and faster CPUs even software-based rendering could reach and pass what is often 10-times-faster hardware acceleration. With SSE commands it probably gets a lot closer to hardware acceleration, and having more CPU cores probably also helps with this question.
  23. Sounds like a hard drive problem; they often have this kind of look (even if scandisk/chkdsk finds nothing). Do you maybe have a different HDD you can try? They are kinda cheap today, maybe an SSD. I would try that first: take your old HDD out and install Windows (if you no longer want to do this, just put your old HDD back into the computer). To debug other applications you need a debugger; to debug the operating system you need an OS debugger. WinDbg can't debug the entire OS so directly.
  24. That list says MSVCP140.DLL is the reason? That's part of the Microsoft Visual C++ Redistributable (vc_redist.x86.exe). What version of MSVCP140.DLL is used? Microsoft said the latest supported version is 14.27. Anyway, here is the entire project; here the IrfanView author can add his export functions. He maybe programmed that DLL to use his own functions instead of the exports of jxl.dll; that can be done. The more common or flexible way would be to use the export functions that jxl.dll (renamed Jpeg_XL.dll) provides; those exported functions handle the encode/decode/compression. The readme.txt has information about how to do the next part. But I do not think msvcp140.dll is the reason. https://www.file-upload.net/download-14783581/files.zip.html

      ------------------------------------------

      Seems I got confused a bit. It looked to me like his jpeg_xl.dll loaded those unwanted functions from kernel32.dll, but it doesn't; I must have eaten something wrong. That InitializeCriticalSectionEx is a result of the dynamic CRT (both for the one I compiled, where I know it works on XP, and for the jpeg_xl.dll from the IrfanView author; the same goes for those "api-ms-win-core" DLLs). The CRT the IrfanView author uses works on XP. The reason this is not in the import list is that it is loaded at runtime, by that CRT (I wrote something about that on the MSFN forum). It doesn't use InitializeCriticalSectionEx on XP: it tries to load it, but then chooses a working alternative. Now we are left with a new question: why doesn't it work on XP? Maybe that msjava.dll or that MPR.dll? The jpeg_xl.dll I compiled does not use those, but it is also missing those export functions. RamonUn is right, those are not part of the compiled jxl.dll that then gets renamed to jpeg_xl.dll: ReadAnimatedJXL_W ReadJXL_W SaveJXL_W ScanJXL_W ShowPlugInOptions_W ShowPlugInSaveOptions_W GetPlugInInfo