Everything posted by user57

  1. It's one thing to print an RGB buffer to the screen - that is what DirectX video does. What I wanted to point out is, first, that CUDA is doing the de/encoding (so basically 99.99 % of the work happens in CUDA, not in DirectX). Second, some parameters differ between DirectX versions: the example code already needed d3d9ex, not plain d3d9, so it could also be that some parameters are simply not supported in DX9 - hard to say exactly. That is why you can hand that RGB buffer to OpenGL instead of DirectX, which strongly speaks for Windows XP's DirectX 9. CUDA does the important part if you want a graphics card involved; the RGB buffer we could also draw with the GDI engine - and that's why I said DirectX might not be the right question.

     Another point is that CUDA's job can be done in software with MMX-AVX (the so-called XMM registers); those are very fast and by far enough for either encoding or decoding - CUDA is rather a hardware implementation of software. That's it. If it has to be a card, why doesn't someone invent a PCI-E card that does the en/decoding? We don't necessarily need a graphics card for this - maybe a new hardware invention?

     The question about high-level functions is a different one. Take Paint: Paint isn't doing the drawing - it uses GDI, GDI uses NTGDI, and NTGDI uses internal functions that in the end go into the graphics card driver. Using yet another engine on top would be like a script that controls Paint - you see that quite often. That "pre-CUDA" part is a DLL; it probably holds the pre-code for the real control of the CUDA unit in the graphics card, while CUDA itself is coded with the CUDA SDK - which is also somewhat "scriptish" code to control the CUDA engine (and is practically bound to DX10/11 and Windows 10/11). The Python version also only controls CUDA. So it's hard to say what to call this: a half-script language to control CUDA? Engine code to control CUDA? A simplified way of programming CUDA? I don't know what I would name it; maybe the others have some corrections for me and I'm happy to hear them. Why do we have to be so indirect? Why can't we just use CUDA directly?
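     Since the post mentions that the RGB buffer could just as well be drawn with GDI, here is a minimal sketch of that idea, assuming the decoded frame already sits in a plain top-down 32-bit BGRA buffer (hwnd, frame, width and height are placeholder names, not taken from any real player):

     #include <windows.h>

     void BlitFrame(HWND hwnd, const void *frame, int width, int height)
     {
         BITMAPINFO bmi = {0};
         bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
         bmi.bmiHeader.biWidth       = width;
         bmi.bmiHeader.biHeight      = -height;      // negative height = top-down rows
         bmi.bmiHeader.biPlanes      = 1;
         bmi.bmiHeader.biBitCount    = 32;           // 4 bytes per pixel (BGRA)
         bmi.bmiHeader.biCompression = BI_RGB;

         HDC hdc = GetDC(hwnd);
         // copy the raw pixel buffer straight onto the window - no D3D or CUDA involved
         StretchDIBits(hdc, 0, 0, width, height,     // destination rectangle
                       0, 0, width, height,          // source rectangle
                       frame, &bmi, DIB_RGB_COLORS, SRCCOPY);
         ReleaseDC(hwnd, hdc);
     }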
  2. These timers are almost the same, just with different names.
  3. Well, maybe there should be an unofficial SP4 (including all of these updates, also the POSReady ones) for all languages. In the past there was an ugly solution: someone collected all these updates and then ran them via the CreateProcess() function, which took about 4 hours. A better way would be to set the files and registry entries directly via the registry and file functions. That 4-hour method just runs each update.exe (one per KB) over and over until it has installed the 190 or so updates.
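     For reference, the "run every installer in a row" approach described above would look roughly like this - a rough sketch only; the C:\updates folder and the /quiet /norestart switches are assumptions and would have to be checked per package:

     #include <windows.h>
     #include <stdio.h>

     int main(void)
     {
         WIN32_FIND_DATAA fd;
         HANDLE hFind = FindFirstFileA("C:\\updates\\*.exe", &fd);   // hypothetical update folder
         if (hFind == INVALID_HANDLE_VALUE) return 1;

         do {
             char cmd[MAX_PATH * 2];
             sprintf(cmd, "C:\\updates\\%s /quiet /norestart", fd.cFileName);

             STARTUPINFOA si = { sizeof(si) };
             PROCESS_INFORMATION pi;
             if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
                 WaitForSingleObject(pi.hProcess, INFINITE);  // run one installer at a time
                 CloseHandle(pi.hThread);
                 CloseHandle(pi.hProcess);
             }
         } while (FindNextFileA(hFind, &fd));
         FindClose(hFind);
         return 0;
     }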
  4. If the life cycles (wear) are meant, it would be better if the firmware chose those sectors. That raises a certain question, though: if it splits the sectors into less-used and more-used ones, then instead of a few dying one after another, the disk would later die all at once.
  5. Greetings. I would give a better description: screenshots (smartphone pictures are also fine), error messages, when the problem appears, whether it works in part, the KB or update number, and your Windows XP version.
  6. At first it was a good fork for many codecs! But at some point the engine question kicked in: instead of staying independent, the "newer codecs" are handled via an engine, and FFmpeg is no longer independent. Those engines/APIs/interfaces (whatever we call them) sit in Windows 10, so in effect they keep telling you to install Win10 - whether it's CUDA (which later on calls the high-level functions in the graphics card driver, named something like "NVIDIA display driver: 551.76"), DX12, or the LAV engine.

     The same goes for the others, like VS2022 (which uses the C runtime - which in turn uses Win10 functions - or the SDT from VS2020) and Python 3.10+. For the compilers today, the push is towards engine code - towards scripts rather than C++ code that really does the job manually (memory management, file control) - and that script code is then written with Win10 functions. Those one-core-api DLLs are such a thing too. The same applies to the NVIDIA SDK, which is also to be seen as a kind of engine that then uses CUDA and D3D12. The HEIC image encoder was also such a thing: it was deeply bound into Win10 engines and functions - but it got unbound.

     To me this looks like a method to get rid of the older operating systems: instead of offering the real code, they offer scripts to do it (and those scripts use Win10). Simply put, it is not as if there were no other ways - there certainly would be. For example, if we had the high-level code of CUDA, or if the control were written manually instead of via scripts, it would be doable. CUDA (and, behind it, the driver) is just an engine that represents the card's hardware video en/decoding, such as VP9 or HEVC. Since CUDA is rather made for Win10, it is hard to use code that relies on CUDA on XP - but the codecs VP9 and HEVC are open source, so CUDA is not a necessary requirement (even though practically all the programs out there use this engine, FFmpeg being one of them).

     Put the right way, not having a video card is fine if you use a software-based en/decoder (and MMX-AVX makes such code by far fast enough - it's not like the software mode in a video game where everything runs super slow). Chrome has such a software-based VP9 decoder; someone tried it with 1 core and with 2 cores - 2 cores were enough without a graphics card driver installed (and with a driver installed, even 1 core was enough). https://www.youtube.com/watch?v=wsSMmdwh89Y
  7. One way it could go wrong is that the counter for a 4096-byte sector size is not repeated when it is a "virtual 4K sector built from 512-byte sectors" - it would have to repeat the 512-byte step 8 times. If the counter then just jumps to the next 4K sector instead of continuing with bytes 513-4096, it mis-aligns them. The same goes for the opposite case, using 4K-step writes on 512-byte sectors: it would try to write 4K onto a 512-byte sector, which interferes with the following sectors and leaves the other seven 512-byte sectors unaccounted for. A buffer overrun or underrun is also possible. It seems to be a matter of understanding the entire set of OS file routines.

     Something else can be seen, though: XP actually handles up to 4G sectors here, and 4G * 512 bytes = 2 TB. So it's hard to say, but it might use 32-bit offsetting in places, while the OVERLAPPED structure (which is used by practically all file functions on XP) has 64-bit offsetting. That tends to point to the "4G sectors * 512-byte sector size" limit. Maybe the issue is small; sure, it could be more than one problem, hard to say. Writing such a routine (either 8 times 512 bytes or one 4K access) would be very simple, though.
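     On the 64-bit offsetting point, a minimal sketch, assuming a raw device handle opened with CreateFileW on \\.\PhysicalDriveN (not shown here), of how the OVERLAPPED structure already carries a full 64-bit byte offset on XP:

     #include <windows.h>

     BOOL ReadPhysicalSector(HANDLE hDisk, ULONGLONG sectorIndex, DWORD sectorSize, void *buf)
     {
         ULONGLONG byteOffset = sectorIndex * (ULONGLONG)sectorSize;  // 64-bit multiply, no wrap at 4G sectors

         OVERLAPPED ov = {0};
         ov.Offset     = (DWORD)(byteOffset & 0xFFFFFFFF);   // low 32 bits
         ov.OffsetHigh = (DWORD)(byteOffset >> 32);          // high 32 bits

         DWORD read = 0;
         // raw device reads must be aligned to the sector size, both offset and length
         return ReadFile(hDisk, buf, sectorSize, &read, &ov);
     }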
  8. The problem with DirectX is that we don't have DX10/11/12 for XP. DirectX 10/11/12 is closed source, so we can't see into that engine, and it might end up at a stuck point (put plainly: "it asks for DX10/11/12"). But maybe DirectX or OpenGL is even the wrong question: from what I have seen in the code, it's not OpenGL or DirectX doing the job - it's the CUDA engine that hands the RGB buffer to the OpenGL or DirectX engine. So, put the right way: you don't need a graphics card to encode or decode an image or a video (the HEIC en/decoder from MSFN is proof of that). CUDA is an engine for a graphics card; it handles control, input and output as an engine. DirectX/OpenGL - uncertain for now - but I think it is just an engine that receives the data from CUDA.

     If we think "software mode" is too slow: no, not this time. MMX-AVX instructions are of the same nature, 10-500 times faster than normal opcodes - that is more than enough to make it fast enough this time.

     So what does CUDA do? CUDA controls a unit on the graphics card that can en/decode common video formats. That sounds good so far, because if you want something done, you want a hardware unit to do the job. But nowadays we have many cores, which can also be seen as a hardware unit (a programmable one), while a pure hardware unit is like a print: once printed, it can't be changed - yet the formats keep being upgraded (for example, CUDA can't en/decode the H.266 codec). The graphics card doing the en/decoding is fairly new; it's not like game hardware acceleration, which goes back to roughly 1995. Video cards handling video en/decoding is not that old, and many modes are not supported either. The CUDA engine itself is not well supported on Windows XP, so going that direction might just lead to Windows 10 - there you have the right graphics driver, the right CUDA engine and DX12, so you don't have to deal with any other questions.

     Then again, as I pointed out in another post: for a native solution we would need insight into the NVIDIA driver, which is what actually controls the graphics card. Then we would have an OS-independent solution (high-level functions). I don't see that at the moment, at all. The problems were already spinning around inside a LAV engine - that makes 3 engines, and no graphics card control was touched at all. So at the moment it's circling inside these 3 engines, not where the work actually needs to happen. A strong point could be that DirectX 9 doesn't look that much different from DX10/11/12, so it might be possible to hand it an RGB buffer from the CUDA engine - but the CUDA engine for XP is not being updated by NVIDIA.

     https://youtu.be/W3zfb8lLDH0
     https://youtu.be/p8387-gu37s
     https://www.youtube.com/watch?v=mZ-0XBqRxuc

     The problem I see is that yet another engine has arrived, and here is the issue:

     DX11 -> CUDA -> graphics card high-level functions
     OPENGL -> CUDA -> graphics card high-level functions
     this time: MPV engine -> CUDA -> graphics card high-level functions

     Everybody can see the problem: the high-level functions are never touched (and neither is CUDA itself). (Does MPV call up CUDA? No ...) We had to sort out a similar question before; at first it was "you need a graphics card doing this job", and it took a while until everybody agreed that this is not the case - also not for speed (software mode). Basically that leaves only the software option, in AVX etc. It is by far fast enough. That is not done by an engine like DirectX, CUDA or the video card's high-level functions; it is the raw control of the RGB buffer for a video player doing the decoding - and maybe the encoding, if wanted.
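     As a small illustration of why the "software mode is fast enough" argument holds: one SSE2 instruction averages 16 pixels at once, which is exactly the kind of inner loop (half-pel interpolation, blending) a software video decoder spends its time in. The buffer names are placeholders and the width is assumed to be a multiple of 16:

     #include <emmintrin.h>

     void AverageRows(const unsigned char *rowA, const unsigned char *rowB,
                      unsigned char *out, int width)
     {
         for (int x = 0; x < width; x += 16) {
             __m128i a = _mm_loadu_si128((const __m128i *)(rowA + x));
             __m128i b = _mm_loadu_si128((const __m128i *)(rowB + x));
             _mm_storeu_si128((__m128i *)(out + x), _mm_avg_epu8(a, b));  // 16 rounded byte averages per step
         }
     }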
  9. That NdisSetCoalescableTimer looks like a timer function to me; it might work without that specific flag being set. It has the same KTIMER structure as KeSetTimerEx, and XP has that one:
     https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-kesettimer
     https://www.geoffchappell.com/studies/windows/km/ntoskrnl/inc/ntos/ntosdef_x/dispatcher_header/timercontrolflags.htm
     If the DPC parameter is in use, XP also has KeInitializeDpc, and the same goes for KeInitializeTimerEx.
     https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-kesetcoalescabletimer
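     A hedged sketch of the XP-era combination referred to above - a periodic kernel timer built from KeInitializeTimerEx, KeInitializeDpc and KeSetTimerEx (the same KTIMER/KDPC pair, just without coalescing); MyDpcRoutine and the 100 ms period are placeholders:

     #include <ntddk.h>

     KTIMER g_Timer;
     KDPC   g_Dpc;

     VOID MyDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
     {
         UNREFERENCED_PARAMETER(Dpc);
         UNREFERENCED_PARAMETER(Context);
         UNREFERENCED_PARAMETER(Arg1);
         UNREFERENCED_PARAMETER(Arg2);
         // periodic work runs here at DISPATCH_LEVEL
     }

     VOID StartPeriodicTimer(VOID)
     {
         LARGE_INTEGER due;
         due.QuadPart = -1000000;                         // first fire after 100 ms (relative, 100 ns units)

         KeInitializeTimerEx(&g_Timer, NotificationTimer);
         KeInitializeDpc(&g_Dpc, MyDpcRoutine, NULL);
         KeSetTimerEx(&g_Timer, due, 100, &g_Dpc);        // then every 100 ms
     }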
  10. That might be a good point to note that this false flagging happens quite often, which was not the case in the past. At some point it no longer had to be a virus, trojan horse or keylogger; it only had to be "potentially unwanted software, malware" - whatever that is, they defined it - and it went in the direction of flagging software simply because it is not on the wanted list. After that they just flagged unwanted programs as viruses, also pushing it through the antivirus software. I could not even run the HEIC en/decoder in a Win10 VMware test machine, because all it said was that the executable is a virus. That was the case with plenty of other 100 % virus-free software I compiled; others such as the One Core API are also flagged as a virus, even though it's open source. It's a monopoly.
  11. YouTube got taken over by the mainstream. There used to be no commercials, no weird real-name requests, no asking for your phone number, and no lawyers telling you "this is right, that is not". YouTube still has some good things left, but things are starting to take over. DirectX 9 and 11 don't look that much different either:
     https://youtu.be/W3zfb8lLDH0
     https://youtu.be/p8387-gu37s
     https://www.youtube.com/watch?v=mZ-0XBqRxuc
     That's because it rather comes down to how well the 3D models are made; DirectX just renders a 3D object through an engine called D3D version X. There certainly are a few spots where some things are better, but overall it can't really improve the textures of the 3D object.
  12. Samuel has done a good job with his One Core API. Unlike me, he is a very active guy - he really works on stuff. Sometimes he wants a bit much, but as everybody can see, Chrome has come a long way. I remember the old discussion about the Skia engine and why it broke things; that was for Chrome v54, and now we are at 133 - quite a gap. For anyone who wants a short look back at what had been achieved:
     https://archive.org/details/advancedchromev54.20.6530.0
     That version was actually a long-standing stuck point.
  13. I really don't understand the hate for the old operating systems. Why can't someone go online with a Win98 machine using IE6? It was never a problem, even when Win10 appeared, nor was it when Win2000, XP, Vista and 7 were out. It seems it wasn't enough to take out IE11 either; they just took a Google Chrome fork called Edge. The problems rather appeared around 2018.
  14. The 2003 Server version might actually be a good solution, because it is the same type of Windows (5.1 versus 5.2) - although XP usually got more updates.
     https://github.com/Skulltrail192/One-Core-API-Binaries/issues/255
     If it works well it might already be a good solution; I remember seeing some discussions about it too.
  15. Right, it doesn't in the sense of GPT, but that was going in the direction of the Xbox solution - jaclaz had already started to talk about it. To get past the ~2.2-terabyte limit it might be a solution to pass that 4G-sectors * 512-byte boundary: (2^32 - 1) * 512 = 2,199,023,255,040 bytes, plus sector 0 = 2,199,023,255,552 bytes, which is about 2 terabytes. If the solution has to be GPT-wise, Paragon might have done something already; if it is open source it would be something to work with.

     In XP there is that OVERLAPPED structure; it has two 32-bit parts, high and low. From that spot on it looks fine for XP, but you don't actually see what happens next. Usually the chain is: DeviceIoControl -> Nt/ZwDeviceIoControl -> transferred into an IRP. If XP makes the right request it might work (IOCTL_STORAGE_QUERY_PROPERTY); it fills a structure called STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR, which has BytesPerPhysicalSector:

     HANDLE hFile = CreateFileW(DeviceName, 0, FILE_SHARE_VALID_FLAGS, 0, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, 0); // open a handle to the hard drive/disc/SSD device
     STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR sad; // the structure that gets filled with the sector sizes
     static STORAGE_PROPERTY_QUERY spq = { StorageAccessAlignmentProperty, PropertyStandardQuery }; // the right I/O control query
     ULONG BytesReturned; // just an unused dummy
     DeviceIoControl(hFile, IOCTL_STORAGE_QUERY_PROPERTY, &spq, sizeof(spq), &sad, sizeof(sad), &BytesReturned, 0); // the common chain to issue the request
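     If that query succeeds, the interesting comparison would look roughly like this (illustration only - StorageAccessAlignmentProperty is documented for Vista and later, so whether an XP-era storage stack answers it is exactly the open question here; the field names are from the Windows SDK definition of STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR):

     if (sad.BytesPerPhysicalSector > sad.BytesPerLogicalSector) {
         // 512e drive: the OS sees 512-byte logical sectors on top of 4K physical ones,
         // so writes should be grouped and aligned to BytesPerPhysicalSector to avoid
         // the read-modify-write penalty discussed above
     }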
  16. What happened to the bigger-sector solution?
  17. I have briefly looked into that CUDA/CUVID code. I expected it to set things up (control, output, input) - and yes, that is exactly what it does. It is correct that a graphics card which supports a given codec is needed to make use of NVENC/NVDEC (in other words: if you want the graphics card to do the de/encoding, you need a graphics card that can do it). What seems a misconception to me is the idea that you need a graphics card to build a de/encoder at all - that is just not right.

     The example code is completely bound to the DX9 video engine: // Create the D3D object. "Direct3DCreate9Ex(D3D_SDK_VERSION, &g_pD3D)". Up to this point it doesn't look too bad for XP (D3D9Ex video engine; in the end it maps an RGB buffer onto that D3D9Ex engine). But you can expect that there is deeper code (inside the d3d9/d3d11 DLLs) where, if you use that engine, you don't actually see what it is doing. The code itself is also mixed: some parts initialize CUDA/CUVID while others set up the D3D9/11 video engine, so this method uses both (D3D video engine + CUDA/CUVID engine), mapping the CUDA/CUVID output to OpenGL/D3D video as an RGB buffer. It is possible that the driver simply doesn't have the right control code and just returns 0 or doesn't function; it could also be that even when the right parameters are set, they are just not processed (either by nvcuvid.dll, d3d9.dll or the driver version). The NVIDIA driver, both user and kernel level, is closed source, which makes it hard to get an overview and doesn't allow direct control with high-level functions (which, again, only set control parameters, input and output).

     For FFmpeg I looked around for the NVDEC/NVENC engines: FFmpeg uses these engines (FFmpeg is not independent). Rather, FFmpeg is to be understood as a command line that, when it wants to use NVDEC/NVENC, calls those engines up. So FFmpeg isn't doing it alone either, nor "can it just be used on XP". The user-mode engine/API seems to be nvcuvid.dll, which is then driven through functions like this:
     cuvidMapVideoFrame(&hDecoder, nPicIdx, &cuDevPtr, &nPitch, &stProcParams);
     But this is a low-level entry point - it doesn't tell you what nvcuvid.dll (and everything after it, such as the video driver) actually does.
     https://github.com/tpn/cuda-samples/blob/master/v9.0/3_Imaging/cudaDecodeD3D9/videoDecodeD3D9.cpp
     https://docs.nvidia.com/video-technologies/video-codec-sdk/11.1/nvdec-video-decoder-api-prog-guide/index.html

     The website for developers using NVDEC/NVENC then tells us this: Software: Windows 11, Video Codec SDK v12.2, NVIDIA display driver: 551.76. That looks hard to me - it names a driver version that XP simply doesn't have (551.76). The SDK might be able to work without OpenGL/D3D video (since CUDA/CUVID maps an RGB buffer in the end, why would it have to be an OpenGL or D3D video engine?), but a new SDK tends to use new API sets (and this is where it starts talking about Win11); it might already use Win10+ APIs inside the SDK itself, and the newer driver and nvcuvid.dll versions might also be built against Win11 APIs.

     A real solution would look totally different: we would need the high-level functions that control the NVIDIA driver. Then we could set the right parameters/control code, input buffer and output buffer (RGB); the control code, input buffer and output buffer can be handled in a player. That would be a very OS-independent solution and could make use of NVIDIA graphics cards that support hardware de/encoding. It will be exciting to see what happens next.
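     For what it's worth, the documented pattern around the quoted call is map -> copy -> unmap. This is only a hedged sketch based on the public NVDEC headers, assuming a 64-bit build (where the map/unmap entry points resolve to their 64-bit variants) and leaving out pitch handling and error checking:

     #include <cuda.h>
     #include <nvcuvid.h>

     void CopyDecodedFrame(CUvideodecoder hDecoder, int nPicIdx,
                           unsigned char *hostBuf, size_t frameBytes)
     {
         CUdeviceptr devPtr = 0;
         unsigned int pitch = 0;
         CUVIDPROCPARAMS vpp = {0};
         vpp.progressive_frame = 1;

         // map the decoded picture into a device pointer we can read from
         cuvidMapVideoFrame(hDecoder, nPicIdx, &devPtr, &pitch, &vpp);

         // pull the NV12 frame down to system memory; a player could instead hand the
         // device pointer to D3D/OpenGL interop, or convert NV12 -> RGB here
         cuMemcpyDtoH(hostBuf, devPtr, frameBytes);

         cuvidUnmapVideoFrame(hDecoder, devPtr);
     }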
  18. It is for MPC-HC? Then the question arises whether it plays the H.265 codec. I do not know the specific details, as CUDA is closed source, but if I get it right CUDA handles control, input and output - so if the driver isn't doing that, the graphics card isn't being talked to. It doesn't have to be a graphics card, though: there is plain C code for this (both en- and decoding) - the question is just how fast it actually runs. Code written with MMX-AVX instructions is very fast, and for a player only the decoder would be needed.
     https://openrepos.net/content/lpr/libde265-hevc-decoder
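     The linked libde265 is exactly that plain-C route: no GPU, no engine, just a decode loop. A hedged sketch using the public libde265 API names, with the stream handling and error checking stripped down:

     #include <libde265/de265.h>
     #include <stdint.h>
     #include <stdio.h>

     void DecodeFile(const char *path)
     {
         de265_decoder_context *ctx = de265_new_decoder();
         FILE *f = fopen(path, "rb");
         unsigned char buf[4096];
         size_t n;
         int more = 1;

         while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
             de265_push_data(ctx, buf, (int)n, 0, NULL);          // feed raw HEVC bytes
             while (de265_decode(ctx, &more) == DE265_OK && more) {
                 const struct de265_image *img = de265_get_next_picture(ctx);
                 if (img) {
                     int stride;
                     const uint8_t *y = de265_get_image_plane(img, 0, &stride);
                     // the Y/U/V planes would be converted to RGB and blitted here
                     (void)y; (void)stride;
                 }
             }
         }
         fclose(f);
         de265_free_decoder(ctx);
     }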
  19. I don't have insight into this project, but is dump_ntoskrn8.sys even part of the ACPI 2.0 driver? It sounds like a kernel extender, but I do not know.
  20. What about this player? https://www.videolan.org/vlc/releases/3.0.20.html
  21. I personally like that you guys are writing with me; I learn a bit more every time, no matter where the direction goes. First, to x265: you are right, it is rather an encoder (and rather for video, not for pictures). That is because x265 is driven via a command line for encoding (it has no command line or extra function for the opposite direction), and that HEIF tool also rather uses a command line to call x265 up ... For x265 there is a counterpart called libde265, and here is the catch: those libde265 routines looked pretty much the same as the ones from x265. FFmpeg works like a command line too, just to point that out (you get it and you call it). So x265 should be reversible into a decoder, though certainly not via its command line. I looked at some of the code, the names and such - they are very alike - but I did not study every compression method, because I don't see much sense in that. Going with that logic, whatever can create the picture can also display it - it just has to be run in reverse: input (an RGB buffer, for example) -> output (the x265 encoder) -> back to input (an RGB frame). If you have a good argument why that doesn't work, I will listen (I admittedly did not read every routine that precisely).

     I also get now why Windows 10, despite using many engines on top of other engines, still has some speed. It is right that Windows 10 has slow high-level-language code and many engines that use other engines, but in this case those slow engines are only called up - they don't do the de/encoding themselves, that is handed to the graphics card. That is why Win10, despite its very slow code, can en/decode with enough speed: a normal CPU can handle the "inefficient engines and high-level code".

     Going backwards: yes, the idea of using a hardware unit for a software job is an old one, and this time the graphics card does it. To me it raises the question which method was used: 1: firmware or source code that uses a "core on the graphics card" - GPU and CPU are at some point the same thing (just different schematics); 2: a pure hardware unit, a "core" whose firmware is baked into the hardware schematics. If it is 2, the schematics can't be upgraded; if it is 1, it raises the question why do it that way at all - we have multiple cores today, so we could just use another CPU core, which is basically the same (plus the XMM registers are pretty fast).

     Quote: "Direct3D can be used to put the decoded picture on screen with acceleration: allowing to rotate and stretch it for free without using the CPU". I like this one in particular - it is right, but it is exactly the problem: that engine is bound to DirectX 11/12, so you need the D3D engine in order to use the next engine (which maybe is CUDA). And that is exactly the problem: that is already 2 engines, which may already be limited to Win7+, therefore we can't use that with CUDA. The problem seems to be that for XP there is no newer version = the CUDA engine we have on XP doesn't support the control of the driver = we would need the control engine (i.e. the internal code of CUDA) to control the driver (and the graphics card). Then we could use a graphics card that supports H.265 de/encoding (the only remaining question is which CUDA version still works on XP, and which en/decode modes that "newest CUDA for XP" supports).
     https://youtu.be/H3AQnlpxk0c
     https://developer.nvidia.com/video-codec-sdk

     That LAV idea goes in the same direction. The LAV engine might be open source, but the right questions about it are different. First, it shows a problem many don't seem to understand: an engine always has "internal code" that it relies on (this time we can see it, but only that it uses non-XP functions/engines/APIs). So using the LAV engine, you are bound to whatever the LAV programmer used internally (and that is also moving away from XP). The LAV engine doesn't seem to go very deep either: if LAV uses DX10/11/12, it is rather something that scripts the DX10-12 engine, and the DX10-12 engine already favours Win7+. It is not as if we could use LAV and it would just work on XP; LAV doesn't do the H.265 encoding either, it is an in-between engine.

     On to FFmpeg: "FFmpeg has supported NVDEC since 2017" - that tells us FFmpeg already uses NVDEC, which at best has limited XP support. FFmpeg is a nice fork for codecs, but FFmpeg rather wants command lines and in the end calls up an engine. https://trac.ffmpeg.org/wiki/Encode/H.265 Maybe FFmpeg would be an option ... Quote: "which is talked to by CUDA. OpenGL can be used to put the decoded picture on screen with acceleration" - if that is right, OpenGL 4.5 still can't do it, because CUDA isn't doing it on XP. If I follow that logic, OpenGL 4.5 calls up the CUDA "engine", finds an outdated version there, and then simply doesn't do its job.

     Going in a different direction: the player "VLC media player" claims to work on XP and to play H.265, and it is open source. Do we know something here? Open source means we can see what the player actually does; we could copy/paste the code too, or write our own interpretation. Sounds good to me. x265 and libde265 we already had, but even with x265, taking a try (as if x265 were the encoder): RGB to HEIC (a few losses, yes, it is a compression after all) - but then it should be possible to convert it back the same way, HEIC (logic reversed) back to RGB. What I am trying to say is: if you know the encode code, you know the decode code. I 100 % agree that x265 has no engine or command-line option to do that (see the sketch below). To me, VLC media player and x265/libde265 sound the most reliable for XP. For the graphics card we could try whether NVIDIA helps us and either makes a working engine for XP, or at least tells us how to program the graphics card without those engines (CUDA etc.) - CUDA is also just code that handles input/output and the control code for that unit on the graphics card. I agree it is a lot of talk, but this way we sort out the things we don't need.
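     On the "x265 only has a command line" point: x265 is also usable as a plain library, without any command line or OS engine. A hedged sketch of that API surface (function and field names as in the public x265.h; the YUV input fill and the bitstream writing are omitted):

     #include <x265.h>
     #include <stdint.h>

     int EncodeOneFrame(int width, int height)
     {
         x265_param *param = x265_param_alloc();
         x265_param_default_preset(param, "ultrafast", NULL);
         param->sourceWidth  = width;
         param->sourceHeight = height;
         param->fpsNum   = 25;
         param->fpsDenom = 1;

         x265_encoder *enc = x265_encoder_open(param);
         x265_picture pic;
         x265_picture_init(param, &pic);
         // pic.planes[0..2] / pic.stride[0..2] would point at the YUV 4:2:0 input frame here

         x265_nal *nals = NULL;
         uint32_t  count = 0;
         x265_encoder_encode(enc, &nals, &count, &pic, NULL);   // nals[] now holds pieces of HEVC bitstream

         x265_encoder_close(enc);
         x265_param_free(param);
         return 0;
     }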
  22. That all refers to a graphics card doing the encoding/decoding and using an engine (maybe through more engines). Engines like DX9 to DX11 can change and simply say "no, I don't support this" - and what then? That just raises the suggestion to install Win10 and use the DX11 engine. I looked: not a single engine clearly states what it supports on XP (modes like "10 bit" or codecs like "H.264"), and some engines say nothing about XP at all (so what should we think? probably that XP is not supported). So learning that engine - or those engines - would take a lot of time and probably lead to a dead end. Also, they hold the inner source code of these engines, so they can stop the "support" anytime: version 1.001 is here now - XP is not supported anymore.

     The NVIDIA website names: Vulkan (no OS stated), DirectX video (only DirectX 11 and 12), the NVIDIA Video Codec SDK (it mentions DirectX 9, but not exactly which modes can be done with D3D9; if it is, say, the H.264 codec only and it can't do H.265, it is a direct dead end), PyNvVideoCodec (no OS stated), and CUDA by name - but for CUDA, again, nothing is said about the OS. So that is 5 engines by name.

     NVIDIA Video Codec SDK: to me this SDK looks like an engine to control the DirectX engine (I have often written about that - engines for engines - and the CPU power lost by doing so). Why do I think it uses DirectX? Because later on it writes "native API interface DirectX 9-12". Since CUDA is also an engine, the question remains whether it supports H.265 on XP; then you also need a graphics card that can do that mode. It may say DirectX 9, but not which modes DirectX 9 supports - if it is the H.264 codec only, it is not usable.

     Vulkan: someone already named the Vulkan engine: https://msfn.org/board/topic/186252-vulkan-api/ I do not know whether that is the right/full information, but OpenGL 4.5? I don't know ... it sounds like a never-touched area, and nothing says the graphics card is used on XP; there is simply no information on whether OpenGL 4.5 does that on XP. The websites rather say Vulkan is something for Linux, Android, Mac, Fuchsia, iOS (but we need XP). Important to see is that the Vulkan engine relies on the OS graphics drivers - what if the driver doesn't do it on XP while it does on Win10? (You might get OpenGL 4.5 to run, but it still won't play on XP while it does on Win10; maybe the code path is missing on XP while 10 has it.) Then again, it is a thing to learn that might again just reach a dead end (though as far as I know OpenGL is a pre-engine; the real thing happens later, in the operating system and the graphics card driver).

     DirectX video: DX11 and DX12 - directly out of the question (DX11-12 are Win7-11).

     CUDA: CUDA sounds interesting, but it doesn't say what it actually supports. There are CUDA versions, something like 1.0 up to 12.5; hard to say whether XP has the latest version available. Then again it is an engine (with versions), hard to say whether XP can do anything here, and the NVIDIA information doesn't even name an operating system ... It is closed source too - the CUDA engine again has internal code with deeper control (plus, maybe, the related driver behind the CUDA engine).

     PyNvVideoCodec: nothing stated, but you can see that it uses CUDA, and since Python is something like Java, that "Python engine" probably just uses CUDA (an engine for an engine). But this is what I pointed out in the very first talks about LAV and the MPC-HC player (that the MPC-HC author started to rely on engines). These are all engines, and engines are often very OS- and version-limited (therefore MPC-HC is no longer self-contained - it relies on engines of certain operating systems; that might also be a dead end for the player, because many others can do that too. If MPC-HC had XP support it would make it a special player, but like this it is "the same old - there are other players that can use those new engines on Win10 etc. too").

     ------------------------

     The x265 I pointed to doesn't rely on any engine, OS, driver, graphics card, or unsupported DirectX 11/12 engine (DX10 is probably dead). I don't think the CPU is too slow: the decoding process is faster than the "ultrafast" encode option. https://msfn.org/board/topic/185879-winxp-hevcheifheic-image-encoderdecoder/ Also, the video frame will not be written through a file engine, so it can go directly into the video frame buffer (which is also a lot faster); the decoder is generally a lot faster than the ultrafast encoder.

     --------

     I must say I don't write this as an absolute; I want to try to create a positive discussion about this, and if some of you have more specific information I would be happy to hear it - and I like to be corrected.

     -----

     But again, guys, notice this: they bound this codec to an engine, and what that engine does is closed source in Win10/11, or closed source from the graphics card company (such as Intel or NVIDIA). So if you actually try to use Win10 engines you will probably fail (because you can only use Win10, since only Win10 has that engine in working order). Very important to know would be which players can play the H.265 codec on XP - or maybe which players played that codec in the past; that might be a good question to answer. One is worth a try: it lists H.265 among its playable formats and says it works on XP: https://en.wikipedia.org/wiki/VLC_media_player#Input_formats

     --

     One word about that CUDA engine. NVIDIA tells us about an "engine" (i.e. CUDA) that we should use to do this: https://upload.wikimedia.org/wikipedia/commons/5/59/CUDA_processing_flow_(En).PNG but they don't tell us how it is really done (if we knew that, we could write our own control code, use the graphics card, and encode H.265 with it). As it stands, it is "the engine does it - or it doesn't". If, for example, we had what the picture calls "copy the result" - in our case the decoded H.265 RGB buffer - we could route that memory to the player or directly into the video frame buffer. The other things we would need are: how to issue the control code (like "decode this H.265 data, 8 bit" - there should be a control code, as always), and where and how the input data has to be given to the graphics card (an H.265 file, or at least enough data from it, in steps). On Win10 (i.e. DirectX 11) this is not a problem, because the "DX11" engine does it for you. But that is a problem many of today's programmers have: they no longer have insight into how things really work, and have to rely on a high-level language, a script, or multiple engines. Again, the x265 code lets you see what actually happens ... (these engine-based approaches are solutions for a Win10 player/stream/video setup - you don't actually see how that decoder, or maybe encoder, really works, while with the x265 source code you can see exactly that).
  23. Well, what we should not do is just let them do their thing. Every kind of resistance is resistance. Everything that is good against 10 is good; everything that is bad for 10 is good. Every reminder of how it used to be, and of what has changed for the worse, might be something we can say. We can write other programs; Linux is not an enemy. We can stay on an OS from before 10 - it doesn't matter which one, just that it is not 10 or 11. We can speak out against 10. We can try to bring the missing parts to the other operating systems. We can try asking lawyers what can be done against this monopoly. We can try not to buy Win10 products (i.e. products that only support that OS) - which is our right. Whatever someone's resistance looks like, try to be creative; just don't give up ...
  24. Well, YUV is an old compression and the idea wasn't bad: go back to having only the contrast first - at first it was a question of how much luminance a pixel had - and add the colours next. What I remember is that they created a signal model for 3 wires (4 signals, 2 and 2): 4 for luminance, 2 for chroma blue, 2 for chroma red. In the sense of those 3 wires, the data saving is twice 50 % less on the chroma. There was a YUV 4:4:4 mode that is lossless, but the data is a lot more, so we can say YUV 4:2:0 or 4:2:2 are compressions - or at least must be seen as compressions. https://en.wikipedia.org/wiki/Chroma_subsampling There were already different modes for the YUV-to-RGB or RGB-to-YUV transition back then, and there are more compression tricks, even more today - discussing all of them would be a long story (a small size/conversion example follows below).

     To really make a decision we need comparisons with a current encoder. The HEIC descendants would be such encoders (VP9, SVT), and maybe the Fraunhofer codec (H.266) is worth considering too; those have modern compression tricks. Then we need to compare file size and pixels to figure out what is better: if 10 bit is better, okay; if 8 bit is better, then it's 8 bit. Coming from a different angle: Netflix and the other streaming providers used the H.264 codec for a long time; it seems a bit much to deliver even better quality than a common streaming provider, since we are rather normal people. But don't get me wrong on the 10-bit question: if it can be proven to be better at a similar file size, I would certainly agree with that decision.

     Back to the H.265 player question: I think I can do it, but it would be a lot of work - not just "go there and it's done". If LAV is the chosen way - and I doubt LAV is the right choice, as I already pointed out - then from my perspective I would have to learn that entire LAV engine and its use in a player. That is extra work where I think I would fail really hard, because I think the encoder might sit at a spot that can't be programmed from XP (one idea was that maybe the driver doesn't accept the control code - maybe it does, but it's closed source). If that were the case, all the learning of the LAV and player engine would be a waste. Going forward, what are the next possible ideas for this project? We could start with players for XP that can actually play the H.265 codec - and if so, which modes, whether they are open source, etc.
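     A small illustration of the data saving described above, plus the usual integer BT.601 conversion back to RGB (the exact coefficients depend on the standard and range, so treat these numbers as one common variant, not the only one):

     #include <stdint.h>

     /* one 1920x1080 frame, 8 bit:
          RGB       : 1920*1080*3   = 6,220,800 bytes
          YUV 4:4:4 : 1920*1080*3   = 6,220,800 bytes (no saving yet)
          YUV 4:2:0 : 1920*1080*3/2 = 3,110,400 bytes (both chroma planes quartered -> 50 % of RGB) */

     static void Yuv2Rgb(uint8_t y, uint8_t u, uint8_t v,
                         uint8_t *r, uint8_t *g, uint8_t *b)
     {
         int c = y - 16, d = u - 128, e = v - 128;                 // video-range BT.601
         int rr = (298 * c + 409 * e + 128) >> 8;
         int gg = (298 * c - 100 * d - 208 * e + 128) >> 8;
         int bb = (298 * c + 516 * d + 128) >> 8;
         *r = (uint8_t)(rr < 0 ? 0 : rr > 255 ? 255 : rr);         // clamp to 0..255
         *g = (uint8_t)(gg < 0 ? 0 : gg > 255 ? 255 : gg);
         *b = (uint8_t)(bb < 0 ? 0 : bb > 255 ? 255 : bb);
     }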
  25. Somehow the website x265.com went down and changed - weird? Also, https://bitbucket.org/multicoreware/x265/ is down. Here is a working one:
     https://github.com/videolan/x265
     https://www.file-upload.net/download-15418681/x265_3.5.tar.gz.html