ADIXctlLevel=1; I am not sure about this setting or how to interpret it.
I got this setting from Win2K. It seems to refer to the Application Data Interface transfer-control (XCTL) logic level. Programs invoked by XCTL commands run one at a time as part of a single task, even though the environment itself is multitasking. When a user selects a menu item, the program should issue an XCTL command to transfer control directly to that program. The first program to receive control directly runs at the highest logical level, which is Level 1. More investigation is required into multitasking and the different tasking methods. This setting may not make any noticeable difference.
ConservativeSwapFileUsage=0
If conservative swap file usage is enabled (set to 1), Windows uses as much physical RAM as possible before switching to the swap file on the HDD. This caused problems with certain programs, so I no longer set it to 1.
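For reference, this is how the setting would appear in SYSTEM.INI; I believe it belongs in the [386Enh] section on Win9x systems, though I have only my own notes to go on:

```ini
[386Enh]
; 1 = use RAM aggressively before touching the swap file,
; 0 = default (swap file may be used earlier)
ConservativeSwapFileUsage=0
```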
[MathCoprocessor]
FPUFlags=1
This setting came with Borland C. It was originally set to 0, but I thought that 1 gave a performance improvement. Most of the following is quoted from https://xem.github.io/minix86/manual/intel-x86-and-64-manual-vol1/o_7281d5ea06a5b67a-197.html
x87 FPU Floating-Point Exception Mask Bits
The exception-flag mask bits (bits 0 through 5 of the x87 FPU control word) mask the 6 floating-point exception
flags in the x87 FPU status word. When one of these mask bits is set, its corresponding x87 FPU floating-point
exception is blocked from being generated.
Bit 0 is Invalid Operation and bit 1 is Denormal Operand. I think these are the only values that can safely be used here, as a BSOD will likely occur with higher settings; see the link above for what the other mask bits are.
What is at stake here is how denormal floating-point values are handled and at what cost. See https://en.wikipedia.org/wiki/Denormal_number. The following is a quote from this link.
"Some systems handle denormal values in hardware, in the same way as normal values. Others leave the handling of denormal values to system software, only handling normal values and zero in hardware. Handling denormal values in software always leads to a significant decrease in performance. When denormal values are entirely computed in hardware, implementation techniques exist to allow their processing at speeds comparable to normal numbers;[3] however, the speed of computation is significantly reduced on many modern processors; in extreme cases, instructions involving denormal operands may run as much as 100 times slower.[4][5]"