Everything posted by Dietmar

  1. @Mark-XP Here is a first try at using backpropagation to learn the "AND" function. Nice, it works! Only the Java standard library is used, Dietmar

To train a program to learn the AND function, we can use a neural network. A neural network is a type of machine learning model consisting of multiple layers of interconnected nodes (neurons) that can learn to recognize patterns in data. For this particular problem, we can create a neural network with two input nodes, one output node, and one hidden layer with a variable number of neurons. The two input nodes correspond to the two binary inputs of the AND function, and the output node corresponds to the output of the function.

To train the neural network, we need to provide it with a set of input-output pairs (also known as training examples) for the AND function. For example, one input-output pair could be (0,0) -> 0, meaning that when the input is (0,0), the output should be 0. We can create a set of four such input-output pairs from the truth table of AND:

A B | A AND B
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1

We then randomly initialize the weights and biases of the neural network and use a training algorithm (such as stochastic gradient descent) to adjust them based on the training examples. The goal is to minimize the difference between the output of the neural network and the desired output for each training example. As the network trains, it learns to recognize the patterns in the input-output pairs and adjusts its weights and biases to better predict the output for each input. Once training is complete, the network should correctly predict the output of the AND function for any given input.

In summary, the program learns the AND function with a neural network of two input nodes, one hidden layer, and one output node, trained on the four input-output pairs of the AND function by an algorithm that adjusts the weights and biases from the training examples.
package neuralnetwork;

import java.util.Arrays;

public class NeuralNetwork {

    private int numInputNodes;
    private int numHiddenNodes;
    private int numOutputNodes;
    private int numHiddenLayers;
    private double[][] inputHiddenWeights;
    private double[][] hiddenOutputWeights;
    private double[] hiddenBias;
    private double[] outputBias;

    public NeuralNetwork(int numInputNodes, int numHiddenNodes, int numOutputNodes, int numHiddenLayers) {
        this.numInputNodes = numInputNodes;
        this.numHiddenNodes = numHiddenNodes;
        this.numOutputNodes = numOutputNodes;
        this.numHiddenLayers = numHiddenLayers;

        // initialize weights and biases randomly
        // (note: Arrays.fill gives every weight in a row the same random value)
        inputHiddenWeights = new double[numInputNodes][numHiddenNodes];
        hiddenOutputWeights = new double[numHiddenNodes][numOutputNodes];
        hiddenBias = new double[numHiddenNodes];
        outputBias = new double[numOutputNodes];
        for (double[] row : inputHiddenWeights) {
            Arrays.fill(row, Math.random());
        }
        for (double[] row : hiddenOutputWeights) {
            Arrays.fill(row, Math.random());
        }
        for (int i = 0; i < numHiddenNodes; i++) {
            hiddenBias[i] = Math.random();
        }
        for (int i = 0; i < numOutputNodes; i++) {
            outputBias[i] = Math.random();
        }
    }

    public double sigmoid(double x) {
        return 1 / (1 + Math.exp(-x));
    }

    public double sigmoidDerivative(double x) {
        double fx = sigmoid(x);
        return fx * (1 - fx);
    }

    public double[] forwardPropagation(double[] input) {
        // calculate activations for first hidden layer
        double[] hiddenActivations = new double[numHiddenNodes];
        for (int j = 0; j < numHiddenNodes; j++) {
            double sum = 0;
            for (int i = 0; i < numInputNodes; i++) {
                sum += input[i] * inputHiddenWeights[i][j];
            }
            hiddenActivations[j] = sigmoid(sum + hiddenBias[j]);
        }
        // calculate activations for subsequent hidden layers
        // (note: this reuses inputHiddenWeights, which only lines up when
        // numInputNodes == numHiddenNodes; with one hidden layer, as in
        // main() below, this loop never runs)
        for (int layer = 1; layer < numHiddenLayers; layer++) {
            double[] nextHiddenActivations = new double[numHiddenNodes];
            for (int j = 0; j < numHiddenNodes; j++) {
                double sum = 0;
                for (int i = 0; i < numHiddenNodes; i++) {
                    sum += hiddenActivations[i] * inputHiddenWeights[i][j];
                }
                nextHiddenActivations[j] = sigmoid(sum + hiddenBias[j]);
            }
            hiddenActivations = nextHiddenActivations;
        }
        // calculate output layer activations
        double[] outputActivations = new double[numOutputNodes];
        for (int j = 0; j < numOutputNodes; j++) {
            double sum = 0;
            for (int i = 0; i < numHiddenNodes; i++) {
                sum += hiddenActivations[i] * hiddenOutputWeights[i][j];
            }
            outputActivations[j] = sigmoid(sum + outputBias[j]);
        }
        return outputActivations;
    }

    public void backPropagation(double[] input, double[] targetOutput, double learningRate) {
        // perform forward propagation to get activations
        // (note: this method assumes a single hidden layer)
        double[] hiddenActivations = new double[numHiddenNodes];
        for (int j = 0; j < numHiddenNodes; j++) {
            double sum = 0;
            for (int i = 0; i < numInputNodes; i++) {
                sum += input[i] * inputHiddenWeights[i][j];
            }
            hiddenActivations[j] = sigmoid(sum + hiddenBias[j]);
        }
        double[] outputActivations = new double[numOutputNodes];
        for (int j = 0; j < numOutputNodes; j++) {
            double sum = 0;
            for (int i = 0; i < numHiddenNodes; i++) {
                sum += hiddenActivations[i] * hiddenOutputWeights[i][j];
            }
            outputActivations[j] = sigmoid(sum + outputBias[j]);
        }
        // calculate output layer error: delta = o * (1 - o) * (t - o)
        double[] outputErrors = new double[numOutputNodes];
        for (int j = 0; j < numOutputNodes; j++) {
            double outputActivation = outputActivations[j];
            double targetOutputValue = targetOutput[j];
            outputErrors[j] = outputActivation * (1 - outputActivation) * (targetOutputValue - outputActivation);
        }
        // calculate hidden layer errors by propagating the output errors back
        double[] hiddenErrors = new double[numHiddenNodes];
        for (int j = 0; j < numHiddenNodes; j++) {
            double hiddenActivation = hiddenActivations[j];
            double sum = 0;
            for (int k = 0; k < numOutputNodes; k++) {
                double outputError = outputErrors[k];
                double weight = hiddenOutputWeights[j][k];
                sum += outputError * weight;
            }
            hiddenErrors[j] = hiddenActivation * (1 - hiddenActivation) * sum;
        }
        // update weights and biases for output layer
        for (int j = 0; j < numOutputNodes; j++) {
            double outputError = outputErrors[j];
            for (int i = 0; i < numHiddenNodes; i++) {
                double hiddenActivation = hiddenActivations[i];
                double delta = learningRate * outputError * hiddenActivation;
                hiddenOutputWeights[i][j] += delta;
            }
            outputBias[j] += learningRate * outputError;
        }
        // update weights and biases for hidden layer
        for (int j = 0; j < numHiddenNodes; j++) {
            double hiddenError = hiddenErrors[j];
            for (int i = 0; i < numInputNodes; i++) {
                double inputActivation = input[i];
                double delta = learningRate * hiddenError * inputActivation;
                inputHiddenWeights[i][j] += delta;
            }
            hiddenBias[j] += learningRate * hiddenError;
        }
    }

    public static void main(String[] args) {
        // create neural network with 2 input nodes, 2 hidden nodes, and 1 output node
        NeuralNetwork nn = new NeuralNetwork(2, 2, 1, 1);
        // define input and target output for AND function
        double[][] input = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[][] targetOutput = {{0}, {0}, {0}, {1}};
        // train network using backpropagation
        for (int epoch = 0; epoch < 100000; epoch++) {
            for (int i = 0; i < input.length; i++) {
                nn.backPropagation(input[i], targetOutput[i], 0.1);
            }
        }
        // test network with some inputs
        double[] testInput1 = {0, 0};
        double[] testInput2 = {0, 1};
        double[] testInput3 = {1, 0};
        double[] testInput4 = {1, 1};
        System.out.println("0 AND 0 = " + nn.forwardPropagation(testInput1)[0]);
        System.out.println("0 AND 1 = " + nn.forwardPropagation(testInput2)[0]);
        System.out.println("1 AND 0 = " + nn.forwardPropagation(testInput3)[0]);
        System.out.println("1 AND 1 = " + nn.forwardPropagation(testInput4)[0]);
    }
}
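To watch the training converge, one could log the mean squared error over the four AND patterns every few thousand epochs. A minimal sketch using the NeuralNetwork class above; the monitoring loop is a separate illustration, not part of the original program:

package neuralnetwork;

public class TrainingMonitor {
    public static void main(String[] args) {
        NeuralNetwork nn = new NeuralNetwork(2, 2, 1, 1);
        double[][] input = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[][] target = {{0}, {0}, {0}, {1}};
        for (int epoch = 0; epoch < 100000; epoch++) {
            for (int i = 0; i < input.length; i++) {
                nn.backPropagation(input[i], target[i], 0.1);
            }
            if (epoch % 10000 == 0) {
                // mean squared error over all four training patterns
                double mse = 0;
                for (int i = 0; i < input.length; i++) {
                    double diff = target[i][0] - nn.forwardPropagation(input[i])[0];
                    mse += diff * diff;
                }
                System.out.println("epoch " + epoch + "  MSE = " + mse / input.length);
            }
        }
    }
}

With enough epochs the MSE should fall steadily, and the four test outputs should end up close to 0, 0, 0 and 1.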
  2. @Mark-XP Hi, here is the neural network in Java from scratch. What a crazy hard job. It works using only the Java standard library.

This program is written in Java. It is a neural network program, which means it is designed to learn and recognize patterns in data. Neural networks are used in many applications such as image recognition, natural language processing, and recommendation systems.

The program starts by defining the NeuralNetwork class, which contains the following variables:

numInputNodes: the number of input nodes in the neural network.
numHiddenNodes: the number of hidden nodes in the neural network.
numOutputNodes: the number of output nodes in the neural network.
numHiddenLayers: the number of hidden layers in the neural network.
inputHiddenWeights: a matrix of weights between the input layer and the first hidden layer.
hiddenOutputWeights: a matrix of weights between the last hidden layer and the output layer.
hiddenBias: an array of biases for the hidden nodes.
outputBias: an array of biases for the output nodes.

The constructor of the NeuralNetwork class takes four arguments: numInputNodes, numHiddenNodes, numOutputNodes, and numHiddenLayers. These arguments define the structure of the network. The constructor initializes inputHiddenWeights, hiddenOutputWeights, hiddenBias, and outputBias with random values between 0 and 1.

The program then defines a sigmoid function, a mathematical function used in neural networks to map any input value into a value between 0 and 1. It is used to calculate the activation level of each node.

The forwardPropagation method takes an input array (the data the network is trying to recognize) and returns an output array. It calculates the activation level of each node layer by layer: it multiplies the input values by the weights between the input layer and the first hidden layer, adds the bias for each hidden node, and applies the sigmoid function. Subsequent hidden layers are handled the same way, using the activations of the previous layer. Finally, the output activations are computed from the last hidden layer's activations, the hidden-to-output weights, and the output biases, again through the sigmoid function.

The main method creates an instance of the NeuralNetwork class with 2 input nodes, 4 hidden nodes, 1 output node, and 1 hidden layer, then tests forwardPropagation with the input array [0.5, 0.7]. The result is printed to the console using Arrays.toString.
The output represents the activation level of the output node(s) for the given input. I hope this explanation helps! Let me know if you have any more questions, Dietmar

package neuralnetwork;

import java.util.Arrays;

public class NeuralNetwork {

    private int numInputNodes;
    private int numHiddenNodes;
    private int numOutputNodes;
    private int numHiddenLayers;
    private double[][] inputHiddenWeights;
    private double[][] hiddenOutputWeights;
    private double[] hiddenBias;
    private double[] outputBias;

    public NeuralNetwork(int numInputNodes, int numHiddenNodes, int numOutputNodes, int numHiddenLayers) {
        this.numInputNodes = numInputNodes;
        this.numHiddenNodes = numHiddenNodes;
        this.numOutputNodes = numOutputNodes;
        this.numHiddenLayers = numHiddenLayers;

        // initialize weights and biases randomly
        inputHiddenWeights = new double[numInputNodes][numHiddenNodes];
        hiddenOutputWeights = new double[numHiddenNodes][numOutputNodes];
        hiddenBias = new double[numHiddenNodes];
        outputBias = new double[numOutputNodes];
        for (double[] row : inputHiddenWeights) {
            Arrays.fill(row, Math.random());
        }
        for (double[] row : hiddenOutputWeights) {
            Arrays.fill(row, Math.random());
        }
        for (int i = 0; i < numHiddenNodes; i++) {
            hiddenBias[i] = Math.random();
        }
        for (int i = 0; i < numOutputNodes; i++) {
            outputBias[i] = Math.random();
        }
    }

    public double sigmoid(double x) {
        return 1 / (1 + Math.exp(-x));
    }

    public double[] forwardPropagation(double[] input) {
        // calculate activations for first hidden layer
        double[] hiddenActivations = new double[numHiddenNodes];
        for (int j = 0; j < numHiddenNodes; j++) {
            double sum = 0;
            for (int i = 0; i < numInputNodes; i++) {
                sum += input[i] * inputHiddenWeights[i][j];
            }
            hiddenActivations[j] = sigmoid(sum + hiddenBias[j]);
        }
        // calculate activations for subsequent hidden layers
        for (int layer = 1; layer < numHiddenLayers; layer++) {
            double[] nextHiddenActivations = new double[numHiddenNodes];
            for (int j = 0; j < numHiddenNodes; j++) {
                double sum = 0;
                for (int i = 0; i < numHiddenNodes; i++) {
                    sum += hiddenActivations[i] * inputHiddenWeights[i][j];
                }
                nextHiddenActivations[j] = sigmoid(sum + hiddenBias[j]);
            }
            hiddenActivations = nextHiddenActivations;
        }
        // calculate output layer activations
        double[] outputActivations = new double[numOutputNodes];
        for (int j = 0; j < numOutputNodes; j++) {
            double sum = 0;
            for (int i = 0; i < numHiddenNodes; i++) {
                sum += hiddenActivations[i] * hiddenOutputWeights[i][j];
            }
            outputActivations[j] = sigmoid(sum + outputBias[j]);
        }
        return outputActivations;
    }

    public static void main(String[] args) {
        // create neural network with 2 input nodes, 4 hidden nodes, 1 output node, and 1 hidden layer
        NeuralNetwork nn = new NeuralNetwork(2, 4, 1, 1);
        // test forward propagation with input [0.5, 0.7]
        double[] input = {0.5, 0.7};
        double[] output = nn.forwardPropagation(input);
        System.out.println(Arrays.toString(output)); // print output
    }
}
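Math.random() cannot be seeded, so every run starts from different weights. For reproducible experiments one could swap it for a seeded java.util.Random; a minimal sketch, with class and field names that are only illustrative:

import java.util.Random;

public class SeededInit {
    // fixed seed -> identical starting weights on every run
    private static final Random rng = new Random(42);

    static double randomWeight() {
        return rng.nextDouble(); // uniform in [0, 1), like Math.random()
    }

    public static void main(String[] args) {
        System.out.println(randomWeight()); // same output on every run
    }
}

In the constructor above, each Math.random() call would then be replaced by randomWeight().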
  3. @Mark-XP I made a try with Neuroph 2.96. It hangs at exactly the same place, so it is a long-standing bug in Neuroph, Dietmar PS: I am trying to use only the standard Java library. It is a hard job to implement everything by hand for a working neural network, but I am trying.
  4. @Mark-XP During the search for prime numbers, the Hund/Katze program crashes again, meaning it hangs when recalculating all the weights after an "n". So there is a bug in Neuroph 2.98, and it does not depend on whether the system is 32-bit or 64-bit, Windows 10 or XP SP3, or on the Java version, Dietmar
  5. @Mark-XP Today I made a new try with the "tiere" program. I installed NetBeans 17 64-bit on Windows 10 64-bit with the latest Java. And voila, no crash at all now with the Neuroph neural network library from Belgrade. So I was right that it is a resources problem. The program uses more than 1 GB of RAM in a session when training with 100 different names for dogs and cats. I run it on the ASRock Z690 Extreme board with a 12900K CPU and 32 GB RAM. Now this nice program can also be used to look for primes, and now it is fast. It is a fantastic program; I think nearly everything that has to do with AI can be shown with it, Dietmar
  6. I just installed, on Windows 10 64-bit, the Alpaca "ChatGPT" from Stanford University, named https://github.com/BenHerbst/Dalaix It works. After the install you have your own ChatGPT; no Internet is needed anymore. But so far it is stupid and I have no idea how I can train it. Example:

Which year is now? 2019
No, 2023. Which year is today? 2021
No. 2023. Which year is today? 2021.
No. 2023. Which year is today? 20
No. 2023. Which year is today? This year

Dietmar
  7. @Mark-XP Every time you hit "n", ALL the weights have to be recalculated. There is a limit in Neuroph. For me the same thing happens after about 40 numbers of training, always after an "n" answer. ChatGPT said that this is a limit in Neuroph, but I think it is a bug. Of course this program needs a lot of resources, but it should keep working longer. Because of this I am trying to switch to DL4J, but under Java with Ant I have had no success so far. In 4 days I have holidays, and then I will try to port the "Hund" "Katze" "nix" program, Dietmar
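If the hang is Neuroph's learn() iterating without ever reaching its error target, capping the learning rule might at least make it return instead of spinning forever. A minimal sketch under that assumption, using the setMaxError/setMaxIterations methods of Neuroph's backpropagation learning rule:

import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.nnet.learning.BackPropagation;
import org.neuroph.util.TransferFunctionType;

public class LearningLimits {
    public static void main(String[] args) {
        MultiLayerPerceptron net =
                new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 26, 20, 20, 3);
        BackPropagation rule = net.getLearningRule();
        rule.setMaxError(0.01);        // stop as soon as the total error is small enough
        rule.setMaxIterations(10000);  // hard cap so learn() cannot loop forever
    }
}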
  8. @Mark-XP Does the program "tiere" work in the Eclipse IDE? It is an amazing program. For example, after teaching it names for "Hund", "Katze" or "nix", it can decide for inputs like HUUUNDDD, hundhundhund, hundkatze, katzehund, which were never trained before. And you can look for primes: just map letter "a" to "0", "b" to "1", "c" to "2", "d" to "3" and so on, which writes the prime 37 as "dh" (see the sketch below). This overcomes the problem with BIG numbers and normalization caused by the sigmoid f(x) = 1/(1+e^(-x)), Dietmar
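A minimal sketch of that digit-to-letter mapping; class and method names are only illustrative:

public class PrimeEncoder {

    // Map each decimal digit to a letter: 0 -> 'a', 1 -> 'b', ..., 9 -> 'j'.
    static String encode(long n) {
        StringBuilder sb = new StringBuilder();
        for (char digit : Long.toString(n).toCharArray()) {
            sb.append((char) ('a' + (digit - '0')));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode(37)); // prints "dh": d = 3, h = 7
    }
}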
  9. @Mark-XP You did not include all the Neuroph libraries that I mentioned. The message about an error in the logger I only get when I try to use the neural network from DeepLearning4J. Maven would be better there than Ant, because Maven finds all needed dependencies automatically. It may be that the message about the missing logger only happens in Eclipse; in NetBeans 16 I never saw it with Neuroph. And yes, all the training data here are lost on restart. The only reason why we do not yet need to be so afraid of AI is that its brain consumes about 10,000,000,000 times more energy than ours; a quick calculation shows that for a human brain built from processors you would need all the power plants of the whole USA. If you could store the training data and, more importantly, the weights of each neuron, you would not need to start from zero every time the computer is turned on. For a single "n" in learning, ALL the weights have to be set anew; you can see this easily in the memory and CPU resources that this nice Hund, Katze, nix program uses, Dietmar
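Neuroph can in fact persist a trained network, which would avoid starting from zero on each restart. A minimal sketch, assuming the save() and createFromFile() methods of org.neuroph.core.NeuralNetwork; the file name is only illustrative:

import org.neuroph.core.NeuralNetwork;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.util.TransferFunctionType;

public class SaveLoadDemo {
    public static void main(String[] args) {
        NeuralNetwork<?> net =
                new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 26, 20, 20, 3);
        // ... train the network here ...
        net.save("tiere.nnet"); // serialize topology and learned weights to disk

        // on the next program start, restore instead of retraining:
        NeuralNetwork restored = NeuralNetwork.createFromFile("tiere.nnet");
        restored.setInput(new double[26]);
        restored.calculate(); // usable immediately, weights intact
    }
}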
  10. @Mark-XP Download Neuroph 2.98 from here https://ufile.io/zpm08yru and extract it. Now comes the craziest part; only this way works for me. In NetBeans, open "New Project" at the top left. On the next page choose "Java with Ant" and "Java Application", nothing more, and click "Next". Type the project name in small letters: "tiere". The green check mark is set for "Create Main Class" "tiere.Tiere"; click "Finish". In the editor on the right, delete everything and paste the whole "tiere" text below. Now, in the project tree on the left, right-click "tiere" > "Libraries", choose "Add JAR/Folder", and browse to your neuroph-2.98 folder. Add the slf4j 1.7.5 jar, then the slf4j 1.7.6 jar, the visrec-api 1.0.0 jar, and the neuroph-core-2.98 jar. After this procedure, all libraries that the "tiere" program needs are there. Good luck, Dietmar

PS: Maybe there is a way to include all the Neuroph libraries in the standard Java libraries; so far I did not succeed with it, so for me only the way above works. Here is the whole tested code for "tiere" again. This program is crazy good and shows ALL of what an artificial intelligence can do at maximum. No question, this is intelligent. I chose this program to look for prime numbers, and it works. It has a higher IQ than ChatGPT.

package tiere;

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
import org.neuroph.core.NeuralNetwork;
import org.neuroph.core.data.DataSet;
import org.neuroph.core.data.DataSetRow;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.util.TransferFunctionType;

public class Tiere {

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        NeuralNetwork<?> neuralNetwork = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 26, 20, 20, 3);
        DataSet trainingSet = new DataSet(26, 3);
        Map<String, String> bewertungen = new HashMap<>();
        Map<String, String> antworten = new HashMap<>();

        while (true) {
            System.out.println("Gib ein Wort ein:");
            String eingabe = scanner.nextLine().toLowerCase();
            if (eingabe.equals("liste")) {
                for (String wort : bewertungen.keySet()) {
                    String bewertung = bewertungen.get(wort);
                    String antwort = antworten.get(wort);
                    System.out.println(wort + ": " + bewertung + " (" + antwort + ")");
                }
                continue;
            }
            double[] input = createInputVector(eingabe);
            neuralNetwork.setInput(input);
            neuralNetwork.calculate();
            double[] output = neuralNetwork.getOutput().clone();
            String ergebnis = bestimmeErgebnis(output);
            System.out.println("Ich schätze, dass es sich um " + ergebnis + " handelt.");
            System.out.println("War meine Antwort richtig? (Ja/Nein)");
            String antwort = scanner.nextLine().toLowerCase();
            antworten.put(eingabe, antwort);
            if (antwort.startsWith("n")) {
                double[] gewünschteAusgabe = new double[3];
                System.out.println("Welches Tier ist es? (Hund, Katze, nix)");
                String tier = scanner.nextLine().toLowerCase();
                switch (tier) {
                    case "hund":
                        gewünschteAusgabe[0] = 1;
                        break;
                    case "katze":
                        gewünschteAusgabe[1] = 1;
                        break;
                    default:
                        gewünschteAusgabe[2] = 1;
                        break;
                }
                DataSetRow trainingElement = new DataSetRow(input, gewünschteAusgabe);
                trainingSet.add(trainingElement);
                neuralNetwork.learn(trainingSet); // retrains on the whole set after every "n"
                String bewertung = gewünschteAusgabe[0] == 1 ? "Hund" : gewünschteAusgabe[1] == 1 ? "Katze" : "nix";
                bewertungen.put(eingabe, bewertung);
                System.out.println("Ich habe etwas Neues dazugelernt.");
            } else {
                String bewertung = ergebnis;
                bewertungen.put(eingabe, bewertung);
            }
        }
    }

    // helper method for building the input vector: one slot per letter a-z,
    // set to 1 if the letter occurs in the word
    private static double[] createInputVector(String eingabe) {
        double[] input = new double[26];
        for (int i = 0; i < eingabe.length(); i++) {
            char c = eingabe.charAt(i);
            if (c >= 'a' && c <= 'z') {
                input[c - 'a'] = 1;
            }
        }
        return input;
    }

    // helper method for determining the result from the network output
    private static String bestimmeErgebnis(double[] output) {
        if (output[0] > output[1] && output[0] > output[2]) {
            return "Hund";
        } else if (output[1] > output[0] && output[1] > output[2]) {
            return "Katze";
        } else {
            return "nix";
        }
    }
}
  11. @Mark-XP Here are NetBeans 16 and Java 8u151. First install Java under XP SP3; then you only have to look for netbeans.exe, Dietmar https://ufile.io/cdsnn01j
  12. @sparty411 This BSOD is only possible if, for example, you choose RAID instead of SATA for the hard disk in the BIOS, Dietmar
  13. @sparty411 When you have a Ramsey XP with the KaiSchtrom SATA driver running on the B450, you can just take this XP and connect it to the B650-E board, Dietmar
  14. @sparty411 You need a SATA CD-ROM, not USB. And use a hard disk or SATA disk, connected to a SATA port, Dietmar
  15. @Mark-XP The ALC12x0 codecs work; for the ALC1150 I have no board to test. So far, every Realtek codec works for me in XP SP3, Dietmar
  16. @sparty411 Use a real burned CD in a SATA CD-ROM drive together with the Kai Schtrom SATA driver and the Ramsey XP SP3. For a first try, disable all USB in the BIOS, as well as other devices there that are not strictly needed, Dietmar
  17. Hi, I made a new program with Java, NetBeans and Ant. This one is much more intelligent at predicting the next number: it takes the whole input number sequence as the one and only thing the neural network is trained on. This means that the ordering within the sequence is also stored for the learning process. I get the feeling that not much more (!) can be done with AI and this kind of neural network. Whole functions could be interpreted!!! One crazy thing I noticed: because of the normalization by 1000, small numbers give worse results; I don't know whether the cause is the sigmoid function or just crazy rounding to zero by Java (see the scaling sketch after the code), Dietmar

package zahlen;

import java.util.ArrayList;
import java.util.Scanner;
import org.neuroph.core.NeuralNetwork;
import org.neuroph.core.data.DataSet;
import org.neuroph.core.data.DataSetRow;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.util.TransferFunctionType;

public class Zahlen {

    public static void main(String[] args) {
        ArrayList<Double> numbers = new ArrayList<>();
        Scanner scanner = new Scanner(System.in);
        boolean isInputActive = true;
        while (isInputActive) {
            String input = scanner.next();
            switch (input) {
                case "list":
                    if (numbers.size() == 0) {
                        System.out.println("Die Liste ist leer.");
                    } else {
                        for (double number : numbers) {
                            System.out.println(number);
                        }
                    }
                    break;
                case "q":
                    System.out.println("Eingabe beendet. Geben Sie 'quit' ein, um das Programm zu beenden, oder 'list', um die Liste der Zahlen anzuzeigen.");
                    break;
                case "quit":
                    System.out.println("Das Programm wird beendet.");
                    isInputActive = false;
                    break;
                default:
                    try {
                        double number = Double.parseDouble(input);
                        numbers.add(number / 1000.0); // normalize into the sigmoid's range
                    } catch (NumberFormatException e) {
                        System.out.println("Ungültige Eingabe.");
                    }
                    break;
            }
        }
        scanner.close();

        int inputSize = numbers.size();
        int outputSize = 1;
        DataSet trainingSet = new DataSet(inputSize, outputSize);
        // each training row pairs the whole sequence with one of its successors
        for (int i = 0; i < numbers.size() - 1; i++) {
            double[] inputArray = new double[numbers.size()];
            for (int j = 0; j < numbers.size(); j++) {
                inputArray[j] = numbers.get(j);
            }
            double[] outputArray = new double[]{numbers.get(i + 1)};
            trainingSet.add(new DataSetRow(inputArray, outputArray));
        }

        // create neural network
        NeuralNetwork neuralNet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID,
                inputSize, 300, 300, 300, 300, 300, 300, 300, outputSize);

        // train the neural network
        neuralNet.learn(trainingSet);

        // use the trained neural network to predict the next number in the sequence
        double[] inputArray = new double[numbers.size()];
        for (int j = 0; j < numbers.size(); j++) {
            inputArray[j] = numbers.get(j);
        }
        neuralNet.setInput(inputArray);
        neuralNet.calculate();
        double[] predictedOutput = neuralNet.getOutput();

        // scale the predicted output back up to its original range
        double predictedNumber = predictedOutput[0] * 1000.0;

        // print the predicted output
        System.out.println("Das nächste Element könnte sein: " + predictedNumber);
    }
}
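One way to soften the small-number problem would be min-max scaling into a fixed interval such as [0.1, 0.9] instead of dividing by 1000, keeping the values away from the flat tails of the sigmoid. A minimal sketch; the class and method names are only illustrative, not part of the program above:

import java.util.Arrays;
import java.util.List;

public class MinMaxScaler {
    private final double min, max;

    MinMaxScaler(List<Double> numbers) {
        double lo = Double.POSITIVE_INFINITY, hi = Double.NEGATIVE_INFINITY;
        for (double x : numbers) {
            lo = Math.min(lo, x);
            hi = Math.max(hi, x);
        }
        min = lo;
        max = (hi == lo) ? lo + 1 : hi; // avoid division by zero for constant sequences
    }

    double scale(double x) {   // raw value -> [0.1, 0.9]
        return 0.1 + 0.8 * (x - min) / (max - min);
    }

    double unscale(double y) { // network output -> raw value
        return min + (y - 0.1) / 0.8 * (max - min);
    }

    public static void main(String[] args) {
        MinMaxScaler s = new MinMaxScaler(Arrays.asList(2.0, 3.0, 5.0, 7.0));
        System.out.println(s.scale(5.0));            // lies inside [0.1, 0.9]
        System.out.println(s.unscale(s.scale(5.0))); // gives 5.0 back
    }
}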
  18. @Cocodile I think his board has only SATA connectors. And if you are lucky, your BIOS offers the possibility to use them as IDE drives, Dietmar
  19. @legacyfan Install XP 64-bit via a real CD in a SATA CD-ROM drive; this works, Dietmar
  20. @TheFighterJetDude Under XP SP3 you can use the USB3 driver from @Mov AX, 0xDEAD, integrated into the nice XP from Ramsey, Dietmar
  21. @TheFighterJetDude The acpi.sys built from the XP SP3 sources is more stable on newer computers than the acpi.sys built from the 64-bit XP sources. On the Gigabyte Z690 UD DDR4, the BIOS update helps a lot. But anyway, working on the BIOS is a risk for the whole computer: I "succeeded" twice in leaving the computer dead after a correct BIOS update, once even with BIOS FlashBack, Dietmar
  22. The only Z690 board that never crashed during so, so many tests under XP SP3 is the ASRock Z690 Extreme board, Dietmar
  23. @TheFighterJetDude Your board has a PS/2 keyboard/mouse combo port, so you can disable all USB in the BIOS and see whether the computer starts. Then you can enable the USB ports step by step. This board also has a COM1 port, where you can connect a serial mouse, Dietmar PS: Update the BIOS to its latest version. But any BIOS update is always a risk.
  24. @TheFighterJetDude The latest, best acpi.sys for XP SP2 64-bit is this one, Dietmar https://ufile.io/qk91ivw7
  25. @TheFighterJetDude Disable in the Device Manager the HID device that is not your USB mouse. As you can see in my photo, I did the same. This HID device is for the LED control of the motherboard, and it crashes your computer, as it did with all my Z690 boards; I have also had reports from other users with Z690 or Z790 boards about the same crash with this strange HID device, Dietmar