Mark-XP

Member
  • Posts: 117
  • Joined
  • Last visited
  • Donations: 0.00 USD
  • Country: Germany

Everything posted by Mark-XP

  1. I heard about the massive rainfalls and deluges in Greece and now in Hong Kong: hope you both are passably fine anyway, @roytam1 and @VistaLover!
  2. Kindest thanks @VistaLover for your explanations & clarifications concerning NM27. Maybe then it's rather a choice for web-1.0 die-hards on weaker hardware...
  3. Well, please let me take this as an occasion to mention my personal issue with NM27: observing that it has been under heavy development in recent years (see e.g. above), I grab the newest build from time to time - only to find my favourite source for purchasing music, Bandcamp, still not working. For instance, look at this link here: clicking on the white "Play" square (with the grey triangle) - besides generating a bunch of warnings and errors in the console - doesn't start playing the piece. NM28 and the Serpents, however, have been doing well for years. Is NM27's JS engine simply too outdated?
  4. Many thanks for the explanations, @VistaLover! Quintessence: I'll look into this thread more often, to get a well-maintained yt-dlp version for my classic XP and 7 environments. Best regards!
  5. After having some trouble the last few days downloading from YouTube with yt-dlp_x86 from 03-02, I decided to try a newer version, but the binaries from here didn't work: the x86 version complained about a missing API (update via the -U option), the other yt-dlp.exe about missing Python stuff. The version linked above, however, works very well and resolved the previous issues. Many thanks @nicolaasjan!
  6. Hello @basilisk-dev, just a big Thank You (at this place) for still providing a version of Basilisk for Linux too! Grabbed the tarball recently from here, unpacked it on my Debian partition, simply dragged the profile folder over from Win-XP, and everything worked well and familiarly! My main and reliable browser: on XP, on Win7 and on Linux - very nice!
  7. Very nice summary @silverni! And btw: many thanks @nicolaasjan for providing yt-dlp.exe!
  8. Hello @Dietmar, I did run it, but with "only" 100,000 epochs the result for the prediction of 3 turns to 0!? (For 251 it is 0.36, which is respectable):

     Prediction for [0, 0, 0, 0, 0, 0, 0, 0]: 0.0
     Prediction for [0, 0, 0, 0, 0, 0, 0, 1]: 0.9999998889558965
     Prediction for [0, 0, 0, 0, 0, 0, 1, 0]: 0.0
     Prediction for [0, 0, 0, 0, 0, 0, 1, 1]: 0.0
     Prediction for [0, 0, 0, 0, 0, 1, 0, 0]: 0.0
     Prediction for [0, 0, 0, 0, 0, 1, 0, 1]: 1.0000001086053538
     Prediction for [0, 0, 0, 0, 0, 1, 1, 0]: 0.0
     Prediction for [0, 0, 0, 0, 0, 1, 1, 1]: 1.0000000489125835
     Prediction for [0, 0, 0, 0, 1, 0, 0, 0]: 0.0
     Prediction for [0, 0, 0, 0, 1, 0, 0, 1]: 0.0
     Prediction for [0, 0, 0, 0, 1, 0, 1, 0]: 0.0
     Prediction for [0, 0, 0, 0, 1, 0, 1, 1]: 1.0000000111626701
     ...
     Prediction for [1, 1, 1, 0, 1, 1, 1, 1]: 0.9999999167529428
     Prediction for [1, 1, 1, 1, 0, 0, 0, 0]: 0.0
     Prediction for [1, 1, 1, 1, 0, 0, 0, 1]: 0.9999990634050797
     Prediction for [1, 1, 1, 1, 0, 0, 1, 0]: 0.0
     Prediction for [1, 1, 1, 1, 0, 0, 1, 1]: 0.0
     Prediction for [1, 1, 1, 1, 0, 1, 0, 0]: 0.0
     Prediction for [1, 1, 1, 1, 0, 1, 0, 1]: 3.268234877173981E-8
     Prediction for [1, 1, 1, 1, 0, 1, 1, 0]: 0.0
     Prediction for [1, 1, 1, 1, 0, 1, 1, 1]: 0.0
     Prediction for [1, 1, 1, 1, 1, 0, 0, 0]: 0.0
     Prediction for [1, 1, 1, 1, 1, 0, 0, 1]: 0.0
     Prediction for [1, 1, 1, 1, 1, 0, 1, 0]: 0.0
     Prediction for [1, 1, 1, 1, 1, 0, 1, 1]: 0.363153607310525
     Prediction for [1, 1, 1, 1, 1, 1, 0, 0]: 0.0
     Prediction for [1, 1, 1, 1, 1, 1, 0, 1]: 2.0778473491800398E-7
     Prediction for [1, 1, 1, 1, 1, 1, 1, 0]: 0.0
     Prediction for [1, 1, 1, 1, 1, 1, 1, 1]: 0.0

     Btw. nice to see you survived the tremendous thunderstorms yesterday... Edit: Currently I'm struggling with a new Linux installation (Debian-based Q4OS). I like it because it uses TDE (Trinity Desktop Environment) and you can make it look very much like Windows XP. This time, for the first time, 64-bit (parallel to Win-XP and Win-7), and the issue I have is hair-raising... In case you are interested, feel free to read more here...
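     A listing like the one above could be produced by enumerating all 256 8-bit inputs against the trained network - a minimal sketch, assuming a Neuroph net with 8 inputs and one "is prime" output (the class name, layer sizes and training are hypothetical placeholders, not Dietmar's actual code):

         import java.util.Arrays;
         import org.neuroph.nnet.MultiLayerPerceptron;

         public class PrintAllPredictions {
             public static void main(String[] args) {
                 // hypothetical: 8 input bits, one output; training omitted here
                 MultiLayerPerceptron net = new MultiLayerPerceptron(8, 16, 1);
                 // ... train net as in the code discussed above ...
                 for (int n = 0; n < 256; n++) {
                     int[] bits = new int[8];
                     double[] input = new double[8];
                     for (int b = 0; b < 8; b++) {
                         bits[b] = (n >> (7 - b)) & 1;  // most significant bit first
                         input[b] = bits[b];
                     }
                     net.setInput(input);
                     net.calculate();
                     System.out.printf("Prediction for %s: %s%n",
                             Arrays.toString(bits), net.getOutput()[0]);
                 }
             }
         }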
  9. "define is not defined" isn't bad as well: a very interesting proposition from a linguistic-logical perspective .
  10. Servus @Dietmar, I hope you're doing fine!! I'm trying to implement image recognition with Neuroph, so I got neurophstudio-2.98.zip from here and installed (extracted) it. It starts and runs fine, but at the point where I want to train the first example NN, it behaves differently than described in the documentation: no file type "Training Set" is offered, and the Train icon is greyed out too (see pic below). Can you perhaps verify that, and do you have any suggestion? Many thanks and a nice Sunday!
  11. As a small pre-investigation I installed the latest PM 32.1.1 (32-bit) on my Win7 partition for a comparison: - On commerzbank.de, PM 32.1.1 also struggles heavily for a while but then calms down (nice to see all my addons from the NewMoon 28.10 profile running nicely, btw). - In contrast, on https://www.ebay-kleinanzeigen.de/m-nachrichten.html PM 32.1.1 runs hot too, even after a minute: always transferring amounts of data from the gateway... (and storing it to .\Profile\cache2\entries). Btw. @roytam1, would it be much of an effort to change Serpent's "Company Name" ("Moonchild Productions"), as shown in Process Explorer above, to something else? "Roytam's Industries" would be my proposal
  12. Thank you @VistaLover for your verification of the issues - and sorry for the inconveniences. Commerzbank is the 2nd largest bank in Germany (partly state-owned after the financial crisis - you surely remember?). Fortunately one can immediately go to the login screen... The other case concerns ebay "small advertisements" (ebay-kleinanzeigen.de - which does not belong to eBay directly any more, but to the Norwegian Adevinta group - which is owned 33% by eBay). And also in this case it's only one particular sub-site that hurts: the one that organizes the communication (messages) with other users... My personal workaround for the moment: reduce usage of that site to an absolute minimum. If that kind of practice spreads in the future, we will have to investigate that... rubbish (and 'our' JS engines trying to deal with it) deeply!
  13. I'm getting high / highest CPU utilization in both Spt.52 and NM28.10 (latest versions of 04/07 and 04/15) on various sites when JS is activated, e.g. https://www.ebay.de or https://www.commerzbank.de/ I didn't observe that behavior with the late-March versions (and Palefill 1.26). Edit: Sorry, wrong observation: the same heavy CPU utilization persists when reverting to older Spt./NM versions. Some JS modification must have taken place on several sites coincidentally in the last few days; www.commerzbank.de/ is one of them.
  14. @legacyfan: did you try your luck with the USB3 drivers of the latest Patch Integrator (from here)?
  15. @Dietmar that's really interesting. BUT, if you dig the training hole around the primes 179 and 181 only a bit bigger (see the fragment below for where this sits):

          // Skip excluded inputs
          if (i > 172 && i < 188) {
              continue;
          }

      the result gets worse immediately. Please forgive me for being so mean
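     For context, the quoted skip would sit in the training loop roughly like this - a fragment only, assuming the trainingInputs/trainingOutputs arrays and the trainingSet from the code discussed above, and assuming row i encodes the number i:

         for (int i = 0; i < trainingInputs.length; i++) {
             // widen the excluded window around the primes 179 and 181
             if (i > 172 && i < 188) {  // skips rows 173..187
                 continue;              // these rows never enter the training set
             }
             trainingSet.add(new DataSetRow(trainingInputs[i], trainingOutputs[i]));
         }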
  16. @Dietmar as far as I can see, no magic: you're not excluding anything here:

          for (int i = 0; i < trainingInputs.length; i++) {
              if (trainingInputs[i][0] != 211 && trainingInputs[i][0] != 212 && ...

      since trainingInputs[i][0] is always 0 or 1, and hence the above condition never catches. It would indeed be useful and helpful to add the decimal value of the number in the first element trainingInputs[j][0] (see the sketch below).
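     Instead of storing the decimal value, one could also recover it on the fly - a minimal sketch, where toDecimal is a hypothetical helper name, assuming 8-bit rows with the most significant bit first:

         static int toDecimal(double[] bits) {
             int value = 0;
             for (double bit : bits) {
                 value = (value << 1) | (int) bit;  // shift in one bit per digit
             }
             return value;
         }

         // then the exclusion can actually fire:
         // if (toDecimal(trainingInputs[i]) == 211 || toDecimal(trainingInputs[i]) == 212) continue;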
  17. Ok @Dietmar, it obviously has learned its lessons (nn.train) well. But if you train it only up to 200:

          /* for (int i = 0; i < trainingInputs.length; i++) { */
          for (int i = 0; i < 201; i++) {

      the results for higher numbers (201..255) do not convince me.
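     Spelled out, the experiment is: train on rows 0..200 only, then probe the net on 201..255 - a fragment under the same assumptions as in the sketches above:

         for (int i = 0; i < 201; i++) {
             trainingSet.add(new DataSetRow(trainingInputs[i], trainingOutputs[i]));
         }
         neuralNet.learn(trainingSet);
         for (int i = 201; i < 256; i++) {  // numbers the net has never seen
             neuralNet.setInput(trainingInputs[i]);
             neuralNet.calculate();
             System.out.printf("Prediction for %d: %.4f%n", i, neuralNet.getOutput()[0]);
         }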
  18. Hello @Dietmar, I'm back again. You surely did observe what happened with its prediction for 11010 (26) above: that's really stupid, and later predictions get worse again and again. Not my taste of learning Maybe you should give it a try in the ternary number system!?
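     If you want to try it: a base-3 input encoding could be produced like this - a sketch, where toTernary is a hypothetical helper; 6 digits cover 0..255, since 3^6 = 729:

         static double[] toTernary(int n, int digits) {
             double[] t = new double[digits];
             for (int d = digits - 1; d >= 0; d--) {
                 t[d] = n % 3;  // digit values 0, 1, 2 (could be scaled to [0, 1] for the net)
                 n /= 3;
             }
             return t;
         }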
  19. Hello @Dietmar and a happy Easter! Sorry, but I don't see any learning curve here in regards to prime recognition (see attachment). Just as you stated above Primes1_Out.txt
  20. Hello @Dietmar, I think you mixed up odd and even in some System.out.printf statements in the while block. I tried to make it a bit easier, with only one double as input:

     package multi;

     import org.neuroph.core.data.DataSet;
     import org.neuroph.core.data.DataSetRow;
     import org.neuroph.nnet.MultiLayerPerceptron;
     import org.neuroph.nnet.learning.BackPropagation;

     import java.util.Scanner;

     public class Multi1 {
         public static void main(String[] args) {
             // create MultiLayerPerceptron neural network
             MultiLayerPerceptron neuralNet = new MultiLayerPerceptron(1, 4, 4, 2);
             neuralNet.setLearningRule(new BackPropagation());

             // create training set
             DataSet trainingSet = new DataSet(1, 2);
             trainingSet.add(new DataSetRow(new double[]{0}, new double[]{1, 0}));   // even
             trainingSet.add(new DataSetRow(new double[]{1}, new double[]{0, 1}));   // odd
             trainingSet.add(new DataSetRow(new double[]{2}, new double[]{1, 0}));   // even
             trainingSet.add(new DataSetRow(new double[]{4}, new double[]{1, 0}));   // even
             trainingSet.add(new DataSetRow(new double[]{7}, new double[]{0, 1}));   // odd
             trainingSet.add(new DataSetRow(new double[]{5}, new double[]{0, 1}));   // odd
             trainingSet.add(new DataSetRow(new double[]{11}, new double[]{0, 1}));  // odd
             trainingSet.add(new DataSetRow(new double[]{12}, new double[]{1, 0}));  // even

             Scanner scanner = new Scanner(System.in);
             while (true) {
                 System.out.println("Enter a number or 'exit' to quit:");
                 String inputStr = scanner.nextLine();
                 if (inputStr.equals("exit")) {
                     break;
                 }
                 double input = Double.parseDouble(inputStr);
                 neuralNet.setInput(input);
                 neuralNet.calculate();
                 double[] output = neuralNet.getOutput();
                 System.out.printf("Network prediction: %.2f even, %.2f odd %n", output[0], output[1]);
                 if (output[0] > output[1]) {
                     System.out.print("I think this is even. Is that correct? (y/n) ");
                     String answer = scanner.nextLine();
                     if (answer.equals("y")) {
                         trainingSet.add(new DataSetRow(new double[]{input}, new double[]{1, 0}));
                     } else {
                         // correcting as odd
                         trainingSet.add(new DataSetRow(new double[]{input}, new double[]{0, 1}));
                     }
                 } else {
                     System.out.print("I think this is odd. Is that correct? (y/n) ");
                     String answer = scanner.nextLine();
                     if (answer.equals("y")) {
                         trainingSet.add(new DataSetRow(new double[]{input}, new double[]{0, 1}));
                     } else {
                         // correcting as even
                         trainingSet.add(new DataSetRow(new double[]{input}, new double[]{1, 0}));
                     }
                 }
                 // train neural network with updated training set
                 neuralNet.learn(trainingSet);
             }
             // save trained neural network
             neuralNet.save("oddoreven.nnet");
         }
     }

     But then I have endless loops in neuralNet.learn and Java running hot again...
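     About that endless loop: Neuroph's BackPropagation keeps iterating until the total network error drops below its threshold, and parity of a single raw double is hard for a small MLP to fit, so it may never get there. A minimal guard would be to cap the iterations on the learning rule (a sketch; the 10000 and 0.01 are arbitrary choices, and the setters are inherited from Neuroph's IterativeLearning/SupervisedLearning):

         BackPropagation rule = new BackPropagation();
         rule.setMaxIterations(10000);  // stop after 10k epochs at the latest
         rule.setMaxError(0.01);        // or earlier, if the error gets this low
         neuralNet.setLearningRule(rule);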
  21. Hi @Dietmar, the AI learns from experience, just refining the weights and biases of its neurons. This is what got me thinking: "For (0 and 0) it is 10 times more secure about the result than for (1 and 0)... it hasn't understood the essence of the matter (logical and) at all" (here). The bias values outside [-1, 1] are not astonishing to me a priori: your backpropagation has to be examined carefully:

          bias1[i] += learningRate * hidden1Errors[i] * tanhDerivative(hidden1[i]);

      Btw.: on my current system, an Ivy Bridge with 4 GB RAM (built years ago to host Win-XP 32), it's not possible to run the last example with 1000 nodes per hidden layer! Had to reduce it to 100 nodes...
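     One thing worth double-checking in that update (a sketch, not Dietmar's actual code): the usual convention expresses tanh's derivative in terms of the already-activated output, so tanhDerivative should expect hidden1[i] to hold tanh(sum), not the raw sum:

         // tanh'(x) = 1 - tanh(x)^2; if y already holds tanh(x), this is simply 1 - y*y
         static double tanhDerivative(double y) {
             return 1.0 - y * y;
         }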
  22. @Dietmar Not exactly tanh, it's scaled. Let sSig be the above function, sSig(x) = 2 * sig(x) - 1: then sSig'(0) = 0.5 but tanh'(0) = 1. (Since sig'(x) = sig(x) * (1 - sig(x)) gives sig'(0) = 1/4, we get sSig'(0) = 2 * 1/4 = 1/2, while tanh'(x) = 1 - tanh(x)^2 gives tanh'(0) = 1.) Anyway, with sigmoid/tanh you're on the right track.
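     The two slopes at 0 are easy to verify numerically - a tiny standalone sketch using central differences (class name and step size are arbitrary):

         public class SlopeCheck {
             static double sig(double x)  { return 1.0 / (1.0 + Math.exp(-x)); }
             static double sSig(double x) { return 2.0 * sig(x) - 1.0; }

             public static void main(String[] args) {
                 double h = 1e-6;  // central difference step
                 System.out.println((sSig(h) - sSig(-h)) / (2 * h));            // ~0.5
                 System.out.println((Math.tanh(h) - Math.tanh(-h)) / (2 * h));  // ~1.0
             }
         }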
  23. @Dietmar then, what about a feedForward like

          ... = 2.0 * sigmoid(sum) - 1.0;

      The output will then be centered around 0, but negative half the time
  24. @Dietmar Yes: now the output (over several runs) seems more equally distributed (between -1 and 1). Tanh is flatter than sigmoid, hence outputs seem to be more 'scattered'. Hence, for a stable NN I would prefer a sigmoid TransferFunction (rather than tanh). (At my currently low knowledge state about NNs.)
  25. @Dietmar This seems rather obvious, since tanh is negative for a negative input (tanh "=" 2 x sigmoid - 1; exactly, tanh(x) = 2 * sigmoid(2x) - 1). Edit: since I don't know what you want to achieve, I can't consider it correct or wrong
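     The identity with the inner factor of 2 can be checked numerically - a tiny standalone sketch (class name and test point are arbitrary):

         public class TanhIdentity {
             public static void main(String[] args) {
                 double x = 0.7;  // arbitrary test point
                 double viaSigmoid = 2.0 / (1.0 + Math.exp(-2.0 * x)) - 1.0;  // 2*sigmoid(2x) - 1
                 System.out.println(Math.tanh(x) - viaSigmoid);  // ~0, up to rounding
             }
         }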