Agile Artificial Intelligence

While reading through Agile Artificial Intelligence, I sent these updates:

  • Comparing array setup: methods like PerceptronTest>>testTrainingOR have code like:

     p train: { 0 . 0 } desiredOutput: 0.

    While code above it has:

     p weights: #(-1 -1).

    Checking the performance on a slow laptop:

    [ (#(1 2 3 4) with: #(2 3 4 5) collect: [ :x :y | x * y ]) sum ] benchFor: 1 second.
    "a BenchmarkResult(608,475 iterations in 1 second 2 milliseconds. 607,260 per second)"
    [ ({ 1. 2. 3. 4 } with: { 2. 3. 4. 5 } collect: [ :x :y | x * y ]) sum ] benchFor: 1 second.
    "a BenchmarkResult(584,067 iterations in 1 second 3 milliseconds. 582,320 per second)"

    The #() form is slightly faster, but by less than 5%, so it's mostly a matter of taste (I prefer the parentheses).
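
    For context, the two forms are not interchangeable in Pharo: #() builds a literal array at compile time and may only contain literals, while { } builds a dynamic array at run time and may contain arbitrary expressions. A quick playground sketch:

     #(-1 -1).          "literal array, created at compile time"
     { -1. -1 }.        "dynamic array, built at run time; same as Array with: -1 with: -1"
     { 3 + 4. 2 * 5 }.  "each element is evaluated; not expressible as a literal"

    So for constant test data either syntax works, and { } is only required when the elements are computed.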

  • Added this method for the Classification section:

    NNetwork>>numberOfNeurons
    	^ layers sum: [ :layer | layer neurons size ]
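
    A quick sanity check in a playground (this assumes the book's NNetwork>>configure:hidden:nbOfOutputs: construction message; adjust to the actual API if it differs):

     n := NNetwork new.
     n configure: 2 hidden: 3 nbOfOutputs: 1.
     n numberOfNeurons.
     "expected: 4 (3 hidden + 1 output), since the inputs are plain values, not neuron objects"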

  • Several tests, such as NeuronLayerTest>>testOutputLayer, depend on exact floating-point comparisons, which failed across different versions of Pharo. I found this approach more robust:

    	self assert: result size equals: 4.
    	result with: #(0.03089402289518759  0.9220488835263312 0.5200462953493654 0.20276557516858304) do: [ :r :test | self assert: (r closeTo: test precision: 0.0000000001 ) ]
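
    Since this pattern recurs across several tests, it could be factored into a small helper (assertCollection:closeTo: is a hypothetical name I made up, not part of the book's code):

     TestCase>>assertCollection: actual closeTo: expected
     	"Assert element-wise approximate equality between two collections."
     	self assert: actual size equals: expected size.
     	actual with: expected do: [ :r :e |
     		self assert: (r closeTo: e precision: 1e-10) ]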

  • The message NormalizationTest>>testError02 makes the naming wizard cry; how about NormalizationTest>>testErrorOnEmpty?

  • The code depends on Roassal2, which loads fairly smoothly on Pharo 6.1; loading it on Pharo 7 is more involved.

Posted by John Borden at 1 June 2018, 2:25 am