Reading through Agile Artificial Intelligence. Here are the updates I sent:
PerceptronTest>>testTrainingOR
has code like:
    p train: { 0 . 0 } desiredOutput: 0.
while the code above it has:
    p weights: #(-1 -1).
The two array styles are mixed in the same test.
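For consistency, the test could stick to one array style throughout; a minimal sketch (assuming Perceptron>>train:desiredOutput: accepts any sequenceable collection, which both #() and {} produce):

    "literal array style for both the weights and the training sample"
    p weights: #(-1 -1).
    p train: #(0 0) desiredOutput: 0.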
Checking the performance on a slow laptop:

    [ (#(1 2 3 4) with: #(2 3 4 5) collect: [ :x :y | x * y ]) sum ] benchFor: 1 second.
    "a BenchmarkResult(608,475 iterations in 1 second 2 milliseconds. 607,260 per second)"

    [ ({ 1. 2. 3. 4 } with: { 2. 3. 4. 5 } collect: [ :x :y | x * y ]) sum ] benchFor: 1 second.
    "a BenchmarkResult(584,067 iterations in 1 second 3 milliseconds. 582,320 per second)"
The #() form is slightly faster, but by less than 5%, so it's mostly a matter of taste (I prefer the parentheses).
NNetwork>>numberOfNeurons
    ^ layers sum: [ :layer | layer neurons size ]
NeuronLayerTest>>testOutputLayer
depends on exact floating-point values, and the test failed across different versions of Pharo. I found this more useful:

    self assert: result size equals: 4.
    result
        with: #(0.03089402289518759 0.9220488835263312 0.5200462953493654 0.20276557516858304)
        do: [ :r :test | self assert: (r closeTo: test precision: 0.0000000001) ]
NormalizationTest>>testError02
makes the naming wizard cry; how about NormalizationTest>>testErrorOnEmpty instead?