Here are some notes that I don't want to forget:
Types of Neurons (single points of contact in a 'brain')
Linear Neuron - Weights each input, sums them, and adds a bias; that sum is its output. (aka Linear Filter)
- y = b + sum(xi * wi)
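The formula above can be sketched as follows (a minimal sketch; the function name and example values are mine, not from the notes):

```python
def linear_neuron(xs, ws, b):
    # Linear neuron / linear filter: output is the bias plus the
    # weighted sum of the inputs, y = b + sum(xi * wi).
    return b + sum(x * w for x, w in zip(xs, ws))

# Two inputs whose weighted contributions cancel, leaving only the bias.
y = linear_neuron([1.0, 2.0], [0.5, -0.25], 0.1)
```

Note that the output is the raw sum itself; there is no squashing or thresholding, which is what distinguishes this from the neuron types below.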
Binary Threshold Neuron - Fires (outputs 1) only when the total input is positive
- if b + sum(xi * wi) > 0 output 1, else output 0
Rectified Linear Neuron - In simpler terms, it decides like a Binary Threshold but outputs like a Linear Neuron
- y = b + sum(xi * wi) if that sum is > 0, else 0
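The two thresholded behaviors described above can be sketched the same way (function names are mine):

```python
def binary_threshold(xs, ws, b):
    # Output 1 iff the total input z = b + sum(xi * wi) is positive, else 0.
    z = b + sum(x * w for x, w in zip(xs, ws))
    return 1 if z > 0 else 0

def rectified_linear(xs, ws, b):
    # "Decides" like a binary threshold, but above zero the output grows
    # linearly with the total input: y = max(0, z).
    z = b + sum(x * w for x, w in zip(xs, ws))
    return max(0.0, z)
```

The only difference is what happens above the threshold: the binary threshold neuron saturates at 1, while the rectified linear neuron passes the total input through unchanged.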
Sigmoid (Logistic) Neuron - Most commonly used
- Output = 1 / (1 + e^-(b + sum(xi * wi)))
- Leads to smooth derivatives
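The logistic output and the smooth derivative the last bullet mentions, as a sketch:

```python
import math

def logistic(z):
    # Output = 1 / (1 + e^-z), where z = b + sum(xi * wi).
    # Squashes any real z into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def logistic_derivative(z):
    # The derivative has the tidy closed form dy/dz = y * (1 - y),
    # which is what makes this convenient for gradient-based learning.
    y = logistic(z)
    return y * (1.0 - y)
```

At z = 0 the output is exactly 0.5 and the derivative peaks at 0.25; far from zero the derivative smoothly approaches 0.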
Stochastic Binary Neuron - Same logistic total input, but the output is a random spike (0 or 1)
- Follow up: Not sure what randomization is based on...
- Follow up: Why is this useful?
- Poisson Rate for Spikes (huh??)
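One standard reading of the "randomization" question (stated here as an assumption, since the notes leave it open): the logistic output is treated as the probability that the neuron emits a spike (a 1) on a given step. A sketch:

```python
import math
import random

def stochastic_binary(z, rng=random):
    # Assumption: the logistic output p is the probability of spiking,
    # so the neuron's output is a Bernoulli draw with parameter p.
    p = 1.0 / (1.0 + math.exp(-z))
    return 1 if rng.random() < p else 0
```

Under this reading, a very large total input makes the neuron spike almost always and a very negative one almost never; the "Poisson Rate" bullet may refer to the continuous-time version of the same idea, where the logistic output sets the rate of a random spike train rather than a per-step probability.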
Perceptron - A learning procedure for binary threshold neurons
- If you choose the right measures (features), then this is a great learning model
- Choosing features is the hardest part, though!
- Once features are "chosen", you have limited your learning process
- Do not use this learning procedure for "multi-layer" networks; it doesn't work there
- Will it eventually get to a correct answer?
- How quickly will this happen? (How many evolutions/learnings/weight adjustments)
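The two questions above are addressed by the perceptron convergence theorem: if the classes are linearly separable in the chosen features, the weight-adjustment loop reaches a correct answer after a finite number of updates (the bound depends on how wide the separating margin is). A minimal sketch on toy AND data, which is linearly separable (data and names are illustrative, not from the notes):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    # samples: list of (inputs, target) pairs with targets in {0, 1}.
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for xs, target in samples:
            z = b + sum(x * wi for x, wi in zip(xs, w))
            y = 1 if z > 0 else 0
            err = target - y
            # One weight adjustment ("learning") per misclassified sample:
            # add the input vector for a missed 1, subtract it for a false 1.
            if err != 0:
                w = [wi + lr * err * x for x, wi in zip(xs, w)]
                b += lr * err
    return w, b

# Logical AND: output 1 only when both inputs are 1.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
```

On this data the loop settles on correct weights within a handful of epochs; XOR, by contrast, is not linearly separable, so the same loop would cycle forever without converging.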