Last Good Quote: Sons are the seasoning on our lives. - Someone on Facebook

Friday, November 30

Random Musings on AI

I've been reading more on AI (again). It left me pondering...

Einstein had a great quote: 

"Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage to move in the opposite direction."
I agree. Most likely the solution to General Artificial Intelligence is simple and elegant. The current models seem ... complex.

Ray Kurzweil talks about how our intelligence is built from layers of simple mechanics. Each unit in a layer works identically, but it gathers its input from the layer before it. I like this idea; it sounds "simple". The difficulty is determining what the "rules" are for the smallest element in a layer.

Monica Anderson talks about a non-model-based approach, specifically one in which "intuition" plays the largest role in "getting to" intelligence. She is definitely on to something. Her idea that the world as we see it is not built on specific models (formulas) is right on target.

My primary concern with this field is that there is no clear, agreed-upon definition of when "intelligence" has been reached. Likewise, there is no test for intelligence, at least for a digital avatar. We have the Turing Test, but that comes up short on a few fronts, in my opinion.

Recall that the goal of General or Strong Artificial Intelligence is to create a machine that can successfully perform any intellectual task that a human being can.

In simple words, it has to be able to learn anything.

So we need simple rules that can enable a system to learn anything.

From this thought, my mind jumps through the following sequence:
  • Humans have many senses; AI would need to have many types of input.
  • Wait, an infant in his mom's belly has very few senses.
  • Rather, he has all his senses, but the input coming in is very small.
  • AI with multiple inputs should be able to "learn" something with very few inputs
  • As we grow, new senses are introduced one at a time. We grow fingers before ears, ears before eyes, and so on.
  • Perhaps the method by which the AI learns needs to be adaptable to many different types of inputs
  • So all inputs, regardless of source, must resolve to the same "signals"
  • I think all the input needs to be in a binary state
  • Or all inputs need to be on a sigmoid curve (a value between 0 and 1)
Can we build rules based on the last three statements? We can, and we have: these are the neuron inputs that have long been discussed, as sketched below.
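
Here's a minimal sketch of those last two bullets (the sensor names, expected values, and scales below are invented for illustration): whatever the "sense", center and scale the raw reading, then squash it through a sigmoid so every input arrives as the same kind of (0, 1) signal.

    import math

    def sigmoid(z):
        """Squash any real value into the open interval (0, 1)."""
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical raw readings from very different "senses":
    # temperature in Celsius, pixel brightness 0-255, sound level in dB.
    raw_inputs = {"temperature": 21.5, "pixel": 180, "sound_db": 65}

    # Center each reading around a rough expected value and scale it,
    # so the sigmoid output varies instead of saturating at 0 or 1.
    expected = {"temperature": 20.0, "pixel": 128.0, "sound_db": 60.0}
    scale = {"temperature": 5.0, "pixel": 64.0, "sound_db": 10.0}

    signals = {
        name: sigmoid((value - expected[name]) / scale[name])
        for name, value in raw_inputs.items()
    }
    print(signals)  # every sense now speaks the same (0, 1) language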

Now I know that I have sigmoid values coming into my input layer... what rules can I build around this?

Another interesting item: at a certain point in our learning, we do not need a lot of exposure to new input to "learn" a new concept. For example, learning how to add takes significantly more effort than learning how to subtract once we already know how to add.

Put another way, when placed in a new environment that is "similar" to something we know, we learn quickly. The more different the environment is, the slower we learn.

What this means is that once our AI has learned a specific subject area, learning in that area should be quick. For example, a maze-solving AI would solve new mazes quickly with few "learning" cycles, but that same AI placed in a Turing test would take "longer" to learn how to interact within that system.
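
One way to poke at that claim in code: train a single sigmoid neuron by gradient descent on OR, then reuse its weights as the starting point for AND, and compare the epochs to convergence against a fresh start. Everything here (the tasks, learning rate, and loss threshold) is an arbitrary choice of mine for illustration.

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    OR = [0, 1, 1, 1]
    AND = [0, 0, 0, 1]

    def train(targets, w, b, lr=0.5, threshold=0.05, max_epochs=100000):
        """Gradient-descent a single sigmoid neuron; return (epochs, w, b)."""
        for epoch in range(1, max_epochs + 1):
            loss = 0.0
            for (x1, x2), t in zip(X, targets):
                y = sigmoid(b + w[0]*x1 + w[1]*x2)
                err = y - t
                loss += err * err
                grad = err * y * (1 - y)  # delta rule via sigmoid derivative
                w[0] -= lr * grad * x1
                w[1] -= lr * grad * x2
                b -= lr * grad
            if loss < threshold:
                return epoch, w, b
        return max_epochs, w, b

    random.seed(0)
    def fresh():
        return [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]

    epochs_or, w_or, b_or = train(OR, fresh(), 0.0)
    epochs_and_scratch, _, _ = train(AND, fresh(), 0.0)
    epochs_and_transfer, _, _ = train(AND, list(w_or), b_or)

    print("OR from scratch:     ", epochs_or, "epochs")
    print("AND from scratch:    ", epochs_and_scratch, "epochs")
    print("AND starting from OR:", epochs_and_transfer, "epochs")

If the head start helps, the last number comes in lower than the second; if it doesn't, the two tasks weren't as "similar" in weight space as they feel to us.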




Thursday, November 29

Notes on AI

I was watching/exploring an online AI course over at https://class.coursera.org/neuralnets-2012-00

Here are some notes that I don't want to forget:

Types of Neurons (single points of contact in a 'brain')

Linear Neuron - Collects all input values, applies a weight to each, and outputs the weighted sum. (aka Linear Filter)
  • y = b + sum(xi * wi)
Binary Threshold Neuron - Collects all input values and outputs 1 once the weighted sum crosses a "threshold"
  • if b + sum(xi * wi) > 0, output 1; else output 0
Rectified Linear Neuron - Collects all input values and, once the weighted sum crosses a "threshold", outputs a progressive value (dependent on the input)
  • In simpler terms, it thresholds like a Binary Threshold Neuron but outputs like a Linear Neuron: y = max(0, b + sum(xi * wi))
Sigmoid Neuron - "Smooths" the output to something between 0 and 1
  • Most commonly used
  • Output = 1 / (1 + e^-(b + sum(xi * wi)))
  • Leads to smooth derivatives
Stochastic Binary Neuron - Take a Sigmoid Neuron and randomize whether it actually fires or not.
  • Follow up: Not sure what the randomization is based on...
  • Follow up: Why is this useful?
  • Poisson rate for spikes (huh??)
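
To make these concrete, here's a small Python sketch of each neuron type using the formulas above (the example inputs, weights, and bias are arbitrary). The stochastic one reflects my reading of the notes: treat the sigmoid output as the probability of firing a 1.

    import math
    import random

    def linear_neuron(xs, ws, b):
        """Linear filter: bias plus the weighted sum of the inputs."""
        return b + sum(x * w for x, w in zip(xs, ws))

    def binary_threshold_neuron(xs, ws, b):
        """Fires 1 when the weighted sum crosses zero, else 0."""
        return 1 if linear_neuron(xs, ws, b) > 0 else 0

    def rectified_linear_neuron(xs, ws, b):
        """Zero below the threshold, then grows linearly with the input."""
        return max(0.0, linear_neuron(xs, ws, b))

    def sigmoid_neuron(xs, ws, b):
        """Smoothly squashes the weighted sum into (0, 1)."""
        return 1.0 / (1.0 + math.exp(-linear_neuron(xs, ws, b)))

    def stochastic_binary_neuron(xs, ws, b):
        """Treats the sigmoid output as the probability of firing a 1."""
        return 1 if random.random() < sigmoid_neuron(xs, ws, b) else 0

    xs, ws, b = [0.5, -1.0, 2.0], [0.8, 0.2, 0.1], -0.1
    for neuron in (linear_neuron, binary_threshold_neuron,
                   rectified_linear_neuron, sigmoid_neuron,
                   stochastic_binary_neuron):
        print(neuron.__name__, "->", neuron(xs, ws, b))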
Perceptron-Based Architecture - Will always find a solution within its test cases IF a solution exists within the test space. Often one does not, because of which measures (features) were chosen.
  • If you choose the right measures (features), then this is a great learning model
  • Choosing features is the hardest part though!
  • Once features are "chosen", you have limited your learning process
  • Do not use this learning model for "multi-layer" networks; it doesn't work
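
Here's a minimal sketch of that learning rule (my own illustration, not code from the course), using the raw 0/1 inputs as the chosen features and the AND function as the target. It converges because a solution exists in that feature space; on XOR, the same loop would exhaust its 100 epochs without ever settling, which is exactly the feature-choice limitation above.

    # Classic perceptron learning rule on AND. The raw inputs happen to
    # be good enough features to separate the classes; for XOR they are
    # not, and this loop would end without ever printing "converged".
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets = [0, 0, 0, 1]

    w, b, lr = [0.0, 0.0], 0.0, 1.0
    for epoch in range(100):
        mistakes = 0
        for (x1, x2), t in zip(X, targets):
            y = 1 if b + w[0]*x1 + w[1]*x2 > 0 else 0
            if y != t:  # nudge the weights toward the correct answer
                w[0] += lr * (t - y) * x1
                w[1] += lr * (t - y) * x2
                b += lr * (t - y)
                mistakes += 1
        if mistakes == 0:
            print("converged after", epoch + 1, "epochs:", w, b)
            break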
Important Questions to Ask of Your System:
  • Will it eventually get to a correct answer?
  • How quickly will this happen? (How many evolutions/learnings/weight adjustments?)
