So we discussed how the AI will sense the world, and we have discussed how it will interact with the world. How will it bridge the gap and make intelligent interactions based on what it has sensed?
Before we can cover that, you have to understand that the AI Brain is actually made up of a number of parts. Some folks would consider each of these parts to be Agents or AIs themselves. I will leave that debate to others wiser than me.
For my purposes I call them Evaluators. The AI Brain consists of many of them. Each Evaluator is responsible for the following:
- It takes data in; the data will always be in its rawest form
- It attempts to understand this data
- It understands the data if it knows a motor action that can be performed based on the input,
- or it knows a memory object that needs to be created based on the input,
- or it knows another evaluator that will understand this data
- If it knows a motor action to perform based on the input, it places the motor action on the motor stack
- If it knows a memory object that needs to be created, it creates the memory object*
- If it knows another evaluator that will understand this data, it passes this “suggestion” along**
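The steps above can be sketched in code. This is a minimal illustration, not the actual implementation: the three `*_for` hooks, the stack and memory lists, and the `(evaluator_name, data)` suggestion tuples are all assumptions I'm making for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class Evaluator:
    """One part of the AI Brain: tries to understand a raw piece of data."""
    name: str
    motor_stack: list = field(default_factory=list)
    memory: list = field(default_factory=list)
    suggestions: list = field(default_factory=list)  # (evaluator_name, data) pairs

    # The three "does it know..." checks. Subclasses override these; each
    # returns None when this evaluator has nothing to offer for the data.
    def motor_action_for(self, data):
        return None

    def memory_object_for(self, data):
        return None

    def better_evaluator_for(self, data):
        return None

    def evaluate(self, data) -> bool:
        """Return True if this evaluator 'understood' the data."""
        action = self.motor_action_for(data)
        if action is not None:
            self.motor_stack.append(action)          # 1) place action on the motor stack
            return True
        memory = self.memory_object_for(data)
        if memory is not None:
            self.memory.append(memory)               # 2) create the memory object
            return True
        other = self.better_evaluator_for(data)
        if other is not None:
            self.suggestions.append((other, data))   # 3) pass the "suggestion" along
            return True
        return False                                 # did not understand the data
```

A subclass only has to fill in the checks it cares about; the base `evaluate` handles the ordering of the three outcomes.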
Note that a sensor stack item may be sent to multiple evaluators. Some evaluators will be better at some things than others, and usually there is a “best” evaluator for each type of data on the sensor stack. We will talk about this more later.
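One way to picture that fan-out is a dispatcher that tries evaluators in order of some confidence score. This is only a sketch: the `score` function and the idea that evaluators expose an `evaluate(item) -> bool` method are assumptions here, and the real selection mechanism is the subject of the evaluator-selection post.

```python
def dispatch(sensor_item, evaluators, score):
    """Offer one sensor-stack item to evaluators, best-scoring first.

    `score(evaluator, item)` is an assumed confidence function; each
    evaluator is assumed to have an evaluate(item) -> bool method.
    Returns the evaluator that understood the item, or None.
    """
    ranked = sorted(evaluators, key=lambda ev: score(ev, sensor_item), reverse=True)
    for ev in ranked:
        if ev.evaluate(sensor_item):
            return ev   # the "best" evaluator that actually understood it
    return None         # no evaluator understood this item
```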
Some examples of evaluators include:
- Language Evaluator – responsible for holding conversations
- Math Evaluator – responsible for performing calculations
- Image Evaluator – responsible for identifying an image
- Knowledge Evaluator – responsible for identifying relationships between objects
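To make one of these concrete, here is a toy Math Evaluator. It is a hypothetical sketch: it “understands” a sensor item only if the item parses as simple arithmetic, and its motor action (a made-up `("say", result)` tuple) is just my illustration of pushing an answer onto the motor stack.

```python
import ast
import operator

# Map supported AST operators to their arithmetic functions.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _calc(node):
    """Recursively evaluate a parsed expression limited to numbers and + - * /."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_calc(node.left), _calc(node.right))
    raise ValueError("not simple arithmetic")

class MathEvaluator:
    """Toy evaluator: understands arithmetic strings, nothing else."""
    def __init__(self):
        self.motor_stack = []

    def evaluate(self, item: str) -> bool:
        try:
            result = _calc(ast.parse(item, mode="eval").body)
        except (SyntaxError, ValueError):
            return False                       # not math: this evaluator passes
        self.motor_stack.append(("say", result))  # motor action: speak the answer
        return True
```

A Language Evaluator would look the same from the outside; only its notion of “understanding” differs.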
I would imagine that at some point the AI Brain may create its own evaluators by combining or altering existing ones. We will talk about this further as well.
* See memory creation post (tbd)
** See brain evaluator selection post (tbd)