Evolution of ideas underlying AI: Brief Description
01
Negative feedback. The central idea is that complex computations are a composition of simple ones: to solve a complex task, you break it down into many smaller tasks and solve each of them.
02
Consequently, by the same principle, human reasoning can be broken down into simpler parts: starting from some initial statements, the solution to a problem can be deduced using certain logic and rules.
03
The dominance of negative feedback lasted for several decades. In the 1980s, the concept of AI learning was introduced. What is learning in terms of a computer/engineering system? It is a change in the relationships between elements, where the change should increase the probability of a correct answer.
04
The next step was to represent knowledge about the world as a set of concepts and the relations between them, i.e., as semantic networks, or knowledge graphs. Such a network of semantic rules can model reasoning in a particular subject area. However, this did not work properly until the backpropagation method came into use for AI training.
05
The method introduces the concept of a weight for each connection in a multilayer neural network built of so-called neurons; a weight expresses the relationship between two neurons. Using the method, you can calculate the contribution of each neuron to the error of the network, and thus find the weight changes that reduce that error. The calculation runs from the final output of the network back to the initial input, hence the "back" in the method's name.
06
Today, this is the most widespread method. It has given rise to approaches such as Supervised Learning (SL). In machine learning, SL means that the model is given both the input data and the result it should achieve with that data.
07
SL is believed to allow creating, if not a world model for AI, then at least a part of one (a game, a language, etc.) and using the AI model trained on this "part of the world" to solve other tasks as well.
08
The creators of DeepMind, Gato, and similar systems built on this approach postulated that they had created an almost-AGI. In reality, however, they are as far from it as any other technology.
09
Technically, humankind has done a tremendous amount of work and created brilliant, ingenious engineering solutions, which are nevertheless still as far from intelligence as a calculator, no matter what their creators may say. Ultimately, all AI models work with huge sets of facts and data. And so far, it all boils down to cramming ever more facts and data into the AI and using ever more resources and energy to get... what?
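The supervised learning and backpropagation ideas described above can be sketched in a few dozen lines. This is a minimal illustration, not any particular framework's implementation; the network size, learning rate, and all names here are arbitrary choices for the example. The model is given both the inputs and the answers it should reproduce (SL), and the weights, i.e., the relationships between neurons, are adjusted from the output back toward the input (backpropagation).

```python
# Minimal sketch: a tiny network learns XOR by supervised learning
# with hand-written backpropagation. All hyperparameters are
# illustrative, not taken from any real system.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Supervised data: each input is paired with the result the model
# should achieve with it.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# One hidden layer of 4 "neurons"; weights are the relationships
# between neurons.
n_hidden = 4
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)]
b_hidden = [0.0] * n_hidden
w_out = [random.uniform(-1, 1) for _ in range(n_hidden)]
b_out = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + b)
         for w, b in zip(w_hidden, b_hidden)]
    y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)) + b_out)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: compute each weight's contribution to the
        # error, starting at the output and moving toward the input.
        d_y = 2 * (y - t) * y * (1 - y)               # output error signal
        for i in range(n_hidden):
            d_h = d_y * w_out[i] * h[i] * (1 - h[i])  # hidden error signal
            w_out[i] -= lr * d_y * h[i]
            w_hidden[i][0] -= lr * d_h * x[0]
            w_hidden[i][1] -= lr * d_h * x[1]
            b_hidden[i] -= lr * d_h
        b_out -= lr * d_y

loss_after = total_loss()
print(loss_before, loss_after)
```

Training reduces the total error: the weight updates are exactly the "changes in the relationships between elements that increase the probability of a correct answer" described in step 03.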