On Fundamental Artificial Intelligence
Today I attended Mikhail Burtsev's lecture on the evolutionary approach to AI. The lecture was held in the Moscow Polytechnic Museum's lecture hall. First of all, the organization was great and the quality of the presentation was high. Unsurprisingly, the hall was almost full. I recommend attending their lectures.
Mikhail talked a lot about the history and foundations of AI, which was not new to me. He did not bore the audience with psychology-related material, but focused on two approaches to AI that have been developed in parallel since the 1950s: artificial neural networks and symbolic computational AI. ANNs attempt to model the functioning of the human brain by means of artificial neurons and axons. The problem here is that even if we build a human-scale artificial brain (which, according to the lecture, was done at the Allen Institute for Brain Science in 2005), it would be isolated from the world and could not be trained the way a real human brain is. Symbolic AI uses formal logic to solve problems. It suffers from its own difficulties, such as the inability to handle uncertainty. So vanilla AI is now in a period of stagnation, which dates back to the late 1980s, when Rodney Brooks started to criticize it.
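For context, a single artificial neuron is just a weighted sum of its inputs pushed through a nonlinearity. Here is a minimal sketch in Python; the weights, bias, and inputs are made-up illustrative values, not anything from the lecture:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs + sigmoid nonlinearity."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative values only: two inputs with hand-picked weights.
print(neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))
```

An ANN is many such units wired together, with the weights adjusted during training; the "isolation" problem above is that a brain-scale network still needs a stream of real-world experience to train those weights.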
Mikhail's point was that most researchers attempt to build a model of human intelligence straight away. He suggested looking at the process of evolution instead: start by understanding how the nervous system of an amoeba operates, then trace its evolution to find out how a human mind works. Evolutionary algorithms might be useful there (a toy sketch follows below).
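To make the idea concrete, here is a minimal (1+1) evolution strategy in Python: mutate a candidate solution and keep the mutant whenever it is at least as fit. The bit-string genome and the target-matching fitness function are my own toy assumptions, not Mikhail's actual setup:

```python
import random

def fitness(genome):
    """Toy objective: count bits matching a hidden target string."""
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
parent = [random.randint(0, 1) for _ in range(8)]
for generation in range(200):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):  # keep the mutant if no worse
        parent = child
print(parent, fitness(parent))  # converges to the target string
```

The appeal of the approach is that nothing here encodes how the solution works, only how well it performs; evolution does the rest.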
I cannot agree with that point. The last 50 years have witnessed the failure of fundamental AI research, and its common problem was that it tried to model cognitive mechanisms instead of tackling actual problems, while those problems were being solved by other techniques. Consider the autonomous robot control problem (e.g., a robotic vacuum cleaner): it can be formulated in terms of reinforcement learning and solved by methods unconnected to vanilla AI (dynamic programming-based, I suppose). Once again, Mikhail does not start from a problem, so I do not really believe in the success of his method. I am also sceptical of computational methods inspired by biology: they can be a good starting point, but they tend to be supplanted by mathematically grounded ones.
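To illustrate what I mean, here is a toy sketch of that dynamic-programming route: value iteration on a made-up 4x4 gridworld standing in for the vacuum cleaner's room. The grid size, goal position, rewards, and discount factor are all illustrative assumptions:

```python
# Value iteration: a dynamic-programming solution to the reinforcement
# learning formulation of navigating a robot to its docking station.

N = 4                      # the room is an N x N grid, states are (row, col)
GOAL = (3, 3)              # docking station for our "vacuum cleaner"
GAMMA = 0.9                # discount factor
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic transition: move if in bounds, otherwise stay put."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    return (nr, nc) if 0 <= nr < N and 0 <= nc < N else state

def reward(state):
    """+1 for reaching the goal, a small living cost everywhere else."""
    return 1.0 if state == GOAL else -0.04

# Repeat Bellman backups until the value function stops changing.
V = {(r, c): 0.0 for r in range(N) for c in range(N)}
while True:
    delta = 0.0
    for s in V:
        if s == GOAL:
            continue
        best = max(reward(step(s, a)) + GAMMA * V[step(s, a)] for a in ACTIONS)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-6:
        break

# Greedy policy: in each state, pick the action with the best backed-up value.
policy = {
    s: max(ACTIONS, key=lambda a: reward(step(s, a)) + GAMMA * V[step(s, a)])
    for s in V if s != GOAL
}
print(policy[(0, 0)])  # first move from the top-left corner
```

Nothing here models cognition, amoebas, or evolution; a plain Bellman backup recovers a sensible control policy, which is exactly the kind of problem-first method that keeps supplanting the biologically inspired ones.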
Another issue raised in the discussion after the lecture was AI safety, probably the most popular philosophical question connected to AI. As far as I understood, there are two main concerns: an AI could start to self-replicate (or self-improve) uncontrollably, and it could become self-motivated. A self-replicating AI is not so dangerous unless it is also self-motivated. If it is motivated by a human, it is bad in the same way a nuclear weapon is: scary only when an evil party possesses it (so, nothing brand new here). If an AI were motivated to gain world domination, it could be harmful. But where could such self-motivation come from? In my opinion, the only option is the biological drive to propagate (imagine some joint AI system with a biological motivator), but such a disaster has already been overcome once. Remember the rabbits in Australia? I hope an uncontrolled AI won't be a bigger problem.