AI Technology

This chapter contains some thoughts on artificial intelligence (AI). First, the distinction between strong and weak AI is explained, together with the related concepts of general and specific AI, making it clear that all existing manifestations of AI are weak and specific. The main models are then briefly described, emphasizing the importance of embodiment as a key aspect for achieving AI of a general nature. Next, the chapter addresses the need to provide machines with common-sense knowledge in order to move towards the ambitious goal of building general AI. The latest trends in AI, based on the analysis of large amounts of data, which have made spectacular progress possible in very recent times, are also discussed, along with the difficulties these approaches still face. Finally, other issues that are and will continue to be key in AI are discussed, before closing with a brief reflection on the risks of artificial intelligence.

The ultimate goal of AI, to make a machine have general human-like intelligence, is one of the most ambitious goals science has ever set itself. Due to its difficulty, it is comparable to other great scientific objectives such as explaining the origin of life, the origin of the universe, or knowing the structure of matter. Over the last few centuries, this desire to build intelligent machines has led us to invent models or metaphors of the human brain. For example, in the 17th century, Descartes wondered whether a complex mechanical system made up of gears, pulleys, and tubes could, in principle, emulate thought. Two centuries later, the metaphor was the telephone system, since its connections seemed comparable to a neural network.

THE PHYSICAL SYMBOL SYSTEM HYPOTHESIS: WEAK AI VS. STRONG AI

In the paper written on the occasion of receiving the prestigious Turing Award in 1975, Allen Newell and Herbert Simon (Newell and Simon, 1976) formulated the Physical Symbol System hypothesis, according to which "a physical symbol system has the necessary and sufficient means for general intelligent action." Since human beings are capable of displaying intelligent behavior in the general sense, it follows, according to the hypothesis, that we too are physical symbol systems. It is worth clarifying what Newell and Simon mean by a physical symbol system (PSS). A PSS consists of a set of entities called symbols that, through relationships, can be combined to form larger structures—like atoms combining to form molecules—and that can be transformed by applying a set of processes. These processes can generate new symbols, create and modify relationships between symbols, store symbols, compare whether two symbols are the same or different, and so on. These symbols are physical insofar as they have a physical-electronic substrate (in the case of computers) or a physical-biological one (in the case of human beings). Indeed, in the case of computers, symbols are realized by means of digital electronic circuits, and in the case of human beings by networks of neurons. In short, according to the PSS hypothesis, the nature of the substrate (electronic circuits or neural networks) is unimportant as long as that substrate allows symbols to be processed. Let us not forget that it is a hypothesis and, therefore, should not be accepted or rejected a priori; its validity or refutation must be tested according to the scientific method, with experimental evidence. AI is precisely the scientific field dedicated to trying to verify this hypothesis in the context of digital computers, that is, to verifying whether or not a properly programmed computer is capable of general intelligent behavior.
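
As a purely illustrative toy (not Newell and Simon's own formalism), the following Python fragment shows the ingredients the hypothesis refers to: discrete symbols, structures built from them, and a process that generates new structures from existing ones:

```python
# Toy "physical symbol system": symbols, symbol structures, and a process
# that derives new structures from existing ones. Illustrative only.
symbols = {"A", "B", "C"}
structures = {("on", "A", "B"), ("on", "B", "C")}   # expressions built from symbols

def derive(structs):
    """A process that creates new symbol structures: if X is on Y and Y is on Z,
    add the structure ('above', X, Z)."""
    derived = set(structs)
    for rel1, x, y in structs:
        for rel2, y2, z in structs:
            if rel1 == rel2 == "on" and y == y2:
                derived.add(("above", x, z))
    return derived

print(derive(structures))   # adds ("above", "A", "C") to the original structures
```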

It is important to note that the hypothesis concerns intelligence of a general type, not specific intelligence, since the intelligence of human beings is of a general type. Exhibiting specific intelligence is quite another matter. For example, programs that play chess at Grandmaster level are unable to play checkers, despite it being a much simpler game. A different, independent program has to be designed and executed so that the same computer can also play checkers; the machine cannot take advantage of its ability to play chess to adapt it to checkers. This is not the case for human beings, since any chess player can take advantage of his or her knowledge of that game to, in a matter of minutes, play checkers perfectly well. Such specific capabilities correspond to weak AI, as opposed to the strong AI that Newell and Simon and the other founding fathers of AI actually had in mind. Although strictly speaking the PSS hypothesis was formulated in 1975, it was already implicit in the ideas of the AI pioneers of the 1950s, and even in Alan Turing's pioneering writings on intelligent machines (Turing, 1948, 1950).

The ultimate goal of AI, to make a machine have general human-like intelligence, is one of the most ambitious goals science has ever set itself. Due to its difficulty, it is comparable to explaining the origin of life, the origin of the universe or knowing the structure of matter.

The person who introduced this distinction between weak and strong AI was the philosopher John Searle, in an article critical of AI published in 1980 (Searle, 1980) that caused, and continues to cause, much controversy. Strong AI would imply that a properly designed computer does not simulate a mind but is a mind, and therefore should be capable of an intelligence equal to or even superior to that of humans. In his article, Searle tries to prove that strong AI is impossible. At this point it should be clarified that general AI is not the same as strong AI. There is obviously a connection, but only in one direction: any strong AI would necessarily be general, but there may be general AIs—that is, multitask AIs—that are not strong, in that they emulate the ability to exhibit general human-like intelligence without experiencing mental states.

Weak AI, on the other hand, would consist, according to Searle, of building programs that perform specific tasks, obviously without the need to have mental states. The ability of computers to perform specific tasks, even better than people, has already been amply demonstrated. In certain domains, the advances of weak AI far exceed human expertise, for example in finding solutions to logical formulas with many variables, playing chess or Go, medical diagnosis, and many other aspects of decision-making. Also associated with weak AI is the formulation and testing of hypotheses about aspects of the mind (for example, the ability to reason deductively or to learn inductively) by building programs that carry out those functions, even if they do so through processes completely different from those of the brain. Absolutely all the advances made so far in the field of AI are manifestations of weak and specific AI.

THE MAIN MODELS IN AI: SYMBOLIC, CONNECTIONIST, EVOLUTIONARY AND CORPOREAL

The dominant model in AI has been the symbolic one, which has its roots in the PSS hypothesis. In fact, it is still very important and is now considered the classic model in AI (also known by the acronym GOFAI, for Good Old-Fashioned AI). It is a top-down model that rests on logical reasoning and heuristic search as the pillars of problem solving, without the intelligent system needing to be part of a body or be situated in a real environment. That is, symbolic AI operates with abstract representations of the real world, modeled with representation languages based mainly on mathematical logic and its extensions. For this reason, the first intelligent systems mainly solved problems that do not require direct interaction with the environment, such as proving simple mathematical theorems or playing chess—chess programs need neither visual perception to see the pieces on the board nor actuators to move them. This does not mean that symbolic AI cannot be used, for example, to program the reasoning module of a physical robot situated in a real environment; but in the early years the pioneers of AI had neither the knowledge representation languages nor the programming tools that would have allowed them to do so efficiently, and for this reason the first intelligent systems were limited to solving problems that did not require direct interaction with the real world. Currently, symbolic AI is still used to prove theorems and play chess, but also for applications that require perceiving the environment and acting on it, such as learning and decision-making in autonomous robots.
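
As a minimal, purely illustrative sketch of this symbolic style (a Python toy, not one of the historical systems), the fragment below encodes facts as symbols and applies if-then rules by forward chaining, deriving new facts through logical reasoning alone, without any perception of or action on a real environment:

```python
# Toy symbolic reasoner: facts are symbols, rules are (premises, conclusion)
# pairs, and forward chaining keeps applying rules until no new fact appears.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:                      # repeat until a full pass derives nothing new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # includes the two derived facts
```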

Simultaneously with symbolic AI, a bio-inspired approach known as connectionist AI also began to develop. Connectionist systems are not incompatible with the PSS hypothesis but, contrary to symbolic AI, they constitute bottom-up modeling, since they are based on the hypothesis that intelligence emerges from the distributed activity of a large number of interconnected units that process information in parallel. In connectionist AI, these units are very approximate models of the electrical activity of biological neurons.

As early as 1943, McCulloch and Pitts (McCulloch and Pitts, 1943) proposed a simplified model of the neuron based on the idea that a neuron is essentially a logical unit. This model is a mathematical abstraction with inputs (dendrites) and outputs (axons). The output value is computed from a weighted sum of the inputs: if the sum exceeds a preset threshold, the output is "1"; otherwise it is "0". By connecting the output of each neuron to the inputs of other neurons, an artificial neural network is formed. Based on what was then known about the reinforcement of synapses between biological neurons, it was seen that these artificial neural networks could be trained to learn functions relating inputs to outputs by adjusting the weights of the connections between neurons; for this reason it was thought that they would be better models of learning, cognition and memory than models based on symbolic AI. However, intelligent systems based on connectionism need neither be part of a body nor be situated in a real environment and, from this point of view, have the same limitations as symbolic systems. Besides, real neurons have complex dendritic arborizations with non-trivial chemical as well as electrical properties. They may contain ionic conductances that produce nonlinear effects. They can receive tens of thousands of synapses varying in position, polarity, and magnitude. Moreover, most brain cells are not neurons but glial cells, which not only regulate the functioning of neurons but also have electrical potentials, generate calcium waves and communicate with each other, which seems to indicate that they play a very important role in cognitive processes. However, no connectionist model includes these cells, so at best these models are very incomplete and at worst wrong. In short, the enormous complexity of the brain is far removed from current models. This immense complexity also suggests that the so-called singularity—future artificial superintelligences that, based on replicas of the brain, will far exceed human intelligence in about twenty-five years—is a prediction with little scientific basis.
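
The following is a minimal Python sketch of the kind of unit just described: a weighted sum of binary inputs followed by a threshold. The training loop uses a perceptron-style weight update; the AND task, learning rate and number of epochs are illustrative choices, not details from the text:

```python
# McCulloch-Pitts style unit: weighted sum of inputs compared against a threshold.
def neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Train a single unit to compute logical AND by adjusting the connection weights
# (perceptron-style rule; the original McCulloch-Pitts model had fixed weights).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, threshold, lr = [0.0, 0.0], 1.0, 0.2

for _ in range(20):                      # a few passes over the four patterns
    for x, target in data:
        error = target - neuron(x, weights, threshold)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]

print(weights, [neuron(x, weights, threshold) for x, _ in data])  # outputs 0,0,0,1
```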

Another bio-inspired modeling approach, also compatible with the PSS hypothesis and likewise non-corporeal, is evolutionary computation (Holland, 1975). The successes of biology in evolving complex organisms led some researchers in the early 1960s to consider imitating evolution, so that computer programs could, through an evolutionary process, automatically improve the solutions to the problems for which they had been programmed. The idea is that, thanks to mutation and crossover operators applied to the "chromosomes" that encode the programs, new generations of modified programs are produced whose solutions are better than those of the programs of previous generations. Given that the objective of AI can be regarded as the search for programs capable of producing intelligent behavior, it was thought that evolutionary programming could be used to find such programs within the space of possible programs. The reality is much more complex, and this approach has many limitations, although it has produced excellent results, particularly in solving optimization problems.
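
The following toy Python sketch illustrates the evolutionary idea: a population of candidate solutions improves over generations through selection, crossover and mutation. The target bit string, population size and rates are arbitrary illustrative choices:

```python
# Toy genetic algorithm: evolve bit strings toward a hidden target string.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # hidden "optimum" to rediscover

def fitness(chromosome):
    return sum(1 for a, b in zip(chromosome, TARGET) if a == b)

def crossover(a, b):
    cut = random.randrange(1, len(a))             # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chromosome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chromosome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)    # selection: keep the fittest
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

best = max(population, key=fitness)
print(generation, best, fitness(best))
```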

The complexity of the brain is far removed from AI models and leads one to think that the so-called singularity —artificial superintelligences based on replicas of the brain that will far exceed human intelligence—is a prediction with little scientific foundation.

One of the strongest criticisms of these non-embodied models is based on the fact that an intelligent agent needs a body in order to have direct experiences of its environment (we would say that the agent is "situated" in its environment), instead of a programmer providing abstract descriptions of that environment encoded in a knowledge representation language. Without a body, these abstract representations have no semantic content for the machine. Through direct interaction with the environment, however, the agent can relate the signals it perceives through its sensors to symbolic representations generated from what is perceived. Some AI experts, notably Rodney Brooks (Brooks, 1991), even went so far as to state that it was not necessary to generate such internal representations at all, that is, that an agent does not need an internal representation of the world around it, because the world itself is the best possible model of itself, and that most intelligent behaviors do not require reasoning but emerge from the interaction between the agent and its environment. This idea generated much controversy, and Brooks himself, a few years later, admitted that there are many situations in which an internal representation of the world is necessary for the agent to make rational decisions.

In 1965, the philosopher Hubert Dreyfus claimed that the ultimate goal of AI, that is, strong AI of a general type, was as unattainable as the goal of the seventeenth-century alchemists who tried to transform lead into gold (Dreyfus, 1965). Dreyfus argued that the brain processes information in a global and continuous manner, while a computer uses a finite and discrete set of deterministic operations, applying rules to a finite set of data. Here we can see an argument similar to Searle's, but in later articles and books (Dreyfus, 1992) Dreyfus also used another argument: that the body plays a crucial role in intelligence. He was therefore one of the first to advocate the need for intelligence to be part of a body with which to interact with the world. The main idea is that the intelligence of living beings derives from being situated in an environment with which they can interact thanks to their bodies. In fact, this need for corporeality is based on Heidegger's phenomenology, which emphasizes the importance of the body with its needs, desires, pleasures, pains, ways of moving, acting, and so on. According to Dreyfus, AI should model all these aspects in order to reach the ultimate goal of strong AI. Dreyfus does not completely deny the possibility of strong AI, but he states that it is not possible with the classical methods of symbolic, non-corporeal AI; in other words, he considers the Physical Symbol System hypothesis to be incorrect. This is certainly an interesting idea that many AI researchers share today. Indeed, the embodied approach with internal representation has been gaining ground in AI, and many of us now consider it essential for advancing towards intelligence of a general type. In fact, we base a large part of our intelligence on our sensory and motor capabilities. In other words, the body shapes intelligence, and therefore without a body there can be no intelligence of a general type. This is so because the body's hardware, in particular the mechanisms of the sensory and motor systems, determines the type of interactions an agent can perform, and these interactions in turn shape the cognitive abilities of agents, giving rise to what is known as situated cognition. That is, the machine is placed in real environments, as happens with human beings, so that it has interactive experiences that eventually allow it to do something similar to what is proposed by Piaget's theory of cognitive development (Inhelder and Piaget, 1958), according to which a human being follows a process of mental maturation in stages; perhaps the different steps of this process could serve as a guide for designing intelligent machines. These ideas have given rise to a new subarea of AI called developmental robotics (Weng et al., 2001).

SPECIALIZED AI SUCCESSES

All AI research efforts have focused on building specialized artificial intelligences, and the successes achieved are very impressive, particularly during the last decade, thanks above all to the conjunction of two elements: the availability of huge amounts of data and access to high-performance computing with which to analyze them. Indeed, the success of systems such as AlphaGo (Silver et al., 2016) and Watson (Ferrucci et al., 2013), and the advances in autonomous vehicles or image-based medical diagnosis, have been possible thanks to this ability to analyze large amounts of data and detect patterns efficiently. However, we have hardly made any progress towards achieving general AI. In fact, we can affirm that current AI systems are a demonstration of what Daniel Dennett calls "competence without understanding" (Dennett, 2018).

Possibly the most important lesson we have learned over the sixty years of AI's existence is that what seemed most difficult (diagnosing diseases, playing chess and Go at the highest level) has turned out to be relatively easy, and what seemed easiest has turned out to be the most difficult. The explanation for this apparent contradiction lies in the difficulty of providing machines with common-sense knowledge. Without such knowledge, it is not possible to have a deep understanding of language or a deep interpretation of what a visual perception system captures, among other limitations. Common sense is, in fact, a fundamental requirement for achieving AI comparable to human intelligence in generality and depth. Common-sense knowledge is the result of our lived experiences. Some examples are: "water always flows from top to bottom", "to drag an object tied to a rope you have to pull the rope, not push it", "a glass can be stored in a cupboard, but a cupboard cannot be stored inside a glass", and so on. There are millions of pieces of common-sense knowledge that people handle easily and that allow us to understand the world in which we live. A possible line of research that could give interesting results in the acquisition of common-sense knowledge is the aforementioned developmental robotics. Another very interesting line of work is the one whose objective is the mathematical modeling and learning of cause-effect relationships, that is, learning the causal and therefore asymmetrical aspects of the world (Lake et al., 2017; Pearl and Mackenzie, 2018).
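
The asymmetry of cause and effect can be illustrated with a small, purely hypothetical simulation: the correlation between X and Y is symmetric, but intervening on the cause changes the effect, whereas intervening on the effect leaves the cause untouched. This is only a didactic sketch, not a method from the works cited above:

```python
# Didactic simulation of causal asymmetry: X causes Y via a fixed mechanism.
import random

random.seed(0)
N = 100_000

def mean(values):
    return sum(values) / len(values)

# Data-generating mechanism: Y = 2*X + noise (so X causes Y).
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

# Intervention do(X = 3): we set X and let the mechanism produce Y.
ys_do_x = [2 * 3 + random.gauss(0, 1) for _ in range(N)]

# Intervention do(Y = 3): we clamp Y directly; X is generated exactly as before.
xs_do_y = [random.gauss(0, 1) for _ in range(N)]

print("E[Y] =", round(mean(ys), 2), " E[Y | do(X=3)] =", round(mean(ys_do_x), 2))
print("E[X] =", round(mean(xs), 2), " E[X | do(Y=3)] =", round(mean(xs_do_y), 2))
# Intervening on X shifts Y (about 0 -> about 6); intervening on Y leaves X at about 0.
```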

FUTURE: TOWARDS TRULY INTELLIGENT ARTIFICIAL INTELLIGENCES

The most difficult capabilities to achieve are those that require interacting with unrestricted, previously unprepared environments. Designing systems with these capabilities requires integrating developments from many areas of AI. In particular, we need knowledge representation languages that encode information about many different types of objects, situations, actions, and so on, as well as about their properties and the relationships between them, particularly cause-effect relationships. We also need new algorithms that, based on these representations, can robustly and efficiently solve problems and answer questions on virtually any topic. Finally, since these systems will need to acquire an almost unlimited amount of knowledge, they must be able to learn continuously throughout their entire existence. In short, it is essential to design systems that integrate perception, representation, reasoning, action and learning. This is a very important problem in AI, since we still do not know how to integrate all these components of intelligence. We need cognitive architectures (Forbus, 2012) that integrate these components properly. Such integrated systems are a fundamental step towards one day achieving general artificial intelligence.
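
As a purely illustrative sketch of this integration problem (a hypothetical toy, not a real cognitive architecture), the following Python loop shows perception, representation, reasoning, action and learning as distinct components that must cooperate to solve even a trivial task:

```python
# Toy agent loop: perceive -> represent -> reason -> act -> learn.
# Everything here (the temperature task, the names, the numbers) is invented.
import random

random.seed(0)

class ToyAgent:
    def __init__(self, target):
        self.target = target
        self.belief = None        # representation: last estimated temperature
        self.effect = 1.0         # learned model: how much one unit of action changes the world

    def perceive(self, true_temp):
        return true_temp + random.gauss(0, 0.5)            # noisy sensor reading

    def decide(self, observation):
        self.belief = observation                           # update internal representation
        return (self.target - self.belief) / max(self.effect, 0.1)   # reason about the action

    def learn(self, action, new_observation):
        if abs(action) > 1.0:                               # learn the action's effect from experience
            observed_effect = (new_observation - self.belief) / action
            self.effect += 0.5 * (observed_effect - self.effect)

temperature = 15.0                                          # the "real environment"
agent = ToyAgent(target=22.0)

for step in range(30):
    obs = agent.perceive(temperature)
    action = agent.decide(obs)
    temperature += 0.3 * action                             # act; 0.3 is unknown to the agent
    agent.learn(action, agent.perceive(temperature))

print(f"final temperature ~ {temperature:.1f}, learned effect ~ {agent.effect:.2f}")
```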

The most difficult capabilities to achieve are those that require interacting with unrestricted or previously unprepared environments. Designing systems that have these capabilities requires integrating developments in many areas of AI.

Among future activities, we believe that the most important research topics will involve hybrid systems that combine the advantages of systems capable of reasoning on the basis of knowledge and the use of memory (Graves et al., 2016) with the advantages of AI based on the analysis of massive amounts of data, in what is known as deep learning (Bengio, 2009). Currently, a major limitation of deep learning systems is so-called "catastrophic forgetting": if they have once been trained to carry out a task (for example, playing Go) and are then trained to carry out a different task (for example, distinguishing images of dogs from images of cats), they completely forget the previously learned task (in this case, playing Go). This limitation is strong evidence that these systems do not really learn anything, at least in the human sense of learning. Another important limitation of these systems is that they are "black boxes" with no explanatory capacity. For this reason, an interesting research objective will be how to endow deep learning systems with explanatory capacity, by incorporating modules that explain how the proposed results and conclusions were reached, since the capacity to explain oneself is an essential characteristic of any intelligent system. It is also necessary to develop new learning algorithms that do not require huge amounts of training data, as well as much more energy-efficient hardware to implement them, since energy consumption could end up being one of the main barriers to the development of AI. By comparison, the brain is orders of magnitude more efficient than the current hardware needed to implement the most sophisticated AI algorithms. One possible avenue to explore is memristor-based neuromorphic computing (Saxena et al., 2018).
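
Catastrophic forgetting is easy to reproduce in miniature. The following sketch, assuming PyTorch and using entirely synthetic tasks of our own invention, trains a small network on a first task, then on a conflicting second task, and measures how performance on the first task collapses:

```python
# Minimal demonstration of catastrophic forgetting on two synthetic,
# deliberately conflicting binary classification tasks. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset):
    # Each task labels points by a different linear rule on the same inputs.
    x = torch.randn(2000, 20)
    y = ((x[:, 0] + offset * x[:, 1]) > 0).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(offset=+3.0)   # "task A"
xb, yb = make_task(offset=-3.0)   # "task B" (conflicting rule)

train(model, xa, ya)
print("task A after training on A:", accuracy(model, xa, ya))

train(model, xb, yb)              # no rehearsal of task A
print("task A after training on B:", accuracy(model, xa, ya))  # drops sharply
print("task B after training on B:", accuracy(model, xb, yb))
```

Running this typically shows near-perfect accuracy on task A right after training on it, and a sharp drop on task A once the network has been retrained on task B.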

Other more classical AI techniques that will continue to be the subject of extensive research are multi-agent systems, action planning, experience-based reasoning, artificial vision, multimodal human-machine communication, humanoid robotics and, above all, the new trends in developmental robotics, which may be the key to endowing machines with common sense and, in particular, with the ability to learn the relationship between their actions and the effects they produce on the environment. We will also see significant progress thanks to biomimetic approaches that reproduce the behavior of animals in machines. It is not just a question of reproducing an animal's behavior, but of understanding how the brain that produces that behavior works: the aim is to build and program electronic circuits that reproduce the brain activity generating the behavior. Some biologists are interested in attempts to make an artificial brain as complex as possible because they see it as a way to better understand the organ, while engineers look to biology for information with which to make more effective designs.

Developmental robotics may be the key to endowing machines with common sense and, in particular, to learning the relationship between their actions and the effects they produce on the environment.

In terms of applications, some of the most important will continue to be those related to the web, video games, personal assistants, and autonomous robots (particularly autonomous vehicles, social robots, planet-exploring robots, etc.). Applications to the environment and energy saving will also be important, as well as those aimed at economics and sociology.

Finally, the applications of AI to art (visual arts, music, dance, storytelling) will significantly change the nature of the creative process. Computers are no longer just tools that assist creation; they are beginning to be creative agents. This has given rise to a new and very promising application area of AI called computational creativity, which has already produced very interesting results (Colton et al., 2009, 2015; López de Mántaras, 2016) in chess, music, the plastic arts and narrative, among other creative activities.

FINAL REFLECTION

No matter how intelligent future artificial intelligences become, including those of a general type, they will never be equal to human intelligences since, as we have argued, the mental development that all complex intelligence requires depends on interactions with the environment, and these interactions depend in turn on the body, particularly on the perceptual and motor systems. This, together with the fact that machines will not undergo socialization and acculturation processes like ours, further reinforces the conclusion that, however sophisticated they become, their intelligences will be different from ours. The fact that they would be intelligences alien to humans and, therefore, to human values and needs should make us reflect on possible ethical limits to the development of AI. In particular, we share Weizenbaum's view (Weizenbaum, 1976) that no machine should ever make entirely autonomous decisions or give advice that calls for, among other things, the wisdom born of human experience and the acknowledgement of human values.

The real danger of AI is not the highly improbable technological singularity brought about by hypothetical future artificial superintelligences; the real dangers are already here. The algorithms on which Internet search engines, recommendation systems and the personal assistants of our mobile phones are based already know quite a lot about what we do, our preferences and our tastes, and can even infer what we think and how we feel. Access to massive amounts of data, which we generate voluntarily, is essential for this to be possible, since analyzing data from different sources reveals relationships and patterns that would be impossible to detect without AI techniques. All of this results in an alarming loss of privacy. To avoid it, we should have the right to own a copy of all the personal data we generate, to control its use, and to decide who may access it and under what conditions, instead of it remaining in the hands of large corporations without our really knowing what our data is used for.

No matter how intelligent future artificial intelligences become, they will never be human-like; the mental development required by all complex intelligence depends on interactions with the environment and these in turn depend on the body, particularly the perceptual and motor systems

AI is based on complex programming and will therefore inevitably make mistakes. But even assuming that completely reliable software could be developed, there are ethical dilemmas that software developers should take into account when designing it. For example, an autonomous vehicle could decide to run over a pedestrian in order to avoid a collision that might harm its occupants. Equipping companies with advanced AI systems to make management and production more efficient will require fewer human employees and generate more unemployment. These ethical dilemmas have led many AI experts to point to the need to regulate the development of AI. In some cases, its use should even be prohibited. A clear example is autonomous weapons. The three basic principles governing armed conflict—discrimination (the need to distinguish between combatants and civilians, or between a combatant ready to surrender and one ready to attack), proportionality (avoiding the excessive use of force) and precaution (minimizing the number of casualties and the damage to property)—are extraordinarily difficult to assess and, therefore, almost impossible for the AI systems that control autonomous weapons to comply with. And even if, in the very long term, machines acquired this capacity, it would be undignified to delegate the decision to kill to a machine. Beyond regulation, it is also essential to educate citizens about the risks of intelligent technologies, providing them with the skills needed to control those technologies rather than be controlled by them. We need future citizens who are much better informed, with a greater capacity to assess technological risks, with a more critical sense, and who are willing to assert their rights. This training process must begin at school and continue at university. In particular, science and engineering students must receive ethical training that allows them to better understand the social implications of the technologies they will most likely develop. Only if we invest in education will we achieve a society that can take advantage of intelligent technologies while minimizing their risks. AI undoubtedly has extraordinary potential to benefit society, provided we make appropriate and prudent use of it. It is essential to raise awareness of the limitations of AI, and to act collectively to ensure that AI is used for the common good, safely, reliably and responsibly.

The path towards truly intelligent AI will continue to be long and difficult; after all, AI is barely sixty years old and, as Carl Sagan would say, sixty years is a very brief moment on the cosmic scale of time; or, as Gabriel García Márquez very poetically said: «Since the appearance of visible life on Earth, 380 million years had to elapse for a butterfly to learn to fly, another 180 million years to make a rose with no other commitment than to be beautiful, and four geological ages for human beings to be able to sing better than birds and die of love.»

Bibliography

Bengio, Yoshua (2009): "Learning deep architectures for AI", in Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1-127.

Brooks, Rodney A. (1991): “Intelligence without reason”, Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI’91), vol. 1, p. 569-595.

Colton, S.; López de Mántaras, R. and Stock, O. (2009): «Computational creativity: coming of age», in AI Magazine , vol. 30, no. 3, p. 11-14.

Colton, S.; Halskov, J.; Ventura, D.; Gouldstone, I.; Cook, M. and Pérez-Ferrer, B. (2015): «The painting fool sees! New projects with the automated painter”, International Conference on Computational Creativity (ICCC 2015), pp. 189-196.

Dennett, D. C. (2018): From Bacteria to Bach and Back: The Evolution of Minds, London, Penguin Random House.

Dreyfus, Hubert L. (1965): Alchemy and Artificial Intelligence, Santa Monica, California, Rand Corporation.

Dreyfus, Hubert L. (1992): What Computers Still Can't Do, New York, MIT Press.

Ferrucci, D. A.; Levas, A.; Bagchi, S.; Gondek, D. and Mueller, E. T. (2013): "Watson: beyond Jeopardy!", in Artificial Intelligence, no. 199, pp. 93-105.

Forbus, Kenneth. D. (2012): «How minds will be built», in Advances in Cognitive Systems , no. 1, pp. 47-58.

Graves, A.; Wayne, G.; Reynolds, M.; Harley, T.; Danihelka, I.; Grabska-Barwinska, A.; Gómez Colmenarejo, S.; Grefenstette, E.; Ramalho, T.; Agapiou, J.; Puigdomènech Badia, A.; Hermann, K. M.; Zwols, Y.; Ostrovski, G.; Cain, A.; King, H.; Summerfield, C.; Blunsom, P.; Kavukcuoglu, K. and Hassabis, D. (2016): «Hybrid computing using a neural network with dynamic external memory», in Nature, no. 538, pp. 471-476.

Holland, John H. (1975): Adaptation in Natural and Artificial Systems, Michigan, University of Michigan Press.

Inhelder, Bärbel, and Piaget, Jean (1958): The Growth of Logical Thinking from Childhood to Adolescence , New York, Basic Books.

Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B. and Gershman, S. J. (2017): "Building machines that learn and think like people", in Behavioral and Brain Sciences, vol. 40, e253.

López de Mántaras, R. (2016): «Artificial intelligence and the arts: towards computational creativity», in AA VV, The Next Step: Exponential Life , Madrid, BBVA/ Turner, pp. 100-125.

McCulloch, Warren S. and Pitts, Walter (1943): "A logical calculus of ideas immanent in nervous activity", in Bulletin of Mathematical Biophysics, no. 5, pp. 115-133.

Newell, Allen and Simon, Herbert A. (1976): "Computer science as empirical inquiry: symbols and search", in Communications of the ACM, vol. 19, no. 3, pp. 113-126.

Pearl, Judea and Mackenzie, Dana (2018): The Book of Why: The New Science of Cause and Effect, New York, Basic Books.

Saxena, V.; Wu, X.; Srivastava, I. and Zhu, K. (2018): «Towards neuromorphic learning machines using emerging memory devices with brain-like energy efficiency», preprint.

Searle, John R. (1980): "Minds, brains, and programs", in Behavioral and Brain Sciences, vol. 3, no. 3, pp. 417-457.

Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T. and Hassabis, D. (2016): "Mastering the game of Go with deep neural networks and tree search", in Nature, vol. 529, no. 7587, pp. 484-489.

Turing, Alan M. (1948): Intelligent Machinery, National Physical Laboratory Report; reprinted in B. Meltzer and D. Michie (eds.) (1969): Machine Intelligence 5, Edinburgh, Edinburgh University Press.

Turing, Alan M. (1950): «Computing machinery and intelligence», in Mind, vol. 59, no. 236, pp. 433-460.

Weizenbaum, Joseph (1976): Computer Power and Human Reason: From Judgment to Calculation, San Francisco, W. H. Freeman and Company.

Weng, J.; McClelland, J.; Pentland, A.; Sporns, O.; Stockman, I.; Sur, M. and Thelen, E. (2001): "Autonomous mental development by robots and animals", in Science, no. 291, pp. 599-600.

[AI Technology](https://techoder.com/2023/04/09/ai-technology/) This chapter contains some thoughts on artificial intelligence (AI). First, the distinction between strong and weak AI is explained, as well as the related concepts of general and specific AI, making it clear that all existing manifestations of AI are weak and specific. The main models are briefly described, insisting on the importance of corporality as a key aspect to achieve AI of a general nature. Next, the need to provide machines with common sense knowledge that makes it possible to move towards the ambitious goal of building AI of a general nature is addressed. The latest trends in AI based on the analysis of large amounts of data that have made spectacular progress possible in very recent times are also discussed, with an allusion to the difficulties present today in the AI ​​approaches. Finally, other issues that are and will continue to be key in AI are discussed, before closing with a brief reflection on the risks of artificial intelligence. The ultimate goal of AI, to make a machine have general human-like intelligence, is one of the most ambitious goals science has ever set itself. Due to its difficulty, it is comparable to other great scientific objectives such as explaining the origin of life, the origin of the universe or knowing the structure of matter. Over the last few centuries, this desire to build intelligent machines has led us to invent models or metaphors of the human brain. For example, in the 17th century, Descartes wondered if a complex mechanical system made up of gears, pulleys, and tubes could, in principle, emulate thought. Two centuries later, the metaphor was telephone systems since it seemed that their connections could be assimilated to a neural network. ## THE PHYSICAL SYMBOL SYSTEM HYPOTHESIS: WEAK AI VS. STRONG AI In a paper on the occasion of receiving the prestigious Turing Award in 1975, Allen Newell and Herbert Simon (Newell & Simon, 1975) formulated the Physical Symbol System hypothesis according to which “every system of physical symbols possesses the necessary means and enough to carry out intelligent actions. On the other hand, since human beings are capable of displaying intelligent behavior in the general sense, then, according to the hypothesis, we too are physical symbol systems. It is convenient to clarify what Newell and Simon mean when they talk about the _Physical Symbol System_ (SSF). An SSF consists of a set of entities called symbols that, through relationships, can be combined to form larger structures—like atoms combining to form molecules—and that can be transformed by applying a set of processes. These processes can generate new symbols, create and modify relationships between symbols, store symbols, compare if two symbols are the same or different, and so on. These symbols are physical insofar as they have a physical-electronic (in the case of computers) or physical-biological (in the case of human beings) substrate. Indeed, in the case of computers, the symbols are made by means of digital electronic circuits and in the case of human beings by networks of neurons. In short, according to the SSF hypothesis, the nature of the substrate (electronic circuits or neural networks) is unimportant as long as said substrate allows symbols to be processed. Let’s not forget that it is a hypothesis and, therefore, it should not be accepted or rejected. _a priori_ . 
In any case, its validity or refutation must be verified according to the scientific method, with experimental tests. AI is precisely the scientific field dedicated to trying to verify this hypothesis in the context of digital computers, that is, verifying whether or not a properly programmed computer is capable of general intelligent behavior. It is important to note that it should be about intelligence of a general type and not a specific intelligence, since the intelligence of human beings is of a general type. Exhibiting specific intelligence is quite another matter. For example, programs that play chess at the Grandmaster level are unable to play checkers despite it being a much simpler game. It is necessary to design and execute a different and independent program from the one that allows you to play chess so that the same computer can also play checkers. In other words, he cannot take advantage of his ability to play chess to adapt it to checkers. In the case of human beings, this is not the case, since any chess player can take advantage of his knowledge of this game to, in a matter of a few minutes, play checkers perfectly. _Weak AI_ as opposed to the _strong AI_ that was actually referred to by Newell and Simon and other AI founding fathers. Although strictly speaking the SSF hypothesis was formulated in 1975, it was already implicit in the ideas of the AI ​​pioneers in the 1950s and even in the ideas of Alan Turing in his pioneering writings (Turing, 1948, 1950) on intelligent machines. . > ### **The ultimate goal of AI, to make a machine have general human-like intelligence, is one of the most ambitious goals science has ever set itself. Due to its difficulty, it is comparable to explaining the origin of life, the origin of the universe or knowing the structure of matter:** The person who introduced this distinction between weak and strong AI was the philosopher John Searle in an article critical of AI published in 1980 (Searle, 1980) that caused, and continues to cause, much controversy. Strong AI would imply that a properly designed computer does not simulate a mind but _is a mind._ and therefore should be capable of having an intelligence equal to or even superior to that of humans. Searle in his article tries to prove that strong AI is impossible. At this point it should be clarified that general AI is not the same as strong AI. There is obviously a connection but only in one sense, that is to say that all strong AI will necessarily be general but there may be general AIs, that is to say multitasking, that are not strong, that emulate the ability to exhibit general human-like intelligence but without experiencing states. mental. Weak AI, on the other hand, would consist, according to Searle, of building programs that perform specific tasks and, obviously, without the need to have mental states. The ability of computers to perform specific tasks, even better than people, has already been amply demonstrated. In certain domains, weak AI advances far exceed human expertise, such as finding solutions to logical formulas with many variables or playing chess, or Go, or medical diagnosis and many other aspects related to decision making. decisions. Also associated with weak AI is the fact of formulating and testing hypotheses about aspects related to the mind (for example, the ability to reason deductively, to learn inductively, etc.) 
through the construction of programs that carry out these functions, even if it is through processes completely different from those carried out by the brain. Absolutely all the advances made so far in the field of AI are manifestations of weak and specific AI. ## THE MAIN MODELS IN AI: SYMBOLIC, CONNECTIONIST, EVOLUTIONARY AND CORPOREAL The dominant model in AI has been the symbolic one, which has its roots in the SSF hypothesis. In fact, it is still very important and is now considered the classic model in AI (also called by the acronym GOFAI, for _Good Old Fashioned AI_ ). _It is a top down_ model which is based on logical reasoning and heuristic search as pillars for problem solving, without the intelligent system needing to form part of a body or be located in a real environment. That is, symbolic AI operates with abstract representations of the real world that are modeled by representation languages ​​based mainly on mathematical logic and its extensions. For this reason, early intelligent systems mainly solved problems that do not require direct interaction with the environment, such as proving simple mathematical theorems or playing chess—programs that play chess do not actually need visual perception to see the pieces. on the board or actuators to move the pieces. This does not mean that symbolic AI cannot be used to, for example, programming the reasoning module of a physical robot located in a real environment, but in the early years the pioneers of AI did not have knowledge representation languages ​​or programming that would allow them to do it efficiently and for this reason the first intelligent systems they were limited to solving problems that did not require direct interaction with the real world. Currently, symbolic AI is still used to prove theorems or play chess, but also for applications that require perceiving the environment and acting on it, such as learning and decision-making in autonomous robots. But in the early years, the pioneers of AI did not have knowledge representation languages ​​or programming that would allow them to do so efficiently, and for this reason the first intelligent systems were limited to solving problems that did not require direct interaction with the real world. Currently, symbolic AI is still used to prove theorems or play chess, but also for applications that require perceiving the environment and acting on it, such as learning and decision-making in autonomous robots. But in the early years, the pioneers of AI did not have knowledge representation languages ​​or programming that would allow them to do so efficiently, and for this reason the first intelligent systems were limited to solving problems that did not require direct interaction with the real world. Currently, symbolic AI is still used to prove theorems or play chess, but also for applications that require perceiving the environment and acting on it, such as learning and decision-making in autonomous robots. Simultaneously with the symbolic AI, a bio-inspired AI called the connectionist also began to develop. Connectionist systems are not incompatible with the SSF hypothesis but, contrary to symbolic AI, it is a _bottom-up_ modeling , since they are based on the hypothesis that intelligence emerges from the distributed activity of a large number of interconnected units that process information in parallel. In connectionist AI these units are very approximate models of the electrical activity of biological neurons. 
As early as 1943, McCulloch and Pitts (McCulloch and Pitts, 1943) proposed a simplified model of the neuron based on the idea that a neuron is essentially a logical unit. This model is a mathematical abstraction with inputs (dendrites) and outputs (axons). The output value is calculated based on the result of a weighted sum of the inputs, so that if said sum exceeds a preset threshold then the output is “1”, otherwise the output is “0”. By connecting the output of each neuron to the inputs of other neurons, an artificial neural network is formed. Based on what was already known at that time about the reinforcement of synapses between biological neurons, it was seen that these artificial neural networks could be trained to learn functions that relate inputs to outputs by adjusting the weights used to weight the connections. between neurons, for this reason it was thought that they would be better models for learning, cognition and memory than models based on symbolic AI. However, intelligent systems based on connectionism neither need to be part of a body nor be situated in a real environment and, from this point of view, have the same limitations as symbolic systems. Besides, real neurons have complex dendritic arborizations with not only electrical but also non-trivial chemical properties. They may contain ionic conductances that produce nonlinear effects. They can receive tens of thousands of synapses varying in position, polarity, and magnitude. Also, most of the brain cells are not neurons, they are cells _glial cells_ , which not only regulate the functioning of neurons, but also have electrical potentials, generate calcium waves and communicate with each other, which seems to indicate that they play a very important role in cognitive processes. However, there is no connectionist model that includes these cells, so at best these models are very incomplete and at worst wrong. In short, all the enormous complexity of the brain is far removed from current models. This immense complexity of the brain also leads us to think that the so-called _singularity_, that is, future artificial superintelligences that, based on replicas of the brain, will far exceed human intelligence in about twenty-five years, is a prediction with little scientific basis. Another bioinspired modeling, also compatible with the SSF hypothesis, and not corporeal, is _evolutionary computation._ (Holland, 1975). The successes of biology in evolving complex organisms led some researchers in the early sixties to consider the possibility of imitating evolution so that computer programs, through an evolutionary process, automatically improve solutions to problems to those that had been programmed. The idea is that these programs, thanks to mutation and crossover operators of “chromosomes” that model the programs, generate new generations of modified programs whose solutions are better than those of the programs of previous generations. Given that we can consider that the objective of AI is the search for programs capable of producing intelligent behaviors, it was thought that evolutionary programming could be used to find such programs within the space of possible programs. The reality is much more complex and this approach has many limitations, although it has produced excellent results, particularly in solving optimization problems. 
> ### The complexity of the brain is far removed from AI models and leads one to think that the so-called _singularity_ —artificial superintelligences based on replicas of the brain that will far exceed human intelligence—is a prediction with little scientific foundation. One of the strongest criticisms of these non-embodied models is based on the fact that an intelligent agent needs a body in order to have direct experiences with its environment (we would say that the agent is “situated” in its environment) instead of a programmer providing descriptions abstract images of that environment encoded using a knowledge representation language. Without a body, these abstract representations have no semantic content for the machine. However, thanks to the direct interaction with the environment, the agent can relate the signals that it perceives through its sensors with symbolic representations generated from what is perceived. Some AI experts, notably Rodney Brooks (Brooks, 1991) even went so far as to state that it was not even necessary to generate such internal representations, that is, that it is not necessary that an agent have to have an internal representation of the world around him since the world itself is the best possible model of himself and that most of the intelligent behaviors do not require reasoning but emerge from the interaction between the agent and its environment. This idea generated much controversy and Brooks himself, a few years later, admitted that there are many situations in which an internal representation of the world is necessary for the agent to make rational decisions. In 1965, the philosopher Hubert Dreyfus claimed that the ultimate goal of AI, that is, strong AI of a general type, was as unattainable as the goal of the 17th century alchemists . who tried to transform lead into gold (Dreyfus, 1965). Dreyfus argued that the brain processes information in a global and continuous manner while a computer uses a finite and discrete set of deterministic operations applying rules to a finite set of data. In this aspect we can see an argument similar to that of Searle, but Dreyfus, in later articles and books (Dreyfus, 1992), also used another argument that the body plays a crucial role in intelligence. He was therefore one of the first to advocate the need for intelligence to form part of a body with which to interact with the world. The main idea is that the intelligence of living beings derives from the fact of being located in an environment with which they can interact thanks to their bodies. In fact, this need for corporeality is based on Heidegger’s Phenomenology, which emphasizes the importance of the body with its needs, desires, pleasures, pains, ways of moving, acting, etc. According to Dreyfus, AI should model all these aspects to achieve the ultimate goal of strong AI. Dreyfus does not completely deny the possibility of strong AI but states that it is not possible with the classical methods of symbolic and non-corporeal AI, in other words he considers that the Physical Symbol System hypothesis is not correct. This is certainly an interesting idea that many AI researchers share today. Indeed, the corporeal approximation with internal representation has been gaining ground in AI and currently many of us consider it essential to advance towards intelligence of a general type. In fact, we base a large part of our intelligence on our sensory and motor skills. 
In other words, the body conforms to the intelligence and therefore without a body there can be no intelligence of a general type. This is so because the _The body’s hardware_ , in particular the mechanisms of the sensory system and the motor system, determine the type of interactions that an agent can perform. In turn, these interactions shape the cognitive abilities of the agents, giving rise to what is known as _situated cognition._. That is, the machine is placed in real environments, as occurs with human beings, so that they have interactive experiences that eventually allow them to carry out something similar to what Piaget’s theory of cognitive development proposes ( Inhelder and Piaget, 1958), according to which a human being follows a process of mental maturation in stages and perhaps the different steps of this process could serve as a guide to design intelligent machines. These ideas have given rise to a new subarea of ​​AI called _developmental robotics_ (Weng _et al._ , 2001). ## SPECIALIZED AI SUCCESSES All AI research efforts have been focused on building specialized artificial intelligences and the successes achieved are very impressive, particularly during the last decade thanks above all to the conjunction of two elements: the availability of huge amounts of data and access to high-performance computing in order to analyze them. Indeed, the success of systems, such as AlphaGo (Silver _et al._ , 2016), Watson (Ferrucci _et al._, 2013) and advances in autonomous vehicles or image-based medical diagnosis have been possible thanks to this ability to analyze large amounts of data and detect patterns efficiently. However, we have hardly made any progress towards achieving general AI. In fact, we can affirm that current AI systems are a demonstration of what Daniel Dennett calls “competition without understanding” (Dennett, 2018). Possibly the most important lesson we have learned over the sixty years of AI’s existence is that what seemed most difficult (diagnosing diseases, playing chess and Go at the highest level) has turned out to be relatively easy and what seemed easier has turned out to be the most difficult. The explanation for this apparent contradiction must be found in the difficulty of providing machines with common sense knowledge. Without this knowledge, it is not possible to have a deep understanding of language or a deep interpretation of what a visual perception system captures, among other limitations. In fact, common sense is a fundamental requirement to achieve human-like AI in generality and depth. Common sense knowledge is the result of our experiences and experiences. Some examples are: “water always flows from top to bottom”, “to drag an object tied to a rope you have to pull the rope, not push it”, “a glass can be stored in a cupboard but we cannot store a cupboard inside a glass», etc. There are millions of common sense pieces of knowledge that people easily handle and that allow us to understand the world in which we live. A possible line of research that could give interesting results in the acquisition of common sense knowledge is the aforementioned developmental robotics. Another very interesting line of work is the one whose objective is mathematical modeling and learning cause-effect relationships, that is, learning the causal and, therefore, asymmetrical aspects of the world. _et al._ , 2016). 
## FUTURE: TOWARDS TRULY INTELLIGENT ARTIFICIAL INTELLIGENCES

The most difficult capabilities to achieve are those that require interacting with unrestricted, previously unprepared environments. Designing systems with these capabilities requires integrating developments from many areas of AI. In particular, we need knowledge representation languages that can encode information about many different types of objects, situations, and actions, as well as their properties and the relationships between them, particularly cause-effect relationships. We also need new algorithms that, based on these representations, can robustly and efficiently solve problems and answer questions on virtually any topic. Finally, since these systems will need to acquire a practically unlimited amount of knowledge, they must be able to learn continuously throughout their entire existence. In short, it is essential to design systems that integrate perception, representation, reasoning, action, and learning. This is a very important problem in AI, since we still do not know how to integrate all these components of intelligence. We need cognitive architectures (Forbus, 2012) that integrate them properly. Embodied systems are a fundamental step towards one day achieving general artificial intelligence.

> ### **The most difficult capabilities to achieve are those that require interacting with unrestricted or previously unprepared environments. Designing systems that have these capabilities requires integrating developments in many areas of AI.**

Among future activities, we believe that the most important research topics will involve hybrid systems that combine the advantages of systems capable of reasoning on the basis of knowledge and the use of memory (Graves _et al._, 2016) with the advantages of AI based on the analysis of massive amounts of data, in what is known as deep learning (Bengio, 2009). A major limitation of current deep learning systems is so-called "catastrophic forgetting": if a system has been trained to carry out one task (for example, playing Go) and we then train it to carry out a different task (for example, distinguishing between images of dogs and cats), it completely forgets the previously learned task (in this case, playing Go). This limitation is strong evidence that these systems do not really learn anything, at least not in the human sense of learning. Another important limitation of these systems is that they are "black boxes" with no explanatory capacity. An interesting research objective will therefore be to provide deep learning systems with explanatory capacity by incorporating modules that can explain how their results and conclusions were reached, since the ability to explain itself is an indispensable characteristic of any intelligent system.
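The catastrophic forgetting mentioned above can be reproduced even with the simplest of learners. The following sketch (a minimal Python example using a plain logistic classifier and two synthetic tasks; the data and names are hypothetical and chosen only for illustration) trains on task A, then continues training on a conflicting task B, and shows how performance on task A collapses to roughly chance level:

```python
import numpy as np

rng = np.random.default_rng(0)


def make_task(n, w_true):
    """Linearly separable toy task: label is 1 when w_true . x > 0."""
    x = rng.normal(size=(n, 2))
    y = (x @ w_true > 0).astype(float)
    return x, y


def train(w, x, y, lr=0.5, steps=500):
    """Plain logistic-regression training by full-batch gradient descent."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))        # predicted probabilities
        w -= lr * x.T @ (p - y) / len(y)           # gradient of the log loss
    return w


def accuracy(w, x, y):
    return float((((x @ w) > 0).astype(float) == y).mean())


# Two tasks whose optimal weights point in conflicting directions.
task_a = make_task(2_000, np.array([1.0, 1.0]))    # label: x0 + x1 > 0
task_b = make_task(2_000, np.array([1.0, -1.0]))   # label: x0 - x1 > 0

w = np.zeros(2)
w = train(w, *task_a)
print("after training on A -> accuracy on A:", accuracy(w, *task_a))  # near 1.0

w = train(w, *task_b)                               # continue training on B only
print("after training on B -> accuracy on A:", accuracy(w, *task_a))  # near chance
print("                       accuracy on B:", accuracy(w, *task_b))  # near 1.0
```

Real deep networks behave in essentially the same way when trained sequentially on different tasks, which is why continual learning remains an active research problem.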
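As for explanatory capacity, one simple, model-agnostic way to attach an explanation module to an otherwise opaque system is to measure how much its performance degrades when each input feature is scrambled. The sketch below assumes a toy stand-in for a trained model (the hypothetical `black_box` function) and is meant only to illustrate the kind of module the text alludes to, not a complete solution to explainability:

```python
import numpy as np

rng = np.random.default_rng(1)


def permutation_importance(predict, x, y, n_repeats=10):
    """Model-agnostic explanation: how much does accuracy drop when each
    input feature is shuffled?  Larger drops mean the black box relies
    more heavily on that feature."""
    base = (predict(x) == y).mean()
    drops = []
    for j in range(x.shape[1]):
        losses = []
        for _ in range(n_repeats):
            x_perm = x.copy()
            x_perm[:, j] = rng.permutation(x_perm[:, j])   # break feature j
            losses.append(base - (predict(x_perm) == y).mean())
        drops.append(float(np.mean(losses)))
    return drops


# A stand-in "black box": any function mapping inputs to labels would do.
def black_box(x):
    return (2.0 * x[:, 0] + 0.1 * x[:, 1] > 0).astype(float)


x = rng.normal(size=(5_000, 3))
y = black_box(x)   # labels the model gets right by construction

print(permutation_importance(black_box, x, y))
# Expected: a large drop for feature 0, a small one for feature 1,
# and roughly zero for the irrelevant feature 2.
```

Such scores are only a crude surrogate for a genuine explanation, but they show how an external module can at least report what an opaque model depends on.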
It is also necessary to develop new learning algorithms that do not require huge amounts of training data, as well as much more energy-efficient _hardware_ on which to implement them, since energy consumption could end up being one of the main barriers to the development of AI. By comparison, the brain is orders of magnitude more efficient than the current _hardware_ needed to implement the most sophisticated AI algorithms. One possible avenue to explore is memristor-based neuromorphic computing (Saxena _et al._, 2018). Other, more classical AI techniques that will continue to be the subject of extensive research are multi-agent systems, action planning, experience-based reasoning, computer vision, multimodal human-machine communication, humanoid robotics and, above all, new developments in _developmental robotics_, which may be the key to equipping machines with common sense and, in particular, to learning the relationship between their actions and the effects these produce on the environment. We will also see significant progress thanks to biomimetic approaches that reproduce the behavior of animals in machines. It is not just a question of reproducing an animal's behavior, but of understanding how the brain that produces that behavior works: building and programming electronic circuits that reproduce the brain activity generating the behavior. Some biologists are interested in attempts to make an artificial brain as complex as possible because they see it as a way to better understand the organ, while engineers look to biology for information that leads to more effective designs.

> ### **_Developmental robotics_ may be the key to endowing machines with common sense and, in particular, to learning the relationship between their actions and the effects they produce on the environment**

In terms of applications, some of the most important will continue to be those related to the web, video games, personal assistants, and autonomous robots (particularly autonomous vehicles, social robots, and planetary exploration robots). Applications to the environment and energy saving will also be important, as will those aimed at economics and sociology. Finally, the application of AI to the arts (visual arts, music, dance, storytelling) will significantly change the nature of the creative process. Computers are no longer just tools that assist creation; they are beginning to be creative agents themselves. This has given rise to a new and very promising application area of AI called _Computational Creativity_, which has already produced very interesting results (Colton _et al._, 2009, 2015; López de Mántaras, 2016) in chess, music, the plastic arts, and narrative, among other creative activities.

## FINAL REFLECTION

No matter how intelligent future artificial intelligences become, including those of a general type, they will never be equal to human intelligences: as we have argued, the mental development that all complex intelligence requires depends on interactions with the environment, and these interactions depend in turn on the body, particularly the perceptual and motor systems. This, together with the fact that machines will not undergo socialization and acculturation processes like ours, reinforces the conclusion that, however sophisticated they become, these intelligences will be different from ours.
The fact that these are intelligences alien to humans and, therefore, alien to human values and needs should make us reflect on possible ethical limits to the development of AI. In particular, the real danger of AI is not the highly unlikely technological singularity brought about by hypothetical future artificial superintelligences; the real dangers are already here. Currently, the algorithms behind Internet search engines, recommendation systems, and the personal assistants on our mobile phones know quite well what we do, what our preferences and tastes are, and can even infer what we think and how we feel. Access to the massive amounts of information that we voluntarily generate is essential for this, since by analyzing this data from various sources it is possible to find relationships and patterns that would be impossible to detect without AI techniques. All of this results in an alarming loss of privacy. To avoid it, we should have the right to own a copy of all the personal data we generate, to control its use, and to decide who may access it and under what conditions, instead of that data being in the hands of large corporations without our really knowing what it is used for.

> ### **No matter how intelligent future artificial intelligences become, they will never be human-like; the mental development required by all complex intelligence depends on interactions with the environment and these in turn depend on the body, particularly the perceptual and motor systems**

AI is based on complex programming and will therefore necessarily make mistakes. But even assuming that completely reliable _software_ could be developed, there are ethical dilemmas that _software_ developers should take into account when designing it. For example, an autonomous vehicle could decide to run over a pedestrian in order to avoid a collision that could harm its occupants. Equipping companies with advanced AI systems to make management and production more efficient will require fewer human employees and generate more unemployment. These ethical dilemmas have led many AI experts to point out the need to regulate the development of AI. In some cases, its use should even be prohibited. A clear example is autonomous weapons. The three basic principles that govern armed conflict (discrimination: the need to distinguish between combatants and civilians, or between a combatant ready to surrender and one ready to attack; proportionality: avoiding excessive use of force; and precaution: minimizing the number of casualties and property damage) are extraordinarily difficult to assess and, therefore, almost impossible for the AI systems that control autonomous weapons to comply with. But even if, in the very long term, machines acquired this capacity, it would be undignified to delegate the decision to kill to a machine.

Beyond regulation, it is essential to educate citizens about the risks of intelligent technologies, providing them with the skills needed to control them rather than be controlled by them. We need future citizens who are much better informed, better able to assess technological risks, more critical, and willing to assert their rights. This training process must begin at school and continue at university.
In particular, science and engineering students need to receive ethical training that allows them to better understand the social implications of the technologies they will very likely develop. Only if we invest in education will we achieve a society that can take advantage of intelligent technologies while minimizing the risks. AI undoubtedly has extraordinary potential to benefit society, provided we make appropriate and prudent use of it. It is essential to raise awareness of the limitations of AI, and to act collectively to ensure that AI is used for the common good in a safe, reliable, and responsible way.

The path towards truly intelligent AI will continue to be long and difficult. After all, AI is barely sixty years old and, as Carl Sagan would say, sixty years is a very brief moment on the cosmic scale of time; or, as Gabriel García Márquez put it very poetically: "Since the appearance of visible life on Earth, 380 million years had to elapse for a butterfly to learn to fly, another 180 million years to make a rose with no other commitment than to be beautiful, and four geological ages for human beings to be able to sing better than birds and die of love."

## Bibliography

—Bengio, Yoshua (2009): "Learning deep architectures for AI", in Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1-127.
—Brooks, Rodney A. (1991): "Intelligence without reason", Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI'91), vol. 1, pp. 569-595.
—Colton, S.; López de Mántaras, R. and Stock, O. (2009): "Computational creativity: coming of age", in AI Magazine, vol. 30, no. 3, pp. 11-14.
—Colton, S.; Halskov, J.; Ventura, D.; Gouldstone, I.; Cook, M. and Pérez-Ferrer, B. (2015): "The Painting Fool sees! New projects with the automated painter", International Conference on Computational Creativity (ICCC 2015), pp. 189-196.
—Dennett, D. C. (2018): From Bacteria to Bach and Back: The Evolution of Minds, London, Penguin Random House.
—Dreyfus, Hubert L. (1965): Alchemy and Artificial Intelligence, Santa Monica, California, Rand Corporation.
—(1992): What Computers Still Can't Do, New York, MIT Press.
—Ferrucci, D. A.; Levas, A.; Bagchi, S.; Gondek, D. and Mueller, E. T. (2013): "Watson: Beyond Jeopardy!", in Artificial Intelligence, no. 199, pp. 93-105.
—Forbus, Kenneth D. (2012): "How minds will be built", in Advances in Cognitive Systems, no. 1, pp. 47-58.
—Graves, A.; Wayne, G.; Reynolds, M.; Harley, T.; Danihelka, I.; Grabska-Barwinska, A.; Gómez-Colmenarejo, S.; Grefenstette, E.; Ramalho, T.; Agapiou, J.; Puigdomènech-Badia, A.; Hermann, K. M.; Zwols, Y.; Ostrovski, G.; Cain, A.; King, H.; Summerfield, C.; Blunsom, P.; Kavukcuoglu, K. and Hassabis, D. (2016): "Hybrid computing using a neural network with dynamic external memory", in Nature, no. 538, pp. 471-476.
—Holland, John H. (1975): Adaptation in Natural and Artificial Systems, Michigan, University of Michigan Press.
—Inhelder, Bärbel and Piaget, Jean (1958): The Growth of Logical Thinking from Childhood to Adolescence, New York, Basic Books.
—Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B. and Gershman, S. J. (2017): "Building machines that learn and think like people", in Behavioral and Brain Sciences, vol. 40, e253.
—López de Mántaras, R. (2016): "Artificial intelligence and the arts: towards computational creativity", in AA VV, The Next Step: Exponential Life, Madrid, BBVA/Turner, pp. 100-125.
—McCulloch, Warren S. and Pitts, Walter (1943): "A logical calculus of the ideas immanent in nervous activity", in Bulletin of Mathematical Biophysics, no. 5, pp. 115-133.
—Newell, Allen and Simon, Herbert A. (1976): "Computer science as empirical inquiry: symbols and search", in Communications of the ACM, vol. 19, no. 3, pp. 113-126.
—Pearl, Judea and Mackenzie, Dana (2018): The Book of Why: The New Science of Cause and Effect, New York, Basic Books.
—Saxena, V.; Wu, X.; Srivastava, I. and Zhu, K. (2018): "Towards neuromorphic learning machines using emerging memory devices with brain-like energy efficiency", preprint.
—Searle, John R. (1980): "Minds, brains, and programs", in Behavioral and Brain Sciences, vol. 3, no. 3, pp. 417-457.
—Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T. and Hassabis, D. (2016): "Mastering the game of Go with deep neural networks and tree search", in Nature, vol. 529, no. 7587, pp. 484-489.
—Turing, Alan M. (1948): Intelligent Machinery, National Physical Laboratory Report; reprinted in B. Meltzer and D. Michie (eds.) (1969): Machine Intelligence 5, Edinburgh, Edinburgh University Press.
—(1950): "Computing machinery and intelligence", in Mind, vol. 59, no. 236, pp. 433-460.
—Weizenbaum, Joseph (1976): Computer Power and Human Reason: From Judgment to Calculation, San Francisco, W. H. Freeman and Company.
—Weng, J.; McClelland, J.; Pentland, A.; Sporns, O.; Stockman, I.; Sur, M. and Thelen, E. (2001): "Autonomous mental development by robots and animals", in Science, no. 291, pp. 599-600.
