Artificial intelligence
Tuesday, May 26, 2020
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem-solving".[2]
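To make the textbook definition of an "intelligent agent" concrete, here is a minimal sketch in Python. The two-square vacuum world and the class name ReflexVacuumAgent are illustrative assumptions made for this post, not anything taken from the sources cited above: the point is simply that the agent perceives its environment and picks the action expected to further its goal of clean squares.

# A minimal sketch of the "intelligent agent" idea: perceive the
# environment, then choose the action expected to further the goal.
# The two-square vacuum world and all names here are illustrative only.

class ReflexVacuumAgent:
    """Toy agent for a two-square world: act on what is perceived."""

    def act(self, percept):
        location, is_dirty = percept
        if is_dirty:
            return "Suck"                       # goal: clean squares
        return "Right" if location == "A" else "Left"

# One step of the perceive-act loop.
agent = ReflexVacuumAgent()
print(agent.act(("A", True)))                   # -> "Suck"
print(agent.act(("A", False)))                  # -> "Right"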
As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[4] For instance, optical character recognition is frequently excluded from things considered to be AI,[5] having become a routine technology.[6] Modern machine capabilities generally classified as AI include successfully understanding human speech,[7] competing at the highest level in strategic game systems (such as chess and Go),[8] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[9]
Artificial intelligence was founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism,[10][11] followed by disappointment and the loss of funding (known as an "AI winter"),[12][13] followed by new approaches, success, and renewed funding.[11][14] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[15] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[16] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[17][18][19] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[15]
The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects.[16] General intelligence is among the field's long-term goals.[20] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability, and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.
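As a toy illustration of the "search and mathematical optimization" tools mentioned above, the following Python sketch applies simple hill climbing, a basic local-search method, to maximize a one-variable function. The objective function, step size, and iteration count are arbitrary choices made for this example, not anything prescribed by the sources cited here.

# Toy hill climbing: a simple local-search / optimization method of the
# kind used in AI. The objective f and step size are illustrative only.

def f(x):
    return -(x - 3.0) ** 2 + 9.0        # single peak at x = 3

def hill_climb(start, step=0.1, iterations=1000):
    x = start
    for _ in range(iterations):
        # Look one step left and right; move wherever f is highest.
        x = max([x - step, x, x + step], key=f)
    return x

print(round(hill_climb(start=0.0), 2))  # -> approximately 3.0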
The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[21] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction, and philosophy since antiquity.[22] Some people also consider AI to be a danger to humanity if it progresses unabated.[23][24] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[25]
In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering, and operations research.