Artificial intelligence

 


Introduction:

In contrast to the natural intelligence displayed by humans and animals, artificial intelligence (AI) is intelligence demonstrated by machines. AI research has been defined as the study of intelligent agents: any system that perceives its environment and takes actions that maximise its chance of achieving its goals.
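To make the "intelligent agent" definition concrete, here is a minimal sketch in Python of an agent loop that perceives its environment and picks the action with the best estimated chance of success. The names perceive, score_action, and act are illustrative stand-ins, not part of any particular library.

# Minimal sketch of the "intelligent agent" idea described above: repeatedly
# perceive the environment, then choose the action with the best estimated
# chance of achieving the goal. All names here are illustrative stand-ins.
from typing import Any, Callable, Iterable

def run_agent(perceive: Callable[[], Any],
              actions: Iterable[Any],
              score_action: Callable[[Any, Any], float],
              act: Callable[[Any], None],
              steps: int = 10) -> None:
    """Observe the world and take the best-scoring action, `steps` times."""
    actions = list(actions)
    for _ in range(steps):
        percept = perceive()                                   # sense the environment
        best = max(actions, key=lambda a: score_action(percept, a))
        act(best)                                              # act so as to maximise expected success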

 




The term "artificial intelligence" was previously used to describe machines that mimic and display "human" cognitive abilities associated with the human mind, such as "learning" and "problem-solving". Major AI researchers have since rejected this notion and now describe AI in terms of rationality and acting rationally, which does not limit how intelligence can be expressed.

 

A few examples of AI applications are advanced web search engines such as Google, recommendation systems used by YouTube, Amazon, and Netflix, speech recognition software such as Siri and Alexa, self-driving cars such as Tesla's, automated decision-making, and competing at the highest level in strategic games (such as chess and Go). The AI effect is a phenomenon in which tasks once thought to require "intelligence" are frequently removed from the definition of AI as machines become more and more capable. For instance, optical character recognition is typically left out of the definition of artificial intelligence, despite being a commonplace technology.

 

Since its establishment as a field of study in 1956, artificial intelligence has gone through several waves of optimism, followed by setbacks and a reduction in funding (known as an "AI winter"), then new approaches, successes, and renewed investment. Since its inception, AI research has tried and discarded a wide range of approaches, including modelling human problem-solving, formal logic, large knowledge bases, and imitating animal behaviour. Machine learning heavily grounded in mathematics and statistics has dominated the field in the first two decades of the twenty-first century, and this approach has proved highly effective at solving many difficult problems in both industry and academia.

 

The numerous subfields of AI research are centred on particular objectives and the use of particular techniques. Reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects are among the traditional goals of AI research. General intelligence, the ability to solve an arbitrary problem, is one of the field's long-term objectives. To address these problems, AI researchers have integrated and adapted a wide range of problem-solving techniques, including formal logic, artificial neural networks, search and mathematical optimization, as well as methods from statistics, probability, and economics (a simple search example is sketched below). AI also draws on computer science, psychology, linguistics, philosophy, and many other disciplines.
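As a small illustration of the "search" techniques mentioned above, the following sketch performs a plain breadth-first search over a toy state space; the graph and goal here are invented for the example.

# Illustrative breadth-first search over a toy state space:
# one of the classic "search" techniques referred to above.
from collections import deque

def breadth_first_search(start, goal, neighbours):
    """Return a list of states from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbours(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy example: find a route in a small hand-made graph.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search("A", "D", lambda s: graph[s]))  # ['A', 'B', 'D']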

 

The idea that human intelligence "can be so thoroughly characterised that a machine may be constructed to imitate it" served as the field's founding assumption. This sparked philosophical debates about the mind and the ethics of creating intelligent artificial beings, questions that have been explored in myth, literature, and philosophy since antiquity. Computer scientists and philosophers have since argued that, if it is not directed towards beneficial ends, artificial intelligence may eventually pose an existential threat to humanity.

 




History:

Artificial intelligences have been used as plot devices since antiquity and are common in works of fiction, such as Mary Shelley's Frankenstein and Karel Čapek's R.U.R. These characters and their fates raised many of the same questions now discussed in the ethics of artificial intelligence.

 

Philosophers and mathematicians in antiquity pioneered the study of mechanical or "formal" reasoning. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical reasoning. The Church-Turing thesis states that any process of formal reasoning can be simulated by a digital computer. This, coupled with related advances in cybernetics, information theory, and neuroscience, led researchers to consider the possibility of building an electronic brain. McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons" is now widely regarded as the first work in artificial intelligence.
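To give a sense of what McCulloch and Pitts' "artificial neurons" amount to, the sketch below implements a simple threshold unit in their spirit; the unit weights and the threshold of 2 are chosen merely so that it computes logical AND, and the code is an illustration rather than their original formulation.

# A McCulloch-Pitts-style threshold neuron: it fires (outputs 1) only when
# the weighted sum of its binary inputs reaches a fixed threshold.
def threshold_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, threshold_neuron([a, b], [1, 1], threshold=2))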

 

Two visions of how to achieve artificial intelligence emerged in the 1950s. One vision, known as symbolic AI or GOFAI, was to use computers to create a symbolic representation of the world and systems that could reason about that world. Proponents included Allen Newell, Herbert A. Simon, and Marvin Minsky. Closely associated with this approach was "heuristic search", which likened intelligence to the problem of exploring a space of possibilities for answers. The second vision, known as the connectionist approach, sought to achieve intelligence through learning. Proponents of this approach, most prominently Frank Rosenblatt, sought to connect perceptrons in ways inspired by the connections between neurons. James Manyika and others have contrasted the two approaches as theories of the mind (symbolic AI) and of the brain (connectionism). Manyika argues that symbolic approaches dominated efforts to develop artificial intelligence in this period, due in part to their connection to the intellectual traditions of Descartes, Boole, Gottlob Frege, Bertrand Russell, and others. Connectionist approaches based on cybernetics or artificial neural networks were pushed to the background but have regained prominence in recent decades.
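Rosenblatt's perceptron, the basic unit of the connectionist vision described above, can be sketched as follows; the learning rate, number of epochs, and training data are illustrative choices, not his original parameters.

# Sketch of the perceptron learning rule: nudge the weights
# whenever the unit misclassifies a training example.
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, label) pairs with binary labels 0/1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation >= 0 else 0
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy example: learn the (linearly separable) OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_perceptron(data))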

 

The field of AI research was founded at a workshop held at Dartmouth College in 1956. The participants went on to become the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": machines were learning checkers strategies, solving algebraic word problems, proving logical theorems, and speaking English. By the middle of the 1960s, research in the United States was heavily funded by the Department of Defense, and laboratories had been established around the world.

 

Researchers in the 1960s and 1970s believed that the goal of their field was to build a machine with artificial general intelligence, and they were convinced that symbolic approaches would eventually succeed in achieving this. "Machines will be able, within twenty years, to accomplish whatever work a man can do," predicted Herbert Simon. Marvin Minsky agreed, writing that "the challenge of developing 'artificial intelligence' will substantially be overcome within a generation." They had underestimated how difficult some of the remaining tasks would be. Progress slowed, and in 1974, in response to Sir James Lighthill's criticism and ongoing pressure from the US Congress to fund more productive projects, the governments of the United States and Britain cut off funding for exploratory AI research.

 

The commercial success of expert systems, a form of AI software that imitated the knowledge and analytical skills of human experts, revived interest in AI research in the early 1980s. By 1985, the market for AI had surpassed $1 billion. At the same time, Japan's fifth generation computer project prompted the U.S. and British governments to restore funding for academic research. However, after the market for Lisp machines collapsed in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.
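Expert systems of that era were, at heart, collections of if-then rules applied to a working memory of facts. The toy forward-chaining loop below, with made-up medical rules, conveys the idea; it is a sketch, not a reconstruction of any commercial system.

# Toy forward-chaining rule engine in the spirit of 1980s expert systems:
# keep applying if-then rules until no new facts can be derived.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),   # illustrative rules only
    ({"possible_flu"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough"}, rules))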

 

Many researchers came to doubt that the symbolic approach could replicate all aspects of human cognition, particularly vision, robotics, learning, and pattern recognition. A number of researchers began to investigate "sub-symbolic" approaches to specific AI problems. Robotics researchers such as Rodney Brooks rejected symbolic AI and focused on the basic engineering problems that would allow robots to move, survive, and learn about their environment. Interest in neural networks and "connectionism" was revived by Geoffrey Hinton, David Rumelhart, and others in the middle of the 1980s. Soft computing techniques were developed in the 1980s, including neural networks, fuzzy systems, Grey system theory, evolutionary computation, and many other tools drawn from statistics or mathematical optimization.

 

In the late 1990s and early 21st century, AI gradually rebuilt its reputation by finding specific solutions to specific problems. This narrow focus allowed researchers to produce verifiable results, exploit more mathematical methods, and collaborate with other fields (such as statistics, economics, and mathematics). Although solutions developed by AI researchers were rarely described as "artificial intelligence" in the 1990s, by 2000 they were widely used.

 

Faster computers, better algorithms, and access to vast amounts of data have driven advances in machine learning and perception; from 2012, data-hungry deep learning methods began to dominate accuracy benchmarks. According to Jack Clark of Bloomberg, 2015 marked a turning point for artificial intelligence: the number of software projects at Google that used AI grew to more than 2,700, up from "sporadic usage" in 2012. He attributes this to more readily available, reasonably priced neural networks, growing cloud computing infrastructure, and expanding research tools and datasets. In a 2017 survey, one in five businesses reported that they had "integrated AI in certain offerings or processes".

 

Many academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Much of current research involves statistical AI, including highly successful techniques such as deep learning, which is primarily used to solve specific problems. This concern gave rise to the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.

 

 
