
Bioelectronics

 

Bioelectronics:

Introduction:

Bioelectronics was described as "the application of biological materials and biological structures for information processing systems and innovative devices" at the first C.E.C. Workshop, held in Brussels in November 1991. According to one definition, bioelectronics, and more specifically bio-molecular electronics, is "the study and development of bio-inspired (i.e. self-assembly) inorganic and organic materials, and of bio-inspired (i.e. massive parallelism) hardware architectures for the implementation of new information processing systems, sensors, and actuators, and for molecular manufacturing down to the atomic scale." In a 2009 report, the US Department of Commerce's National Institute of Standards and Technology (NIST) referred to bioelectronics as "the discipline deriving from the convergence of biology and electronics."

 


The Elsevier journal Biosensors and Bioelectronics, published since 1990, is one source of information on the topic. According to the journal, the objective of bioelectronics is to "...exploit biology and electronics in a broader framework that includes, for instance, biological fuel cells, bionics, and biomaterials for information processing, information storage, electronic components, and actuators. The interaction between biological materials and micro- and nano-electronics is an important factor."

 

History:

The scientist Luigi Galvani conducted the first documented investigation into bioelectronics in the 18th century, applying a voltage to a pair of dissected frog legs. The legs moved, and bioelectronics was born. Electronics technology has been applied to biology and medicine ever since the invention of the pacemaker and the growth of the medical imaging industry. According to a 2009 analysis of publications with the phrase in the title or abstract, Europe (43 percent) and the United States (23 percent) were the most active regions.

 

Material Used In It:

The use of organic electronic components in the field of bioelectronics is known as organic bioelectronics. Organic materials (i.e., those containing carbon) show great promise for interfacing with biological systems. Current applications concentrate on neurology and infection.

 

Conducting polymer coatings, an organic electronic material, represent a significant advance in materials science: they provided the most sophisticated form of electrical stimulation available. They improved the impedance of electrodes used in electrical stimulation, producing better recordings and fewer "harmful electrochemical side reactions." In 1984, Mark Wrighton and colleagues created the organic electrochemical transistor (OECT), which can transport ions; its low measured impedance yields a high signal-to-noise ratio. Magnus Berggren developed the organic electronic ion pump (OEIP), a device that can deliver drugs to specific parts of the body and specific organs.

 

Titanium nitride (TiN), one of the few materials with a solid track record in CMOS technology, proved to be extraordinarily robust and well suited for electrode applications in medical implants.

 


Applications:

Bioelectronics helps people with diseases and disabilities live better lives. The glucose monitor, for example, is a portable tool that helps diabetic individuals measure and manage their blood sugar levels. Electrical stimulation is used to treat patients with epilepsy, chronic pain, Parkinson's disease, deafness, essential tremor, and blindness. A variant of Magnus Berggren's OEIP, the first bioelectronic implant used for therapy in a living, freely moving animal, was developed by other researchers. It used electric current to deliver the neurotransmitter GABA (gamma-aminobutyric acid). A shortage of GABA in the body contributes to chronic pain, so delivering GABA appropriately to the injured nerves relieves pain. When the cholinergic anti-inflammatory pathway (CAP) in the vagus nerve is activated with vagus nerve stimulation (VNS), patients with conditions such as arthritis experience less inflammation. VNS can also help patients with depression and epilepsy, since they are more likely to have a closed CAP. However, not all electronic systems that enhance human life are bioelectronic devices; only those that involve a close and direct interaction between electronic and biological systems qualify.

 

 

Analogue Electronics

 

Analogue electronics:

Introduction:

In contrast to digital electronics, where signals usually take only two levels, analogue electronics covers electronic systems with a continuously variable signal. "Analogue" refers to the proportional relationship between a signal and the voltage or current that represents it; the word derives from the Greek "analogos," meaning "proportional."

 



An analogue signal transmits information through some property of the medium. An aneroid barometer, for instance, uses the angular position of a needle to indicate changes in air pressure. In electrical signals, information can be conveyed by changes in voltage, current, frequency, or total charge. A transducer, which converts one form of energy into another, translates information from some other physical form (such as sound, light, temperature, pressure, or position) into an electrical signal; a microphone is one example.

 

Each distinct signal value represents a different piece of information, and the signal can take any value within a given range. Each level of the signal represents a distinct level of the phenomenon it describes, and any change in the signal is significant. Consider a signal used as a temperature indicator, with one volt representing one degree Celsius: in this scheme, 10 volts correspond to 10 degrees, and 10.1 volts to 10.1 degrees.

 



Modulation is another way to convey an analogue signal: one or more properties of a basic carrier signal are varied. Amplitude modulation (AM) varies the amplitude of a sinusoidal carrier, while frequency modulation (FM) varies its frequency. Many other methods exist, such as phase modulation, which varies the carrier's phase.
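As a minimal sketch (the sample rate, carrier frequency, and frequency deviation below are illustrative values, not from the text), the following Python/NumPy snippet generates AM and FM versions of the same sinusoidal message:

    import numpy as np

    fs = 48_000                      # sample rate (Hz), illustrative
    t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
    fc = 5_000                       # carrier frequency (Hz)
    fm = 200                         # message frequency (Hz)

    message = np.sin(2 * np.pi * fm * t)          # baseband message

    # Amplitude modulation: the carrier's amplitude tracks the message.
    am = (1 + 0.5 * message) * np.sin(2 * np.pi * fc * t)

    # Frequency modulation: the carrier's instantaneous frequency tracks
    # the message; integrating the message gives the phase deviation.
    freq_dev = 1_000                              # peak deviation (Hz)
    phase = 2 * np.pi * np.cumsum(freq_dev * message) / fs
    fm_signal = np.sin(2 * np.pi * fc * t + phase)

Both outputs carry the same information; they differ only in which property of the carrier is varied.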

 

In an analogue sound recording, the variation in the sound pressure striking a microphone causes a corresponding variation in the current through, or voltage across, the microphone. As the sound level changes, the current or voltage varies proportionally while keeping the same waveform or shape.

 

Analog signals can be used in mechanical, pneumatic, hydraulic, and other systems.

 

Random disturbances or fluctuations, some caused by the random thermal vibrations of atomic particles, are always present in analogue systems. Since every variation of an analogue signal is significant, any disturbance is equivalent to a change in the original signal and appears as noise. As the signal is copied and recopied, or transmitted over long distances, these random variations become more pronounced and degrade the signal. Other sources of noise include crosstalk from other signals and poorly designed components. Shielding and the use of low-noise amplifiers (LNAs) both help to lessen these problems.

 

Analogue and digital electronics handle signals differently because information is encoded in them differently. All operations that can be applied to an analogue signal, such as amplification, filtering, and limiting, can also be carried out in the digital domain. And because any digital circuit's behaviour can be explained using the principles of analogue circuits, every digital circuit is also an analogue circuit.
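As an illustration of that point, here is a minimal Python sketch (using NumPy, with made-up sample values) in which amplification, limiting, and simple low-pass filtering are performed numerically on a sampled signal:

    import numpy as np

    samples = np.array([0.0, 0.4, 0.9, 0.3, -0.2, -0.8, -0.1])  # illustrative

    amplified = 2.0 * samples                 # amplification: scale by a gain
    limited = np.clip(amplified, -1.0, 1.0)   # limiting: clamp to a range

    # Filtering: a 3-point moving average, a simple low-pass filter.
    kernel = np.ones(3) / 3
    filtered = np.convolve(limited, kernel, mode="same")
    print(filtered.round(3))

Each step mirrors an operation that analogue hardware performs with transistors and RC networks.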

 

The use of microelectronics has made digital devices cheap and widely accessible.

 

The effect of noise on an analogue circuit depends on the noise level. As the noise level rises, the analogue signal becomes progressively more degraded and less usable; analogue signals are therefore said to "fail gracefully." Even in the presence of substantial noise, an analogue signal can still carry intelligible information. Digital circuits, by contrast, are completely unaffected by noise up to a certain threshold, beyond which they fail catastrophically. In digital telecommunications, error detection and correction coding methods and algorithms can raise that noise threshold, but a point remains at which the link fails catastrophically.
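A rough numerical sketch of this contrast (illustrative values, not from the text): the same Gaussian noise is added to an analogue voltage and to digital bits sent as +/-1 V levels. The analogue error grows smoothly with the noise level, while the digital bit-error rate stays near zero until the noise approaches the decision threshold and then climbs sharply.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    analog = rng.uniform(-1, 1, n)          # analogue message samples
    bits = rng.integers(0, 2, n)            # digital message bits
    levels = np.where(bits == 1, 1.0, -1.0) # bits sent as +/-1 V levels

    for sigma in (0.05, 0.2, 0.5, 1.0):
        noise = rng.normal(0, sigma, n)

        # Analogue: every disturbance changes the recovered signal.
        analog_error = np.mean(np.abs((analog + noise) - analog))

        # Digital: the receiver thresholds at 0 V, so small noise is
        # rejected entirely; only noise past the threshold flips bits.
        received = (levels + noise > 0).astype(int)
        ber = np.mean(received != bits)

        print(f"sigma={sigma:4.2f}  analogue error={analog_error:.3f}  "
              f"digital BER={ber:.4f}")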

 

Because the information in digital electronics is quantised, a signal represents the same information as long as it stays within a given range of values. In digital circuits the signal is regenerated at each logic gate, reducing or eliminating noise. In analogue circuits, signal loss can be recovered with amplifiers; however, noise accumulates throughout the system, and the amplifier itself adds noise according to its noise figure.

 

The key factors that determine a signal's precision are the noise in the original signal and the noise added by processing (see signal-to-noise ratio). The resolution of analogue signals is limited by fundamental physical effects such as shot noise in components. In digital electronics, additional precision is obtained by representing the signal with more digits; since digital operations can usually be performed without loss of precision, the practical limit on the number of digits is set by the analogue-to-digital converter (ADC). The ADC converts an analogue signal into a string of binary numbers. ADCs are used in simple digital display devices such as thermometers and light meters, as well as in data acquisition and digital sound recording. A digital-to-analogue converter (DAC) performs the reverse conversion, transforming a stream of binary numbers into an analogue signal. A DAC is often found in the gain-control system of an op-amp, which in turn may be used to control digital amplifiers and filters.
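A minimal sketch of ideal ADC and DAC behaviour, assuming only standard Python (the reference voltage and bit depth below are illustrative):

    V_REF = 5.0   # full-scale reference voltage (illustrative)
    N_BITS = 8    # converter resolution (illustrative)
    LEVELS = 2 ** N_BITS

    def adc(voltage: float) -> int:
        """Ideal N-bit ADC: quantise a voltage in [0, V_REF) to a code."""
        code = int(voltage / V_REF * LEVELS)
        return max(0, min(LEVELS - 1, code))   # clamp to the valid range

    def dac(code: int) -> float:
        """Ideal N-bit DAC: convert a code back to a voltage."""
        return code / LEVELS * V_REF

    v_in = 3.1415
    code = adc(v_in)
    v_out = dac(code)
    print(f"{v_in} V -> code {code} -> {v_out:.4f} V "
          f"(quantisation step {V_REF / LEVELS:.4f} V)")

The difference between v_in and v_out is quantisation error; adding bits shrinks the step size and thus the error, which is the "extra precision from more digits" described above.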

 



Analogue circuits are usually harder to design than comparable digital systems; this is one of the main reasons digital systems have become more popular than analogue ones. An analogue circuit is typically designed by hand, and the process allows far less automation than for digital systems. Since the early 2000s, platforms have emerged that let analogue designs be described in software, allowing faster prototyping. However, a digital electronic device will always require an analogue interface to communicate with the outside world; for example, the first stage of the receive chain in every digital radio receiver is an analogue preamplifier.

 

A passive analogue circuit contains only resistors, capacitors, and inductors; active circuits also contain transistors or other active components. Conventional circuits are constructed from lumped elements, i.e., discrete components. Distributed-element circuits, constructed from segments of transmission line, offer an alternative.

Artificial intelligence

 

Artificial intelligence:

Introduction:

In contrast to the natural intelligence exhibited by humans and animals, artificial intelligence (AI) is intelligence demonstrated by machines. AI research has been defined as the study of intelligent agents: any system that perceives its environment and takes actions that maximise its chance of achieving its goals.
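As a hedged illustration of this intelligent-agent framing (the class and method names below are hypothetical, not from any standard library), an agent can be sketched as a perceive-decide-act loop:

    class GreedyThermostat:
        """Toy agent: perceives a temperature, acts to keep it near a goal."""
        def __init__(self, target: float) -> None:
            self.target = target
            self.current = target

        def perceive(self, observation: dict) -> None:
            self.current = observation["temperature"]

        def act(self) -> str:
            # Choose the action that best moves the world toward the goal.
            if self.current < self.target - 0.5:
                return "heat"
            if self.current > self.target + 0.5:
                return "cool"
            return "idle"

    agent = GreedyThermostat(target=21.0)
    agent.perceive({"temperature": 18.2})
    print(agent.act())  # -> "heat"

However simple, this captures the definition: the agent senses its environment and selects actions in pursuit of its goal.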

 




The term previously referred to machines that mimic and display "human" cognitive skills associated with the human mind, such as "learning" and "problem-solving." Major AI researchers have since rejected this notion and now describe AI in terms of rationality and acting rationally, which does not constrain how intelligence can be expressed.

 

A few examples of AI applications are advanced web search engines such as Google, recommendation systems such as those of YouTube, Amazon, and Netflix, speech recognition software such as Siri and Alexa, self-driving cars such as Tesla's, automated decision-making, and competing at the highest level in strategic games such as chess and Go. The AI effect is a phenomenon whereby tasks once thought to require "intelligence" are frequently removed from the definition of AI as machines grow more and more capable. For instance, optical character recognition, despite being a commonplace technique, is typically left out of the definition of artificial intelligence.

 

Since its establishment as a field of study in 1956, artificial intelligence has gone through several waves of optimism, each followed by setbacks and reduced funding (an "AI winter"), then by new strategies, achievements, and renewed investment. Over its history, AI research has tried and abandoned a wide range of methodologies, including modelling human problem-solving, formal logic, large knowledge bases, and the imitation of animal behaviour. Machine learning, heavily grounded in mathematics and statistics, has dominated the field in the first two decades of the twenty-first century, and this approach has proved highly effective at solving difficult problems in both industry and academia.

 

The various subfields of AI research centre on particular goals and particular techniques. Traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence, the ability to solve an arbitrary problem, is among the field's long-term goals. To address these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including formal logic, artificial neural networks, search and mathematical optimisation, and methods from statistics, probability, and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other disciplines.

 

The field was founded on the idea that human intelligence "can be so thoroughly characterised that a machine may be constructed to imitate it." This sparked philosophical debates about the mind and the ethics of creating intelligent artificial beings, topics explored in myth, literature, and philosophy since antiquity. Computer scientists and philosophers have since argued that if artificial intelligence is not directed toward beneficial ends, it may eventually pose an existential threat to humanity.

 




History:

Artificial intelligences have been used as plot devices since antiquity and appear frequently in works of fiction, such as Mary Shelley's Frankenstein and Karel Čapek's R.U.R. These characters and their fates raised many of the same questions now explored in the ethics of artificial intelligence.

 

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical reasoning. The Church-Turing thesis holds that any formal reasoning process can be replicated by a digital computer. This insight, together with concurrent advances in cybernetics, information theory, and neuroscience, led researchers to consider the possibility of building an electronic brain. McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons" is now generally regarded as the first work in artificial intelligence.
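As a sketch of the idea (the weights and thresholds below are illustrative, and the notation is not that of the 1943 paper), a McCulloch-Pitts unit fires when the weighted sum of its binary inputs reaches a threshold, which suffices to realise basic logic gates:

    def mcculloch_pitts(inputs, weights, threshold):
        """Fire (1) iff the weighted sum of binary inputs meets the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Logic gates as threshold units (weights/thresholds are illustrative).
    AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
    NOT = lambda a:    mcculloch_pitts([a],    [-1],   threshold=0)

    # NOT(AND(...)) gives NAND, a universal gate, so networks of such
    # units can compute any Boolean function.
    NAND = lambda a, b: NOT(AND(a, b))
    print([NAND(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]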

 

Two visions of how to achieve machine intelligence emerged in the 1950s. One, known as symbolic AI or GOFAI, was to use computers to create a symbolic representation of the world and systems that could reason about it; its proponents included Allen Newell, Herbert A. Simon, and Marvin Minsky. Closely related was the "heuristic search" approach, which likened intelligence to the problem of exploring a space of possible answers. The second vision, known as the connectionist approach, sought to achieve intelligence through learning; its proponents, most notably Frank Rosenblatt, aimed to connect perceptrons in ways inspired by the connections of neurons. James Manyika and others have contrasted the two approaches as theories of the mind (symbolic AI) and of the brain (connectionist). Manyika contends that symbolic approaches dominated efforts to develop artificial intelligence during this period, in part because of their ties to the philosophical traditions of Descartes, Boole, Gottlob Frege, Bertrand Russell, and others. Connectionist approaches based on cybernetics or artificial neural networks were marginalised for a time but have regained prominence in recent decades.
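For a concrete feel for the connectionist approach, here is a minimal Python sketch of the perceptron learning rule (the data and learning rate are illustrative; this is a toy model, not Rosenblatt's hardware perceptron):

    import numpy as np

    # Toy linearly separable data: learn the logical OR function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 1])

    w = np.zeros(2)   # weights
    b = 0.0           # bias
    lr = 0.1          # learning rate (illustrative)

    for _ in range(10):                     # a few passes over the data
        for xi, target in zip(X, y):
            pred = int(w @ xi + b > 0)      # threshold activation
            error = target - pred
            w += lr * error * xi            # perceptron update rule
            b += lr * error

    print([int(w @ xi + b > 0) for xi in X])  # -> [0, 1, 1, 1]

The weights are adjusted only when the prediction is wrong, nudging the decision boundary toward the data; this "intelligence through learning" is the core contrast with hand-coded symbolic rules.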

 

The first workshop on artificial intelligence was held in 1956 at Dartmouth College, and its participants went on to found and lead the field of AI research. With their students, they created programmes that the press described as "astonishing": machines were learning checkers strategies, solving algebraic word problems, proving logical theorems, and speaking English. By the middle of the 1960s, research in the United States was heavily funded by the Department of Defense, and laboratories had been established around the world.

 

Researchers in the 1960s and 1970s saw the goal of their field as developing a machine with artificial general intelligence, and they were convinced that symbolic approaches would eventually achieve it. Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do," and Marvin Minsky wrote that "the problem of creating 'artificial intelligence' will substantially be solved" within a generation. They had underestimated how difficult some of the remaining tasks would be. Progress slowed, and in 1974, in response to Sir James Lighthill's criticism and ongoing pressure from the US Congress to fund more productive projects, the governments of the United States and Britain cut off funding for exploratory AI research.

 

The commercial success of expert systems, a type of AI software that imitated the knowledge and analytical skills of human experts, rekindled interest in AI research in the early 1980s. By 1985, the market for AI had surpassed $1 billion. At the same time, Japan's fifth generation computer project prompted the US and British governments to restore funding for academic research. But after the Lisp Machine market collapsed in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.

 

Many researchers began to doubt that the symbolic approach could replicate all aspects of human cognition, particularly vision, robotics, learning, and pattern recognition, and a number of them began to investigate "sub-symbolic" approaches to specific AI problems. Robotics researchers such as Rodney Brooks rejected symbolic AI and focused on the basic engineering problems that would allow robots to move, survive, and learn about their environment. In the mid-1980s, Geoffrey Hinton, David Rumelhart, and others rekindled interest in neural networks and "connectionism." Soft computing tools were developed in the 1980s, including neural networks, fuzzy systems, Grey system theory, evolutionary computation, and many techniques drawn from statistics or mathematical optimisation.

 

AI gradually restored its reputation in the late 1990s and early 21st century by finding specific solutions to specific problems. This narrow focus allowed researchers to produce verifiable results, exploit more mathematical methods, and collaborate with other fields (such as statistics, economics, and mathematics). By 2000, solutions developed by AI researchers were being widely used, even though in the 1990s they were seldom described as "artificial intelligence."

 

Faster computers, better algorithms, and access to vast amounts of data have advanced machine learning and perception; from 2012, data-hungry deep learning methods began to dominate accuracy benchmarks. According to Jack Clark of Bloomberg, 2015 marked a turning point for artificial intelligence: the number of software projects at Google that included AI rose to more than 2,700, up from "sporadic usage" in 2012. He attributes this to more readily available, reasonably priced neural networks, growing cloud computing infrastructure, and expanding research tools and datasets. In a 2017 poll, one in five businesses claimed to have "integrated AI in certain offerings or processes."

 

Many academic researchers began to worry that AI was no longer pursuing its original objective of building adaptable, fully intelligent machines. The majority of current research uses statistical AI, including highly effective methods such as deep learning, which is primarily employed to solve specific problems. This concern gave rise to the subfield of artificial general intelligence, or "AGI," which by the 2010s had several well-funded institutions.