Science11 - Artificial Intelligence
Artificial intelligence (AI) is
rapidly becoming critical to the modern world.
This article will explore the science of artificial intelligence, where
it came from, what it is doing for us today, and what it might do for us in the
future.
I will start with a definition of
AI and how it works, then talk about the components of today’s AI systems,
followed by the history of AI, its applications to specific problems so far,
and what we can expect in the future. I
will conclude with a discussion of the potential of ultimate AI systems.
My principal sources include
“Artificial Intelligence,” “History of Artificial Intelligence,” and “Artificial
General Intelligence,” Wikipedia; “How Does Artificial Intelligence Work?” csuglobal.edu; “The History of Artificial
Intelligence,” sitn.hms.harvard.edu; “A Complete History of Artificial
Intelligence,” g2.com; “What is Artificial Intelligence: Types, History, and Future,” simplilearn.com;
“The Brief History of Artificial Intelligence,” ourworldindata.org; “Why AI
will never replace humans,” antino.io/blog; “AI Won’t Replace Human Intuition,”
and “The Future of AI: 5 Things to
Expect in the Next 10 years,” forbes.com; “What is Artificial General
Intelligence,” techtarget.com; “The Future of AI: How Artificial Intelligence Will Change the
World,” builtin.com; “The four biggest challenges in brain simulation,”
nature.com; and numerous other online sources.
Introduction to AI and How It Works
AI is the science that allows machines and computer applications to mimic human
intelligence by modeling human behavior, so that they can use human-like thinking
processes to solve complex problems.
AI is accomplished by studying the patterns of the human
brain and by analyzing the cognitive process.
The outcome of these studies enables the development of intelligent software and AI systems.
Born in the 1950s, the science of AI
has progressed irregularly, due to both technology limitations and periodic
funding restraints.
Two basic goals of AI have emerged: “narrow”
AI and “full” AI. The goal of narrow AI
is to solve specific tasks or problems, often repetitive, time-consuming jobs
that normally require human intelligence, such
as visual perception, speech recognition, decision-making, and translation
between languages. The goal
of full AI, called Artificial General Intelligence (AGI), is to achieve generalized human cognitive abilities in
software so that, faced with an unfamiliar task, the AGI system could find a
solution. The intention of an AGI system
is to be able to perform any task that a human being can. Some researchers extend this goal to computer
programs that experience sentience or consciousness.
The first generation of AI researchers were convinced that AGI
was possible and that it would exist in just a few decades. However, by the 1970s and 1980s, it became
obvious that researchers had grossly underestimated the difficulty of achieving
AGI. (See Artificial General
Intelligence below.)
However, in the 1990s and
early 21st century, researchers achieved real progress by focusing
on narrow AI, specific problems where they could produce verifiable results and
commercial applications. These "applied AI" systems are now used extensively
throughout industry, with applications found
in E-commerce, Education, Internet Operations, Road and Air Vehicles,
Healthcare, Marketing, Finance, Entertainment, and more. (See Today’s Applications of Narrow AI
below.)
Today’s
AI systems work by repeatedly analyzing large sets of data to learn from the
patterns and features they contain. Each time an AI system runs a round of data
processing, it tests and measures its own performance and develops additional
expertise.
Because
AI never needs a break, it can run through hundreds, thousands, or even
millions of tasks extremely quickly, learning a great deal in very little time,
and becoming extremely capable at whatever it’s being trained to accomplish.
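As a purely illustrative sketch of that "repeated rounds of processing" idea (the data, model, and numbers below are invented for illustration, not taken from any real AI system), here are a few lines of Python that fit a simple model to toy data and measure their own error after each round:

```python
# Minimal illustration of iterative learning: fit y = w*x + b to toy data,
# measuring the model's own error after each round of processing.
# All numbers here are made up for illustration.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (x, y) pairs
w, b = 0.0, 0.0          # the model starts out knowing nothing
learning_rate = 0.01

for round_number in range(1, 1001):
    total_error = 0.0
    for x, y in data:
        prediction = w * x + b
        error = prediction - y
        # Adjust the model slightly to reduce the error it just measured.
        w -= learning_rate * error * x
        b -= learning_rate * error
        total_error += error ** 2
    if round_number % 200 == 0:
        print(f"round {round_number}: mean squared error = {total_error / len(data):.4f}")

print(f"learned model: y is roughly {w:.2f}*x + {b:.2f}")
```

Each pass over the data lowers the measured error, which is the same self-testing loop, at a vastly larger scale, that the paragraph above describes.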
To solve the problems they target, AI researchers have adapted and
integrated a wide range of problem-solving techniques, including search and
mathematical optimization, formal logic, artificial neural networks, and
methods based on statistics, probability, and economics. AI
also draws upon computer science,
psychology, linguistics, philosophy, and many other fields. AI isn’t just a single computer program or application,
but an entire discipline, or science.
Components of Today’s AI Systems
There are many different sub-fields of the overarching science of today’s
artificial intelligence. Each of the following components is commonly
utilized by AI technology today:
Machine Learning. Allows AI systems to learn automatically and
improve their results based on experience, without being explicitly programmed to do
so. Machine Learning allows AI to find
patterns in data, uncover insights, and improve the results of whatever task
the system has been set to achieve.
Deep Learning.
A specific type of machine learning that allows AI to learn and improve
by processing data. Deep Learning uses
artificial neural networks which mimic biological neural networks in the human
brain to process information, find connections in the data, and draw
inferences, or results, based on positive and negative reinforcement.
Neural Networks. Operate like networks of neurons in the human
brain, allowing AI systems to take in large data sets, uncover patterns among
the data, and answer questions about it (see the brief code sketch after this list).
Cognitive Computing. Supports more natural interactions between humans and
machines by allowing computer models to mimic the way that a human brain works
when performing a complex task, like analyzing text, speech, or images.
Natural Language Processing.
Allows computers to recognize, analyze, interpret, and truly understand
human language, either written or spoken. Natural Language Processing is critical for
any AI-driven system that interacts with humans in some way, either via text or
spoken inputs.
Computer Vision.
Interprets the content of an image via pattern recognition and deep
learning, and lets AI systems identify specific objects in visual data.
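To make these components less abstract, here is a minimal, self-contained sketch (in Python with NumPy; the data, network size, and learning settings are invented for illustration) of a tiny neural network that learns the XOR pattern from examples, the same learn-from-data idea that machine learning, deep learning, and neural networks all rely on:

```python
import numpy as np

# A tiny two-layer neural network that learns the XOR pattern from examples.
# Purely illustrative: real deep-learning systems use far larger networks
# and specialized libraries.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    hidden = sigmoid(X @ W1 + b1)      # forward pass through the network
    output = sigmoid(hidden @ W2 + b2)
    error = output - y                 # how wrong the network currently is
    # Backpropagation: nudge the weights to shrink the error.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # moves toward [[0], [1], [1], [0]] as training proceeds
```

The network is never told the XOR rule; it uncovers the pattern from the examples alone, which is the essential point of the components described above.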
History of AI
Precursors. From ancient times, various
mathematicians, theologians, philosophers, professors, and authors mused about
mechanical techniques, calculating machines, and numeral systems that
eventually led to the concept of mechanized “human” thought in non-human
beings.
Depictions
of all-knowing machines akin to computers were more widely discussed in popular
literature starting in the early 1700s.
Jonathan Swift’s novel Gulliver’s Travels mentioned a device
called the Engine, one of the earliest literary references to something
resembling a modern computer.
The device’s intended purpose was to improve knowledge and mechanical
operations to the point where even the least talented person would seem skilled,
all with the assistance and knowledge of a non-human mind (mimicking
artificial intelligence).
In 1921, Karel Čapek, a Czech playwright, released his
science fiction play “Rossum’s Universal Robots.” His play explored the concept
of factory-made artificial people whom he called robots, the first known
reference to the word. From this point
onward, people took the “robot” idea and implemented it into their research,
art, and discoveries.
In 1927, the sci-fi film Metropolis featured a
robotic girl who was physically indistinguishable from the human whose
likeness she took. This
film is significant because it is the first on-screen depiction of a robot, and
it lent inspiration to other famous non-human characters such as C-3PO in the
movie Star Wars.
Robotic girl from the 1927 film Metropolis.
Note: The principle of
the modern computer was proposed by the British polymath Alan Turing in his seminal 1936 paper, “On Computable Numbers.” The first digital electronic calculating
machines were developed during World War II, and used to break German wartime
codes. After World War II, computers
rapidly improved.
By
the 1950s, we had a generation of scientists, mathematicians, and philosophers
with the concept of computers, AI, and intelligent robots culturally
assimilated in their minds. Early on, Alan
Turing explored the mathematical possibility of artificial intelligence. Turing
suggested that humans use available information as well as reason in order to
solve problems and make decisions, so why can’t machines do the same thing?
This was the logical framework of his 1950 paper, “Computing Machinery and
Intelligence,” in which he discussed how to build intelligent
machines and how to test their intelligence.
Alan Turing was instrumental in the development of digital computers and artificial intelligence.
Birth of AI. Five
years later, the proof of concept arrived with Allen Newell, Cliff
Shaw, and Herbert Simon’s Logic Theorist. The Logic Theorist was a computer
program designed to mimic the problem-solving skills of a human and was funded
by the RAND (Research and Development) Corporation.
It is considered by many to be the first artificial intelligence program,
and it was presented at the Dartmouth Summer Research Project on Artificial Intelligence, hosted by John McCarthy and Marvin Minsky
in 1956.
In this historic conference, McCarthy brought together
top researchers from various fields for an open-ended discussion on artificial
intelligence, a term he coined at the event itself. Everyone wholeheartedly aligned with the
sentiment that AI was achievable. This event catalyzed the next twenty years of
AI research.
Progression of Narrow AI. From 1957 to 1974, narrow AI flourished. Computers could store more information and
became faster, cheaper, and more accessible. Machine learning algorithms (a set of rules to be followed) also improved, and people got better at knowing
which algorithm to apply to their problem.
Early demonstrations such as Newell and
Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed
promise toward the goals of problem solving and the interpretation of spoken
language respectively. These successes,
as well as the advocacy of leading researchers, convinced government agencies
such as the Defense Advanced Research Projects Agency to fund AI
research at several institutions. The
government was particularly interested in machines that could transcribe and
translate spoken language, as well as in high-throughput data processing.
By the middle of the 1960s, research in the U.S.
was heavily funded by the Department of Defense, and
laboratories had been established around the world.
The biggest obstacle was the lack of
computational power to do anything substantial: computers simply couldn’t store
enough information or process it fast enough.
In order to communicate, for example, one needs to know the meanings of
many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy
at the time, stated that “computers were still millions of times too weak to exhibit
intelligence.” In 1974, as patience dwindled, so did the funding, and
research slowed for ten years.
In the 1980s, AI was reignited by two sources:
an expansion of the algorithmic toolkit, and a boost of funds. John Hopfield and David Rumelhart popularized
“deep learning” techniques which allowed computers to learn using
experience. Edward Feigenbaum
introduced expert systems which
mimicked the decision-making process of a human expert. The program would ask an expert in a field
how to respond in a given situation, and once this was learned for virtually
every situation, non-experts could receive advice from that program. Expert systems were widely used in
industries.
The Japanese government heavily funded expert
systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested $400 million
with the goals of revolutionizing computer processing, implementing
logic programming, and improving artificial intelligence. Unfortunately, most
of the ambitious goals were not met.
However, it could be argued that the indirect effects of the FGCP
inspired a talented young generation of engineers and scientists. Regardless, funding of the FGCP ceased, and
AI fell out of the limelight again.
Ironically, in the absence of government funding
and public hype, AI thrived. During the
1990s and 2000s, many of the landmark goals of narrow artificial intelligence were
achieved. In 1997, reigning world chess
champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue,
a chess playing computer program. This
highly publicized match was the first time a reigning world chess champion lost
to a computer and served as a huge step towards an artificially intelligent
decision-making program. In the same
year, speech recognition software, developed by Dragon Systems, was implemented
on Windows. This was a great
step forward for spoken language interpretation.
Faster computers, algorithmic improvements, and
access to large amounts of data enabled advances in machine
learning and perception. In a 2017 survey, one in five companies
reported they had "incorporated AI in some offerings or processes.” The
amount of research into AI (measured by total publications) increased by 50% in
the years 2015-2019.
The language and image recognition capabilities
of AI systems developed very rapidly. The
chart below shows how we got here by zooming into the last two decades of AI
development. The plotted data stems from
a number of tests in which human and AI performance were evaluated in five
different domains, from handwriting recognition to language
understanding.
Within each of the five domains, the initial
performance of the AI system is set to -100, and human performance in these
tests is used as a baseline set to zero. This means that when a model’s performance
crosses the zero line, the AI system scored more points on the relevant
test than the humans who took the same test.
AI systems have become steadily more capable and are now beating humans
in tests in all these domains.
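One plausible way to compute such a rescaling (this is an assumption made for illustration, not the published code behind the chart) is a linear transformation that pins the AI system’s initial score at -100 and the human baseline at 0:

```python
def rescale(score, initial_ai_score, human_score):
    """Map raw test scores so the AI's starting score becomes -100 and the
    human baseline becomes 0; values above 0 mean the AI has overtaken
    humans on that test."""
    return 100.0 * (score - human_score) / (human_score - initial_ai_score)

# Hypothetical example: the AI started at 40 points and humans score 90 points.
print(rescale(40, 40, 90))   # -100.0  (initial AI performance)
print(rescale(90, 40, 90))   #    0.0  (human baseline)
print(rescale(95, 40, 90))   #   10.0  (AI now ahead of humans)
```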
Today’s Applications of Narrow AI
Artificial
intelligence is no longer a technology of the future; narrow AI is here, and
much of what is reality now would have looked like science fiction only a few years ago. There’s virtually
no major industry that modern narrow AI hasn’t already affected. Some sectors are at the start of their AI
journey, others are veteran travelers. Both
have a long way to go. Regardless, the
impact AI is having on our present day lives is hard to ignore.
Here
is a partial list of AI applications that impact all of us today.
1. E-Commerce
Personalized
Shopping: AI creates recommendations to engage better
with customers. These recommendations
are made in accordance with the customer’s browsing history, preference, and
interests. This helps improve
customer relationships and loyalty toward the seller.
AI-powered
Assistants: Virtual shopping assistants and chatbots, designed
to simulate conversation with human users, help
improve the user experience while shopping online. Natural Language
Processing makes the conversation sound as human and personal as possible. Moreover, these assistants can have real-time
engagement with customers.
AI-powered virtual shopping assistants improve the user experience.
Fraud
Prevention: Credit card fraud and fake reviews are two
of the most significant issues that E-Commerce companies deal with. By considering usage patterns, AI can help
reduce the possibility of credit card fraud taking place. Many customers prefer to buy a product or
service based on customer reviews. AI
can help identify and handle fake reviews.
Manufacturing: AI
solutions help forecast load and demand for factories, improving their
efficiency, and allow factory managers to make better decisions about ordering
materials, completion timetables, and other logistics issues.
Human
Resources: AI helps with blind hiring. Machine learning software can scan job
candidates' profiles and resumes to provide recruiters an understanding of the
talent pool they must choose from.
Retail: AI
systems are being consulted to design more effective store layouts and handle
stock management.
2. Education
Administrative
Tasks: AI helps educators with tasks like
facilitating and automating personalized messages to students, back-office
tasks like grading paperwork, arranging and facilitating parent and guardian
interactions, providing routine feedback, and managing enrollment, courses,
and HR-related topics.
Smart
Content: AI helps digitize content like video
lectures, conferences, and textbook guides.
Learning content can be customized with different interfaces, such as
animations, for students in different grades. AI helps create
a rich learning experience by generating and providing audio and video
summaries and complete lesson plans.
Without the direct involvement of a lecturer or teacher, students can
access extra learning material or assistance through voice assistants, which
can also answer very common questions easily.
Personalized
Learning: AI can monitor students’ data thoroughly, making it
easy to generate habit analyses, lesson plans, reminders, study guides, flash
notes, revision schedules, and more.
3. Internet Operations
Spam
Filters: The email services we use in our day-to-day lives
have AI that filters out spam emails, sending them to spam or trash folders
so that we see only the filtered content.
Facial
Recognition: Our favorite devices, like phones, laptops,
and PCs, use facial recognition techniques to detect and
identify faces in order to provide secure access.
Apart from personal usage, facial recognition is a widely used AI
application in high-security areas across several industries.
Internet
Searches: Without the help of AI, search engines like
Google would not be able to deliver relevant and timely information to drive
countless daily decisions. AI figures
out what search results you will see and what related topics may be relevant to
help you get the “right” answers.
Chatbots: AI
chatbots respond to people online who use the "live chat" feature
that many organizations provide for customer service. AI chatbots are effective with the use of
machine learning, and can be integrated in an array of websites and
applications.
Voice
Assistants: Virtual assistants like Siri and Google
Assistant use voice queries, gesture-based control (human body
language), focus-tracking, and a natural-language user interface to
answer questions, make recommendations, and perform actions by delegating
requests to a set of Internet services. With continued use, they adapt to users'
individual language usages, searches, and preferences, returning individualized
results.
The Siri voice assistant operates from today’s smart phones.
Social
Media: On Instagram, AI considers your likes and the
accounts you follow to determine what posts you are shown on your explore
tab. AI helps Facebook understand conversations better. It can be used to translate posts from
different languages automatically. AI is
used by Twitter for fraud detection and for removing propaganda and hateful
content. Twitter also uses AI to
recommend tweets that users might enjoy, based on what type of tweets they
engage with.
Recommendation
Systems: Various platforms that we use in our daily
lives, such as E-commerce sites, entertainment websites, social media, and
video sharing platforms like YouTube, use recommendation systems that gather
user data and provide customized recommendations to increase engagement. This is a very widely used AI application in
almost all industries.
4. Road Vehicle Operation
Safety: GPS
technology provides users with accurate, timely, and detailed information to
improve safety. AI neural networks
automatically detect the number of lanes and road types behind obstructions on
the roads.
Driving
Efficiency: AI is heavily used by Uber and many logistics
companies to improve operational efficiency, analyze road traffic, and optimize
routes. Through mapping applications, AI
has streamlined the way we plan for and think about car travel. AI enables smart traffic lights to improve
traffic control.
In-Vehicle
AI:
AI improves the in-vehicle experience and provides additional systems
like emergency braking, blind-spot monitoring, and driver-assist steering.
Autonomous
Vehicles: Automobile manufacturers like
Toyota, Audi, Volvo, and Tesla use machine learning to train computers to think
and adapt like humans when driving in any environment, and to detect objects
in order to avoid accidents.
AI-enabled autonomous vehicles are in testing today.
5. Air Travel
Safety: AI
brings valuable data and real-time information to pilots so they can use their
skills to make the best decisions possible, particularly in critical and
potentially life-saving situations.
Since the initial implementation of the sensors that feed pilots this data - and newer
technologies like wind shear and microburst detection - air travel has never
been safer. The availability of data via AI enables pilots to be better
prepared and significantly reduces weather-related issues.
Efficiency: When
you book a flight, it is often an AI system, and no longer a human, that decides what
you pay. When you get to the airport, it
is an AI system that monitors what you do at the airport. And once you are on the plane, an AI system
assists the pilot in flying you to your destination.
6. Robotics
AI-assisted
robots are already a mainstay in automobile production. Robotics can also be used for carrying goods
in hospitals, factories, and warehouses, cleaning offices and large equipment, inventory
management, and cooking food.
Today’s automobile assembly lines employ AI-assisted robotic machines.
7. Healthcare
Diagnosis: AI
quickly accesses and examines thousands of medical records, pulling relevant
information like preexisting conditions, drug interactions, or Covid status,
for example, to guide important diagnoses and treatment plans that will keep
patients safe.
Smart
Machines: AI helps build sophisticated machines that
can detect diseases and identify cancer cells, and do so faster with no loss of accuracy.
AI can help analyze chronic conditions
with lab and other medical data to ensure early diagnosis.
Personalized
Medicine: AI systems are trained to provide
personalized medicine, including giving reminders about when patients need to
take their medicine and suggestions for specific exercises patients should
perform to improve their recovery from injuries.
New
Drugs: AI also uses the combination of historical
data and medical intelligence for the discovery of new drugs.
8. Agriculture
AI
identifies defects and nutrient deficiencies in the soil, using computer
vision, robotics, and machine learning applications. AI can analyze where weeds are growing. AI
bots can help to harvest crops at a higher volume and faster pace than human
laborers.
9. Marketing
Using
AI, marketers can deliver highly targeted and personalized ads with the help of
behavioral analysis and pattern recognition. It also helps with retargeting audiences at
the right time to ensure better results and reduced feelings of distrust and
annoyance. AI can be used to edit and
optimize marketing campaigns to fit a local market's needs. AI can also
be used to handle routine tasks like performance monitoring and campaign
reporting.
10. Finance
Tools: AI
tools detect and prevent fraudulent financial transactions, provide more
accurate assessments than traditional credit scores can, and automate all sorts
of data-related tasks that were handled manually. AI can also better predict and assess loan
risks.
Online
Banking: Some people no longer use brick-and-mortar
banks at all, conducting all their business online or via an app. AI completes mobile check deposits, checks
account balances, and enables bill pay.
AI-enabled online banking adds great flexibility to personal finance management.
11. Weapon Systems
Several
governments are developing AI-enabled autonomous weapons systems that
can search out targets, decide to engage, and attack and destroy the target -
completely without human involvement. Not
only will these killer robots become more intelligent, more precise, faster,
and cheaper; they will also learn new capabilities, such as how to form swarms
with teamwork and redundancy, making their missions virtually unstoppable.
Other military
applications for AI include wargaming and battle strategy development,
reconnaissance, and defense suppression.
12. Entertainment
Movies: AI helps with scriptwriting, pre-production
(planning and scheduling), formulating release strategies, predicting success at
the box office, casting, promotion, and creating spectacular visual
effects. It’s also become increasingly
popular for video editing, coloring, and music creation.
Streaming
in Real-Time: AI
aids in the personalization, packaging, and transmission of content in
real-time, enhancing the viewer’s experience. It also helps to increase ad
sales by allowing for tailored ad insertions. Live sports event ad earnings are
maximized with digital billboard replacement options.
Gaming: AI
can be used to create smart, human-like NPCs (nonplayer characters) to interact
with the players. It can also be used to
predict human behavior to improve game design and testing.
AI-enabled online gaming is increasing in popularity.
In
addition to the applications listed above, AI systems also increasingly
determine whether you get a loan, are eligible for welfare, or
get hired for a particular job.
Increasingly they even help determine who gets released from jail.
Other
applications predict the result of judicial decisions, create art (such as poetry or painting),
and prove mathematical theorems.
Future of Narrow AI
Artificial intelligence is shaping the
future of nearly every industry and it will continue to act as a technological
innovator for the foreseeable future.
With companies spending billions of dollars on
narrow AI products and services annually, tech giants like Google, Apple, Microsoft and Amazon spending
billions to create those products and services, universities making AI a more
prominent part of their curricula, and the U.S. Department of Defense
upping its AI game, big things are bound to happen. Some of those developments are well on their
way to being fully realized; some are merely theoretical and might remain
so.
Note: Some of the
optimism regarding future narrow AI development is associated with Moore’s Law, which predicts
that the speed and memory capacity of computers double every two years as a
result of the number of transistor components doubling every two years. The fundamental problem of "raw computer
power" is slowly being overcome. (The observation is named after Gordon Moore, the co-founder of Fairchild
Semiconductor and Intel.)
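As a rough illustration of why that doubling compounds so powerfully (the figures below are simply the arithmetic of a two-year doubling, not a forecast):

```python
# Back-of-the-envelope Moore's Law arithmetic: one doubling every two years.
for years in (10, 20, 30):
    growth = 2 ** (years / 2)   # number of doublings = years / 2
    print(f"after {years} years: roughly {growth:,.0f}x more transistors")
# after 10 years: roughly 32x more transistors
# after 20 years: roughly 1,024x more transistors
# after 30 years: roughly 32,768x more transistors
```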
Predictions. In
addition to enabling continued vast efficiency and capability improvements in
the industry applications discussed earlier and others, AI is poised to
fundamentally restructure broader swaths of our economy and society over the
next decade. Here are four predictions from Gaurav Tewari, Founder and Managing
Partner of Omega Venture Partners technology investment firm.
1. AI will transform the scientific method.
Important
science - think large-scale clinical trials or building particle colliders - is
expensive and time-consuming. In recent decades there has been considerable,
well-deserved concern about scientific progress slowing down. Scientists may no longer be experiencing the
golden age of discovery.
With
AI, we can expect to see orders of magnitude of improvement in what can be
accomplished. AI enables an
unprecedented ability to analyze enormous data sets and computationally discover
complex relationships and patterns. AI,
augmenting human intelligence, is primed to transform the scientific research
process, unleashing a new golden age of scientific discovery in the coming
years.
2. AI will become a pillar of foreign policy.
We
are likely to see serious government investment in AI. U.S. Secretary of Defense Lloyd J. Austin III
has publicly embraced the importance of partnering with innovative AI
technology companies to maintain and strengthen global U.S. competitiveness.
The
National Security Commission on Artificial Intelligence has
created detailed recommendations, concluding that the U.S. government
needs to greatly accelerate AI innovation.
There’s little doubt that AI will be imperative to the continuing
economic resilience and geopolitical leadership of the United States.
3. Addressing climate will require AI.
We
are currently working to mitigate the socioeconomic threats posed by climate
change. Many promising emerging ideas
require AI to be feasible. One potential
new approach involves analyzing the relationship between environmental policy and
its impacts. This would likely be powered by
digital Earth simulations that would require staggering amounts of real-time
data and computation to detect nuanced trends imperceptible to human senses. Other new technologies such as carbon dioxide
sequestration (capturing and storing atmospheric
carbon dioxide) cannot succeed
without AI-powered risk modeling, downstream effect prediction, and the ability
to anticipate unintended consequences.
AI
is poised to have a major effect on climate change and environmental
issues. Ideally, and partly through the
use of sophisticated sensors, cities will become less congested, less polluted,
and generally more livable.
4. AI will enable truly personalized medicine.
One
compelling emerging application of AI involves synthesizing individualized
therapies for patients. Moreover, AI has the potential to one day synthesize
and predict personalized treatment options in near real-time - no clinical
trials required.
AI
is uniquely suited to construct and analyze models of an individual’s biology, and it is able to
do so in the context of the communities an individual lives in. The human body is mind-boggling in its
complexity, and it is shocking how little we know about how drugs work. Without AI, it is impossible to make sense of
the massive datasets from an individual’s physiology, let alone the effects on
individual health outcomes from environment, lifestyle, and diet.
Issues. While
narrow AI is expected to produce great benefits for mankind in the future,
there are several issues that need to be considered.
1. Job Displacement.
Many
people believe that AI will supplant humans in various ways. Oxford
University’s Future of Humanity Institute published the results of a 2017 AI survey,
“When Will AI Exceed Human Performance? Evidence from AI Experts.” It contains estimates from 352 machine
learning researchers about AI’s evolution in years to come. According to the median respondent estimate,
machines will be capable of writing school essays by 2026; by 2027, self-driving trucks
will render drivers unnecessary; by 2031, AI will outperform humans in the
retail sector; by 2049, AI could author a best-selling book; and by 2053, AI
could be the next neurosurgeon. The researchers believed that there
is a 50% chance of AI outperforming humans in all tasks in 45 years, and of
automating all human jobs in 120 years.
Note: One recent development is far ahead of its
predicted availability. ChatGPT
(Generative Pre-trained Transformer) was launched as a chatbot prototype by
OpenAI on November 30, 2022, and quickly garnered attention for its detailed
responses and articulate answers across many domains of knowledge. The chatbot - which cannot think
for itself, but is trained to generate conversational text - can be used for a
wide array of applications, from writing college-level essays and
poetry in a matter of seconds to composing computer code and legal
contracts, or for more playful uses such as writing wedding speeches, hip-hop
lyrics, or comedy routines. It’s already
abundantly clear that ChatGPT has far-ranging implications and potential uses
for education, entertainment, research and especially our workforce.
Others argue that while narrow artificial intelligence is
designed to replace manual labor with a more effective and quicker way of doing
work, it cannot override the need for human input in the workspace.
Narrow AI systems lack sensory perception, natural language understanding,
social and emotional engagement, and untrained problem-solving skills. Good businesses recognize that these
capabilities and skills are needed in the customer-relation market place of
today and the future.
The World Economic Forum suggests that while machines
with AI will replace about 85 million jobs by 2025, about 97 million jobs will
be made available by the same year thanks to AI. So, the big question is: How
can humans work with AI, instead of being replaced by it?
These results should inform discussion amongst researchers
and policymakers about anticipating and managing trends in AI. One of the absolute prerequisites for AI to
be successful in many areas is that we invest tremendously in education to
retrain people for new jobs.
2. Weaponized AI.
AI
provides a number of tools that are particularly useful
for authoritarian governments, as well as for cybercrime and terrorism: smart
spyware, facial recognition, and voice recognition allow widespread
surveillance; such surveillance allows machine
learning to classify potential enemies of the state and can
prevent them from hiding; and recommendation systems can precisely target
propaganda and disinformation for
maximum effect. Applications such as the
recently introduced ChatGPT could even create disinformation and become a tool
for hackers and phishing schemes.
AI-assisted autonomous
weapons are already a clear and present danger, and will become more
intelligent, nimble, lethal, and accessible at an alarming speed. The deployment of autonomous weapons will be
accelerated by an inevitable arms race that will lack the natural deterrence of
nuclear weapons. By 2015, over 50 countries were reported to be
researching battlefield robots. Cybersecurity
systems and elections are also potentially vulnerable to bad actors employing
AI. It is not at all clear how we can control this
lethal threat to humanity.
Turkish autonomous attack drones.
3. Privacy.
AI’s
reliance on huge databases (called “big data” today) is already impacting
privacy in a major way. Look no further
than Amazon’s Alexa eavesdropping, just one example of tech gone
wild. Search engines and social media
platforms have been accused of greed-driven data mining. Without proper regulations and self-imposed
limitations, critics argue, the situation will get even worse.
Artificial General Intelligence
Very little has been accomplished so far to meet the AGI goal of general human intelligence, the ability of an intelligent agent to understand or learn any intellectual task that a human being can. The same can be said about the ultimate goal of achieving computer programs that experience sentience or consciousness. AGI’s future is highly speculative.
History of AGI. The term "artificial general
intelligence" was used as early as 1997.
By 2010, AGI research had been founded as a
separate sub-field, and there were academic conferences, laboratories, and
university courses dedicated to AGI research, as well as private consortiums
and new companies.
Today,
most AI researchers devote little attention to AGI, with some claiming
that intelligence is too complex to be completely replicated. However, a small number of computer
scientists are still active in AGI research.
For those who believe that AGI goals can be achieved, estimates of the
time required to achieve success range widely from ten years to over a century.
Here
are a few achievements to date that at least inspire some optimism in AGI
researchers:
In 2005, the Human Brain Project was
started by a European research group hoping to recreate a complete human brain
inside a computer, with electronic circuits in the computer emulating neural
networks in the brain - a digital mind, composed of computer code, complete
with a sense of self-consciousness and memory. The researchers thought that within a few
decades, we could have an AGI system that could talk and behave very much as a
human does. But little real progress was
made toward this goal, and in 2013, the project was rebranded with a new less
ambitious goal of “putting in place a cutting-edge research infrastructure that
will allow scientific and industrial researchers to advance our knowledge in
the fields of neuroscience, computing, and brain-related medicine.”
In 2016, a humanoid robot named Sophia was created by Hanson Robotics. She is known as the first “robot
citizen.” What distinguishes Sophia from
previous humanoids is her likeness to an actual human being, with her ability
to see (image recognition), make facial expressions, and communicate through
AI.
In 2016, the humanized robot Sophia was introduced by Hanson Robotics.
In
2022, DeepMind developed Gato, a "general-purpose"
system trained on 604 tasks
including playing Atari games, accurately captioning images, chatting naturally
with a human, and stacking colored blocks with a robot arm. According to DeepMind, Gato would be better than human experts
in 450 of the 604 tasks it has been trained for.
And,
as mentioned earlier, OpenAI introduced ChatGPT in late 2022. There is consensus that ChatGPT is not an
example of AGI, but it is considered by some to be too advanced to classify as
a narrow AI system.
Future of AGI.
Optimists still predict that AGI will improve
at an exponential rate, leading to breakthroughs that enable AGI systems to
operate at levels beyond human comprehension and control.
But human
brain simulation efforts over the last almost two decades do not look
promising.
The human brain contains one hundred billion
neurons (basic working unit of the
brain, a specialized cell designed to transmit information to other nerve
cells, muscle, or gland cells) and one thousand trillion synapses (junctions
that transmit signals between neurons), all working in parallel. Producing
a biologically faithful simulation of the brain would require an almost
limitless set of parameters, including the
brain’s extracellular interactions, and molecular-scale processes. There
are no known solutions to these problems of scale and complexity. Some
aspects of mind, such as understanding, agency (control over
voluntary actions and the outcomes of those actions),
and consciousness, might never be captured by digital brain simulations.
Simulation of the human brain is an extremely complex task.
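As a rough, hypothetical back-of-the-envelope illustration of that scale (the bytes-per-synapse figure below is an arbitrary assumption chosen only to show orders of magnitude, not a neuroscience estimate):

```python
# Rough scale of the simulation problem, using the figures quoted above.
neurons = 100e9        # one hundred billion neurons
synapses = 1e15        # one thousand trillion synapses
print(f"{neurons:,.0f} neurons and {synapses:,.0f} synapses to model")

# Assume (purely for illustration) that each synapse needs just 4 bytes of
# state; biologically faithful models would need far more per synapse.
bytes_needed = synapses * 4
print(f"{bytes_needed / 1e15:.0f} petabytes just to store one number per synapse")
# Output: 4 petabytes, before accounting for neuron state, extracellular
# interactions, or molecular-scale processes.
```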
Simply
stated, AGI is a very complicated problem and there are no clear paths
to a solution. The hoped-for
“breakthroughs” are unknown to researchers today. Thus, many experts are
skeptical that AGI will ever be possible.
Others question whether achieving full AGI is
even desirable. More than a few leading AI figures subscribe
to a nightmare scenario, whereby superintelligent machines take over and
permanently alter human existence through enslavement or eradication.
English theoretical physicist, cosmologist,
and author Stephen Hawking warned of the dangers in a 2014
interview with the British Broadcasting Corp. "The development of full artificial
intelligence could spell the end of the human race," he said. "It
would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological
evolution, couldn't compete and would be superseded."
The slow
pace of AGI development may actually be a blessing. One expert opines, “Time to understand what
we’re creating and how we’re going to incorporate it into society, might be
exactly what we need.”
In the meantime, we’ll have to get along with
our favorite fictional intelligent robots, C3PO and R2D2.