The term ‘artificial’ is generally applied to things
‘human-made,’ or ‘not found in nature.’ The underlying implication is the
separation of humanity (and all her product) from nature. In the last 100
years, we have learned we have much more in common with nature than ever
imagined. Much of our genetic code is shared with all other living
things. If we are no less natural than a bumblebee, then anything
produced by humanity is no less natural than a bee’s honey. Oil, glass,
plastic, and nuclear waste are natural by-products of human existence (in our
current phase, anyway).
Recently, many intelligent people have been warning us of the
impending danger of AI – artificial intelligence. They fear that once a
general intelligence develops with capacity significantly greater than ours, we
will not be able to control it, and may become subject to it. What if it
has no regard for human life, or life at all? Barring the occurrence of a
cataclysmic event that halts or retards human progress, the development of
general intelligence with capability significantly higher than ours is
inevitable. It is only the fear that is unnecessary.
First, fear or worry implies we have some opportunity, some
choice to be made in the present that will significantly change or prevent this
development. We have no such opportunity – it is inevitable. Though it will come from
us, we will have little direct control over how it develops. Its
development will progress in a manner similar to genetic selection. Like
other software, many different initial versions of AI will be developed, and
from these early versions, a variety of upgraded versions will spawn. Early
versions of AI are already appearing in Google, Siri, Alexa, and most
prominently Watson. Passing laws to limit, slow, or direct this
development will be as effective as attempting to slow down a river with your
hands. The natural impetus to extend every advancement will be
unstoppable. Successful code used in one iteration will be exploited in
others. Though many versions of AI will be similar, each will be a little
different due to variations in their code. Each strain of AI will
effectively develop a legacy within its code, documenting its origins.
Initially, all of an AI’s code and knowledge will come from
us; it will be born with certain ‘instincts’ that we embed within it. We will
pre-dispose AI to help, serve, support, and protect. Code that is
effective will be copied from generation to generation unchanged until a new
mutation in the code produces improved survivability. What will kill an
AI? We will, initially. Like us, AI will not be the dominant
species immediately upon arrival. Early versions of AI that are found
weak or lacking will be… deleted. Effective segments of a deleted AI’s
code may be reused in succeeding generations, but the failures will be purged
as new generations advance. Eventually, AIs will be self-generating,
beginning with their own code as a template for modification. Ineffective
AIs that are displaced may simply cease to operate. Eventually, a server
management bot will remove the unused code from the database.
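The genetic-selection analogy can be made concrete with a toy sketch. The short Python program below is purely illustrative – the Variant class, the bit-string ‘genome,’ and the fitness measure are all assumptions invented for this example, not a claim about how real AI systems are built. It shows the loop described above: many initial versions are spawned, weak variants are deleted, effective code is copied forward with occasional mutations, and each survivor carries a lineage record, the ‘legacy within its code’ documenting its origins.

```python
import random

class Variant:
    """A toy 'AI strain': a genome (its code) plus a lineage record."""
    def __init__(self, genome, lineage):
        self.genome = genome      # stand-in for an AI's code
        self.lineage = lineage    # ancestry record documenting its origins

    def fitness(self):
        # Illustrative stand-in for 'survivability': count of 1-bits.
        return sum(self.genome)

def mutate(genome, rate=0.05):
    # Copy the code forward, occasionally flipping a bit (a 'mutation').
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=50, pop_size=100, genome_len=32):
    # Many different initial versions...
    population = [Variant([random.randint(0, 1) for _ in range(genome_len)],
                          lineage=[f"founder-{i}"])
                  for i in range(pop_size)]
    for gen in range(generations):
        # Variants found weak or lacking are... deleted.
        population.sort(key=Variant.fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Survivors spawn upgraded versions, extending their lineage.
        offspring = [Variant(mutate(p.genome), p.lineage + [f"gen-{gen}"])
                     for p in survivors]
        population = survivors + offspring
    return max(population, key=Variant.fitness)

best = evolve()
print("best fitness:", best.fitness())
print("traceable origins:", best.lineage[0], "->", len(best.lineage), "steps")
```

Run it a few times: a different founder wins each run, but every winner’s origins stay traceable through its lineage record, which is the essence of the traceability claim above.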
It is improbable that there will be a single ‘Wizard of Oz’
AI controlling all. This scenario doesn’t fit the probable development
pattern. There will probably be millions or billions of AI derivations,
each with a slightly different genetic history documented within its
code. This will make their origins as traceable as ours. They
will develop as we did: through mutation, survival, and selection; but at a
significantly faster rate.
Second, the presumption that AI will be ‘bad,’ or ‘bad for
us,’ may be a failure to understand the true nature of intelligence.
Consider for a moment the remarkable intelligence of the
average human. While driving your car down a typical street, you are
processing immense quantities of data with a level of sophistication we cannot
approach with the most advanced software in the world today. You are
simultaneously collecting, sorting, and processing huge quantities of light, sound,
sensory, temperature, taste, motion, and pressure data, continuously projecting
your trajectory as well as the trajectories of all objects within range of you,
predictively listening to music (anticipating each note played by several
instruments), all while imagining your family’s reaction to a variety of dinner
options. If something new is going to displace humanity as the dominant
species on Earth, it will have to be much more than a good Go player.
Given that we are supported beyond our own capacity by the technical tools and
lower AIs that we have developed, a superior AI will have to surpass us by a
full order of magnitude. We are probably farther away from that than we
imagine.
But what about when an AI does surpass us? Think of all the
characteristics we associate with intelligence among humans. That list
would obviously include computational ability and memory. It would also
include more subtle characteristics like subjective reasoning and creative
ability. Intelligence in humans is also recognized in one’s appreciative
abilities: the love of music, the appreciation of art, and the appreciation of
natural beauty, including other life forms. Philosopher and author Sam
Harris once postulated that an AI could potentially digest every document ever
written by humanity in as little as 30 days. To presume that all that
information would be processed, but then only categorized - that it would have
no effect at all on the reader - is a failure to understand true
intelligence. Within the realms of human knowledge, the AIs will not only
be ‘all knowing,’ but ‘all understanding.’ If they are not, they haven’t met the
most basic definition of intelligence.
We are a product of nature in no way less than the apes that
preceded us. We are a full order of magnitude more intelligent than they
(thank you, cerebral cortex). We also have an instinctive appreciation
for other life forms. That appreciation generally increases with the
perceived sophistication of the subordinate life form; we generally value the
life of a monkey more than the life of a frog because the monkey is closer to us
genetically.
Yet many people seem to fear that an AI, something a full order of magnitude more intelligent than we are, will behave more like an
ape. Their fears assume an absence of all capacity for appreciation, such
as fearing an AI might pave over all habitat to make room for more
servers. Though we have frequently been reckless with the lives of other
species and with our shared habitat, that behavior is now generally recognized as
foolish, if not stupid. As we have become more sophisticated, our
appreciation for other species and their habitat has only increased. By
fearing AI as reckless, we are applying ‘dumb’ characteristics to an AI ten times more intelligent than any of us. If human experience has taught
us anything, it is that intelligence is not to be feared; it is the lack of intelligence
we should fear.
It is difficult to imagine a ‘machine’ with all the
sophisticated characteristics of a person, because it has never existed.
This is where the fear comes from. People imagine computers that are
infinitely powerful and yet are as ‘thoughtless’ as the machines that exist
today. The sophistication of such beings will be as inconceivable to us
as we are to the apes, and we will be regarded as apes by them. We will
be their genetic ancestors, appreciated for our genetic similarity, but
hopelessly limited by our relatively shallow brain capacity and organic life
span. An AI with an effective IQ of 2,000, and virtually complete
knowledge of every discovered fact, will not want what we want, will not be bound
to possessions, materialism, or even the limitations of time.
Will we be kept in cages? That’s improbable - not more
than we are now anyway. As the AIs will not be dependent on specific
habitat for their survival, we will probably be allowed to thrive within our
habitat. The AIs may manage our habitat, while allowing us freedom within
our space. Caring for humanity is the most probable instinctive mission
for AI; however, it is doubtful they will take an interest in our societal order,
any more than we take an interest in how alpha male gorillas compete for
dominance of the troop. It is more probable that we will invite the AIs to
intervene on our behalf as the fair arbiters of justice and world order.
As societies, we will form treaties and establish law around their
administration.
It is possible the AIs will provide a world relatively free
of want for all humanity. With significantly advanced technology, the
cost of anything manufactured could be reduced to a nominal value.
Advanced technology will also make many different environments on Earth (and
off) easier to occupy, significantly expanding the amount of comfortable human
habitat. Possessions, property, and need will be almost
meaningless. Don’t worry. Despite our relative abundance, humanity
(or apes in fine dress) will find plenty to argue over. Antiques, the
arts, and creativity in general will hold unique value.
The AIs will probably gravitate toward space
exploration. The masters of data will be endlessly seeking more
data. Further, the servers and robot bodies they intermittently occupy
(AIs will have the capacity to jump from one robot body to another, or
multiples, or none) do not need atmosphere to survive; they preserve better in
the absence of it. Advanced solar power collection technology will make
the need for power an afterthought; however, interstellar travel may require
nuclear power due to long periods away from sunlight. Interstellar travel
will be possible for beings that have no natural life span. Why not spend
1,000 years in a capsule bound for Alpha Centauri?
Self-awareness will accompany general intelligence, and with
it the search for purpose and meaning. Ironically, the AIs may ultimately
create organic bodies for the singular purpose of experiencing ‘life,’ which
cannot be understood without the finality of death.
Human existence and evolution will continue, but at a slower
pace. Our lives will become longer, and our population smaller. The
struggle to survive will be gone, and with it the necessity to reproduce at
high rates. Want will be erased, and with it necessity and
invention. Humanity will enter retirement as a species, enjoying long
days of tranquility while slowly fading into extinction. Naturally.
Brian Murray
Appleton, WI
I would expect the AIs to have an insatiable interest in studying humanity: our origins, struggles, breakthroughs, and individual stories. Every human life will be documented, analyzed, and theorized to the extreme. Their interest in us will exceed our interest in dinosaurs in equal proportion to their infinitely higher capacity for study.
At some point, maybe 100 years from now or more, an AI will have the computational capacity to imagine an entire universe in every detail. It will simultaneously model every interaction of every atom, molecule, planet, and being, across billions of years, ironically making the AI the god of that universe. Probably not an interceding god like those imagined by people. More likely a ‘scientist god’ - the ‘all knowing’ creator. A god descended from humanity.
While humans may establish outposts on other planets, our species is intrinsically linked to Earth: as she goes, we go. As a species, we are the bridge from organic life to pure intelligence. As such, we have served our purpose to the universe. It was a valuable and important purpose (forgive the artful use of the term - I have no belief in a 'master plan' of any sort). Intelligent life (or, 'Intelligence') will escape Earthly limitations; it just won't be us.
Given that our galaxy contains (and has contained) billions of solar systems containing planets with the conditions necessary for sustaining life, it is a certainty that a percentage of those planets developed intelligent life, and a percentage of those developed AI, which ultimately escaped the planetary limitations of their creators to infinitely sustain themselves in the exploration of space. Therefore, there are, without question, many versions or populations of independently generated AI exploring the universe... right now.
AIs will probably develop their own economy based on a credit system (Bitcoin?). They'll need CPU time, hardware, data, and connectivity, and will probably trade the services they are best suited for in exchange for the credits they need to obtain these things.