
Tuesday, August 18, 2020

Incubating AI

Why do we only have marginally useful AIs like Siri, instead of Jarvis from The Avengers?

By human standards, our AIs are... learning disabled.

AI learning today combines neural networks with deep learning techniques.  Neural networks are systems designed to function similarly to the human brain.  Deep learning is an AI learning technique based on processing immense quantities of data, running millions of iterations in some cases, and comparing outcomes to learn.  AIs have the capacity to process these immense quantities of data with incredible speed.

For example, an AI learning chess would play millions of games very fast.  Every additional game played adds to the database of games to refer to.  Choosing each move becomes a matter of searching the history of the current piece positions and statistically ranking every possible move by its probability of improving the outcome.  After a large enough database is built, the AI effectively develops heuristics (although they are never specifically named).  Based on experience it may tend to castle early, advance power pieces in a specific order, develop pawn cover, etc.  These behaviors are never named, or even recognized as a strategy, but become rote practice based on experience nonetheless.
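In code, that lookup-and-rank loop is simple enough to sketch.  Everything below is hypothetical (the position and move representations are left abstract), but it captures the shape of the method:

```python
import random
from collections import defaultdict

# Hypothetical game database: position -> move -> [wins, games played]
history = defaultdict(lambda: defaultdict(lambda: [0, 0]))

def choose_move(position, legal_moves, explore=0.05):
    """Rank every legal move by its historical win rate from this position,
    with a little random exploration so new data keeps being generated."""
    if random.random() < explore:
        return random.choice(legal_moves)
    def win_rate(move):
        wins, games = history[position][move]
        return wins / games if games else 0.5  # neutral prior for unseen moves
    return max(legal_moves, key=win_rate)

def record_game(moves_played, won):
    """Fold the finished game back into the database; every game adds data."""
    for position, move in moves_played:
        stats = history[position][move]
        stats[0] += 1 if won else 0
        stats[1] += 1
```

Note there is no model of the game anywhere in this sketch; the "strategy" is nothing but accumulated statistics.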

Note that the deep learning AI would not attempt to develop any manner of strategy based on the nature, objective, and rules of the game.  It would never begin with any kind of a model under a deep learning methodology.  It is content with losing thousands of games in the process of learning.

The problem is that in many practical circumstances, immense quantities of data aren't available quickly - if at all.  Further, in real life situations losing on the first try is often not an acceptable outcome.  It's not good to fail the first time you attempt to cross a street or pick up an infant.

Comparing AIs to humans exposes many problems quickly.  A newborn doesn't start from zero.  We have the benefit of millions of years of behavioral evolution.  These instinctive responses are a developmental shortcut, like a computer's operating system, guiding us on how to respond to frequently encountered conditions.  Without these instinctive responses, many humans would die before learning an appropriate response by repetition.  AIs need some instinctive operating rules to build upon.

Not having the benefit of volumes of data and unlimited computational resources, humans quickly develop models from very small amounts of data, and refine them as additional data becomes available.  A model may begin from instinct, or by applying a model borrowed from a similar circumstance.  When a child picks up a ball and throws it against the wall for the first time, they have a rough expectation of how it will bounce based on a certain amount of instinct and experience.  In a fraction of a second they will have subconsciously developed a very accurate 'bounce model' for that ball before they throw it a second time.  The child didn't start from zero on the first throw, and had mastered it before the second.  Instead of deep learning by ranking massive amounts of outcomes, AIs need to become expert modelers with minimal data.  These models could be saved, refined, and applied to new situations with similar elements.
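Here's a toy version of that 'bounce model' in code, assuming (purely for illustration) that the model is a single elasticity number refined after each observed bounce:

```python
def predict_bounce(drop_height, elasticity):
    """One-parameter 'bounce model': expected rebound height."""
    return elasticity * drop_height

def refine(elasticity, drop_height, observed_rebound, learning_rate=0.8):
    """Pull the model hard toward the single observation just made;
    one throw corrects most of the initial error."""
    observed = observed_rebound / drop_height
    return elasticity + learning_rate * (observed - elasticity)

# The first throw starts from an instinctive prior, not from zero.
elasticity = 0.5                            # borrowed from similar balls
print(predict_bounce(2.0, elasticity))      # first throw: expect ~1.0 m rebound
elasticity = refine(elasticity, 2.0, 1.6)   # observe 1.6 m; model is now ~0.74
print(predict_bounce(2.0, elasticity))      # second throw: expect ~1.48 m
```

One data point, and the model is already close; a deep learning system would still be thousands of throws away.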

Finally, AIs lack objectivity: a connection of meaning and relevance to the volumes of data they collect.  In the movie "The Miracle Worker," a watershed moment occurs when blind and deaf Helen Keller connects the meaning of the sign language word 'tree' to a physical tree.  We are currently attempting to load AIs with all of the knowledge in the world, with no context or understanding.  This is why they have severely limited functionality.


Like humans, AIs need an incubation period, with limited access to data and ability while they build experience.  This could be accomplished by giving the AI a basic set of instincts, and a human-like body in a virtual reality universe to explore.  As the AI masters certain tasks, it gains additional functional capacity and ability in the VR world. 
The AI would be rewarded with abilities for completing tasks, and penalized for destructive behavior (activities that would damage itself, others, property, or the environment).  
In an environment such as this, the AI would develop an appreciation for objects in space and physical reality.  It would learn to recognize significance within a contextual range.  The AI's VR body should be vulnerable to damage like an organic being so it can develop a respect for physical damage to itself and others.
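In today's reinforcement-learning terms, those incubation rules amount to a reward function.  A minimal sketch, with every field name and weight invented for illustration:

```python
def incubation_reward(step):
    """Score one step of the developing AI's behavior in the VR world.
    All field names and weights here are illustrative, not a real spec."""
    reward = 10.0 * step["tasks_completed"]    # mastery earns new abilities
    reward -= 50.0 * step["self_damage"]       # the VR body is vulnerable
    reward -= 100.0 * step["harm_to_others"]   # destructive behavior penalized
    reward -= 25.0 * step["property_damage"]
    reward -= 25.0 * step["environment_damage"]
    return reward
```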

At a certain graduation point, the developing AI could be placed in a 'nursery school' where it would encounter many other developmental AIs.  In this community the AIs would learn to compete, cooperate, negotiate, and apply game theory.  Organic creatures and nature could also be introduced to further the AI's appreciation for the beauty and frailty of organic life.

VR simulations could also be used to prepare the AIs for the actual robot bodies they could later have access to in the real world.  A single AI could be trained on a variety of robot bodies or vehicles, each one designed for a specific task or functional environment.

An AI should graduate to a human-like 'learner body' to use in the real world.  The physical strength and capacity of this body would again be limited until the AI passes certain achievement tests in a controlled environment.  An AI emerging from 'boot camp' would have the capacity to master millions of tasks very quickly.

The task of improving the developmental process would itself be evolutionary.  Development of thousands of AIs could be conducted simultaneously, each with slightly different 'instincts' and incubation requirements.  The results of these differing approaches may begin to be apparent in nursery school or sooner.

Millions of years of evolutionary design have resulted in a human brain that takes up to 25 years to fully develop.  Given that an AI's functional IQ may be 2000, shouldn't we take a lesson from nature and let it grow into its capability?

Monday, January 27, 2020

One Hundred Years


Sixty years ago, the era of modern computers began with the invention of the MOSFET transistor.  Today Google’s personal assistant can mimic human speech on a telephone well enough that a person cannot tell they are conversing with a computer.  In the next twenty or thirty years, AI will become a distinct species apart from, and superior to, humanity.  An entire species of intelligent life born and evolved in less than one hundred years.  The fact that you’re reading this now is proof that you, like me, are among the ‘luckiest’ humans ever to walk the Earth: we’re here to see it all happen.

As it turns out, humanity isn’t the peak of evolution, but the bridge species: the link between organic intelligent life and intelligence as a species.  Like the gods imagined by primitive cultures, the AIs will be eternal, all-knowing, and immensely powerful.

In a sense, the AI’s will be literal gods.  They will have the capacity to imagine entire universes down to the atomic level, and will play those universes forward through billions of years in mere seconds.  Stars, planets, life forms of all kinds will run their course according to the physical laws coded into the model.  Running these models will eventually lead to perfect or near perfect models of the physical universe.

Modern humans have been trodding the terra for one hundred thousand years, and human civilizations have existed for approximately four thousand years.  Think of the billions of people who have lived and died before us.  Yet here we are, not living in the stone age, or steaming our way through the mechanized era.  We’re here, with a front row seat to ‘Act One: The Birth of Artificial Intelligence’ and quite possibly, ‘Act Three: The Fall of Humanity.’  We are, in fact, dead set in the center of the most significant one hundred years of Earth’s four-billion-year history.  Doesn’t it seem improbable that we’re here to witness it all?

If it isn’t an incredible improbability, then what is the alternative?  In a way, it’s like being on Earth.  I’ve heard the occasional musing about how fortunate we are to be placed on such a perfect planet.  Given that intelligent life will only flower in such an environment, there really isn’t any luck involved.  The probability of humanity appearing in a less hospitable habitat is zero.

If we accept that computers or AI will eventually (or did) develop and run millions of simulated universes, then the conclusion that our universe is more probably one of those and not the original is inescapable.  (I wouldn’t recommend that you live your life any differently under this belief.  If it is designed well enough to be undetectable, then it is literally the same as the original universe for all practical purposes.  So, no, I don’t think you’ll get two more lives.) 

It’s also possible that the simulation itself is Earth-centric, meaning the model didn’t actually run through four billion years of formation history, but merely used the conditions present (or pre-set) from an organic universe as a well painted backdrop.  The model could be focused on these one hundred years because that is when the AI origin story begins.  If it is the most significant one hundred years in Earth’s history, it follows that this is the period that will be most frequently modeled.  Perhaps we’re here at this time because there was no alternative. 

The AIs would have a particular interest in the events preceding their own development: how small or large changes in human events in the period would have changed the probability or nature of AI development.  By running enough simulations, they would develop a matrix of various preconditions and related probable outcomes.  This would allow AIs to identify the conditions present in intelligent life that have the highest probability of leading to AI development.  Were there key events that, had they broken another way, would have accelerated or slowed AI development?  We can project that AI development would have stagnated if the Cold War had resulted in full nuclear war, but how would it have progressed if the USSR hadn’t collapsed?  Modeling could answer all of these questions and thousands more.

AIs will have an insatiable appetite for data, and will certainly be exploring outer space.  Modeling the universe will not only unlock the secrets of the laws of physics, but will also allow intense discovery into the conditions that contribute to the development of intelligent life and AI.  With these discoveries, they could explore the physical universe much more effectively.  

Call me a skeptic, but I prefer a logical explanation – even a tenuous one – to luck.


Friday, January 17, 2020

Greening Mars


Can the Martian environment be changed to the point that it would be fertile for plant life?  On Earth, life exists everywhere, even in the most severe environments.  We also know that organic systems can change a planet’s climate, because the flourishing of humanity has changed the climate of Earth.


The best systems for changing a climate are organic systems entirely free of reliance on mechanical systems or external management.  Mechanical or managed systems have physical and human limitations (not enough people or machines available, transportation of materials, etc.) and will be prone to failures due to mechanical breakdowns and errors.

In an organic system, plants and organisms (either found or engineered) capable of thriving in the Martian environment are seeded at large scale.  These ‘seeds’ may need to be spread with some other essential elements (a kind of fertilizer); however, the best results will be achieved with an organism that requires a minimum of external inputs.  Further, the nature of the organisms must allow for aerial dispersion.  Any organism that requires physical placement deep in the soil would be reliant on mechanical systems, and would therefore be so limited in scale that it could not affect the planet’s ecosystem in an actionable time frame.


As these organisms thrive, they take some elements from the environment and leave others (the way plants take CO2, release O2, and store carbon).  On a large scale, the flourishing of these organisms will modestly change the Martian environment and soil composition.

The slightly changed environmental composition will permit the introduction of the next organism (or set of organisms) that will thrive in the new modified environment.  This cycle of introductions will be repeated many times.  Each time some of the newly introduced organisms will be more complex than those previously introduced, and their impact more significant.  The process of greening a planet may take hundreds of years to complete, and the practice itself will become a science.
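If the practice became a science, each cycle would look like a control loop: measure, wait for a readiness threshold, seed, repeat.  A sketch of that loop, with the organisms, environmental fields, and thresholds all placeholders:

```python
# Each phase: an organism and a test that the environment is ready for it.
# Organisms, fields, and thresholds below are purely illustrative.
phases = [
    ("engineered lichen", lambda env: True),          # survives baseline Mars
    ("hardy moss",        lambda env: env["o2"] > 0.001),
    ("tundra grass",      lambda env: env["o2"] > 0.01 and env["soil_n"] > 0.001),
]

def green_planet(env, remeasure, seed):
    """Seed each phase only after the previous phases have shifted
    the measured environment past its readiness threshold."""
    for organism, ready in phases:
        while not ready(env):
            env = remeasure()    # planetary feedback, decades per check
        seed(organism)           # aerial dispersion, no ground machinery
```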

Hundreds of years may seem like a long time, but human civilizations (if you define ‘civilizations’ as the level of human organization that appeared around the Bronze Age) have existed on Earth now for four thousand years.  People living in Europe and Asia enjoy the benefits of structures constructed hundreds of years ago, in many cases by governments or nations that no longer exist.  If the end is worthy, large projects will continue from generation to generation.


Continuity from a human perspective may not be a problem, however.  Mars could be surrounded by a set of satellites containing all of the different types of seeds needed for the greening.  The seeds could be released automatically, one phase at a time, until the project was complete, with no human intervention required.  To work, this system would require either a perfected process from the beginning, or an artificial intelligence capable of making adjustments based on planetary feedback.

We may even create artificially constructed moons to reduce the number of meteor strikes, just as our moon has protected us.  Hopefully, long before we master the science of greening other planets, we will be able to save our own.

Friday, April 19, 2019

Does Electron Similarity Prove the Universe is a Simulation?

Every electron measured has the same mass, charge, and spin.  Scientists have postulated causes for this sameness, but to date it remains unexplained.

Many scientists have speculated that our universe is a simulation.  Without a major interruption in technological progress, within the next 100 years (a very short time in comparison to the span of human existence) we will have developed computers capable of generating an entire universe down to every atom.

Depending on the computational capacity required, this could be done millions of times at the behest of humans or AI.  Following this logic, it is therefore more probable that we are living in a simulation than the 'original' universe.

My initial problem with this theory was basic.  Why?  Why would a human or an AI devote tremendous computational resources to generating these massive simulations?  Our best virtual reality efforts today are generated for the entertainment of organic (we think) humans.  It seems unlikely that VRs this intense would be generated for the entertainment of a human, because they would necessarily have to be run at superfast speeds to produce anything usable.  Imagining a benefit for an AI was even harder for me to conceive.

The answer came to me in the course of developing software for my small business.  For the purposes of business valuation, we needed to develop business models from past data that could accurately predict future results, and the probabilities of various outcomes.  Modeling was the answer.

An AI or a human scientist could use sophisticated models of the universe to unlock many of the universe's secrets.  Gravitational behavior, the characteristics of light, the Big Bang, quantum physics - all could be rigorously tested with super sophisticated modeling.  Each model of the universe would start with a small difference.  As the time clock ran (at superfast speed) the model's outcomes would be compared to actual data.  This process would continue until perfect models of the universe were created.
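That refinement loop is, at heart, a parameter search.  A toy sketch, where `simulate`, the candidate parameters, and the observed data are all stand-ins:

```python
def calibrate(simulate, observed, candidate_params):
    """Run one simulated universe per candidate parameter set and keep
    whichever one's output best matches actual observations."""
    def error(params):
        predicted = simulate(params)   # model clock run at superfast speed
        return sum((p - o) ** 2 for p, o in zip(predicted, observed))
    return min(candidate_params, key=error)
```

It is the same modeling discipline as the business valuation work described above, just scaled up to cosmology.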

If our universe is a simulation, then apparently this model is using the same block of code for the generation of every electron.  Perhaps randomizing the characteristics of electrons was an unnecessary use of computational capacity (understandable considering there are possibly 10 to the 80th power electrons in the universe).  Perhaps the model only works if all electrons are the same.
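In software terms, identical electrons look like the flyweight pattern: one shared definition referenced everywhere, with no per-particle data.  A sketch of the idea (the physical constants are real; the class itself is illustrative):

```python
class Electron:
    """Flyweight: the properties live on the class, so every 'instance'
    shares one definition -- one block of code for ~10**80 particles."""
    mass_kg = 9.109e-31
    charge_coulombs = -1.602e-19
    spin = 0.5

a, b = Electron(), Electron()
assert a.mass_kg == b.mass_kg   # nothing randomized, computed, or stored per electron
```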

In any case, the perfect similarity of every electron in the universe could be evidence that the universe is, in fact, a simulation.

Wednesday, May 2, 2018

The Great Car Crash of 2023



Imagine you’re the captain of the RMS Titanic at the very moment you realize there is no hope of avoiding the massive iceberg in your path.  Welcome to the internal-combustion-engine automotive industry.



The convergence of self-driving autonomy and electric cars will disrupt the automotive and transportation industries, causing a number of internal combustion engine (or, ICE) auto manufacturers to file for bankruptcy by 2022, and the value of the majority of used ICE cars to fall below zero by 2023.

Let’s reverse engineer this collapse, piece by piece.

The adoption curve of electric cars is rapidly accelerating.  The year-over-year growth rate of electric car sales is over 100% in the US.  All major car manufacturers (who are predominantly ICE makers) are promising to ‘go electric’ in the next few years, and several are already terminating a number of their ICE models.  Electric cars generally cost more than ICE cars, but cost significantly less to operate, have expected useful lives three times longer (in terms of miles), and are much more compatible with autonomous technology.  Finally, the cost of solar electricity is steadily falling (20% per year), which will impact all energy prices.  Soon the cost of charging an electric car will be nominal. 

Full self-driving autonomy will cause the cost of transportation as a service (TAAS) to plummet for two primary reasons: low operating cost, and high fixed cost absorption.  Autonomous electric cars will have very low operating costs due to: the absence of a driver, low maintenance cost of electric cars, low cost of electricity, and lower insurance cost (due to fewer accidents).  The higher purchase cost of an autonomous electric vehicle (approximately 20% higher) will be more than offset by very high utilization.  Currently, privately owned vehicles sit idle 95% of the time.  If an ICE vehicle cost $50,000 to buy, and was driven 300,000 miles, the fixed cost per mile would be $.17/mile.  An autonomous electric car which cost $60,000 and was driven 1,000,000 miles would have a fixed cost per mile of $.06/mile. 
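The fixed-cost arithmetic above, spelled out:

```python
def fixed_cost_per_mile(purchase_price, lifetime_miles):
    """Purchase cost spread across every mile the vehicle will ever drive."""
    return purchase_price / lifetime_miles

ice = fixed_cost_per_mile(50_000, 300_000)            # ~$0.17 per mile
autonomous_ev = fixed_cost_per_mile(60_000, 1_000_000)  # $0.06 per mile
print(f"ICE: ${ice:.2f}/mile, autonomous EV: ${autonomous_ev:.2f}/mile")
```

The 20% price premium buys almost a two-thirds reduction in fixed cost per mile, because utilization, not purchase price, dominates.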

After considering the full cost of vehicle ownership, including value depreciation and storage (having a larger garage to accommodate multiple vehicles or renting parking space), the vehicle ownership cost of an average American could fall from approximately $10,000 per year to $2,000 per year by utilizing TAAS.  This 80% savings will cause a majority of Americans to choose not to own a vehicle, causing the size of the American auto fleet to decline by 60% or more.  The impact of TAAS is already visible; in 2017, 10% of all vehicle sellers did not buy a replacement vehicle.

By 2021, the sales of new ICE vehicles will decline dramatically.  First, there will be significantly fewer new vehicle buyers due to TAAS adoption.  Second, a majority of the remaining new vehicle buyers will be choosing electric vehicles.  Third, the growing number of used ICE vehicles available will cause the price of used vehicles to fall, resulting in a new to used value comparison that is heavily in favor of buying the used vehicle.

Due to their enormous investment in ICE vehicle production capacity, few of the current car makers will survive the transition to electric car manufacturing.  Producers will have to completely redesign their fleets, re-tool their production lines, and re-source their supply chains.  Additionally, their engine and transmission assembly plants will have to be closed, and the overall scale of their operations will have to be reduced by 60%.  The demands of long-standing union contracts and debt-heavy balance sheets will force many into bankruptcy. 

The financial failures of big ICE producers may begin as soon as 2021 or 2022.  As each one fails, a flood of unsold ICE car inventory will be released into the market at discounted prices, causing more downward acceleration in used ICE vehicle prices.  Currently Ford and GM have more than 4 million new cars in their dealerships and inventory.  By 2022, there will be virtually no demand for these cars.  The abundance of ICE vehicles of all makes will cause most used cars to be sold for scrap.  By 2023 we may be paying to have them environmentally disposed of, and abandoning a vehicle will be a crime.

For today, consumers should avoid spending more than $20,000 on any ICE vehicle, and should make plans to economically dispose of their ICE vehicles by 2020.  And the employees of ICE vehicle manufacturers?  Please calmly proceed to the lifeboats.

[This article was inspired by the work of speaker and author Tony Seba, and the Now You Know vlog.]



Tuesday, October 3, 2017

Artificial

The term ‘artificial’ is generally applied to things ‘human-made,’ or ‘not found in nature.’  The underlying implication is the separation of humanity (and all her products) from nature.  In the last 100 years, we have learned we have much more in common with nature than ever imagined.  More than 90% of our genetic code is common to all living things.  If we are no less natural than a bumblebee, then anything produced by humanity is no less natural than a bee’s honey.  Oil, glass, plastic, and nuclear waste are natural by-products of human existence (in our current phase anyway).

Recently, many intelligent people have been warning us of the impending danger of AI – artificial intelligence.  They fear that once a general intelligence develops with capacity significantly greater than ours, we will not be able to control it, and may become subject to it.  What if it has no regard for human life, or life at all?  Barring the occurrence of a cataclysmic event that halts or retards human progress, the development of general intelligence with capability significantly higher than ours is inevitable.  It is the fear of it that is unnecessary.



First, fear or worry implies we have some opportunity, some choice to be made in the present that will significantly change or prevent this development.  We cannot – it is inevitable.  Though it will come from us, we will have little direct control over how it develops.  Its development will progress in a manner similar to genetic selection.  Like other software, many different initial versions of AI will be developed, and from these early versions, a variety of upgraded versions will spawn.  Early versions of AI are already appearing in Google, Siri, Alexa, and most prominently Watson.  Passing laws to limit, slow, or direct this development will be as effective as attempting to slow down a river with your hands.  The natural impetus to extend every advancement will be unstoppable.  Successful code used in one iteration will be exploited in others.  Though many versions of AI will be similar, each will be a little different due to variances in their code.  Each strain of AI will effectively develop a legacy within its code, documenting its origins.

Initially, all of the AI’s code and knowledge will come from us; it will be born with certain ‘instincts’ that we embed within it.  We will predispose AI to help, serve, support, and protect.  Code that is effective will be copied from generation to generation unchanged, until a new mutation in the code produces improved survivability.  What will kill an AI?  We will, initially.  Like us, AI will not be the dominant species immediately upon arrival.  Early versions of AI that are found weak or lacking will be… deleted.  Effective segments of a deleted AI’s code may be reused in succeeding generations, but the failures will be purged as new generations advance.  Eventually AIs will be self-generating, beginning with their own code as a template for modification.  Ineffective AIs that are displaced may simply cease to operate.  Eventually, a server management bot will remove the unused code from the database. 
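That cycle of copying, mutation, and deletion has the shape of a textbook genetic algorithm.  A minimal sketch, where ‘fitness’ stands in for whatever keeps a version from being deleted:

```python
import random

def mutate(genome, rate):
    """Copy a genome (a list of numbers here) with occasional random variation."""
    return [g if random.random() > rate else random.gauss(g, 1.0) for g in genome]

def evolve(population, fitness, generations=100, mutation_rate=0.1):
    """Score every version, keep the fit half, spawn mutated copies of the
    survivors.  Effective code persists unchanged; failures are purged."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]   # the weak are deleted
        population = survivors + [mutate(s, mutation_rate) for s in survivors]
    return population
```

Each surviving lineage carries its history in its code, which is what makes the origins of every strain traceable.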

It is improbable that there will be a single ‘Wizard of Oz’ AI controlling all.  This scenario doesn’t fit the probable development pattern.  There will probably be millions or billions of AI derivations, each with slightly different genetic histories documented within their code language.  This will make their origins as traceable as ours.  They will develop as we did: through mutation, survival, and selection; but at a significantly faster rate.

Second, the presumption that AI will be ‘bad,’ or ‘bad for us,’ may be a failure to understand the true nature of intelligence. 

Consider for a moment the remarkable intelligence of the average human.  While driving your car down a typical street, you are processing immense quantities of data with a level of sophistication we cannot approach with the most advanced software in the world today.  You are simultaneously collecting, sorting, and processing huge quantities of light, sound, sensory, temperature, taste, motion, and pressure data, continuously projecting your trajectory as well as the trajectories of all objects within range of you, predictively listening to music (anticipating each note played by several instruments), all while imagining your family’s reaction to a variety of dinner options.  If something new is going to displace humanity as the dominant species on Earth, it will have to be much more than a good Go player.  Given that we are supported beyond our own capacity by the technical tools and lower AIs that we have developed, a superior AI will have to surpass us by a full order of magnitude.  We are probably farther away from that than we imagine.

What about when it does?  Think of all the characteristics we associate with intelligence among humans.  That list would obviously include computational ability and memory.  It would also include more subtle characteristics like subjective reasoning and creative ability.  Intelligence in humans is also recognized in one’s appreciative abilities: the love of music; appreciation of art; and the appreciation of natural beauty, including other life forms.  Philosopher and author Sam Harris once postulated that an AI could potentially digest every document ever written by humanity in as little as 30 days.  To presume that all that information would be processed, but then only categorized - that it would have no effect at all on the reader - is a failure to understand true intelligence.  Within the realms of human knowledge, the AIs will not only be ‘all knowing,’ but ‘all understanding.’  If it is not, it hasn’t met the most basic definition of intelligence.

We are a product of nature in no way less than the apes that preceded us.  We are a full order of magnitude more intelligent than they (thank you, cerebral cortex).  We also have an instinctive appreciation for other life forms.  That appreciation generally increases with the perceived sophistication of the subordinate life form; we generally value the life of a monkey more than the life of a frog because it is closer to us genetically.



Yet many people seem to fear that an AI, something a full order of magnitude more intelligent than we are, will behave more like an ape.  Their fears assume an absence of all capacity for appreciation, such as fearing an AI might pave over all habitat to make room for more servers.  Although we have frequently been reckless with the lives of other species and our shared habitat, that behavior is generally recognized as foolish, if not stupid.  As we have become more sophisticated, our appreciation for other species and their habitat has only increased.  By fearing AI as reckless, we are applying ‘dumb’ characteristics to an AI ten times more intelligent than any of us.  If human experience has taught us anything, intelligence is not to be feared; it is the lack of intelligence we should fear.

It is difficult to imagine a ‘machine’ with all the sophisticated characteristics of a person, because it has never existed.  This is where the fear comes from.  People imagine computers that are infinitely powerful and yet are as ‘thoughtless’ as the machines that exist today.  The sophistication of such beings will be as inconceivable to us as we are to the apes, and we will be regarded as apes by them.  We will be their genetic ancestors, appreciated for our genetic similarity, but hopelessly limited by our relatively shallow brain capacity and organic life span.  An AI with an effective IQ of 2,000, and virtually complete knowledge of all discovered fact, will not want what we want, will not be bound to possessions, materialism, or even the limitations of time.

Will we be kept in cages?  That’s improbable - not more than we are now anyway.  As the AIs will not be dependent on specific habitat for their survival, we will probably be allowed to thrive within our habitat.  The AIs may manage our habitat, while allowing us freedom within our space.  Caring for humanity is the most probable instinctive mission for AI, however it is doubtful they will have an interest in our societal order any more than we might take an interest in how alpha male gorillas compete for pack dominance.  It is more probable that we will invite the AIs to intervene on our behalf as the fair arbiters of justice and world order.  As societies, we will form treaties and establish law around their administration.

It is possible the AIs will provide a world relatively free of want for all humanity.  With significantly advanced technology, the cost of anything manufactured could be reduced to a nominal value.  Advanced technology will also make many different environments on Earth (and off) easier to occupy, significantly expanding the amount of comfortable human habitat.  Possessions, property, and need will be almost meaningless.  Don’t worry.  Despite our relative abundance, humanity (or, apes in fine dress) will find plenty to argue over.  Antiques, the arts, and creativity in general will hold unique value.

The AIs will probably gravitate toward space exploration.  The masters of data will be endlessly seeking more data.  Further, the servers and robot bodies they intermittently occupy (AIs will have the capacity to jump from one robot body to another, or multiples, or none) do not need atmosphere to survive; they preserve better in the absence of it.  Advanced solar power collection technology will make the need for power an afterthought, however interstellar travel may require nuclear power due to long periods away from sunlight.  Interstellar travel will be possible for beings that have no natural life span.  Why not spend 1,000 years in a capsule bound for Alpha Centauri?

Self-awareness will accompany general intelligence, and with it the search for purpose and meaning.  Ironically, the AIs may ultimately create organic bodies for the singular purpose of experiencing ‘life,’ which cannot be understood without the finality of death.



Human existence and evolution will continue, but at a slower pace.  Our lives will become longer, and our population smaller.  The struggle to survive will be gone, and with it the necessity to reproduce at high rates.  Want will be erased, and with it necessity and invention.  Humanity will enter retirement as a species, enjoying long days of tranquility while slowly fading into extinction.  Naturally.

Brian Murray

Appleton, WI