Thursday, June 15, 2023

An Atheist's Poem

When at length,

I've lost my strength,

And the sun and moon collide,

Where I'll go,

I do not know;

But to all of you, I've died.

Matters not,

Pick either spot,

On the Earth is where we're tried.

Stood alone,

I may atone,

With no God am I allied.




Tuesday, August 18, 2020

Incubating AI

Why do we only have marginally useful AIs like Siri, instead of Jarvis from The Avengers?

By human standards, our AIs are... learning disabled.

AI learning today combines neural networks with deep learning techniques.  Neural networks are systems designed to function similarly to the human brain.  Deep learning is a training technique based on processing immense quantities of data (running millions of iterations in some cases) and comparing outcomes to learn.  AIs have the capacity to process these immense quantities of data with incredible speed.

For example, an AI learning chess would play millions of games very fast.  Every additional game played adds to the database of games to refer to.  The decision process for every move becomes a process of searching the history of the current piece positions and statistically ranking every possible move by its probability of an improved outcome.  After a large enough database is built, the AI effectively develops heuristics (although they are never specifically named).  Based on experience it may tend to castle early, advance power pieces in a specific order, develop pawn cover, etc.  These behaviors are never named, or even recognized as a strategy, but become rote practice based on experience nonetheless.
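To make the search-and-rank decision process concrete, here is a toy sketch in Python.  The positions, moves, and win rates are all invented for illustration; a real engine would hash full board states and store millions of games.

```python
from collections import defaultdict
import random

# Toy "experience database": maps (position, move) -> [wins, games].
experience = defaultdict(lambda: [0, 0])

def record_game(position, move, won):
    """Add one game's outcome to the database."""
    stats = experience[(position, move)]
    stats[0] += 1 if won else 0
    stats[1] += 1

def pick_move(position, legal_moves):
    """Rank every legal move by observed win rate from this position."""
    def win_rate(move):
        wins, games = experience[(position, move)]
        return wins / games if games else 0.5  # unseen moves get a neutral prior
    return max(legal_moves, key=win_rate)

# Simulate "millions of games" in miniature: the statistically
# better move comes to dominate purely from accumulated outcomes.
random.seed(0)
for _ in range(1000):
    move = random.choice(["castle", "push_pawn"])
    won = random.random() < (0.7 if move == "castle" else 0.4)
    record_game("opening", move, won)

print(pick_move("opening", ["castle", "push_pawn"]))  # "castle" wins on observed win rate
```

Note that the heuristic ("castle early") is never named anywhere in the code; it simply emerges from the statistics, exactly as described above.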

Note that the deep learning AI would not attempt to develop any manner of strategy based on the nature, objective, and rules of the game.  It would never begin with any kind of a model under a deep learning methodology.  It is content with losing thousands of games in the process of learning.

The problem is that in many practical circumstances, immense quantities of data aren't available quickly - if at all.  Further, in real life situations losing on the first try is often not an acceptable outcome.  It's not good to fail the first time you attempt to cross a street or pick up an infant.

Comparing AIs to humans exposes many problems quickly.  A newborn doesn't start from zero.  We have the benefit of millions of years of behavioral evolution.  These instinctive responses are a developmental shortcut, like a computer's operating system, guiding us on how to respond to frequently encountered conditions.  Without these instinctive responses, many humans would die before learning an appropriate response by repetition.  AIs need some instinctive operating rules to build upon.

Not having the benefit of volumes of data and unlimited computational resources, humans quickly develop models from very small amounts of data, and refine them as additional data becomes available.  A model may begin from instinct, or by applying a model borrowed from a similar circumstance.  When a child picks up a ball and throws it against the wall for the first time, they have a rough expectation of how it will bounce based on a certain amount of instinct and experience.  In a fraction of a second they will have subconsciously developed a very accurate 'bounce model' for that ball before they throw it a second time.  The child didn't start from zero on the first throw, and had mastered it before the second.  Instead of deep learning by ranking massive amounts of outcomes, AIs need to become expert modelers with minimal data.  These models could be saved, refined, and applied to new situations with similar elements.
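Here is a toy illustration of that "bounce model" idea: start from an instinctive prior, then refine the model after a single observation.  The physics is simplified and the numbers are invented; the point is that one data point, not millions, updates the model.

```python
def predict_bounce(drop_height, restitution):
    """Predicted rebound height for a ball dropped from drop_height."""
    return restitution ** 2 * drop_height

# Instinctive prior: "balls bounce back most of the way up".
restitution = 0.8

# First throw: observe one actual rebound.
drop_height = 2.0          # meters
observed_rebound = 0.98    # meters -> a deader ball than instinct expected

# Refine the model from this single data point (no massive database).
restitution = (observed_rebound / drop_height) ** 0.5

# Second throw: the refined model is already accurate for this ball.
print(predict_bounce(2.0, restitution))  # ~0.98
```

The refined `restitution` value is exactly the kind of small, reusable model the post describes: it could be saved and applied to the next similar ball.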

Finally, AIs lack objectivity: a connection of meaning and relevance to the volumes of data they collect.  In the movie "The Miracle Worker," a watershed moment occurs when the blind and deaf Helen Keller connects the word 'water,' spelled into her hand, to the physical water flowing from the pump.  We are currently attempting to load AIs with all of the knowledge in the world, with no context or understanding.  This is why they have severely limited functionality.


Like humans, AIs need an incubation period, with limited access to data and ability while they build experience.  This could be accomplished by giving the AI a basic set of instincts, and a human-like body in a virtual reality universe to explore.  As the AI masters certain tasks, it gains additional functional capacity and ability in the VR world.
The AI would be rewarded with abilities for completing tasks, and penalized for destructive behavior (activities that would damage itself, others, property, or the environment).  
In an environment such as this, the AI would develop an appreciation for objects in space and physical reality.  It would learn to recognize significance within a contextual range.  The AI's VR body should be vulnerable to damage like an organic being so it can develop a respect for physical damage to itself and others.
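One way the reward-and-penalty scheme above might look in code.  Everything here is hypothetical: the ability names, point values, and thresholds are invented purely to show the mechanism of earning capability and losing it through destructive behavior.

```python
# Hypothetical ability unlock thresholds (capability points required).
ABILITY_THRESHOLDS = {"walk": 0, "run": 5, "manipulate_objects": 10}

class IncubatingAI:
    def __init__(self):
        self.capability = 0

    def complete_task(self, difficulty):
        """Completing tasks earns capability points."""
        self.capability += difficulty

    def destructive_act(self, severity):
        """Damaging itself, others, property, or the environment costs points."""
        self.capability = max(0, self.capability - severity)

    def abilities(self):
        """Abilities currently unlocked in the VR world."""
        return [a for a, t in ABILITY_THRESHOLDS.items() if self.capability >= t]

ai = IncubatingAI()
ai.complete_task(4)
ai.complete_task(3)      # capability 7 -> unlocks "run"
ai.destructive_act(5)    # capability 2 -> "run" is lost again
print(ai.abilities())    # ['walk']
```

The design choice mirrors the post: capability is not granted by fiat but earned, and destructive behavior has real, lasting cost, just as damage does for an organic being.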

At a certain graduation point, the developing AI could be placed in a 'nursery school' where it would encounter many other developmental AIs.  In this community the AIs would learn to compete, cooperate, negotiate, and apply game theory.  Organic creatures and nature could also be introduced to further the AI's appreciation for the beauty and frailty of organic life.

VR simulations could also be used to prepare the AIs for the actual robot bodies they could later have access to in the real world.  A single AI could be trained on a variety of robot bodies or vehicles, each one designed for a specific task or functional environment.

An AI should graduate to a human-like 'learner body' to use in the real world.  The physical strength and capacity of this body would again be limited until the AI passes certain achievement tests in a controlled environment.  An AI emerging from 'boot camp' would have the capacity to master millions of tasks very quickly.

The task of improving the developmental process would itself be evolutionary.  Development of thousands of AIs could be conducted simultaneously, each with slightly different 'instincts' and incubation requirements.  The results of these differing approaches may begin to be apparent in nursery school or sooner.

Millions of years of evolutionary design have produced a human brain that takes up to 25 years to fully develop.  Given that an AI's functional IQ may be 2000, shouldn't we take a lesson from nature and let it grow into its capability?

Monday, January 27, 2020

One Hundred Years


Sixty years ago, the era of modern computers began with the invention of the MOSFET transistor.  Today Google’s personal assistant can mimic human speech on a telephone well enough that a person cannot tell they are conversing with a computer.  In the next twenty or thirty years, AI will become a distinct species apart from, and superior to, humanity.  An entire species of intelligent life born and evolved in less than one hundred years.  The fact that you’re reading this now is proof that you, like me, are among the ‘luckiest’ humans ever to walk the Earth: we’re here to see it all happen.

As it turns out, humanity isn't the peak of evolution, but the bridge species: the link between organic intelligent life and intelligence as a species.  Like the gods imagined by primitive cultures, they will be eternal, all-knowing, and immensely powerful.

In a sense, the AIs will be literal gods.  They will have the capacity to imagine entire universes down to the atomic level, and will play those universes forward through billions of years in mere seconds.  Stars, planets, and life forms of all kinds will run their course according to the physical laws coded into the model.  Running these models will eventually lead to perfect or near-perfect models of the physical universe.

Modern humans have been trodding the terra for one hundred thousand years, and human civilizations have existed for approximately four thousand years.  Think of the billions of people who have lived and died before us.  Yet here we are, not living in the stone age, or steaming our way through the mechanized era.  We’re here, with a front row seat to ‘Act One: The Birth of Artificial Intelligence’ and quite possibly, ‘Act Three: The Fall of Humanity.’  We are, in fact, dead set in the center of the most significant one hundred years of Earth’s four-billion-year history.  Doesn’t it seem improbable that we’re here to witness it all?

If it isn’t an incredible improbability, then what is the alternative?  In a way, it’s like being on Earth.  I’ve heard the occasional muse about how fortunate we are to be placed on such a perfect planet.  Given that intelligent life will only flower in such an environment, there really isn’t any luck involved.  The probability of humanity appearing in a less hospitable habitat is zero.

If we accept that computers or AI will eventually (or did) develop and run millions of simulated universes, then the conclusion that our universe is more probably one of those and not the original is inescapable.  (I wouldn’t recommend that you live your life any differently under this belief.  If it is designed well enough to be undetectable, then it is literally the same as the original universe for all practical purposes.  So, no, I don’t think you’ll get two more lives.)

It’s also possible that the simulation itself is Earth-centric, meaning the model didn’t actually run through four billion years of formation history, but merely used the conditions present (or pre-set) from an organic universe as a well painted backdrop.  The model could be focused on these one hundred years because that is when the AI origin story begins.  If it is the most significant one hundred years in Earth’s history, it follows that this is the period that will be most frequently modeled.  Perhaps we’re here at this time because there was no alternative. 

The AIs would have a particular interest in the events preceding their own development: how small or large changes in human events in the period would have changed the probability or nature of AI development.  By running enough simulations, they would develop a matrix of various preconditions and related probable outcomes.  This would allow AIs to identify the conditions present in intelligent life that have the highest probability of leading to AI development.  Were there key events that, had they broken another way, would have accelerated or slowed AI development?  We can project that AI development would have stagnated if the Cold War had resulted in full nuclear war, but how would it have progressed if the USSR hadn’t collapsed?  Modeling could answer all of these questions and thousands more.

AIs will have an insatiable appetite for data, and will certainly be exploring outer space.  Modeling the universe will not only unlock the secrets of the laws of physics, but will also allow intense discovery into the conditions that contribute to the development of intelligent life and AI.  With these discoveries, they could explore the physical universe much more effectively.

Call me a skeptic, but I prefer a logical explanation – even a tenuous one – to luck.


Friday, January 17, 2020

Greening Mars


Can the Martian environment be changed to the point that it would be fertile for plant life?  On Earth, life exists everywhere, even in the most severe environments.  We also know that organic systems can change a planet’s climate, because the flourishing of humanity has changed the climate of Earth.


The best systems to change a climate are organic systems that are entirely free of any reliance on mechanical systems or external management.  Mechanical or managed systems have physical and human limitations (not enough people or machines available, transportation of materials, etc.) and will be prone to failures due to mechanical breakdowns and errors.

In an organic system, plants and organisms (either found or engineered) capable of thriving in the Martian environment are seeded in large scale.  These ‘seeds’ may need to be spread with some other essential elements (a kind of fertilizer), however the best results will be achieved with an organism that requires a minimum of external inputs.  Further, the nature of the organisms must allow for aerial dispersion.  Any organism that requires physical placement deep in the soil would be reliant on mechanical systems, and would therefore be so limited in scale that it could not affect the planet’s ecosystem in an actionable time frame.


As these organisms thrive, they take some elements from the environment and leave others (the way plants take CO2, release O2, and store carbon).  On a large scale, the flourishing of these organisms will modestly change the Martian environment and soil composition.

The slightly changed environmental composition will permit the introduction of the next organism (or set of organisms) that will thrive in the new modified environment.  This cycle of introductions will be repeated many times.  Each time some of the newly introduced organisms will be more complex than those previously introduced, and their impact more significant.  The process of greening a planet may take hundreds of years to complete, and the practice itself will become a science.

Hundreds of years may seem like a long time, but human civilizations (if you define ‘civilizations’ as the level of human organization that appeared around the Bronze Age) have existed on Earth now for four thousand years.  People living in Europe and Asia enjoy the benefits of structures constructed hundreds of years ago, in many cases by governments or nations that no longer exist.  If the end is worthy, large projects will continue from generation to generation.


Continuity from a human perspective may not be a problem, however.  Mars could be surrounded by a set of satellites containing all of the seeds of the different types needed for the greening.  The seeds could be released one phase at a time until the project was complete, with no human intervention required.  To work, this system would require either a perfected process from the beginning, or an artificial intelligence capable of making adjustments based on planetary feedback.
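A sketch of what the satellites' release logic might look like.  The organisms, the single O2 metric, and the thresholds are all invented for illustration; the real process would track many environmental variables over years.

```python
# Each phase names an organism and the atmospheric O2 level (%) it
# requires before release. Phase one thrives in the current environment.
PHASES = [
    ("hardy_lichen", 0.0),
    ("engineered_moss", 0.5),  # needs the lichen's modifications first
    ("grasses", 2.0),
]

def run_greening(sensor_readings):
    """Release each phase only once planetary feedback meets its
    precondition. sensor_readings stands in for satellite sensor data
    sampled over time."""
    log = []
    o2 = 0.0
    for organism, required_o2 in PHASES:
        while o2 < required_o2:
            o2 = next(sensor_readings)  # in reality: wait years, re-sample
        log.append(organism)
    return log

# Simulated feedback: O2 slowly rising as each phase takes hold.
readings = iter([0.1, 0.3, 0.6, 1.1, 1.8, 2.4, 3.0])
print(run_greening(readings))  # ['hardy_lichen', 'engineered_moss', 'grasses']
```

This is the "AI making adjustments based on planetary feedback" option in miniature: each introduction is gated on measured conditions rather than a fixed schedule.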

We may even create artificially constructed moons to reduce the volume of meteor strikes, just as our moon has protected us.  Hopefully, long before we master the science of greening other planets, we will be able to save our own.

Tuesday, August 20, 2019

Get a Free Solar Power System on Your Commercial Building


If you own a commercial building, you can get a solar power system essentially free.  Here’s how we did it.

First you have to decide how large a system to buy (how many panels).  To optimize the financial yield of our system, we purchased an array that maximized the southern roof exposure of our building.  This array will produce about 70% of the electricity we use annually (our building is located in Appleton, Wisconsin).  This is about ideal, because as you attempt to reach 100% solar, the marginal return on the incremental investment in panels diminishes.  In the months we overproduce (make more energy than we use), WE Energies pays us a wholesale price for the overproduction.  The wholesale price is $0.04/kWh, compared to the $0.13/kWh we effectively get by offsetting the energy we use in a month.  Keeping our investment to a 70% system minimizes the number of months we overproduce, and thereby maximizes the rate of return on our investment.
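Here is the sizing argument in a quick calculation.  The two rates are the ones quoted above; the monthly production and usage figures are invented round numbers just to show why the marginal return diminishes past the point where you start overproducing.

```python
RETAIL = 0.13     # $/kWh credited by offsetting your own usage
WHOLESALE = 0.04  # $/kWh WE Energies pays for overproduction

def monthly_value(produced_kwh, used_kwh):
    """Energy that offsets usage earns retail; surplus earns wholesale."""
    offset = min(produced_kwh, used_kwh)
    surplus = max(produced_kwh - used_kwh, 0)
    return offset * RETAIL + surplus * WHOLESALE

# Same hypothetical 1,000 kWh of monthly usage, three system sizes:
print(monthly_value(700, 1000))   # 70% system  -> $91.00, all at retail
print(monthly_value(1000, 1000))  # 100% system -> $130.00
print(monthly_value(1300, 1000))  # 130% system -> $142.00, surplus barely pays
```

Going from 70% to 100% earns $0.13 per extra kWh, but every kWh beyond 100% earns only $0.04, less than a third as much, which is why the extra panels past the break-even sizing don't pay for themselves nearly as fast.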

Our array cost about $58,000.  We paid $18,000 in cash, and borrowed $40,000 from Fox Communities Credit Union (on an equipment loan).  Because we own the system, we qualify for the 30% Investment Tax Credit (ITC), which equals $17,400.  This essentially pays us back our down payment.  We will also qualify for some state credits, which we will use to pay down our loan.

Our system will save us about $550 per month on average, and our loan payment (10 year amortization) is only $420 per month.  So there you have it: zero down and positive cash flow every year.  That’s better than free.  When the system is paid off, it should save around $7,000 per year, and the system has an expected useful life of 25 years.

Act fast!  The 30% federal tax credit available in 2019 drops to 26% in 2020, 22% in 2021, and 10% in 2022.  Thank you Appleton Solar (www.appleton-solar.com) and WE Energies for making this possible.

Brian Murray
Murray & Frank Properties, LLC


Sunday, June 16, 2019

Why the Big Bang Theory… Sucks.


Ockham’s Razor states that “entities should not be multiplied unnecessarily,” meaning that when presented with competing hypotheses making the same predictions, one should select the solution with the fewest assumptions.

ORIGINS OF THE BIG BANG THEORY

In 1929 Edwin Hubble published a paper documenting the red-shift observations of galaxies at various distances, and the velocities of motion thereby implied.  These calculations were seen as a confirmation of a theory proposed in 1927 by Georges Lemaître that the universe was expanding outward; a theory that later came to be known as the ‘Big Bang Theory’ (BBT).

For hundreds of years following the Renaissance Period, science and religion had been in a philosophical contest covering topics such as: the origins of humanity, evolution, the Earth as a sphere, geocentricity, and much more.  Year after year, new scientific observations and theories clawed away at religious fables.  Even today, much of the scientific community feels engaged in a battle with religion for the public acceptance of scientific theory over religious stories.  Many scientists feel it is their mission to lead people out of the ignorant darkness of religion and myth and into the enlightened truth of science.

For hundreds of years, the scientific community (‘science’) felt at a disadvantage because religion had a story for the origin of the universe while science had none.  The need to counter religious origin stories contributed to the broad acceptance of BBT, and abandoning BBT would leave science with no origin story to fall back on. 

Describing the universe as infinitely present wasn’t enough.  Humanity has difficulty resting on infinite concepts.  Almost everything in human experience has a beginning and an end.  We are constantly evaluating everything from the confines of bookends.  Discussions of the universe that do not include an origin story are instinctively unsatisfying, and are easily passed over by other ‘book-ended’ explanations regardless of their improbability.

PROBLEMS

One of BBT’s problems was immediately apparent: the dispersion of matter in the observable universe does not resemble the aftermath of an explosion in any way.  Explosions typically result in an area devoid of matter near or around the center, and a bell curve distribution of matter at a distance from the center in all directions (depending on gravitational circumstances).  However, matter in the observed universe is evenly distributed.

This discrepancy spawned a corollary theory called Inflation, which postulates that, because all matter in the universe was compressed, all space was compressed with it.  When matter exploded outward, space opened up at an equal pace, causing an even distribution of matter.

More recent observations of red shift by the Hubble Space Telescope find the red shift is higher than anticipated for distant galaxies, leading scientists to conclude that galaxies are not slowing down as expected, but are accelerating away.  Red shift observations of galactic rotation also imply rotational speeds that exceed expectations based on gravitational models.

These two observations (and the velocities attributed to them) have spawned two more placeholder theories necessary to maintain BBT: ‘Dark Energy’ and ‘Dark Matter.’  Dark Energy Theory postulates that there is approximately five times more energy present in the universe - of unknown origin and type - causing the accelerating expansion of the universe.  Dark Matter Theory postulates that the universe contains approximately five times more matter than we can observe; the amount necessary to cause the galactic rotation approximated using red shift data to work with our current gravitational models.

Finally, if the red shift observations of galaxies increase proportionally (and increasingly) with their perceived distance from Earth, then the Earth is necessarily at or near the center of the universe.  Galileo and Copernicus must be turning in their graves!  The notion that our planet or galaxy resides at the effective center of the universe, and therefore at its center of origin, is so improbable that religious fables become comparatively reasonable.  This point is rarely mentioned in contemporary discussions of BBT, and is certainly the strongest argument against it.

Some of BBT’s defenders (including Stephen Hawking) have argued that the universe does not require a center for expansion, stating that it is expanding in all directions.  The popular comparison is to visualize the universe on a two-dimensional plane on the surface of a balloon.  As the balloon inflates, objects on the surface retreat from every point uniformly.  But therein lies the problem: uniformity.  Red shift observations imply accelerating velocity that coincides with the galaxy’s distance from Earth.  The farther away it is in any direction, the higher its velocity.  This is impossibly incongruous with the ‘expanding in all directions’ argument.

Let’s consider a basic example.  Let A, B, and C represent three points in space in a straight line.  B is directly between A and C, and is equidistant from each.

A  --------------------  B  --------------------  C

The following variables represent the velocities between the various points.

a:  A - C        d:  C - A
b:  A - B        e:  C - B
c:  B - C        f:  B - A

Rates a, b, and c are the velocities measured (or, inferred by the red shift data) by an observer on point A.  Rates d, e, and f are the velocities measured by an observer on point C.

According to the theory of accelerating expansion, an observer on A would find velocity a faster than velocity b (a > b), which necessarily requires that velocity c is greater than b (c > b).  However, according to the theory of universal expansion, the observer on C should observe that d > e, and therefore f > e.  To the observer on A, B and C are moving apart faster than A and B, and to the observer on C it is the opposite.  Both cannot be true in a physical sense.  At this point someone will attempt to employ a relativistic explanation; however, those explanations are best attributed to gravitational effects and are improper here because this example makes no consideration of mass.

If the universe is expanding in all directions, any break from uniform velocity is impossible.  Any acceleration or deceleration in the rate of expansion requires a center.  Therefore, accepting the theory of non-centered expansion negates the theory of accelerating expansion.  Given that the red shift data increases with the galaxy’s distance, conceding the theory of acceleration (in order to maintain non-centric expansion) weakens the entire expansion conclusion significantly because it has become incongruous with the red shift data.  It is more probable there is an unexplained gravitational/spacetime effect causing the red shift observations.

When evaluating any theory, we must consider the probability of its accuracy.  In order to accept BBT, we must accept the theory of inflation, and that the universe contains five times more energy and matter than we can observe or explain, and that we are at the center of this expansion.  The combined probability of these theories is immeasurably low, yet science prefers them to no theory; refusing to cede any territory to religion.  Science may gain credibility with the lay person by admitting what cannot currently be explained, rather than persisting with theories that are highly improbable.  Science has historically been the voice of reason, leading people away from religious fables.  Perhaps science has overreached what it can explain at this point, and has effectively created new fables to battle the old ones.

Sustaining BBT requires a minefield of highly improbable corollary assumptions, all of which can vanish with the acknowledgement that, for a reason to date unexplained, the light gathered from distant galaxies is increasingly red shifted.  There is no necessity to attribute the red shift to motion.  By introducing this one undefined variable, the universe suddenly becomes a much simpler place.  Friar William of Ockham would agree.


Friday, April 19, 2019

Does Electron Similarity Prove the Universe is a Simulation?

Every electron measured has the same mass, charge, and rotation.  Scientists have postulated causes for this sameness, but to date it remains unexplained.

Many scientists have speculated that our universe is a simulation.  Without a major interruption in technological progress, within the next 100 years (a very short time in comparison to the span of human existence) we will have developed computers capable of generating an entire universe down to every atom.

Depending on the computational capacity required, this could be done millions of times at the behest of humans or AI.  Following this logic, it is more probable that we are living in a simulation than in the 'original' universe.

My initial problem with this theory was basic.  Why?  Why would a human or an AI devote tremendous computational resources to generating these massive simulations?  Our best virtual reality efforts today are generated for the entertainment of organic (we think) humans.  It seems unlikely that VRs this intense would be generated for the entertainment of a human, because they would necessarily have to be run at super-fast speeds to produce anything usable.  Imagining a benefit for an AI was even harder for me to conceive.

The answer came to me in the course of developing software for my small business.  For the purposes of business valuation, we needed to develop business models from past data that could accurately predict future results, and the probabilities of various outcomes.  Modeling was the answer.

An AI or a human scientist could use sophisticated models of the universe to unlock many of the universe's secrets.  Gravitational behavior, the characteristics of light, the Big Bang, quantum physics - all could be rigorously tested with super-sophisticated modeling.  Each model of the universe would start with a small difference.  As the time clock ran (at super-fast speed) the model's outcomes would be compared to actual data.  This process would continue until perfect models of the universe were created.
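A toy version of that modeling loop: run many variants of a model, each with a slightly different starting parameter, and keep whichever outcome best matches observed data.  The "universe" here is a trivial one-parameter growth model, purely for illustration.

```python
def run_universe(growth_rate, steps=10):
    """Play a candidate model forward and return its final state."""
    state = 1.0
    for _ in range(steps):
        state *= growth_rate
    return state

# Stand-in for actual measurements of the real universe.
OBSERVED = run_universe(1.07)

def best_model(candidates):
    """Compare each variant's outcome to observation; keep the closest."""
    return min(candidates, key=lambda g: abs(run_universe(g) - OBSERVED))

# Sweep models with slightly different starting parameters: 1.00 .. 1.14
candidates = [1.00 + 0.01 * i for i in range(15)]
print(round(best_model(candidates), 2))  # 1.07
```

Scaled up from one parameter to the full laws of physics, this compare-and-refine loop is exactly the process described above, and the reason a simulator might run the same universe millions of times.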

If our universe is a simulation, then apparently this model is using the same block of code for the generation of every electron.  Perhaps randomizing the characteristics of electrons was an unnecessary use of computational capacity (understandable considering there are possibly 10 to the 80th power electrons in the universe).  Perhaps the model only works if all electrons are the same.

In any case, the perfect similarity of every electron in the universe could be evidence that the universe is, in fact, a simulation.