Friday, October 31, 2008

Systemic Stability dressed in Statistical Sensationalism

While playing Fallout 3, I found myself marveling once again that we have not totally annihilated ourselves as a species. In fact, I am still amazed that nothing came of the Cold War. The fact that a nuclear attack on any country would result in an equally devastating counterstrike seems like an easily dismissed concern in the eyes of a psychopath with determination, no sense of self-preservation, and the right connections - or so mass fiction would seem to hint.

But it hasn't happened yet. I live in what is undoubtedly the most hated country in the world, and the largest aggression-based atrocity we have had to weather lately involved a four-digit death toll at the receiving end of some planes, or possibly some explosives, because those steel beams in the wreckage looked shopped and I've seen quite a few pixels in my day.

A coworker presented me with an interesting probabilistic thought experiment in the field of reliability. Let's say you have a somewhat complex system of 100 parts, and any given part is 95% reliable (it will work appropriately 95% of the time and fail the other 5%). Let's say this is a touchy system: if one component fails, the entire system fails. The probability of the system working is 0.95 raised to the 100th power, which is about 0.00592, a little over half a percent. There is over a 99% chance that such a system would fail. Though each part is almost completely reliable, the fact that each part is necessary for success makes the system holistically unreliable. If we increase the reliability of every component to 99%, there is still only about a 37% chance the system would work. We need to ensure about a 99.3% reliability rate on each component just to get the odds of a coin flip.
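For anyone who wants to check my arithmetic, it fits in a few lines of Python (the function name is just my own shorthand):

# a series system: every one of the n parts must work for the system to work
def series_reliability(part_reliability, n):
    return part_reliability ** n

print(series_reliability(0.95, 100))   # ~0.00592: a little over half a percent
print(series_reliability(0.99, 100))   # ~0.366: about a 37% chance of working
print(0.5 ** (1.0 / 100))              # ~0.99309: per-part reliability for coin-flip odds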

There are approximately 6.7 billion people on this planet. If we consider each person to be a component of the system we call humanity (or, if we want to be a little less egocentric, the system we call planet Earth), I wonder what we can estimate our component reliability to be. If there were "only" a 5% chance, per person, of that person launching a nuke and instigating our final hours (95% chance against), the odds of us surviving another day are too low for my Windows calculator to display. The same can be said if the odds of pacifism are 99% per person. Same for 99.99%.

Assuming we can't have 100% odds against a nuclear holocaust, what percentage would you feel comfortable with? Would you feel comfortable if the odds of such an Armageddon happening tomorrow were only 1%? Let's see what level of component reliability we would need to reach that goal. If the odds of every human being behaving, and not finding some way to trick a few nations into playing fallout volleyball, were 99.999999% per person, would you feel safe? I wouldn't, as 99.999999% raised to the 6.7 billionth power is about 7.98e-30, or 1 in 125,236,359,038,394,908,283,678,232,890. Playing poker with a new hand dealt every minute, you're about as likely to be dealt a royal flush (a 1-in-649,740 deal) on every single hand for five straight minutes as you are to avoid witnessing a nuclear winter tomorrow, if that's the only per-person reliability we can expect.
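If your calculator underflows like mine did, logarithms let you push the same numbers around without ever holding the vanishingly small result. A rough sketch in Python (same population guess and rates as above; the comments are rounded):

import math

N = 6.7e9                  # rough world population
p = 0.99999999             # assumed per-person pacifism rate

log10_survival = N * math.log10(p)   # log10 of p^N, computed without underflow
print(log10_survival)                # ~ -29.1, i.e. about 8e-30

# each royal flush is a 1-in-649,740 deal; how long a streak is equally unlikely?
print(log10_survival / math.log10(1.0 / 649740))   # ~5 hands in a row

# a per-person rate of 99.99999999% instead (two more 9s) gets us near a coin flip
print(10 ** (N * math.log10(0.9999999999)))        # ~0.51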

Tacking on another couple of 9s, giving us a per-person reliability of 99.99999999%, puts us at roughly the halfway mark. Heads and you can sigh in relief; tails and you should be signing up for lodging in a vault, preparing to drink gallons of Nuka-Cola to replenish health with relatively low radiation poisoning, and practicing your melee skills, since post-Armageddon low-level characters have to be so close to raiders and mutants to use their guns effectively that they might as well take the extra damage that a readily available sledgehammer offers. Also, only music from the 1930s will survive the explosions, and don't expect anyone to find time over the next two centuries to write new songs.

If we want to reach our original goal of a 99% chance of avoiding a nuclear war tomorrow, we have to up the odds in our favor slightly more - to about a 99.99999999985% pacifism rate per person. That's quite a high component reliability that we must maintain for our system to work, isn't it?

But wait, there's more!


The above figures will only get us through the next day, remember? I don't know about you, but I would like the world as we know it to last a little bit longer, at least another year. If we assume that a 99.99999999985% pacifism rate per person guarantees us a 99% chance of surviving the next 24 hours, we can calculate that the odds of surviving another week are about 93.2% (99% raised to the 7th power). Still pretty favorable, though perhaps not as favorable as we would like, given that we are talking about the largest death toll of our species in, like, ever. But raise that 99% to the 365th power (366th if it's a leap year, but we'll aim a little lower for now) and the odds of the human race lasting the next year become 2.5%. Let's spoil ourselves a little and aim for successfully surviving half a century of such risk; that puts us at the abysmally low survival odds of about 2.2e-80, even ignoring an inevitably increasing population (to be fair, I have been including infants in the 6.7 billion population estimate - the fun of sensationalism!). If we can assume an even stricter 99.99999999999999% cooperation rate per person, then we can almost guarantee a 99% chance of surviving the next half a century (assuming we make it through 2012). We've already made it over half a century since Hiroshima, so perhaps we are getting close with our presumed per-person cooperation rate.
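The compounding works the same way. A minimal sketch, again in Python (math.log1p just computes log(1 + x) accurately when x is tiny; the function name is mine):

import math

N = 6.7e9   # rough world population, infants included

def daily_survival(per_person_failure):
    # chance that all N people behave for one day; log1p keeps the tiny
    # per-person failure rate from rounding away against the 1
    return math.exp(N * math.log1p(-per_person_failure))

print(1 - 0.99 ** (1.0 / N))               # ~1.5e-12: per-person failure rate for a 99% day
p_day = daily_survival(1.5e-12)            # ~0.99
print(p_day ** 7)                          # ~0.93: one week
print(p_day ** 365)                        # ~0.025: one year
print(p_day ** (365 * 50))                 # ~2.2e-80: fifty years
print(daily_survival(1e-16) ** (365 * 50)) # ~0.988: the stricter sixteen-nines rate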

Voodoo math aside, it just seems amazing to me that no properly motivated terrorist or corrupt/bored government has decided to end it all for our species. Perhaps we are a more tranquil species than I thought?

Friday, October 24, 2008

The Cost and Benefit of Introspection

One of the interests that I have developed over the years is understanding the differences and similarities between computers and the human mind. Finding many flaws in modern programming, and being too stubborn to admit that they may just be flaws in my own habits, I have steered this interest toward creating a development environment that is more “natural” to use. I have been operating under the assumption that it will be easier to change the near entirety of modern programming as we know it than to change my own instincts. I've never worked so hard to maintain my laziness.

As the years have passed and I find myself forced into more mundane programming tasks through college projects and work assignments, I find myself wondering if I am not chasing yet another pipe dream. The ideals that I chase include elevating most programmers beyond viewing their code as a set of files and directories that turn text into arbitrary memory addresses, yet I myself keep stumbling into other abstractions, equally arbitrary and unnatural. The fact that I can't even define what I mean here by “natural” and “unnatural” is perhaps the cementing alarm that what I seek is unattainable. My habit of chasing the unattainable leaves me yet fruitless in my side-project endeavors — but man have I been having fun these past few months just trying to revolutionize programming, even if it is an impossible goal.

The Dealio


I called this project “Rational Thoughts” when I first devised it, as my goal was to create a development environment (back then a mere language) that was more “Rational”, closer to how the human mind perceives things, and thus hypothetically easier to work with. This was a mistaken direction. Human beings are, by and large, hardly “rational” thinkers; we are actually quite emotional*. Even at my most philosophical, when the voices in my head are reminding me that I can't prove my own volition and that there is no faith-independent evidence for a god or an afterlife, there is an even louder voice making me feel sorrow, and a yearning for when I was younger and kept happy by my own naivety about such matters. Likewise, it is perceived happiness that drives us to flourish intellectually, more so than it is our intellect that drives us to be happy. Modern computers, and thus I argue the programming languages designed to interface with them, are already much more rational than we have ever been, and I sure as hell don't want to try to implement emotions in my programming language**.

In case it has not been made obvious, this is not an admission of defeat. This is an attempt to organize my thoughts and get some feedback on a project that has become so large I cannot perceive its entirety at any given moment. Specks of it drift past my eyes, goals from the past, yet I can no longer keep track of which of these goals are compatible and which are mutually exclusive. I have a particular goal in mind, which I will present soon. But first, I want to talk about introspection.

* ever find yourself cursing at your compiler for generating a compile error, as if your unrestrained outburst would intimidate it into compiling?
** I am perhaps operating under another false assumption, that rational thinking and emotional thinking are somehow opposite each other. Perhaps a topic worthy of another article? Until then, or until it comes up in a comment, I shall continue operating under that assumption… <_< >_>

Introspection


It has been my greatest tool in designing this project. How can I fathom something that operates analogously to the human mind if I cannot fathom the human mind? So I desire to study the human mind, and it just so happens that I have one with me at all times.

The first problem with introspection is the observer effect. By using mental processing to consciously look at your — uh — mental processing, you are limiting the amount of mental resources you have left to do the observed processing, similar to the framerate drop you get when debugging a game. It is because of this phenomenon that I have started to wonder about the direction I am heading. I was once a person who believed that introspection, or doing more things consciously and fewer things subconsciously (or at least being more aware of what the subconscious is doing), was the path to “enlightenment” (a clichéd word, but I can't think of a better one) and free will. Now I am beginning to think that many things are done subconsciously for optimization reasons, as if the subconscious were running assembly commands over the low-level bus that drives our processors. If we started taking more and more mental processes out of our subconscious and doing them “ourselves”, wouldn't we slow down, the same way an assembly program rewritten in unoptimized Java does?

The second problem with introspection is avoiding a corrupt memory state, which I believe we humans are still calling “insanity”. If you are not used to it, constantly asking questions such as “why am I straight instead of gay?” or “would I be as evil as Hitler if I had been born into his exact world state?” can cause discomfort. In my experience, doing this enough causes less discomfort over time, via emotional “calluses” that form in your brain, which I can only assume operate by allowing the “rational” part of your mind to continue processing ego-threatening thoughts while the “emotional” guy sits on the sidelines believing that love is about romance and not hormones, or whatever it is that keeps him happy***. I believe that taking this process too far will result in a psychopathic mindset, so handle with care.

It is my obsession with introspection that gives me guidance when working on Rational Thoughts, but it is also what holds me back at times. I desire a programming environment where any question the user asks may be answered easily: “What was the value of this variable seventeen frames ago?” “Why is this variable negative here?” “What will the value of this thing be in two years?” Of course, these questions can be answered through multiple invocations of a program and debugging, but I want an environment that is able to look into its own “mind” and help out on a level closer to how the human mind views things. A development environment that is not so much rational as it is “clever”. But after so many years, I am beginning to wonder if I am any closer to understanding how my own mind works. After all, mightn't it take more processing than a mind has to understand itself, an inescapable paradoxical property of intelligence?
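To make the first of those questions concrete: imagine every variable quietly keeping its own history, so that “seventeen frames ago” is just an index. A toy sketch in Python (the class and its methods are entirely made up for illustration; nothing like this exists in Rational Thoughts yet):

# a toy "introspective" variable: it remembers every value it has held,
# so "what was this seventeen frames ago?" becomes a simple lookup
class Recorded(object):
    def __init__(self, value):
        self.history = [value]

    def set(self, value):
        self.history.append(value)

    def get(self):
        return self.history[-1]

    def frames_ago(self, n):
        return self.history[-1 - n]

health = Recorded(100)
for frame in range(30):
    health.set(health.get() - 2)   # lose 2 health per frame

print(health.get())                # 40: the current value
print(health.frames_ago(17))       # 74: the value seventeen frames ago

The real trick, of course, is doing that for every variable without the observer effect eating all your framerate.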

*** I like that guy; he comes up with witty asides to lighten blog posts that reek of technicality and emo-philosophical hogwash