So, artificial intelligence. Theoretically, it is possible to program a computer with such a depth of complexity that it can learn, has functioning morals, and can react to entirely new situations in a logical way based on past experience. Obviously, this method is rather impractical. I'm more partial to the natural selection method.
As we all know, every old computer has its own “personality.”
Old Man: “You want a printout? Give that old CPU a good swift kick!”
This “personality” results from every program installed, every action taken, every disk defragmented, and every virus downloaded.
Take two completely identical computers fresh out of the box. Assume an ideal world: both are exactly the same in hardware, with no factory defects or abnormalities. Install the same program on both. Now, the first difference: computer #1 gets defragmented after installation, whereas computer #2 does not. Next, both computers visit the same website via the same ISP, and both contract a virus set to corrupt and destroy a single specific memory block. #1's memory block is destroyed, but #2's is not: without defragmentation, that block had shifted position and never returned to its original location. #1 downloads an update that erases the virus, but the damage is done. #2 never downloads the update. #1 does, however, download a software patch to fix damage done by excessive disk fragmentation. The patch introduces a single discrepancy in the OS binary, which over the course of the next two years cascades into a series of minor binary changes.
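The scenario above can be sketched as a toy simulation. Nothing here models a real disk or virus; the block counts, seeds, and the "cascade" rule are all invented for illustration. The only differing input between the two runs is the defragmentation step, yet the final states diverge:

```python
import random

BLOCKS = 64  # size of the toy "disk"

def simulate(defragmented, years=2, seed=42):
    """Run an identical event history on a toy computer; the only
    input difference is whether the disk was defragmented."""
    rng = random.Random(seed)           # same "website/ISP" events for both
    disk = list(range(BLOCKS))          # defragmented layout: block i holds content i
    if not defragmented:
        random.Random(seed + 1).shuffle(disk)  # fragmentation scrambles the layout
    # The virus targets one specific physical block, whatever now sits there.
    target = rng.randrange(BLOCKS)
    disk[target] = -1                   # corrupt that block
    # Cascade: each "month", if a randomly chosen block has ever been
    # disturbed, it perturbs a neighboring block slightly.
    for _ in range(years * 12):
        spot = rng.randrange(BLOCKS)
        if disk[spot] != spot:
            disk[(spot + 1) % BLOCKS] ^= 1
    return disk

a = simulate(defragmented=True)
b = simulate(defragmented=False)
diff = sum(x != y for x, y in zip(a, b))
print(f"blocks that differ after two years: {diff}")
```

Same program, same virus, same update schedule; one early difference in layout, and the histories never reconverge.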
Already, we have two VERY different computer systems, different both from each other and from the original. Each one is unique. Even erasing the drive wouldn't completely rid the system of traces of the original. Here is where it gets odd.
DNA mutates. A lot. Most of these mutations/transcription errors are destroyed or corrected, but every now and then one slips through. Mostly these mutations are harmless, but occasionally one comes along that is either harmful or beneficial. Harmful mutations, like fatal transcription errors in a computer's code, can kill the organism off. Beneficial mutations can increase the organism's effectiveness, and the analogous errors can do the same for a computer.
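The keep-the-beneficial, discard-the-harmful logic can be sketched as a toy selection loop. Everything here is invented for illustration: the bit-string "genome" and the fitness function (just a count of 1-bits) are stand-ins, not claims about real genetics or real software:

```python
import random

def mutate_and_select(genome, generations=200, seed=1):
    """Toy mutation model: random one-character 'transcription errors'.
    Harmful errors (those that lower fitness) are discarded, as if the
    organism died; neutral or beneficial ones are kept."""
    rng = random.Random(seed)
    fitness = lambda g: g.count("1")  # arbitrary stand-in for "effectiveness"
    for _ in range(generations):
        i = rng.randrange(len(genome))
        mutant = genome[:i] + rng.choice("01") + genome[i + 1:]
        if fitness(mutant) >= fitness(genome):  # selection step
            genome = mutant
    return genome

start = "0000000000"
end = mutate_and_select(start)
print(start, "->", end)
```

Because harmful changes never survive the selection step, fitness can only ratchet upward, which is the whole mechanism the analogy leans on.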
Theoretically, enough beneficial "mutations" could eventually occur in a computer system to create, at the least, a crude form of "artificial intelligence," possessed of its own unique "personality." I'm not saying it's practical, just theoretically possible.