Artificial Intelligence

Reading Dr. Michio Kaku’s book Physics of the Future, I thought a bit about how we evolved and how AI could do so much better.

Imagine a central Knowledge Database (which I’ll call KDb for short), and a type of robot that is preprogrammed to make a new model every week. During the first iteration, this KDb holds the laws of physics and basic information about the world, and these robots would be programmed in such a way that anything endangering their integrity would be added to the KDb.

For example, a first-iteration model would bump into a wall, damaging some circuitry under the metal panel of its arm. During the following night, it would wirelessly upload details of the event to the KDb (location, force of the hit, material of the wall, etc.). Every week, the central computer (or motherbrain, or whatever it would be called) would gather all the events of the week and calculate how best to prevent them from happening again.
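As a minimal sketch of what this nightly upload and weekly gathering might look like, here is a toy version in Python. All the names here (DamageEvent, KDb, the field list) are my own illustrative inventions, not anything from the book:

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical shape of a damage event a robot might upload to the KDb.
@dataclass(frozen=True)
class DamageEvent:
    robot_id: str
    location: tuple   # (x, y) coordinates of the incident
    cause: str        # e.g. "collision", "acid"
    material: str     # material involved, e.g. "concrete wall"
    force: float      # measured impact force, arbitrary units

class KDb:
    """Toy central knowledge database collecting the week's events."""
    def __init__(self):
        self.events = []

    def upload(self, event: DamageEvent):
        self.events.append(event)

    def weekly_summary(self):
        # Count causes so the central computer can see which
        # failure modes dominated this iteration.
        return Counter(e.cause for e in self.events)

kdb = KDb()
kdb.upload(DamageEvent("r-001", (3, 7), "collision", "concrete wall", 42.0))
kdb.upload(DamageEvent("r-002", (1, 2), "collision", "steel door", 17.5))
print(kdb.weekly_summary())  # Counter({'collision': 2})
```

The point of the structured record is that the weekly pass can aggregate across every robot, not just learn from one body's mishaps.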

It could reason that the next iteration’s robots would need a stronger chassis, wireless connections within the body, better cameras, better algorithms to detect collisions, or better mobility. Then the robot factory would receive the recommendations and start building the next iteration. The central computer would add to the KDb the fact that walls can be dangerous when hit with a certain amount of force. The only thing preventing the central computer from considering explosives or weapons (in order to destroy the wall first) would be that the collision could be with a fellow robot.
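The step from "weekly event counts" to "build recommendations" could be sketched as a simple lookup, addressing the most frequent failure modes first. The mapping table below is entirely hypothetical, just to show the shape of the idea:

```python
# Hypothetical mapping from failure causes to design recommendations
# for the next weekly iteration (the entries are illustrative only).
RECOMMENDATIONS = {
    "collision": ["stronger chassis", "better collision-detection algorithms"],
    "acid": ["corrosion-resistant panels"],
    "fall": ["improved mobility"],
}

def plan_next_iteration(weekly_counts: dict) -> list:
    """Turn the week's cause counts into an ordered build plan."""
    plan = []
    # Address the most frequent failure modes first.
    for cause, _count in sorted(weekly_counts.items(), key=lambda kv: -kv[1]):
        plan.extend(RECOMMENDATIONS.get(cause, []))
    return plan

print(plan_next_iteration({"collision": 12, "acid": 3}))
# ['stronger chassis', 'better collision-detection algorithms',
#  'corrosion-resistant panels']
```

A real system would obviously need something far richer than a static table, but the input/output contract — events in, factory recommendations out — is the core of the loop.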

Another example: a robot gets acid on its chest. This doesn’t damage it, but its chest is marred – it sends an event to the KDb. The central computer decides that the best course of action is to add that kind of acid to the list of things a robot should avoid (maybe with a danger rating), along with what the acid looked like. Given enough knowledge about what acid and plain water each usually look like, these robots would, in a few iterations, be able to avoid acid.
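The avoid-list with danger ratings could be as simple as the following toy sketch. Again, the substances, ratings, and the threshold are made-up examples, not anything specified in the post:

```python
# Toy sketch of the central computer's avoid-list.
danger_list = {}  # substance -> {"rating": int, "appearance": str}

def record_hazard(substance: str, rating: int, appearance: str):
    """Store a hazard, keeping the highest rating ever reported."""
    current = danger_list.get(substance, {"rating": 0})
    if rating >= current["rating"]:
        danger_list[substance] = {"rating": rating, "appearance": appearance}

def should_avoid(substance: str, threshold: int = 3) -> bool:
    # A robot avoids anything rated at or above the threshold.
    entry = danger_list.get(substance)
    return entry is not None and entry["rating"] >= threshold

record_hazard("sulfuric acid", 7, "clear, viscous, fuming liquid")
record_hazard("water", 1, "clear, thin liquid")
print(should_avoid("sulfuric acid"))  # True
print(should_avoid("water"))          # False
```

Storing the appearance alongside the rating is what would let later iterations recognize the hazard visually before touching it.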

And, every month or so, robots would go back to the factory to be recycled for materials, constantly leaving the better, newer versions out in the world.


In only two years, this kind of robot would go through a hundred iterations, and with them, the fastest evolution ever. The key point here, of course, is the KDb – a collective knowledge. Add to that an incentive to constantly build better and faster computers, and there would be no stopping them.
