View Full Version : Comment: A.I.


SwitchBlade
16-06-2003, 03:02 PM
It's been a while since SwitchBlade posted a comment here, mostly because they've been appearing in Your Symbian (http://www.yoursymbian.com) *subscribe now*. So it was felt that the time was right for another. With all the recent goings-on, the release of The Matrix and Terminator 3 coming soon, it is time for a look at A.I.

A.I., or Artificial Intelligence to give it its full name, is an idea stretching back to Asimov, the main thrust being to create a machine that can ideally think in the same way as a human. In films this usually culminates in the A.I. disagreeing with its human "masters": Skynet in Terminator deciding to eradicate all human life, the machines in the Matrix rising up after years of slavery to steal man's dominance and make him their slave, or, in the case of the anime Bubblegum Crisis, flaws in the A.I. being exploited to control and manipulate the machines towards someone's evil goal.

The conceit in most film interpretations of A.I. is that an almost perfect replica of human intelligence is created, and the machine either lacks a conscience, perceives a threat from us, or just gets annoyed at being used as a slave. In the short term, or at least the foreseeable future, this type of A.I. is not going to be easily created. In creating an A.I. you give the machine rules, and these rules create choices: as the machine encounters a problem, it has to choose an option based on the rules it has been given. We lack the ability to create intelligence with a broad enough spectrum of rules and choices for a machine to turn round and decide we are worth killing. But despite the lack of advanced A.I. programming, A.I. does exist in the world around us.

A.I. vacuum cleaners exist, and lawn mowers built on similar principles. The vacuum cleaner moves around the room and uses its A.I. to avoid obstacles such as walls and furniture; the lawn mowers use theirs to avoid plants not meant to be cut down. The machine's life is based on rules: the vacuum cleaner, for example, knows its world is flat and that anything rising above floor height must be cleaned around. Now imagine that someone, for a joke, places the vacuum cleaner in a hall above some stairs. As it goes about its business, the vacuum cleaner will never have encountered stairs before, and being programmed for a small room it will have no rule covering the choice to make here. Since the stair is not an obstacle, the obvious choice is for the cleaner to carry on cleaning and end up in a pile at the bottom of the stairs. To make the cleaner work around stairs, the programming will need to be more advanced so that the A.I. has more options when it encounters that situation. It will not have the intelligence to learn from its mistake the first time and alter its own rules, as it will invariably not realise it did anything wrong: the choice it made made sense within its programming.
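
To put that in concrete terms, here is a rough Python sketch of the kind of rule table such a cleaner might run on. The rule names and structure are invented purely for illustration, not taken from any real product:

# Hypothetical rule table for the cleaner described above.
RULES = {
    "wall": "turn",          # anything rising above floor height: clean around it
    "furniture": "turn",
    "open_floor": "forward", # the cleaner "knows" its world is flat
}

def choose_action(sensed):
    # A stair drop was never anticipated, so it falls through to the
    # default: a drop is not an obstacle, so the cleaner carries on.
    return RULES.get(sensed, "forward")

for situation in ["open_floor", "wall", "stair_drop"]:
    print(situation, "->", choose_action(situation))

The last line prints "stair_drop -> forward": nothing in the rules stops the cleaner, and it ends up at the bottom of the stairs, exactly as described.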

Creating an A.I. is like bringing up a child: you teach the child the rules of life, such as that stepping out in front of a bus is dangerous, that flames are hot, and to stop it or they'll go blind. The A.I. needs teaching all the rules that will govern its existence, and to save time and money only the necessary rules will be given to it; learning for itself would involve a very extensive set of rules, so that the A.I. knew what it was attempting to do and could decide whether it succeeded or not. A.I. can only be used in narrow sections, like the doctor program where you enter your symptoms and the A.I. checks its extensive record of illnesses for the one that best matches the criteria given to it, in a similar manner to a real doctor, although the A.I. can only diagnose from basic symptoms, where a real doctor would be able to use various senses and make diagnoses from his/her experience.
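
A toy version of that doctor program might look something like this in Python; the illnesses, symptoms and scoring are all invented for the sake of the example:

# Match reported symptoms against a small record of illnesses.
ILLNESSES = {
    "common cold": {"cough", "sneezing", "sore throat"},
    "flu":         {"fever", "cough", "aches", "fatigue"},
    "hay fever":   {"sneezing", "itchy eyes"},
}

def diagnose(symptoms):
    # Pick the illness whose recorded symptoms best overlap the input.
    return max(ILLNESSES, key=lambda name: len(ILLNESSES[name] & symptoms))

print(diagnose({"fever", "cough", "fatigue"}))  # prints: flu

It "diagnoses" by counting overlapping symptoms, which is all its rules allow; a real doctor brings senses and experience that no rule table contains.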

While A.I. is something we encounter every day in one form or another, it is a long way from becoming something that will attempt to take over the world on a whim. Correction: it isn't that far away, but that relies on the people who might decide they want to take over the world not setting up an A.I. to do it for them. A.I. can help make the world a better place for people; it can also make it a worse place, depending on who designs and creates it. Creating the ideal A.I. is a long way off, and when/if that does happen, hopefully people will look at the project *very* carefully and not make the mistakes seen in films where the A.I. is too "free", so that its intelligence is bounded and certain rules are maintained to protect us, and the A.I., from itself.

Probably the worst comment I've written so far, but hell, I'm just killing time until I'm off to work. Let me know what you think, make some comments.

TANKERx
16-06-2003, 06:08 PM
Interesting comments there. My response isn't a knock, just some things that came to my mind as I was reading.

My view on AI is that it should be best described as Artificial Application of Rules.

For me, true intelligence is our ability to spontaneously burst into the colourful creativity that wells up from a living soul. As a man of faith, I see the part played by the soul as that which will be impossible for man to replicate.

The corrupt world in which we live can so break a person that he/she may want to terminate their own existence. Could a machine ever be brought to a point where it terminates its own power supply because it cannot calculate its own worth? Sure, this is a terrible place to be when a person feels that this is the only way out, but it is also a human place to be.

Or could a machine know the sheer exhilaration of standing in a shower of heavy rain, the beauty of a mountaintop view, the joy of interacting with another machine/person, or even the pleasure of your dog meeting you, wagging his tail, when you come home from work?

I think that we could program a computer along the lines of defining a variable for (for example) joy and saying: IF GIRL_SAID_YES=TRUE THEN Joy=Joy+500 ELSE Joy=Joy-1000, but that would be what someone has told the machine to do.
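
That one-liner translates almost directly into runnable Python; GIRL_SAID_YES and the magic numbers are his invented example, not anything from a real system:

# TANKERx's pseudocode, as runnable Python.
joy = 0
girl_said_yes = True

if girl_said_yes:
    joy += 500
else:
    joy -= 1000

print(joy)  # 500, but only because someone told the machine to add it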

There's an intelligence that makes me give money to a charity that feeds starving people, but it's a different part of me that hurts for them. The former can be replicated in a machine, but I don't think the latter can. It would be logical for a machine to give something in order to help another, but where is the logic in feeling bad because another is suffering?

If I have nothing to give, I can still feel bad for a suffering person, but to do so would not be logical for a machine. Either it can help, or it can't - end of story.

When a lover is awaiting a telephone call from the person they love, they don't just run a code that says WAIT UNTIL PHONE_RINGS=True. There's a whole lot of other, illogical stuff that goes on under the hood. Stuff over which the person has absolutely no control.

I think that the intelligent vacuum cleaner may become so intelligent it will calculate that, because Eric has walked through the house with dirty shoes, the floor needs cleaning again. But its soul and capability to actually live will always be a mere attribute we create in our imaginations when we say "this blasted computer has decided to crash again!".

Just my thoughts.

:D

LAuRA
17-06-2003, 06:06 AM
Forgive me for intruding, but this discussion made me want to add some views of mine. So TANKERx is basically saying that if spontaneous feelings could be reproduced, then we could talk about 'real' A.I.

Even as we speak, scientists around the world are working their butts off to figure out the secret of human emotions. Some things are known already: we know some of the brain structures that are involved, and we know some of the chemical substances - transmitters - that are involved. We can even manipulate them a bit. But really figuring out how the whole system works is still very far off. The scientist in me would like to believe that eventually we will have it all figured out. Not in my lifetime, maybe not even during the next generation, but eventually. And the logical next step then would be to reproduce it in an artificial form.

The romantic in me, on the other hand, believes that there always remains something that we cannot comprehend. That there is always something 'else', that the whole is more than just its parts. What this 'else' is has different meanings for different people. Some think it comes from the lineup of the stars at the moment we're born; some think it is written in our names. Some believe in other things.

Similarly, the romantic in me would like to believe that when I spell out my name, or lay down cards, someone can tell me who I am, what my weaknesses are, my strengths (ref. some recent waffle). It would be comforting to believe that someone is there to give instant solutions, instant help, that all the answers are right there. The scientist in me thinks it's all a sham, that there must be a logical explanation for all of the coincidences that sound true. Where this dilemma gets me in the matter of religion, I will not get into.

Instead, going back to the original theme: if we eventually were able to figure out how our neural circuits work to produce spontaneous thoughts and emotions, we would then be able to reproduce the whole thing in an artefact. The only difference then between human and A.I. would be the fact that humans evolved by themselves and A.I.s were produced by us. Correct? But if life is sci-fi and we all live in a Matrix, how do we know? Maybe we were just wired to think that we evolved by ourselves? ;)

Ewan
17-06-2003, 08:19 AM
True A.I. (by that I mean something indistinguishable from a human) will happen, for this simple reason.

A brain works by an electrical impulse either jumping or not jumping across a gap. This is a single neuron, and so its state is either on or off.

A computer works by an electrical impulse either being stored or not, so its state is either on or off.

Spot the similarity?

So at the root of the problem, the brain and the computer (while in different media) work the same way - so there should be nothing to stop us, given time and the will to do so.
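
Ewan's on/off analogy is in fact the classic McCulloch-Pitts neuron model. Here is a minimal Python sketch of it; the weights and threshold are arbitrary example values:

def neuron(inputs, weights, threshold):
    # The "impulse" either jumps the gap (1) or it doesn't (0).
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two inputs; both must fire for the output to fire (a logical AND).
print(neuron([1, 1], [0.6, 0.6], threshold=1.0))  # 1 (fires)
print(neuron([1, 0], [0.6, 0.6], threshold=1.0))  # 0 (stays off)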

Dazler
17-06-2003, 11:04 AM
I had a massive post, but the fucking editor deleted it ...
too bad...

SwitchBlade
17-06-2003, 02:28 PM
While I understand the technological similarities Ewan points out make man and machine similar, the underlying problem is that I can't see how you could design an A.I. with problem-solving skills beyond following certain rules. Then there's creativity and random stimulation: while I understand that our souls are just stored in the electrical makeup of the brain, I don't see how you could transfer something so complex without making a carbon copy of a human brain.

Ewan
17-06-2003, 03:16 PM
Carbon has a valency of 4. So does silicon - you COULD just build a replica neuron in software (and allow the Brain OS to build new neurons as required), hook it to a whole load of I/O ports and stick it in a stuffed teddy bear.

Treat it like a baby and see what happens - it works for babies!

(Wasn't some mad Brit scientist doing something like this?)

LAuRA
17-06-2003, 06:54 PM
In a sense Ewan is right (says the scientist in me). There are similarities between the human brain and advanced computer technology. If you do a search using words like 'learning neural networks' or 'self-organizing maps' you will find a huge number of links to research where artificial networks have been created so that they 'learn' unsupervised, spontaneously. I'm not too familiar with those projects, but I know they exist, and maybe eventually they will figure everything out.

In the meanwhile, the scientist in me also points out that the human brain is far more complicated than just a bunch of single neurons being on or off. There are actually neurons that do not follow the all-or-nothing principle, that interact with each other in a more subtle way. There are inhibitory as well as excitatory connections, there are numerous transmitter substances that can have either an inhibitory or an excitatory effect on the synapses, and there are myriads of combinations of all the relevant factors at play in any given moment. Everything happens in parallel and in a dense network. So the task of figuring everything out is unbelievably huge. But impossible? Perhaps not...
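
That difference is easy to see in code. Here is a minimal Python sketch of the subtler kind of neuron: a graded output rather than all-or-nothing, with one excitatory (positive weight) and one inhibitory (negative weight) connection. All numbers are invented for illustration:

import math

def graded_neuron(inputs, weights):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))  # smooth response between 0 and 1

print(graded_neuron([1.0, 0.0], [2.0, -3.0]))  # ~0.88 (excitation only)
print(graded_neuron([1.0, 1.0], [2.0, -3.0]))  # ~0.27 (inhibition kicks in)

Unlike the on/off version, the output here varies continuously with its inputs, which is closer to (though still vastly simpler than) what real synapses do.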

Unless (says the romantic in me) there is something else, something that cannot be dissected from us or reproduced...

janafunk
18-06-2003, 04:57 PM
Hi

I have always been intrigued by this subject, and during my time at university (1996 to 2000) I tried to get as much info on it as I could. But I was never satisfied with the level of work being done... they are still using ancient programming languages for AI, and you talk about self-learning maps etc... IMHO they are still wasting time.

I know that there are a lot of similarities between our brain and binary 1s and 0s, but the biggest problem in mimicking the brain is the fact that it has had millions of years to evolve to this stage, and for a computer brain to do the same could also take a very long time. Plus the fact that computers, as far as I know, do not have the brain's ability to carry out massively parallel processing across billions of neurons simultaneously.

So you can see where the limitations are...

But I do hope that in my lifetime AI brains become a reality.

Don't get me wrong, there are so-called knowledge-based systems which mimic thinking, but only on a superficial level, e.g. a doctor-diagnosis system. But this is nowhere near a fully functioning AI brain.

That's all I can say for now... got to get home!!

TANKERx
18-06-2003, 05:19 PM
If AI does become a reality (I had a lecturer at university who believed that when it becomes available, computers could be converted to religion and go to heaven), should computers have 'human' rights?

If they become human enough, will they inevitably inherit the selfishness and hatred that blights our species?

Will we have a Second Renaissance, or would we be wise to respect this new 'life form', if indeed we are to recognise it as such?

LAuRA
18-06-2003, 07:06 PM
The cynic in me says that there are no worries. The full accomplishment of 'real' A.I. would cost so much and take so much effort that the project would be terminated before it got to the final stages. It would be financed up to the level where it helps produce spare parts for humans and maybe assists in routine tasks (as it already does), but anything beyond that... "sorry, but we don't have the necessary funds for it"! So the question of whether those creatures would end up in heaven or not would not be relevant.

But the question of which emotions they would inherit and which not is intriguing. The obvious answer would of course be that if the machines were created by humans, there would be no way of avoiding the human weaknesses. So far we have not been able to control the less desirable emotions within ourselves. We would first have to figure out how to inhibit, say, selfishness within ourselves, and only after that could it be controlled in artificial life forms. And I don't even mean in practice, I mean theoretically. That is difficult enough. Because I believe selfishness often stems from another source - the biological need for reproduction or something...

But hey, I just realized something: the A.I.s would probably not have the reproductive tendencies of us humans, right? They would not make babies? If they did not have the reproductive urges (which have been one of the fundamental things in us humans for tens of thousands of years), maybe the emotions they would create for themselves (from the neural networks that were built by us humans) would indeed be free from hatred and selfishness?