I think, therefore I am...

I have issues, OK!
OK, I like this thread because it got people thinking.

Well, think about this: do you think that we will one day create sentient beings?

Be they machines or a combination of machine and living tissue.

In short, will we create AI? Don't just say: 'yes' or 'no' :confused: give reasons...
 
I think that the current definitions of sentience, consciousness and intelligence are too sloppy and broad to accurately answer the question.
Is a chess-playing program intelligent? Is it sentient? Is a child or a chimp intelligent?
There are a lot of opinions regarding the first question: yes, it is, because it applies an `intelligent' algorithm and strategy to solve a problem. However, it does so in a completely un-human way, by brute-force calculation.
Is it sentient? Even here opinions diverge considerably. Since it has an internal representation of its world (the chess board), it is `aware' of objects and processes (pieces and moves) therein and it reacts to them, bringing change through its own actions. But would you say that it `wants' to win? Or `intends', `strives', `aches', `likes' to win? Would it get frustrated at losing and exult at winning?
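Just to make that concrete: the `brute force' at the heart of a chess program is nothing more than mechanical look-ahead. Here is a minimal minimax sketch in Python, demonstrated on a toy take-1-or-2 counting game rather than real chess (the game and all the names are invented purely for illustration):

Code:
# Minimal minimax: the 'brute force' behind chess programs, shown on a
# toy game where players alternately take 1 or 2 counters and whoever
# takes the last counter wins. Purely illustrative, not a chess engine.
def minimax(counters, maximising):
    if counters == 0:
        # the previous player took the last counter and won
        return -1 if maximising else 1
    scores = [minimax(counters - take, not maximising)
              for take in (1, 2) if take <= counters]
    return max(scores) if maximising else min(scores)

# From 4 counters the player to move wins with perfect play: prints 1.
print(minimax(4, maximising=True))

Notice there is no `wanting to win' anywhere in there: the program just grinds exhaustively through every continuation.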
Now look at a baby and a chimpanzee: we would surely say that they are conscious beings, even with a (still) limited form of self-consciousness. However, they (still) lack all kinds of intelligence: abstract, logical reasoning for instance, long-term planning, etc. They also lack a way of structurally expressing themselves through language or art.

So I think, first of all, that the question of intelligence and the question of sentience/awareness/consciousness should be treated separately and that, secondly, before beginning such a debate we try to define (by example) what is and what is not intelligent or conscious.

Will we one day create intelligent beings? Yes, sure, even Quake bots exhibit a limited form of intelligence within a formally defined environment.

Will we one day create sentient beings? I doubt it, I think that is a more radical, qualitative jump. However, who knows what technology will bring?
 
Cat said:
I think that the current definitions of sentience, consciousness and intelligence are too sloppy and broad to accurately answer the question.

:D

I was deliberately ambiguous with my question. Call me cruel, but I just wanted to see if anyone was paying attention :eek:

Quake avatars present an interesting basis for debate. For instance, tackling the issue of awareness first would help a great deal in getting a grip on the idea of AI.

So what if a Quake character decided that it didn't want to kill you. What if it decided it wanted to help you. What mental faculties would it need to be able to make that kind of decision?

For me, the true test of AI would be for a device [machine / automaton] to make a decision that is not based on something it already knows, but on something learned, or a decision based on a novel combination of pre-existing knowledge.

The interesting issue with AI, or indeed ourselves, is that our brains are not made up of long rule books or synaptic laws that govern us; we are instead equipped with a near-infinite number of tools to work with.

So this raises the question: is our awareness a product of our knowledge? After all, a baby has a very limited concept of self.

But is this because of their under-developed brain or because they don't 'know' anything yet?

Ah, questions within questions...
 
Right, for a start, I think that we only have a chance of creating sentience (under Cat's definition, as opposed to intelligence) if we base it on a system which approaches the level of connectivity of a neuron (brain cell). Humans have more than 100 billion neurons, and some integrate and balance up to half a million inputs (the Purkinje cell, for those interested). There are more connections in the human brain than stars in our galaxy.
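If you want to check that claim on the back of an envelope, here's the arithmetic (all figures are rough, commonly quoted estimates, not precise measurements):

Code:
# Rough orders of magnitude only; every figure is an assumed estimate.
neurons = 1e11               # ~100 billion neurons in a human brain
synapses_per_neuron = 1e4    # typical; a Purkinje cell takes ~50x more
stars_in_galaxy = 2e11       # Milky Way: roughly 100-400 billion stars

connections = neurons * synapses_per_neuron        # ~1e15 synapses
print(f"synapses ~{connections:.0e} vs stars ~{stars_in_galaxy:.0e}")
print(f"connections per star: ~{connections / stars_in_galaxy:,.0f}")

Even with conservative numbers, the synapses outnumber the stars by a factor of thousands.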
I read and hear people going on about transistor-based intelligence, how soon it will happen, and how blasé some people are about it, and it makes me kinda mad. For instance, people like Ray Kurzweil, who says:

"Within 25 years, we'll reverse-engineer the brain and go on to develop superintelligence. Extrapolating the exponential growth of computational capacity (a factor of at least 1000 per decade), we'll expand inward to the fine forces, such as strings and quarks, and outward. Assuming we could overcome the speed of light limitation, within 300 years we would saturate the whole universe with our intelligence."

(more here if you feel the need.)

It's just unrealistic. I accept you may be able to make a computer with as many transistors as there are neurons in a brain, but the interconnectivity and flexibility will not be there. Equally, if you made a computer system based on units with as much connectivity as a brain cell, you'd never get enough of them together. I'm not saying you just get to this level of connectivity and 'poof' sentience appears, but I think it will be a prerequisite for sentience, however you actually plan to achieve it.
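To put the connectivity objection in numbers (a sketch with assumed figures; the fan-in counts are rough):

Code:
# Raw unit count is not the bottleneck; per-unit connectivity (fan-in) is.
# All figures are rough assumptions for illustration.
transistor_fan_in = 3          # a transistor has ~3 terminals
average_neuron_fan_in = 1e4    # ~10,000 inputs is a typical estimate
purkinje_fan_in = 5e5          # up to ~500,000 inputs

print(f"average neuron vs transistor: {average_neuron_fan_in / transistor_fan_in:,.0f}x")
print(f"Purkinje cell vs transistor:  {purkinje_fan_in / transistor_fan_in:,.0f}x")

# Kurzweil's factor-of-1000-per-decade is about raw capacity; it says
# nothing about wiring units to each other this densely.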
As for reverse-engineering the brain in the next 25 years, don't make me laugh. I studied a lot of neuroscience in my undergrad degree and what was abundantly clear is how much we still don't know. Of all the fields I studied, it was the one where most often the professor would stop and go:
"Well....we don't really know what this bit does, erm...it looks a bit like one of these but we really don't know. If any of you kids ever work it out, come tell us, we'll have a job waiting."
Brains are not static things with fixed connections, they are dynamic organs which constantly alter themselves in response to inputs.

Sentience, to my mind, requires a hugely complex, interconnected and amazingly flexible piece of 'hardware' (or 'wetware' as some now say) to run on. I don't see computers providing this kind of environment anytime soon.

(minor rant over ;) )
 
ora said:
Right, for a start, I think that we only have a chance of creating sentience (under Cat's definition, as opposed to intelligence) if we base it on a system which approaches the level of connectivity of a neuron (brain cell)...

A rant? If that's a rant, then I'd love to hear what you consider rational.

All valid, all sensible.

Yes, it is unlikely that we will see AI sprout from silicon.

I think that we're seeing the limits of nuts 'n' bolts / silicon 'n' wire technology and to take all of our expectations to the next level, we need to start thinking about 'wetware'.

As for reverse-engineering the human brain .. err, no!

And herein lies the problem of creating something that is self-aware.

How do we create a self-aware organism if we don't fully understand the constructs and mechanics of our own mind?
 
octane said:
The interesting issue with AI, or indeed ourselves, is that our brains are not made up of long rule books or synaptic laws that govern us; we are instead equipped with a near-infinite number of tools to work with.

Don't be so optimistic or chauvinistic about brains, there's more rule-governed behaviour going on than we would like to admit ...

octane said:
As for reverse-engineering the human brain .. err, no!

Well, actually, if that were the only thing going on, I would agree with you. However, we are also forward-engineering the brain. Let me explain: imagine someone at Intel stole a prototype PPC980 from IBM, but didn't have the instruction set ... well, he would be forced to reverse-engineer all the functions that the processor is capable of, simply by poking at it and looking for regularities in the outcome. With the brain we aren't limited to that: we can also forward-engineer. We already know what we are capable of, so we can look at what `wetware' would be necessary to instantiate that kind of behaviour. Example: we know that we can see colours, so let's check what structures we would need in our eyes and brains to instantiate that. This is how we discovered the functioning of rods and cones and their underlying structures. If we had only started from them and wondered what they could do, we wouldn't have figured it out so fast.

So since we can also forward-engineer the brain, it is much easier (but still damn hard) to find brain structures and the related mental functions.

I agree with Ora that we would need enormously complex machines to even simulate human thought, but I think that it is not hard, and is even being done currently, to simulate and maybe even instantiate lower types of consciousness in software. All kinds of experiments with artificial neural networks have shown a lot of emergent behaviour that, with more complex machines, might make some elementary form of awareness possible in the future.
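For a taste of the kind of emergent behaviour I mean, here is a toy Hopfield-style associative memory in Python (a classic of the neural-network literature; the pattern and sizes are made up for illustration):

Code:
import numpy as np

# Toy Hopfield network: store one pattern, then recover it from a
# corrupted version. The 'memory' emerges from simple local rules.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian learning: strengthen weights between co-active units.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# Corrupt two units, then let the network settle.
probe = pattern.copy()
probe[0] *= -1
probe[3] *= -1
for _ in range(5):                    # a few synchronous update sweeps
    probe = np.sign(W @ probe).astype(int)

print("recovered original:", np.array_equal(probe, pattern))   # True

Nobody programmed the recovery in anywhere: it falls out of the connectivity.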

Octane: what you in fact require from the bots is not just independent action but individual action. This would require the bot to recognise itself as an independent entity, with its own wishes, preferences, even feelings, and to act "selfishly" on them. It would require full-fledged free will, with all the problems attached. I think this is too high a requirement, but I understand your position. I think we can already call their behaviour quite intelligent. It's not that they fight like girls playing soccer ("Look! A ball! Let's go and kick it all together! Run!" :D ), but they employ co-operative tactics, can be pro-active, consider alternatives (rush and attack, or heal first) etc. They just make these decisions on a lower level than we do: based on blind rules. Still, this can give rise to the same level of intelligence that e.g. a fish or amphibian has.
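Those `blind rules' are easy to make concrete, by the way. A bot's heal-first-or-rush `decision' is typically just an ordered rule list, something like this sketch (names and thresholds invented, not actual Quake code):

Code:
# A 'blind rules' bot brain: the first matching rule wins. No wanting,
# no striving, just a priority list. All names and thresholds invented.
def decide(health, ammo, enemy_visible, medkit_nearby):
    if health < 30 and medkit_nearby:
        return "heal first"
    if enemy_visible and ammo > 10:
        return "rush and attack"
    if enemy_visible:
        return "retreat and regroup"
    return "patrol"

print(decide(health=25, ammo=50, enemy_visible=True, medkit_nearby=True))
# -> 'heal first': it looks tactical, but it is pure rule-matching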
 
ora said:
Humans have more than 100 billion neurons, and some integrate and balance up to half a million inputs (the Purkinje cell for those interested). There are more connections in the human brain than stars in our galaxy.

I read and hear people going on about transistor-based intelligence, how soon it will happen, and how blasé some people are about it, and it makes me kinda mad...

That said, I was reading the UK edition of Macworld, which mentioned a theory among a number of scientists [none named in the article, unfortunately] that the internet could become such an animal.

So although we have been quite disparaging about AI-on-silicon, the internet does add a very different dimension to the issue -- specifically, the number of interconnects required.

The internet holds a vast repository of information and data [the two are not the same] and all of it is machine-accessible.

Now I know we're not likely to see SkyNet rise to ascendance by means of an insurrection mounted against the human race, but with this wealth of knowledge, there's certainly a rich brew of key ingredients readily available.

Also, with technologies like XML, gaining access to this 'knowledge' becomes more meaningful...
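As a small illustration: the same 'knowledge' written as prose would need heavy parsing, but marked up in XML, a machine can query it directly (the snippet and tag names below are invented):

Code:
import xml.etree.ElementTree as ET

# An invented mini-document of machine-readable 'facts'.
doc = ET.fromstring("""
<facts>
  <fact subject="brain" relation="contains" object="neurons" count="100e9"/>
  <fact subject="neuron" relation="connects-to" object="neurons" count="10e3"/>
</facts>
""")

for fact in doc.findall("fact"):
    print(fact.get("subject"), fact.get("relation"),
          fact.get("object"), "~" + fact.get("count"))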
 
Cat said:
Don't be so optimistic or chauvinistic about brains, there's more rule-governed behaviour going on than we would like to admit ...

I was taking shortcuts, but the 'rules' are more like boundaries and limits.

The 'fight or flight' reaction is a good example. That may well be as limited as the options you're provided with ever get.

Those other physical reactions that provide you with no choice at all are involuntary and almost purely instinctive.

Cat said:
... We already know what we are capable of...

I'll let that pass. I know what you mean, but there are an inordinate number of things we are capable of doing but are unaware of.

Yes, maybe we are being too ambitious. After all, Stephen Hawking once said: 'intelligence has no survival value', and he's right!

How far would my BA (Hons) degree get me in the Colombian rainforest?

And often, the simplest organisms are the most successful.

The Wired News web site used to be an excellent source of technology news .. not so good these days.

One such article described how some little three-wheeled robots were given parts analogous to an eye, a mouth and an ear.

Then they were given the wherewithal to talk and hear, a need to find food, and an inquisitive desire to explore, though only as an option.

A few of them were thrown in a room and left to their own devices.

Some of them 'died' through lack of power, while others found the little power points and were quite happy chirping away aimlessly to one another while running around the room bumping into stuff.

After a while, they mapped out the various obstacles in the room and discovered the digital equivalent of being bored, I have no doubt!

Periodically, the researchers would move stuff around to see how they reacted.

Such simple experiments [simple for those guys and not for me, I hasten to add!] can provide a lot of insight into just how awareness evolved...
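If you fancy playing with the idea, the heart of those robots is only a few lines: competing drives, and the strongest one wins. Here's a sketch of the general behaviour-based scheme (not the actual experiment's code; all values are invented):

Code:
import random

# Behaviour-based robot sketch: competing drives, strongest one wins.
class Robot:
    def __init__(self):
        self.power = 100
        self.curiosity = 50

    def step(self, at_power_point):
        self.power -= 1                     # moving costs energy
        if self.power <= 0:
            return "died"
        if at_power_point and self.power < 40:
            self.power = 100                # the 'feeding' drive wins
            return "recharge"
        # otherwise novelty-seeking competes with conserving energy
        return "explore" if self.curiosity > self.power else "wander"

random.seed(1)
bot = Robot()
for tick in range(200):
    action = bot.step(at_power_point=(random.random() < 0.1))
    if action in ("recharge", "died"):
        print(f"tick {tick:3d}: {action}")
    if action == "died":
        break

Some runs 'die', some settle into a happy recharge cycle, much as in the article.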
 
octane said:
That said, I was reading the UK edition of Macworld, which mentioned a theory among a number of scientists [none named in the article, unfortunately] that the internet could become such an animal.

Lol, have you read Neuromancer? :)
As happens in the best sci-fi, Gibson discussed an issue that only now are scientists really able to study. SF is great: it allows (relatively) rational discussion of issues that scientists can't get near, as they depend on quantitative info. On the other hand, it generates lots of lame soap opera in space.

octane said:
A rant? If that's a rant, then I'd love to hear what you consider rational

I'm a scientist cross-training as a journalist; I think my writing style and sense of what is rational writing is a little confused right now.


Cat said:
I agree with Ora that we would need enormously complex machines to even simulate human thought, but I think that it is not hard, and is even being done currently, to simulate and maybe even instantiate lower types of consciousness in software. All kinds of experiments with artificial neural networks have shown a lot of emergent behaviour that, with more complex machines, might make some elementary form of awareness possible in the future.

True, but it's a matter of scale. The problem becomes exponentially more difficult as the number of units and the level of connectivity increase. I happen to think the exponent is higher for brains than the one involved in chip development, but that's just my opinion.

I think that whatever your dividing lines of intelligence/sentience are, you have to admit that humans have brain faculties unique on this planet (even if it's only that we are the only ones that use them, rather than the only ones that have them). That would suggest you'd have to get to our level of neural development in order to make the kind of human-like sentience we seem to seek (defined as something I can talk to, in a loose Turing kind of way).

All that said, there seems to be one advantage to the silicon route. Unlike organic life, it can have a common language built in from the start, possibly quite a big boost considering that language is considered by many as crucial to our sort of intelligence/sentience.
 
Another method of gaining insight into how the brain works is studying a brain that's gone awry.

Again, an article from the Wired News site discussed perception. A guy had a very strange condition that meant he saw words in colours.

Individual characters could each evoke one colour, while a word containing a particular character would be a different colour again.

To make matters worse, groups of words [sentences!] would be a range of colours!

There are people who can even taste colours that they see.

Why does this happen? Well, the mechanics are largely unknown, but what _is_ known is that the barriers between the various lobes in the brain have somehow broken down allowing one part of the brain to end up processing data that it would otherwise not have to deal with.

This kind of thing must be invaluable to neurologists, since seeing how something doesn't work must at least close the door on various theories, thus thinning the options left to explore...
 
Octane, if you're interested in that kind of stuff, then read Into the Silent Land by Paul Broks and The Curious Incident of the Dog in the Night-Time by Mark Haddon.
 
ora said:
Octane, if you're interested in that kind of stuff, then read Into the Silent Land by Paul Broks and The Curious Incident of the Dog in the Night-Time by Mark Haddon.

I'll look them up, thanks...
 
octane said:
Those other physical reactions that provide you with no choice at all are involuntary and almost purely instinctive.

Not at all; I was talking of quite high-level functions related to, inter alia, sight and speech recognition. There's a lot of quasi-subliminal pattern-matching and pattern-correcting going on at a very late stage of information processing in the brain.
Example: depth perception. When presented with the drawing of a cube, you see it as a cube, because your brain filters it into that category. Any `normal' (Western) person, even of quite young age, will automatically interpret a certain configuration of lines as being 3D. Think about Quake and how realistic things are. Now think again: it doesn't resemble the real world at all; all the angles, horizon and depth-lines are distorted (especially at fov 120 :D ), but you still interpret it as 3D and use this 3D model in your actions. You can do this so fast that it seemingly doesn't require any processing power at all. If I wanted to program a robot to do the same, I'd need the whole VT cluster to do it in real time (and it might not be enough). You identify edges, fill in colours, judge depth, construct a 3D environment in several stages, and accurately move yourself in it. That is a lot of information processing and problem solving going on: all rule-based and easy to fool with a lot of well-known tricks (e.g. optical illusions). Yet it is so flexible that it can perform flawlessly in the real world as well as in Quake.
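To give a feel for how much machinery even the _first_ of those stages (identifying edges) takes when you have to do it explicitly, here is a bare-bones Sobel edge detector (toy image, no vision library; purely illustrative):

Code:
import numpy as np

# Bare-bones Sobel edge detection: only the first stage of the
# pipeline (edges -> colours -> depth -> 3D model). Toy example.
def sobel_edges(img):
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
    gy_k = gx_k.T                                          # vertical gradient
    h, w = img.shape
    mag = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx, gy = (patch * gx_k).sum(), (patch * gy_k).sum()
            mag[y, x] = np.hypot(gx, gy)   # edge strength at (y, x)
    return mag

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img).round(1))   # strong response only at the boundary

And that is before colours, depth or motion, all of which your visual cortex handles in real time without you ever noticing.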

Octane: the phenomenon you seem to be referring to is synesthesia: one sense triggers a response from the others. Seeing letters/words or numbers as coloured is the most common form (within one sense); seeing sounds is the next most common (between senses). Actually you don't see sounds or soundwaves, you just get iTunes visuals with your sounds. As far as I've been told, speech appears to yield quite boring effects, however ...
 
I'm not endorsing the idea of the 'Net being the womb of AI, I'm merely pointing out that in a strange way, the internet is like a huge anonymous, amorphous brain itself.

But as for Sci-Fi .. ah, a love of mine! ::angel::

We should all ask: 'what if...'

Ah, but a man's reach should exceed his grasp - or what's a heaven for?..
 
Octane: I'm doing a course right now on Science in a Fictional Context. It's great, I'm reading Foundation and Do Androids Dream for work!
 
ora said:
Octane: I'm doing a course right now on Science in a Fictional Context. It's great, I'm reading Foundation and Do Androids Dream for work!

I had no idea such a course existed!

I'm trying to finish two novels off to complete the trilogy, but I just don't have the time :(

I enjoy researching the ideas that I come up with. That's actually the most intensive part of what I do .. when I'm doing it, that is!..
 
Octane: it's only part of an MSc in Science Communication, but it is a pretty unusual course. I'm looking forward to writing academic analyses of how science is portrayed in fiction. It'll be a challenge, expressing ideas I've only had before in non-academic contexts. It's lovely when work and recreation coincide. :)
 
Ora: for me, the challenge in writing is bringing as much credibility to fiction as possible.

So rather than relying on pseudo-science, I do a _lot_ of research to make the plot devices as real as can be...
 
octane said:
Well, think about this: do you think that we will one day create sentient beings?

Easy - we already do this by having children.

As for AI.

I'm all for AI. It's only a matter of time before computers will be walking around and talking to us. There are already programs following the path of teaching computers, and of making computers that can learn.

We become more intelligent through our experiences and by learning from them. But how do you have a bad experience if you are a machine? What is the equivalent of pain or anguish in a computer? That is needed in order to learn not to do things again, or to avoid them if at all possible. And what is the equivalent of the pride one feels from accomplishment? That is needed to reinforce what worked well for you and make it something you want to feel again. How do we replicate these biological pleasure and pain responses, which are so integral to learning and intelligence? Can we?

At the moment we give programs limits and rules to protect them from harm, just as we have rules and limits governed by the various societies we live in. However, as humans we have free will and can, and do, choose to break the rules and exceed the boundaries. When I was a little boy I used to jump up and down on my bed despite my parents telling me not to. I finally learned (the hard way) why they had that rule when I fell and cracked my head open. Part of what makes us so intelligent is that we question the rules and boundaries in order to learn. Do we write into the AI program that it is OK to break some rules? More later.....
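That pleasure/pain loop is exactly what reinforcement learning tries to mechanise, by the way. Here is a toy Q-learning sketch in Python where an agent 'cracks its head open' a few times and learns to stop jumping on the bed (actions, rewards and probabilities are all invented for illustration):

Code:
import random

# Toy Q-learning: 'pain' is a negative reward, 'pleasure' a positive
# one, and behaviour adapts with experience. Invented scenario.
actions = ["jump on bed", "play on floor"]
q = {a: 0.0 for a in actions}      # how good each action 'feels' so far
alpha = 0.2                        # learning rate

def reward(action):
    if action == "jump on bed":
        return -10 if random.random() < 0.3 else 3   # fun, until you fall
    return 1                                         # safe, mildly fun

random.seed(0)
for trial in range(200):
    # mostly do what 'feels' best, but sometimes break the rule anyway
    a = random.choice(actions) if random.random() < 0.1 else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])               # learn from experience

print(q)   # 'jump on bed' typically ends up valued below 'play on floor'

Note the rule-breaking is built in: without that occasional exploration, the program would never discover anything new.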
 
There is a very simple question we need to ask, and it's the same question that needs to be asked when people discuss the issue of whole-human cloning -- why?

Why would we want to create a sentient being? Is this simply to satiate our pride, or would there be a practical purpose to such an endeavour?

Of course there are many practical reasons, for example: search and rescue; warfare; exploration; construction .. the list goes on.

To accomplish such a feat, we all seem to agree that there are innumerable questions we must first ask of ourselves.

Are _we_ ready or able to answer those questions?

Probably not...
 