
Advances in Computing

Former Member
0 Likes

What are the next great advances in computing?

1. Artificial Intelligence

AI is the holy grail of computing. In the 20th century, many believed we would have true AI by the end of the century, but progress was slower than predicted.

The objective is AI that is as smart as a human and able to perform any intellectual task that a human can. What would happen if we created intelligent robots? I do not think that we can program self-awareness. However, if there were robots that were intelligent, wouldn't they become a threat to mankind?

Al Lal

Accepted Solutions (0)

Answers (2)

JPReyes
Active Contributor
0 Likes

Someone has been reading too many Isaac Asimov novels...

matt
Active Contributor
0 Likes

I'm not that convinced there's a great deal of natural intelligence, never mind artificial.

How would we know when we'd created AI? Is AI defined by "being able to pass the Turing Test", or should other criteria be used? Would a non-human but natural intelligence fail the Turing Test? What would happen if it was conducted in English against someone for whom English isn't a first language?

What is intelligence?

Former Member
0 Likes

Would this () be what the discussion is about? Is this Artificial Intelligence? Or Natural Intelligence? Or as Matthew put it, what is Intelligence?

LOL

T00th

Former Member
0 Likes

>

> I'm not that convinced there's a great deal of natural intelligence, never mind artificial.

>

> How would we know when we'd created AI? Is AI defined by "being able to pass the Turing Test", or should other criteria be used? Would a non-human but natural intelligence fail the Turing Test? What would happen if it was conducted in English against someone for whom English isn't a first language?

>

> What is intelligence?

The Turing test is acceptable. Since the AI is designed by humans, it will have human-like intelligence.

Perhaps we can test it in other ways. Like maybe, making it get a degree in computer science, and then seeing how well it is able to function as a programmer.

Will the AI be good or evil, or neither?

What will the relation between man and AI be like: master and servant, or parent and child, or something else?

Al Lal

matt
Active Contributor
0 Likes

Like maybe, making it get a degree in computer science, and then seeing how well it is able to function as a programmer.

You really think that is evidence of intelligence? 😮

( If anyone is insulted by that - go read my bio under skills profile on my business card ). 😄

Former Member
0 Likes

>

> Like maybe, making it get a degree in computer science, and then seeing how well it is able to function as a programmer.

>

> You really think that is evidence of intelligence? :-O

>

> ( If anyone is insulted by that - go read my bio under skills profile on my business card ). :-D

What test would you suggest?

The AI should have the capability to learn any field of knowledge and work productively in it.

We are assuming that the AI is very intelligent by human standards.

Something more challenging: ask the AI to get a doctorate in theoretical physics, and then do fundamental research.

Al Lal

Former Member
0 Likes

Get the AI to do this equation:

2 / 0 = ?

What is the response of the AI? I would suspect that after it has tried it once and crashed, the next time it would respond with 'Cannot divide by zero'.

Now ask an adult human to do the same equation. Their response will be 'Cannot divide by zero'. Why? Because when they were younger they were told that fact. Is the human intelligent, or just following a set of rules?
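That "crash once, remember forever" behaviour can be sketched in a few lines of Python (a hypothetical toy for this thread, not anything a real AI does):

```python
# A toy "learner": it attempts the expression once, catches the crash,
# and thereafter answers from memory instead of retrying.
memory = {}

def attempt(expression):
    if expression in memory:
        return memory[expression]       # learned response, no retry
    try:
        result = str(eval(expression))  # first attempt may "crash"
    except ZeroDivisionError:
        result = "Cannot divide by zero"
    memory[expression] = result         # remember the lesson
    return result

print(attempt("2 / 0"))  # first try: fails and learns
print(attempt("2 / 0"))  # second try: answers from memory
```

Whether answering from that lookup table counts as intelligence or just rule-following is, of course, exactly the question being asked.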

Former Member
0 Likes

Hi Martin,

The way I see it, both the human and the AI "learn" in this case that a divide by zero is not possible. The human is taught, in that he is told, while the AI learns it from experience.

In my opinion, a "general" AI, where you can look at the AI as a "human" equal is a very very long way off. The first thing we need to do is take baby steps - an AI unit that manages the traffic system of a city for example, or AI that manages aircraft landings at airports rather than regular ATCs.

Maybe after 20 or 30 years of watching such AIs in action, it might be possible to build a "human" AI, i.e., something close, if not equal, to a human consciousness. What Abhinav said - ask it to do theoretical research, for example. Such a task requires immense heuristic capabilities which cannot be attained in just a few years.

I am sure AI will become more and more prevalent in our societies, but I do not think that we shall be able to own robotic butlers or have robotic friends with whom you can go out drinking anytime soon.

T00th

Edited by: Sameer Jagirdar on May 14, 2008 5:32 PM

Former Member
0 Likes

AI will never learn that it cannot divide a number by 0. It has to be told it cannot.

AI has to follow a set of rules. Humans follow sets of rules.

matt
Active Contributor
0 Likes

It appears that Gödel's incompleteness theorems do not constrain humans. By definition, algorithmic computers cannot escape them.

Ergo, there is more to humans than just following a set of rules.

You won't get AI until you have a machine that can think non-algorithmically.

Oh, and in some mathematical systems, there is meaning to dividing by zero.
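For example, Python's standard decimal module (a concrete sketch; IEEE 754 floating point likewise defines signed infinities) will happily return a defined answer once the division-by-zero trap is disabled:

```python
from decimal import Decimal, getcontext, DivisionByZero

# By default the decimal context "traps" division by zero and raises an
# exception, but the standard also defines the untrapped result: a
# signed infinity.
ctx = getcontext()
ctx.traps[DivisionByZero] = False

print(Decimal(2) / Decimal(0))   # Infinity
print(Decimal(-2) / Decimal(0))  # -Infinity
```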

Former Member
0 Likes

Define a human process that does not use a set of rules.

Is that comment, 'Oh, and in some mathematical systems, there is meaning to dividing by zero', meant to score points or something? There is no point in making that statement with regard to this thread. I know divide by zero can be represented in mathematical terms, but for this example I think the simple divide-by-zero point is fine, don't you?

matt
Active Contributor
0 Likes

Sorry - should have prefixed it with a "slightly OT". Could a computer come up with a mathematical system whereby division by zero is defined?

A human process that can't be done by following rules - at least not the same kind of rules that a computer MUST follow? That's easy - develop propositions that can't be deduced from their axiomatic base. E.g. the continuum hypothesis.

Gödel showed that it is always possible to have propositions - the continuum hypothesis is one - that can't be deduced from a system of (sufficiently complex) axioms - e.g. arithmetic. A Turing machine is equivalent to an axiomatic system. All computers (so far) are equivalent to Turing machines. Therefore there are propositions that can't be deduced by computers. But they can be by humans. Or to put it another way, Gödel proved that humans can think up (true but not provable) things that computers can't. (Just fyi, the Continuum Hypothesis is not only not provable one way or the other, there's not even consensus on whether it is true or not - in the framework of the arithmetical axioms. There are systems where it is taken axiomatically to be true or false, but these themselves will always have propositions that can't be decided.)

Actually, it could be incorrect to say that Gödel's incompleteness theorems preclude AI. Though Penrose thinks they do, others say that they only apply to computers that are Turing machines. If a non-Turing-machine computer was developed (perhaps one of these new quantum computing machines?), then perhaps that would be, by definition, AI.

matt

Former Member
0 Likes

I see you are one of those people who have to reply smartly to threads without keeping to the topic, just to try and prove your own intelligence.

At this point I will cease with this topic of conversation, as I have better things to do than have a grammar contest.

matt
Active Contributor
0 Likes

Huh? The topic was advances in computing and mentioned artificial intelligence - how did I go off topic? A slight diversion perhaps. And where was the grammar contest?

All I've mentioned so far was part of my undergraduate studies. And that wasn't even at a premier university.

The next great advance will be quantum computing. As this somehow reduces algorithms that run in exponential time to polynomial or even linear runtime, it will have a massive effect. Suddenly, it will be possible to break 2K-bit encryption in a reasonable time. On the plus side, you'll also get quantum encryption, which is totally unbreakable. Until the theory beyond quantum mechanics gets discovered, anyway.
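The scale of that speedup is easy to sketch (illustrative arithmetic only, assuming Grover's quadratic speedup for brute-force key search; the figures are not tied to any real machine):

```python
# Brute-forcing an n-bit key classically takes ~2^n tries;
# Grover's algorithm needs only ~2^(n/2) quantum queries.
n = 128
classical = 2 ** n        # ~3.4e38 classical attempts
quantum = 2 ** (n // 2)   # ~1.8e19 quantum queries

print(classical // quantum)  # speedup factor: 2^64
```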

Now, quantum computing could lead to AI. Some physicists, such as Penrose, think that intelligence, Martin notwithstanding, is a quantum phenomenon. How would we control such computers? By reading lots of Asimov and taking note!

matt.

Former Member
0 Likes

We send probes to another planet - Mars.

We collect data, dirt, etc. from that planet using that probe...

We remotely control the said probe and bring it back to Earth from Mars...

Hell, what else is AI about? A robot does not need to have human-like features, but it does need to respond to commands and perform tasks...

It is out of place even for techies of business systems to debate AI, so let us leave it to the uber-geeks at the scientific institutions of the world, like NASA... my 2 cents!

Former Member
0 Likes

Sending a probe to Mars is not AI.

Edited by: Martin Shinks on May 23, 2008 11:33 AM

David
Advisor
0 Likes

Sending a probe to Mars is not AI

The "rover" versions that have gone there do have a small degree of AI built into them to analyze and negotiate terrain. Those operations could not wait through the communication delay required for remote control.
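A rough back-of-envelope calculation shows why (the distance figures below are approximate assumptions):

```python
# One-way light delay between Earth and Mars: even at closest approach,
# a remote-control command takes minutes to arrive.
C_KM_PER_S = 299_792        # speed of light in km/s
CLOSEST_KM = 54_600_000     # approx. closest Earth-Mars separation
FARTHEST_KM = 401_000_000   # approx. farthest separation

for km in (CLOSEST_KM, FARTHEST_KM):
    minutes = km / C_KM_PER_S / 60
    print(f"{km:>11,} km -> {minutes:.1f} min one-way")
```

At closest approach that is about 3 minutes each way, and over 20 minutes at the far end, so a rover rolling toward a rock cannot wait for a human to press the brakes.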

Former Member
0 Likes

What AI do they have? They are simply following a set of rules programmed back on this godforsaken planet.

Former Member
0 Likes

I don't think you would have to code all of the rules back here in your garage.

You could give it a seed to take with it to Mars (something like a Macadamia nut with a heat-resistant shell which hardens and softens depending on the environment, and grows) which has the ability to relicate (sorry) replicate (that's better) itself based on things which it has learnt, as it goes about learning to learn. It could update its own analytical abilities and its vehicle based on what it finds (using the data, or parking it as junk or harmful) and using it again at some future point (also the previously harmful bits) when they have become (updated) standards-conform. A challenge would be the storage medium for the "library-of-learning". A virtual vehicle which drives next to it would also be cool.

You could perhaps even do that in ABAP, if the surface of Mars conforms with IFRS balance sheet and P&L formats. Until then, I think that we need to secure our garages down here on this specific planet to prevent the AI from getting out, as it would probably make really unexpected month-end closing journal entries.