on 2008 May 14 5:13 AM
What are the next great advances in computing?
1. Artificial Intelligence
AI is the holy grail of computing. In the 20th century many believed we would have true AI by the end of the century, but success came more slowly than predicted.
The objective is AI that is as smart as a human and able to perform any intellectual task that a human can. What would happen if we created intelligent robots? I do not think that we can program self-awareness. However, if there were robots that were intelligent, wouldn't they become a threat to mankind?
Al Lal
Someone has been reading too many Isaac Asimov novels...
I'm not that convinced there's a great deal of natural intelligence, never mind artificial.
How would we know when we'd created AI? Is AI defined by "being able to pass the Turing Test", or should other criteria be used? Would a non-human but natural intelligence fail the Turing Test? What would happen if the test was conducted in English against someone for whom English isn't a first language?
What is intelligence?
>
> I'm not that convinced there's a great deal of natural intelligence, never mind artificial.
>
> How would we know when we'd created AI? Is AI defined by "being able to pass the Turing Test", or should other criteria be used? Would a non-human but natural intelligence fail the Turing Test? What would happen if the test was conducted in English against someone for whom English isn't a first language?
>
> What is intelligence?
The Turing test is acceptable. Being designed by humans, the AI will have human-like intelligence.
Perhaps we can test it in other ways: for example, have it get a degree in computer science, and then see how well it is able to function as a programmer.
Will the AI be good or evil, or neither?
What will the relation between man and AI be like: master and servant, or parent and child, or something else?
Al Lal
>
> Like maybe, making it get a degree in computer science, and then seeing how well it is able to function as a programmer.
>
> You really think that is evidence of intelligence? :-O
>
> ( If anyone is insulted by that - go read my bio under skills profile on my business card ). :-D
What test would you suggest?
The AI should have the capability to learn any field of knowledge and work productively in it.
We are assuming that the AI is very intelligent by human standards.
Something more challenging: ask the AI to get a doctorate in theoretical physics, and then do fundamental research.
Al Lal
Get the AI to do this equation:
2 / 0 = ?
What is the response of the AI? I would suspect that after it has tried it once and crashed, the next time it would respond with 'Cannot divide by zero'.
Now ask an adult human to do the same equation. Their response will be 'Cannot divide by zero'. Why? Because when they were younger they were told that fact. Is the human intelligent, or just following a set of rules?
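Martin's thought experiment can be turned into a few lines of Python. This is only a minimal sketch, and the `RuleLearner` class is purely illustrative: the agent tries the division once, records the lesson from the crash, and answers from the recorded rule on every later attempt.

```python
# Minimal sketch of an agent that "learns" the divide-by-zero rule
# from a single failure. The class name is illustrative only.

class RuleLearner:
    def __init__(self):
        self.rules = {}  # maps a failed operation to the lesson learned

    def divide(self, a, b):
        key = ("divide", b)
        if key in self.rules:
            return self.rules[key]      # answer from memory, no retry
        try:
            return a / b                # first attempt: just try it
        except ZeroDivisionError:       # the "crash"
            self.rules[key] = "Cannot divide by zero"
            return self.rules[key]

agent = RuleLearner()
print(agent.divide(2, 0))  # first try crashes internally, rule is stored
print(agent.divide(7, 0))  # answered straight from the learned rule
```

Of course, whether storing that rule counts as learning or merely rule-following is exactly Martin's question.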
Hi Martin,
the way I see it, both the human and the AI "learn" in this case that a divide by zero is not possible. The human is taught, in that he is told, while the AI learns it through experience.
In my opinion, a "general" AI, where you can look at the AI as a "human" equal, is a very, very long way off. The first thing we need to do is take baby steps - an AI unit that manages the traffic system of a city, for example, or AI that manages aircraft landings at airports rather than regular ATCs.
Maybe after 20 or 30 years of watching such AIs in action, it might be possible to build a "human" AI, i.e., something which is close, if not equal, to a human consciousness. What Abhinav said - ask it to do theoretical research, for example. Such a task requires immense heuristic capabilities which cannot be attained in just a few years.
I am sure AI will be more and more prevalent in our societies, but I do not think that we shall be able to own robotic butlers or have robotic friends with whom you can go out drinking anytime soon.
T00th
Edited by: Sameer Jagirdar on May 14, 2008 5:32 PM
It appears that Gödel's incompleteness theorems do not apply to humans. By definition, algorithmic computers cannot escape them.
Ergo, there is more to humans than just following a set of rules.
You won't get AI until you have a machine that can think non-algorithmically.
Oh, and in some mathematical systems, there is meaning to dividing by zero.
Define a human process that does not use a set of rules.
Is that comment 'Oh, and in some mathematical systems, there is meaning to dividing by zero.' meant to score points or something? There is no point in making that statement with regard to this thread. I know divide by zero can be represented in mathematical terms, but for this example I think the simple divide-by-zero point is fine, don't you?
Sorry - should have prefixed it with a "slightly OT". Could a computer come up with a mathematical system whereby division by zero is defined?
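For what it's worth, computers can already work in one such system: IEEE 754 floating point defines nonzero x / 0 as ±infinity, with 0 / 0 giving NaN. Plain Python floats raise an exception instead, but NumPy follows the IEEE semantics, so a quick illustration (the `errstate` call just silences the warnings):

```python
import numpy as np

# IEEE 754 semantics: nonzero / 0 -> +/-inf, and 0 / 0 -> nan.
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(2.0) / 0.0)   # inf
    print(np.float64(-2.0) / 0.0)  # -inf
    print(np.float64(0.0) / 0.0)   # nan
```

So a "mathematical system whereby division by zero is defined" is not just something a computer could come up with; it is already baked into the hardware.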
A human process that can't be done by following rules - at least not the same kind of rules that a computer MUST follow? That's easy - develop propositions that can't be deduced from their axiomatic base. E.g. the continuum hypothesis.
Gödel showed that it is always possible to have propositions - the continuum hypothesis is one - that can't be deduced from a system of (sufficiently complex) axioms - e.g. arithmetic. A Turing machine is equivalent to an axiomatic system. All computers (so far) are equivalent to Turing machines. Therefore there are propositions that can't be deduced by computers, but they can be by humans. Or to put it another way, Gödel proved that humans can think up (true but not provable) things that computers can't. (Just FYI, the Continuum Hypothesis is not only not provable one way or the other, there's not even consensus on whether it is true or not - in the framework of the arithmetical axioms. There are systems where it is taken axiomatically to be true or false, but these themselves will always have propositions that can't be decided.)
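An aside on the "true but not provable" point: a concrete, computable example is Goodstein's theorem, which says every Goodstein sequence eventually reaches zero. That statement is true but not provable in Peano arithmetic, yet a computer can happily grind out the sequence itself. A minimal Python sketch (the helper names are my own):

```python
def bump(n, base):
    """Write n in hereditary base-`base` notation, then replace every
    occurrence of the base (including inside exponents) with base + 1."""
    if n < base:
        return n
    result, exponent = 0, 0
    while n:
        digit = n % base
        n //= base
        if digit:
            result += digit * (base + 1) ** bump(exponent, base)
        exponent += 1
    return result

def goodstein(m, max_terms=10):
    """First terms of the Goodstein sequence starting at m."""
    seq, base = [m], 2
    while m > 0 and len(seq) < max_terms:
        m = bump(m, base) - 1  # bump the base, then subtract one
        base += 1
        seq.append(m)
    return seq

print(goodstein(3))  # [3, 3, 3, 2, 1, 0] -- reaches zero quickly
```

Starting at 4 instead, the sequence climbs to astronomical heights before it finally falls back to zero; the growth is so fast that Peano arithmetic cannot prove termination for all starting values, even though termination is provable in stronger systems.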
Actually, it could be incorrect to say that Gödel's incompleteness theorems preclude AI. Though Penrose thinks they do, others say that they only apply to computers that are Turing machines. If a non-Turing-machine computer were developed (perhaps one of these new quantum computing machines?), then that might be, perhaps by definition, AI.
matt
Huh? The topic was advances in computing and mentioned artificial intelligence - how did I go off topic? A slight diversion perhaps. And where was the grammar contest?
All I've mentioned so far was part of my undergraduate studies. And that wasn't even at a premier university.
The next great advance will be quantum computing. As this reduces some algorithms that run in exponential time to polynomial or even linear runtime, it will have a massive effect. Suddenly, it will be possible to break 2K-bit encryption in a reasonable time. On the plus side, you'll also get quantum encryption, which is totally unbreakable. Until the theory beyond quantum mechanics gets discovered, anyway.
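The headline exponential-to-polynomial example is Shor's factoring algorithm, which is what threatens RSA-style encryption. Its only quantum ingredient is order finding; everything around it is classical. Below is a sketch of that classical skeleton in Python, with a brute-force `order()` standing in for the quantum step (function names are my own):

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1 (requires gcd(a, n) == 1).
    Brute force here; this is the step a quantum computer does fast."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_skeleton(n, a):
    """Classical outline of Shor's method: given odd composite n and a
    coprime to n, use the order of a to pull out a nontrivial factor.
    Returns None when this particular choice of a doesn't work out."""
    r = order(a, n)
    if r % 2:
        return None                      # odd order: retry with another a
    f = gcd(pow(a, r // 2) - 1, n)
    return f if 1 < f < n else None

print(shor_skeleton(15, 2))  # order of 2 mod 15 is 4 -> factor 3
```

Classically, `order()` takes time exponential in the bit length of `n`; the quantum Fourier transform finds the period in polynomial time, which is why 2K-bit keys would suddenly look fragile.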
Now, quantum computing could lead to AI. Some physicists, such as Penrose, think that intelligence, Martin notwithstanding, is a quantum phenomenon. How would we control such computers? By reading lots of Asimov and taking note!
matt.
We send probes to another planet - Mars.
We collect data, dirt, etc. from that planet, i.e., Mars, using that probe.
We remotely control the said probe and bring it back to Earth from Mars.
Hell, what else is AI about? A robot does not need to have human-like features, but it does need to respond to commands and perform tasks.
It is out of place even for techies of business systems to debate AI, so let us leave it to the uber geeks from the scientific institutions of the world, like NASA... my 2 cents!
I don't think you would have to code all of the rules back here in your garage.
You could give it a seed to take with it to Mars (something like a Macadamia nut with a heat-resistant shell which hardens and softens depending on the environment, and grows) which has the ability to replicate (sorry) replace (that's better) itself based on things which it has learnt, as it goes about learning to learn. It could update its own analytical abilities and its vehicle based on what it finds (using the data, or parking it as junk or harmful) and using it again at some future point (also previously harmful bits) when it has become (updated) standards-conform. A challenge would be the storage media for the "library-of-learning". A virtual vehicle which drives next to it would also be cool.
You could perhaps even do that in ABAP if the surface of Mars conforms with IFRS balance sheet and P&L formats. Until then, I think that we need to secure our garages down here on this specific planet to prevent the AI from getting out, as they would probably make really unexpected month-end closing journal entries.