The schome community Go to the Schome Wiki
Pages: 1 [2] 3 4
Author Topic: Staffing problems? No worries!  (Read 21252 times)
Doctor Schomer
The Hawaiian Shirts
Hero Member
Posts: 1498
« Reply #15 on: April 16, 2008, 11:34:18 AM »

.....and that differs from the human brain how?  ???
-Doctor Schomer

Academician of Cognitex Labs.
An¡mus
SP Global Moderator
Hero Member
Posts: 3410

Original Schomer
« Reply #16 on: April 16, 2008, 04:05:33 PM »


Are you suggesting that people don't think about what others have said when they are alone?
Marko Schomer
The Hawaiian Shirts
Hero Member
Posts: 631
« Reply #17 on: April 16, 2008, 05:28:03 PM »

Quote from: Doctor Schomer
.....and that differs from the human brain how?  ???

The example of the Chinese room is a good one. Someone who seems to know Mandarin Chinese, but is actually just giving responses set out in a book, cannot really be said to understand anything, compared to someone who has actually learnt Putonghua. When something comes up which isn't in the book, the Chinese room can't give an answer, whereas the person who actually has understanding can use their knowledge of how the language works to construct a feasible reply. The Chinese room's book can be added to, but because the number of combinations grammar allows is infinite (a phrase, sentence, or paragraph can be infinitely long), the system can never be complete. A chatbot, because it doesn't truly understand what it's doing, can never reach the level of fluency of a human (though in future, different designs of intelligently learning systems might be up to the task).
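Marko's point about the rule book never being complete can be sketched in a few lines. This is a toy illustration only; the phrases and canned replies are invented for the example:

```python
# Toy "Chinese room": a finite rule book of canned replies.
# The operator looks responses up without understanding them.
RULE_BOOK = {
    "ni hao": "ni hao!",      # greeting -> canned greeting
    "xie xie": "bu keqi",     # thanks -> canned "you're welcome"
}

def room_reply(prompt):
    # Pure lookup: any input not in the book gets no answer at all,
    # whereas a real speaker could construct a feasible reply.
    return RULE_BOOK.get(prompt.strip().lower())

print(room_reply("Ni hao"))    # known phrase: "ni hao!"
print(room_reply("zaijian"))   # novel phrase: None - the book is never complete
```

However large the dictionary grows, the failure mode is the same: the first input outside it exposes the absence of understanding.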
Doctor Schomer
The Hawaiian Shirts
Hero Member
Posts: 1498
« Reply #18 on: April 16, 2008, 08:58:16 PM »


Quote
processing it and coming up with an appropriate response

that IS how the human brain functions... granted, it has many variables, like emotions, environment at the time, etc.

Marko, that is no longer accepted; the Chinese room is obsolete. I can't remember what the revoking argument is, but it was a good one!
-Doctor Schomer

Academician of Cognitex Labs.
Explo Schomer
SP Global Moderator
Hero Member
Posts: 2386

Nice, eh?
« Reply #19 on: April 16, 2008, 10:47:53 PM »

Quote from: Doctor Schomer
I can't remember what the revoking argument is, but it was a good one!

Useful...  :P
'I am the gadfly' - or at least, I'd like to be

Question everything, including this.
An¡mus
SP Global Moderator
Hero Member
Posts: 3410

Original Schomer
« Reply #20 on: April 17, 2008, 02:50:13 PM »


I wonder if, when AI reaches such a level, offence towards robots would be considered racism.
Tara SParker
SP team
Full Member
Posts: 153
« Reply #21 on: April 17, 2008, 08:26:19 PM »


The proper article in New Scientist is not yet available online, and I do not have my paper copy to hand. However, here is a link, interesting in the light of this discussion, to a Daily Mail summary (yes, I know, but we cannot all read the Guardian all the time).

Briefly it is a presentation of a study that may suggest we do not have free will. As humans we have a lot more in common with machines...


http://www.dailymail.co.uk/pages/live/articles/technology/technology.html?in_article_id=560149&in_page_id=1965

What do you think? Do we have a soul?

Tara
Doctor Schomer
The Hawaiian Shirts
Hero Member
Posts: 1498
« Reply #22 on: April 17, 2008, 08:29:42 PM »

It isn't that hard to foretell that some basic emotions may interfere.

Since we're on the topic: the game called 'Go' has lost its grandmaster thanks to one of my loveable silicon friends :)

Explo, I don't have time to find the exact one; here's a link:

http://plato.stanford.edu/entries/chinese-room/#4

Tara, there is only one part of the human anatomy that has souls... well, soles...
By the by, how did you get on with that program I sent you?
-Doctor Schomer

Academician of Cognitex Labs.
Explo Schomer
SP Global Moderator
Hero Member
Posts: 2386

Nice, eh?
« Reply #23 on: April 17, 2008, 09:01:29 PM »

As far as I can see (and I'll readily admit I read only a fraction of the page), there is no conclusive counterargument to the Chinese Room, but rather many different arguments, each of which its author believes to be correct, and which Searle himself continues to refute. Personally, I am inclined to agree with Searle that current systems could never achieve strong AI, but I believe that an associative, bottom-up system closer to the human brain could indeed 'understand', given at least a century. Of course, predictions are inevitably wrong most of the time, and the definition of 'understand' is open to interpretation.
'I am the gadfly' - or at least, I'd like to be

Question everything, including this.
Doctor Schomer
The Hawaiian Shirts
Hero Member
Posts: 1498
« Reply #24 on: April 18, 2008, 01:54:35 PM »

Now I remember! It was something from Hugo de Garis.
He's quite an interesting individual, and he more or less predicted what is to come soon.
That is: a war between AI supporters and non-AI supporters.

To set against the Chinese room, let's take a real-life example: you're a guard at an airport using a sniffer system (a neural network) to check for explosives in luggage. The machine alerts you that one bag does have explosives; to whom (or what) does the credit belong?

I prefer working with neural networks, they are redundant, they are extremely fast and they can process corrupted data, they will gracefully fail, and no matter how hard you try, you cannot say they are not a 'brain', because they are. Though normally a unit consisting of only a few thousand neurons.
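The graceful-failure claim can be illustrated with a single artificial neuron. This is a minimal sketch with made-up weights, not the sniffer system itself: a weighted sum still produces a sensible output when one input is corrupted, where an exact-match rule would simply break.

```python
# One artificial neuron: weighted sum plus threshold.
# Weights and threshold are invented for the illustration.
WEIGHTS = [0.6, 0.3, 0.1]
THRESHOLD = 0.5

def neuron(inputs):
    total = sum(w * x for w, x in zip(WEIGHTS, inputs))
    return 1 if total >= THRESHOLD else 0

clean = [1.0, 1.0, 1.0]        # full signal -> fires (sum 1.0)
corrupted = [1.0, 0.0, 0.9]    # one sensor dead -> still fires (sum 0.69)
print(neuron(clean), neuron(corrupted))   # 1 1: output degrades, doesn't crash
```

The output weakens in proportion to the damage rather than failing outright, which is the sense in which such a unit "gracefully fails".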

Animus, you seem to oppose the idea that strong AI is possible; may I point out that the Blue Brain Project successfully ran a simulation of a rodent's brain? (For a human-time of 4 seconds, but actually 3 minutes to the rodent; a smaller brain means a faster sense of time.)
« Last Edit: April 18, 2008, 02:13:31 PM by Doctor Schomer »

-Doctor Schomer

Academician of Cognitex Labs.
An¡mus
SP Global Moderator
Hero Member
Posts: 3410

Original Schomer
« Reply #25 on: April 18, 2008, 02:56:51 PM »


I'm not sure what people hope to achieve by playing god. A better understanding of the world? I don't think that will happen. Who says everything has to be created anyway? Why can't we have just started to exist? ???
Marko Schomer
The Hawaiian Shirts
Hero Member
Posts: 631
« Reply #26 on: April 18, 2008, 03:31:53 PM »

Quote from: Tara SParker
Briefly it is a presentation of a study that may suggest we do not have free will. As humans we have a lot more in common with machines...

I don't think we do have free will. However, we have far more complex systems of calculation than any computers have, and look likely to for a considerable period of time. Computers may have greater processing power, but lack the ability to develop their own programming across a wide variety of areas (toddlers are certainly more adept at learning). Really, we can't expect computers to be as advanced as humans: 60 years of development is nothing compared to 4 billion years of evolution.
Explo Schomer
SP Global Moderator
Hero Member
Posts: 2386

Nice, eh?
« Reply #27 on: April 18, 2008, 03:57:30 PM »

Quote from: Doctor Schomer
That is: a war between AI supporters and non-AI supporters.

A war is most definitely the wrong word: a particularly fiery debate is in no way equivalent to the deaths of millions of people.

Anyway, the sniffer system deserves the merit for detecting the bomb, but that does not mean that it understands what a bomb is, nor why it must find a bomb. Therefore, much of the merit goes to the police officer, who is able to understand this and prevent the bag from passing through security, thus saving lives, although this is not something we would consider particularly meritorious, as we all know what bombs are.
'I am the gadfly' - or at least, I'd like to be

Question everything, including this.
Decimus Schomer
The Hawaiian Shirts
Hero Member
Posts: 3536
« Reply #28 on: April 18, 2008, 04:32:30 PM »

Quote from: Doctor Schomer
I prefer working with neural networks, they are redundant

There is that, but computers can be made redundant too, though they're generally not, as it's sometimes pretty expensive.

Quote from: Doctor Schomer
they are extremely fast

That's always helpful.

Quote from: Doctor Schomer
and they can process corrupted data, they will gracefully fail

Any decent computer program would give you an error message or something like that. Not-so-decent programs might crash, but crashes are generally noticeable (and usually fixable) during debugging :P

Slightly related to what Marko said, the main difference I see between a neural net and a computer is that a neural net is designed for learning how to process data, rather than being told how to do so. However, computers can sometimes do that anyway.
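Decimus's distinction - being told how to process data versus learning how - can be sketched with the classic perceptron learning rule. This is a toy example; the learning rate and epoch count are arbitrary choices:

```python
# "Told": the rule is written by the programmer.
def told_and(a, b):
    return a & b

# "Learned": the same rule is found from labelled examples alone.
def train_and(epochs=20, lr=0.1):
    w0 = w1 = bias = 0.0
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (a, b), target in examples:
            out = 1 if w0 * a + w1 * b + bias > 0 else 0
            err = target - out          # perceptron update rule
            w0 += lr * err * a
            w1 += lr * err * b
            bias += lr * err
    return lambda a, b: 1 if w0 * a + w1 * b + bias > 0 else 0

learned_and = train_and()
print([learned_and(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1], matching told_and without ever being told the rule
```

The trained unit ends up computing exactly what the hand-written function computes, but nobody wrote the rule into it, which is the sense in which a net learns how to process data rather than being told.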
Doctor Schomer
The Hawaiian Shirts
Hero Member
Posts: 1498
« Reply #29 on: April 18, 2008, 05:42:32 PM »

Ah yes, but when they fail (this only works with enticement nets), they can correct their own errors.

I agree, a war is the wrong word; I could safely say it might lead to civil unrest.

Animus: we've destroyed so many species, and created so very few. Let's balance it out...
...honestly, I cannot understand your mind-set; 'humans are best' seems to be your motto, and I thoroughly disagree...
-Doctor Schomer

Academician of Cognitex Labs.