Hard Light Productions Forums

Off-Topic Discussion => General Discussion => Topic started by: aldo_14 on March 30, 2004, 04:21:12 pm

Title: Thing on AI
Post by: aldo_14 on March 30, 2004, 04:21:12 pm
http://www.ai.mit.edu/people/brooks/papers/representation.pdf
 
Thought I'd post this, because it's actually a pretty interesting read.  It relates to 'real-world' AI (i.e. for use with robotics), rather than the more conceptual form you see in computers.
Title: Thing on AI
Post by: diamondgeezer on March 30, 2004, 04:28:09 pm
Aargh, PDF

*beats PDF with a stick*

Get away from me you horrid thing
Title: Thing on AI
Post by: Grey Wolf on March 30, 2004, 04:30:13 pm
You know you could just copy the link, go to Google, and view it as HTML, right? Here, I'll even give you a link to it: http://www.google.com/search?q=cache:kdPe-im9WZEJ:www.ai.mit.edu/people/brooks/papers/representation.pdf+&hl=en&start=1&ie=UTF-8
Title: Thing on AI
Post by: diamondgeezer on March 30, 2004, 04:41:17 pm
Cool. Now sum it up for me
Title: Thing on AI
Post by: aldo_14 on March 30, 2004, 05:00:23 pm
Basically - the key to AI is in understanding the real world.  Traditional AI (well, as it was then...) abstracts away the real-world details, providing a simplified version that is easier for the AI to understand.

Brooks argues that it is better to develop an AI that is capable of acting in a limited way within a complex environment - i.e. that intelligence comes not from the decision-making process, but from the ability to conceptualise and understand real-world input.

Part of this argument is that it is a mistake to treat sensory input as a separate module from the AI itself (i.e. to develop the intelligence and the sensory input systems independently).  A secondary point is that the environment itself shapes intelligence - i.e. complex behaviour comes from simple agents able to act within a complex environment.
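As a toy illustration of that last point (not from the paper, just a sketch I've knocked up), here's a reactive agent in the Brooks spirit: it keeps no model of the world at all, just a couple of sense-and-act rules, yet its path through a cluttered grid can look deliberate because all the complexity lives in the environment it's reacting to.

import random

WIDTH, HEIGHT = 20, 10
light = (WIDTH - 1, HEIGHT - 1)           # the goal the agent is 'attracted' to
agent = (0, 0)
obstacles = {(random.randrange(WIDTH), random.randrange(HEIGHT)) for _ in range(30)}
obstacles -= {agent, light}               # keep start and goal clear

def sense(pos):
    # All the agent ever 'knows': which neighbouring cells are free right now.
    x, y = pos
    moves = [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]
    return [m for m in moves
            if 0 <= m[0] < WIDTH and 0 <= m[1] < HEIGHT and m not in obstacles]

def act(pos, free_cells):
    # Single reactive rule: of the free cells, step to the one nearest the light.
    if not free_cells:
        return pos                        # boxed in - stay put
    return min(free_cells,
               key=lambda m: abs(m[0] - light[0]) + abs(m[1] - light[1]))

for step in range(100):
    agent = act(agent, sense(agent))
    if agent == light:
        print(f"reached the light in {step + 1} steps")
        break

The point isn't that this is good navigation (it'll happily wedge itself in a dead end) - it's that any 'intelligent-looking' route it takes is a product of the obstacles, not of any planning going on inside the agent.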