Robots have always been a controversial concept. Yes, expanding technology is fantastic, and it can benefit the world in incredible ways. However, when does it get into the Uncanny Valley? When do robots become too lifelike?
I think that having human-like robots would be cool, but at the same time the idea terrifies me. I suppose the only real reason I think human-like robots would be cool is because I like to imagine them like the robots in Futurama.
(Highly doubtful that they would make robots like this, but... eh.)
On the other hand, I tend to worry about robots taking over the world. Once they've been made intelligent enough to understand how they function, wouldn't they realize that they could take care of themselves? And what would be the point of humans then? Since I doubt the robots would be the Futurama type, just other citizens of the world, they'd probably be servants. That leads into the question: "Is it right to make them slaves if they have feelings, consciousness, or whatever human-like emotions robots could possibly have in the future?"
Truthfully, I don't know very much about robots. There have been some fascinating advances in robot technology, some of which are terrifying. I can appreciate technology getting better, but at what point does it go too far?
Because seriously, robots are starting to know how to do quite a lot of things.
I think it's possible to get robots to human intelligence. I'm still convinced that the Roomba has the brain of one of those demented little dogs that like to chew on things and run around in circles.
What are your thoughts on the increasing intelligence of robots?