Image source: Wikipedia
Fears of AI (artificial intelligence) are still showing up in the media, most recently with another quote from Stephen Hawking warning that it might be the end of us. Elon Musk, due to his own anxious statements, is now referenced whenever the subject comes up. I’ve written many times before about why I think these fears are mostly misguided.
But another question that sometimes comes up is when we should be concerned about how we treat an AI. When would we have an ethical responsibility toward an AI? I think it’s at about the same point that it becomes dangerous.
Except as property, no one currently worries much about how we treat our computers. I use my laptop for my own needs, and when it has reached the end of its useful life, I replace it with a newer model. I have no concern about the laptop…