【Short-answer question】
In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams "Save her! Save her!" the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. robots cannot harm humans or allow humans to come to harm; 2. robots must obey humans, except where the order would conflict with law 1; and 3. robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots—they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.

The robot that rescues Spooner's life in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what's in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.

Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as "harm" is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (stand-ins) called "H-bots" from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both "die." The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?
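The passage contrasts an odds-based rescue rule with genuine moral judgment. As a rough illustration only (not the film's or the Bristol lab's actual logic), the Python sketch below implements such a rule: it picks whoever has the higher estimated survival probability and returns no decision when the candidates are too close to call, echoing the indecision reported in the H-bot experiment. The Victim class, choose_rescue function, and min_margin threshold are hypothetical names introduced purely for this example.

```python
# Illustrative sketch of a naive "maximize survival probability" rescue policy.
# All names and numbers are assumptions for illustration, not from the film
# or the Bristol Robotics Laboratory experiment.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Victim:
    name: str
    survival_odds: float  # estimated probability of surviving if rescued


def choose_rescue(victims: List[Victim], min_margin: float = 0.05) -> Optional[Victim]:
    """Pick the victim with the best survival odds.

    Returns None when the top two candidates are too close to call, which
    mirrors the "choking" the Bristol robot showed when two H-bots were
    imperiled at once.
    """
    ranked = sorted(victims, key=lambda v: v.survival_odds, reverse=True)
    if len(ranked) >= 2 and ranked[0].survival_odds - ranked[1].survival_odds < min_margin:
        return None  # no clear winner: the policy stalls instead of deciding
    return ranked[0] if ranked else None


if __name__ == "__main__":
    # The movie scenario: Spooner at 45% vs. Sarah at 11% -> the rule picks Spooner.
    print(choose_rescue([Victim("Spooner", 0.45), Victim("Sarah", 0.11)]))
    # Two victims with near-identical odds -> the rule returns None (indecision).
    print(choose_rescue([Victim("H-bot 1", 0.50), Victim("H-bot 2", 0.49)]))
```

The point of the sketch is precisely what the passage argues: a probability comparison can rank victims, but it offers no principled answer when the odds are equal or unknown.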
Reference answer:
Similar questions
【Multiple-choice question】When did China resume the exercise of sovereignty over Hong Kong?
A. December 20, 1984
B. April 13, 1987
C. July 1, 1997
D. December 20, 1999
【Multiple-choice question】Which of the following statements about what to consider when choosing a promotional theme is incorrect?
A. The theme should have social significance that draws wide attention
B. The theme should be colloquial and catchy, easy to understand and remember
C. The message conveyed by the theme should be clear
D. The theme is worded identically to the statement of the promotional objective
【Multiple-choice question】What does the woman mean?
A. She's very happy to hear the news.
B. She's sad to hear the news.
C. She's surprised to hear the news.
【Multiple-choice question】The clinical manifestation of the acute stage of Malayan filariasis is
A. Elephantiasis
B. Hydrocele of the tunica vaginalis
C. Acute lymphangitis ("liuhuo")
D. Epididymitis
【True/False question】In a Chinese restaurant, after guests are seated, the server first pours complimentary welcome tea or asks the guests what tea they would like. ( )
A. True
B. False
【Short-answer question】The time when China resumed the exercise of sovereignty was ( )
【Multiple-choice question】The clinical manifestation of the acute stage of Malayan filariasis is
A. Elephantiasis
B. Hydrocele of the tunica vaginalis
C. Acute lymphangitis ("liuhuo")
D. Chyluria
E. Epididymitis
【Multiple-choice question】Which of the following molecules contains two amino groups? ( )
A. Aniline
B. Urea
C. Acetanilide
D. Tetramethylammonium hydroxide
【Multiple-choice question】When did China resume the exercise of sovereignty over Hong Kong? ( )
A. July 1, 1997
B. December 20, 1999
C. December 20, 1997
D. July 1, 1999
【Multiple-choice question】A machine was originally used to produce product A, earning a profit of 500 yuan. It has now been switched to producing product B, with material costs of 800 yuan. The opportunity cost of producing product B is ( ). [1 point]
A. 300 yuan
B. 500 yuan
C. 800 yuan
D. 1,300 yuan
Related questions:
Reference analysis:
Knowledge points: