[Short-answer question]
Passage Four

At the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams "Save her! Save her!" the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that: 1. Robots cannot harm humans or allow humans to come to harm; 2. Robots must obey humans, except where the order would conflict with law 1; and 3. Robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots—they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.

The robot that rescues Spooner's life in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what's in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.

Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as "harm" is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (替身) called "H-bots" from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both "die." The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?
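The passage turns on a concrete decision rule: rescue whoever has the higher calculated chance of survival. As a purely illustrative aid (not code from the film, from Asimov, or from any real robot), a minimal Python sketch of that rule might look like the following; the names and probabilities come from the passage, while the data structure and function are assumptions for illustration only.

```python
# Illustrative sketch only: the "maximize calculated survival odds" rule the
# passage attributes to the movie robot. Names and probabilities are from the
# passage; everything else is an assumption.

def choose_rescue(candidates):
    """Return the candidate with the highest estimated survival probability."""
    return max(candidates, key=lambda person: person["survival_odds"])

candidates = [
    {"name": "Del Spooner", "survival_odds": 0.45},
    {"name": "Sarah", "survival_odds": 0.11},
]

print(choose_rescue(candidates)["name"])  # prints "Del Spooner", the calculated choice
```

The Bristol experiment described at the end of the passage is precisely the case such a rule cannot settle: when the robot cannot calculate usable odds, there is nothing to maximize.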
Related practice questions
[Single-choice question] Which of the following statements about the sodium nitrite titration method is incorrect? ( )
A. Any drug bearing a phenolic hydroxyl group can be assayed by this method
B. Drugs that yield an aromatic primary amine after hydrolysis can be assayed by this method
C. The aromatic primary amine reacts quantitatively with sodium nitrite in alkaline solution to form a diazonium salt
D. A strongly acidic medium accelerates the reaction
E. The endpoint is most often indicated by the dead-stop (permanent-stop) titration method
[Short-answer question] In a public tender for a construction project, contractors A, B, C, D, E, F, G, and H registered to bid and all passed prequalification, but the owner refused to let contractor A participate on the grounds that it was an out-of-town enterprise. The bid evaluation committee consisted of 5 members: the director of the tendering and bidding administration office of the local construction administrative department, 1 representative of the owner, and 3 technical and economic experts drawn from the government-provided expert pool. During bid evaluation it was found that contractor B's bid price was significantly lower than the other bidders' and B could not give a reasonable explanation; contractor D's bid amount written in words was smaller than the amount written in figures; contractor F...
[Single-choice question] What is the resistance of the resistor shown (image not reproduced here)? The component kit contains five such resistors. Color-code hint: black = 0, brown = 1, red = 2. A decoding sketch follows the options below.
A. 1 kΩ
B. 10 kΩ
C. 100 kΩ
D. 300 Ω
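Because the resistor images from the original page are not reproduced, the sketch below only shows how a four-band color code is read in general. Treating the bands as brown-black-red (digits 1 and 0, multiplier 10²) or brown-black-orange is an assumption suggested by the hint colors in these questions, not a reading of the missing pictures; the dictionary and function names are likewise illustrative.

```python
# Minimal sketch of four-band resistor color-code arithmetic.
# Band colors used in the examples are assumptions based on the hint colors
# given in the questions, not readings of the missing images.

DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def resistance_ohms(band1, band2, multiplier_band):
    """value = (first digit * 10 + second digit) * 10 ** multiplier digit"""
    return (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** DIGITS[multiplier_band]

print(resistance_ohms("brown", "black", "red"))     # 1000  -> 1 kΩ
print(resistance_ohms("brown", "black", "orange"))  # 10000 -> 10 kΩ
```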
[Single-choice question] What is the resistance of the resistor shown (image not reproduced here)? The component kit contains one such resistor. Color-code hint: black = 0, brown = 1, orange = 3.
A. 1 kΩ
B. 10 kΩ
C. 100 kΩ
D. 300 Ω
[Single-choice question] To "fade out" from one slide to the next, you should use ( ).
A. Action settings
B. Add animation
C. Slide transitions
D. Page setup
[True/False question] A construction contractor cannot act as the tenderer (the party inviting bids).
A. True
B. False
[Single-choice question] What is the resistance of the resistor shown (image not reproduced here)?
A. 1 kΩ
B. 10 kΩ
C. 100 kΩ
D. 300 Ω
[Single-choice question] Which of the following bony landmarks can be palpated on the body surface?
A. Acetabulum
B. Lesser trochanter
C. Anterior superior iliac spine
D. Patellar surface
[Single-choice question] To "fade out" from the first slide to the next slide, you should use ( ).
A. Action settings
B. Add animation
C. Slide transitions
D. Page setup
[Single-choice question] What is the resistance of the resistor shown (image not reproduced here)? The component kit contains seven such resistors. Color-code hint: black = 0, orange = 3.
A. 1 kΩ
B. 10 kΩ
C. 100 kΩ
D. 300 Ω