皮皮学 - free question search
[Short-answer question]
(1) This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come. (2) Today the rapid growth of artificial intelligence (AI) raises fundamental questions: "What is intelligence, identity, or consciousness? What makes humans human?" What is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as "Westworld" and "Humans". Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. "We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there." (3) But that doesn't mean crucial ethical issues involving AI aren't at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI "vision" today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem. (4) Whenever decisions are based on masses of data, "you quickly get into a lot of ethical questions," notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI.
Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring. (7) On June 7 Google pledged not to "design or deploy AI" that would cause "overall harm," or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights. (5) While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair. (6) To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity's highest values? Only then will they be useful servants and not Frankenstein's out-of-control monster.
Reference answer:
Related questions
[Single-choice question] Structural requirements for soil-nail support: the soil nail length should be ( ) times the excavation depth.
A.
0.25-0.5
B.
0.5-1.2
C.
1.2-1.5
D.
1.5-2.0
[Short-answer question] In the Siemens 802S system, the data transferred from the NCK to the PLC is ( ).
[Single-choice question] — Sorry, Joe, I didn't mean to! — Don't call me 'Joe'. I'm Mr. Parker to you, and _____ you forget it!
A.
do
B.
didn't
C.
did
D.
don't
[True/False question] In the Siemens 802S system, the data transferred from the PLC to the NCK is both readable and writable.
A.
True
B.
False
[Short-answer question] In the Siemens 802S system, the data transferred from the PLC to the NCK is ( ).
[Multiple-choice question] Western economists who questioned and revised the cardinal utility theory include ( ).
A.
Edgeworth
B.
Pareto
C.
Hicks
D.
Bentham
[Multiple-choice question] When soil-nail wall support is used for a building foundation pit, which of the following design and construction requirements are correct ( )?
A.
The slope of the soil-nail wall face should not be steeper than 1:0.5
B.
The soil nail length should be 2–3 times the excavation depth
C.
The soil nail spacing should be 1–2 m
D.
The angle between the soil nails and the horizontal plane should be 5°–20°
[True/False question] In the Siemens 802S system, the data transferred from the NCK to the PLC is both readable and writable.
A.
True
B.
False
[Single-choice question] The mother said, 'I haven't the faintest idea.' What does that mean?
A.
I don't know what to do.
B.
I haven't found any idea.
C.
I have no plan.
D.
I don't know at all.
[Multiple-choice question] When soil-nail wall support is used for a building foundation pit, which of the following design and construction requirements are correct ( )?
A.
The slope of the soil-nail wall face should not be steeper than 1:0.5
B.
The soil nail length should be 2–3 times the excavation depth
C.
The soil nail spacing should be 1–2 m
D.
The angle between the soil nails and the horizontal plane should be 5°–20°, and the strength grade of the grouting material should not be lower than M10