Passage One

Questions 46 to 50 are based on the following passage.

In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water: Del Spooner or a child. Even though Spooner screams "Save her! Save her!" the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that: 1. robots cannot harm humans or allow humans to come to harm; 2. robots must obey humans, except where the order would conflict with law 1; and 3. robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots; they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.

The robot who rescues Spooner's life in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm, an expansion of the first law that allows robots to determine what's in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.

Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as "harm" is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

Assessing situations can be complicated.
A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that, at least not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (stand-ins) called "H-bots" from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both "die." The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?

46. What question does the example in the movie raise?
A) Whether robots can reach better decisions.
B) Whether robots follow Asimov's zeroth law.
C) How robots may make bad judgments.
D) How robots should be programmed.

47. What does the author think of Asimov's three laws of robotics?
A) They are apparently divorced from reality.
B) They did not follow the coding system of robotics.
C) They laid a solid foundation for robotics.
D) They did not take moral issues into consideration.

48. What does the author say about Asimov's robots?
A) They know what is good or bad for human beings.
B) They are programmed not to hurt human beings.
C) They perform duties in their owners' best interest.
D) They stop working when a moral issue is involved.

49. What does the author want to say by mentioning the word "harm" in Asimov's laws?
A) Abstract concepts are hard to program.
B) It is hard for robots to make decisions.
C) Robots may do harm in certain situations.
D) Asimov's laws use too many vague terms.

50. What has the roboticist at the Bristol Robotics Laboratory found in his experiment?
A) Robots can be made as intelligent as human beings some day.
B) Robots can have moral issues encoded into their programs.
C) Robots can have trouble making decisions in complex scenarios.
D) Robots can be programmed to perceive potential perils.