The Cyclopes, according to mythology, were a race of bad-tempered and rather stupid one-eyed giants. Not, perhaps, a great portent for a new generation of robots. But Andrew Davison, a computer scientist at Imperial College London, thinks one eye is enough for a robot, provided its brain can think fast enough.

For a robot to work autonomously, it has to understand its environment. Stereoscopic vision, which integrates the images from two ‘eyes’ looking at the same thing from different angles, is one way to achieve this, but it involves a lot of complicated computer processing. The preferred method these days, therefore, is Simultaneous Localization and Mapping (SLAM), which uses sensors such as laser-based range finders that ‘see’ by bouncing beams of light off their surroundings and timing the return.
Dr. Davison, however, wants to replace the range finders, which are expensive and fiddly, with a digital camera, which is small, cheap, and well understood. With this in mind, he is developing ways to use a single, moving video camera to create continually updated 3D maps that can guide even the most hyperactive of robots on its explorations. His technique involves collecting and integrating images taken from different angles as the camera goes on its travels. The trick is to do all this in real time, at frame rates of 100–1,000 per second.
The shape of the world pops out easily from laser data because it represents a direct contour map of the surrounding area. A camera captures this geometry indirectly, and so needs more (and smarter) computation if it is to generate something good enough for a self-directing robot. One answer is a form of triangulation: tracking features, such as points and edges, from one frame to the next. With enough measurements of the same set of features from different viewpoints, it is possible, given a fast enough computer program, to estimate their positions and thus, by inference, the location of the moving camera.
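The passage does not spell out the mathematics, but the geometry can be made concrete. Below is a minimal Python sketch of one standard form of such triangulation, linear (DLT) triangulation of a single tracked feature seen from two known camera poses. It illustrates the principle only; it is not Dr. Davison's actual system, which must also estimate the camera poses themselves, and every number in the example is invented.

```python
# Minimal two-view triangulation sketch (NumPy). Illustrative only:
# camera matrices and the test point are made-up example values.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature point.

    P1, P2 : 3x4 camera projection matrices for the two viewpoints.
    x1, x2 : (u, v) pixel coordinates of the same feature in each frame.
    Returns the estimated 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X; stack all four and solve A @ X = 0 by SVD.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # null-space vector = homogeneous solution
    return X[:3] / X[3]      # dehomogenize to (x, y, z)

if __name__ == "__main__":
    # Hypothetical camera: 500 px focal length, principal point (320, 240).
    K = np.array([[500.,   0., 320.],
                  [  0., 500., 240.],
                  [  0.,   0.,   1.]])
    # Camera 1 at the origin; camera 2 shifted 1 unit to the right.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
    # Project a known 3D point into both views, then recover it.
    X_true = np.array([0.5, 0.2, 4.0, 1.0])
    x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
    x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))   # ~ [0.5, 0.2, 4.0]
```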
However, developing such a program is no mean feat. In the milliseconds between successive frames, relevant information from each fresh image must be extracted and fused with the current map to produce an updated version. The higher the frame rate, the less time there is to do this work.
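To make that constraint concrete: at 100 frames per second the whole extract-and-fuse cycle must fit in 10 milliseconds, and at 1,000 frames per second in just 1 millisecond. The sketch below illustrates the hard per-frame budget; the `extract_features` and `fuse_into_map` functions are hypothetical placeholders standing in for whatever real pipeline stages a system uses, not calls from any actual library.

```python
# Back-of-the-envelope sketch of the real-time budget described above.
# extract_features and fuse_into_map are hypothetical stand-ins.
import time

def process_stream(frames, frame_rate_hz, extract_features, fuse_into_map,
                   map_state):
    budget = 1.0 / frame_rate_hz   # 100 fps -> 10 ms; 1,000 fps -> 1 ms
    for frame in frames:
        start = time.perf_counter()
        features = extract_features(frame)              # pull out points/edges
        map_state = fuse_into_map(map_state, features)  # update the 3D map
        elapsed = time.perf_counter() - start
        if elapsed > budget:
            # The map falls behind the camera: real-time tracking is lost.
            raise RuntimeError(
                f"frame took {elapsed * 1e3:.1f} ms, "
                f"budget {budget * 1e3:.1f} ms")
    return map_state
```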
1. According to the passage, the integration of images from two ‘eyes’ is termed:
(A) Computer processing
(B) SLAM
(C) Stereoscopic vision
(D) Sensors
(E) Autonomous angles
2. From the passage, each of these can be inferred, EXCEPT:
(A) Digital cameras are cheaper than range finders.
(B) Range finders allow robots to see with one eye.
(C) The Cyclops is a mythical creature.
(D) To work independently, a robot must be able to understand its surroundings.
(E) Range finders have the ability to create 3D maps.
3. According to the passage, why is a digital camera preferred over a range finder?
(A) Development of images is better.
(B) It is small, economical, and well understood.
(C) It is more fiddly.
(D) It can continuously update images.
(E) It can upload 3D maps.
4. What is the main purpose of the author in writing the passage?
(A) To explain why SLAM is better than stereoscopic vision.
(B) To advocate the use of digital cameras in place of range finders.
(C) To analyze emerging techniques in computers.
(D) To praise a scientist for his groundbreaking work.
(E) To discuss techniques for use in self-guided robots.