Awards - Computer Vision, Robotics, and Visualization
NASA continues to expand its reach into the solar system by initiating a steady stream of new robotic exploration missions to scout locations for future human missions (i.e., Launching a New Era in Space Exploration). Breakthrough technologies are needed to successfully support these long-term missions, which aim to reduce the level of human intervention in the operation of planetary rovers. Earth-based human operators cannot easily know the structure of the environment and cannot appropriately guide the rovers along long paths; as a result, rovers are currently limited to traverses on the order of a few tens of meters. To increase the science return, future planetary missions will require the ability to traverse longer distances at higher speeds so as to achieve regional or even global planetary exploration. To achieve this objective, rovers will need to generate high-quality 3D maps of the terrain, detect possible hazards more reliably, employ more sophisticated path planning algorithms, and interact effectively with scientists and operators.
Computer vision, robotics, and visualization have played, and will continue to play, an important role in NASA's space exploration missions. Computer vision can enable rovers to explore planetary surfaces quickly and safely, landers to descend precisely, and orbiters to maintain efficient orbits. Real-time path planning algorithms can enable exploration rovers to autonomously navigate given noisy sensor data. In recent NASA missions, rovers have been able to successfully plan local paths and avoid obstacles using automated algorithms while operating at relatively low speeds. Immersive visualization tools can provide high-resolution, 3D representations of remote environments to planetary scientists, robot operators, mission planners, and the public.
The University of Nevada, Reno (UNR), the University of Nevada, Las Vegas (UNLV), and the Desert Research Institute (DRI) propose to advance NASA's computer vision, robotics, and visualization technologies with the purpose of improving planetary exploration and understanding. Specific research objectives include: (a) create higher quality 3D planetary maps by combining optical imagery with LiDAR data, and develop advanced segmentation algorithms for detecting hazards and other objects by combining stereo maps with Digital Elevation Maps (DEMs); (b) achieve robust, adaptive, real-time motion planning by taking physical constraints into account in the planning process; and (c) aid mission planning and scientific analysis with high-fidelity immersive visualization tools. The proposed project activities will be conducted in close collaboration with the Intelligent Robotics Group (IRG) at NASA Ames Research Center (http://ti.arc.nasa.gov/groups/intelligent-robotics/). Results from this research will support and improve NASA's ability to perform long-duration exploration missions to solar system destinations such as the Moon, near-Earth objects, and Mars with ever more frequency and sophistication, with robots leading the way for future human explorers.
The proposed activities reflect the objectives of the NASA EPSCoR program, while also promoting the education, research, and public service priorities of the Nevada NASA Space Grant Consortium (NSGC). Specifically, the project aims to (1) build significant research capacity in Nevada in areas of strategic importance to NASA while at the same time contributing to NASA's research priorities; (2) achieve national research competitiveness and self-sufficiency; (3) improve Science, Technology, Engineering, and Mathematics (STEM) education and enhance Nevada's graduate programs in Computer Science and Engineering; (4) improve economic development in Nevada; and (5) promote the education, research, and public service priorities of the NASA Space Grant Consortium. Specific objectives to achieve our goals include:
- Develop novel co-registration and segmentation algorithms to improve the quality of 3D planetary maps and accuracy of obstacle/object detection.
- Develop robust and adaptive real-time path planners for extraterrestrial navigation.
- Develop immersive visualization tools for training and for better planning of future missions.
- Partner with NASA Centers, government labs, and industry, enabling Nevada’s higher education institutions (UNR, UNLV, DRI) to become nationally competitive and self-sufficient.
- Recruit qualified students, especially from underrepresented groups, and involve them in creative research at Nevada’s institutions and internships at NASA centers and industry labs.
- Expose a diverse body of students to NASA's mission and demonstrate the importance of STEM areas in NASA applications through innovative seminars and advanced course offerings.
- Disseminate the results of this project to NASA and to the broader scientific community.
- Pursue collaborations with local industry and the nationwide aerospace industry in order to transfer research results to the private sector, contributing to the economic viability of Nevada.
- Reach out to local high schools through programs such as "Science and Technology Day", and promote Computer Science and Engineering and its role in NASA's mission.
Figure 1: This project will closely integrate the contributions of its three components. In particular, the path planning and visualization modules can use the 3D maps and object segmentation produced by the vision module. The vision module, in turn, can be improved using human feedback collected through the visualization interface, and by using the intended direction of motion returned by the navigation algorithms for "targeted" segmentation. The robotics component can forward candidate paths to the visualization module for display to human operators; an operator can then select a path and forward it back to the path planner. Simple immersive user interfaces will allow users to provide feedback to either the vision or robotics systems, for example by changing camera or segmentation parameters, indicating locations of interest, or selecting among candidate paths. Moreover, the visualization module will evaluate the quality of paths by simulating the lighting conditions and shadows at different positions on the surface of a planetary body.
Figure 2: A key technology for autonomous navigation is the ability to sense and model the 3D environment. High-quality 3D terrain maps are necessary for assessing terrain traversability, analyzing terrain morphology, and detecting potential hazards. A key objective of this research is to develop advanced algorithms for co-registering optical imagery and LiDAR data so as to build higher quality, photorealistic 3D models of visited sites for on-site autonomous operations and mission planning on Earth. At the same time, detecting potential obstacles through segmentation and estimating physical properties of the terrain will allow rovers to predict mobility performance and adapt their path planning strategies. We plan to develop new segmentation techniques that combine information from Digital Elevation Maps (DEMs) and image data. The figure shows a NASA Ames IRG K10 planetary rover, equipped with an Optech ILRIS-3D laser scanner.
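As a minimal illustration of the kind of DEM-based hazard detection described above, consider thresholding local slope computed from the elevation gradient. This is only a sketch under simplified assumptions (a regular grid, slope as the sole hazard criterion); the proposed algorithms would additionally fuse image and LiDAR data, and the threshold and helper names here are illustrative.

```python
import numpy as np

def hazard_mask(dem, cell_size=1.0, max_slope_deg=20.0):
    """Flag DEM cells whose local slope exceeds a traversability limit.

    dem: 2D array of elevations (meters); cell_size: grid spacing (meters).
    Returns a boolean mask where True marks a potential hazard.
    (Illustrative only -- slope is just one of several hazard criteria.)
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg > max_slope_deg

# Synthetic terrain: a gentle ramp with a steep mound in the middle.
y, x = np.mgrid[0:32, 0:32]
dem = 0.05 * x + 3.0 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
mask = hazard_mask(dem, cell_size=1.0, max_slope_deg=20.0)
```

On this synthetic terrain the flat ramp is classified as traversable while the steep flanks of the mound are flagged; a real system would combine such geometric cues with appearance-based segmentation from imagery.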
Figure 3: Our team is building simulation software for NASA's robotic rovers that can be used to study and develop sensing and path planning algorithms for such robots. The image displays a simulated rover and the distance measurements returned by a proximity sensor, such as a laser scanner, as the robot approaches a feature of interest on a simulated planetary surface.
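The core of such a simulated proximity sensor can be sketched as a ray-cast against an occupancy grid: march along the beam direction until an occupied cell (or the sensor's maximum range) is reached. This is a simplified fixed-step version for illustration; the actual simulator and its interfaces are assumptions here, and production code would use a faster grid-traversal scheme.

```python
import math

def raycast(grid, x, y, theta, max_range=10.0, step=0.05):
    """Return the distance from (x, y) along heading theta to the first
    occupied cell in a 2D occupancy grid (1 = obstacle), or the distance
    to the grid boundary / max_range if nothing is hit.
    """
    d = 0.0
    while d < max_range:
        cx = math.floor(x + d * math.cos(theta))
        cy = math.floor(y + d * math.sin(theta))
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
            break  # beam left the simulated world
        if grid[cy][cx]:
            return d  # hit an obstacle cell
        d += step
    return min(d, max_range)

# A 10x10 world with a wall at column 6; scan straight ahead (+x).
grid = [[1 if cx == 6 else 0 for cx in range(10)] for _ in range(10)]
distance = raycast(grid, 1.0, 5.0, 0.0)  # approximately 5 cells to the wall
```

Sweeping theta over the sensor's field of view yields the fan of range measurements shown in the figure.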
Figure 4: In collaboration with NASA Ames, our team is developing path planning solutions for robotic rovers. Given data about the planetary surface and the robot's sensing information, the surroundings of the robot are often abstracted as a grid that needs to be navigated. The image on the left shows such a representation, where darker cells are more difficult to traverse than lighter cells, and red cells are obstacles that cannot be traversed. The image in the middle shows the direction, computed by a competitive algorithm from the literature, that the robot should follow at each cell in order to reach a goal at the top-right of the map. The last image shows the path found to travel from the bottom-left to the top-right.
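A minimal sketch of planning on such a weighted grid is Dijkstra's algorithm, which serves here as a simple stand-in for the navigation-function planners described above (the algorithms evaluated in the project are more sophisticated, and the cost model below is an assumption for illustration).

```python
import heapq

def plan_path(cost, start, goal):
    """Minimum-cost path on a 2D grid via Dijkstra's algorithm.
    cost[r][c] is the effort to enter a cell; None marks an obstacle.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# 1 = easy terrain, 5 = hard terrain, None = obstacle (the "red cells").
grid = [
    [1, 1,    1, 1],
    [1, None, 5, 1],
    [1, None, 1, 1],
    [1, 1,    1, 1],
]
path = plan_path(grid, (3, 0), (0, 3))  # bottom-left to top-right
```

As in the figure, the planner routes around the obstacle cells and prefers cheap terrain over the costly (darker) cells.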
Figure 5: We will leverage DRI's 6-sided immersive visualization environment, called DRIVE6, to develop an immersive user interface for visualizing planetary terrain and other data collected by the robot, and for setting high-level goals to remotely control the robot. We will also develop algorithms to predict the detailed lighting conditions available to the robot's computer-vision-assisted path planning, which is part of this project. Finally, we will create visualizations to disseminate project results to both scientists and the public. (a) Planetary terrain visualization in DRIVE6. (b) Radiological survey training in DRIVE4. (c) MER in Victoria Crater on Mars without indirect illumination. (d) Photorealistic rendering with indirect illumination in DRIVE6 at 14 fps: stereoscopic with interactive objects and light, and head tracking for perspective correction. Indirect lighting affects shadows and the appearance of the dark sides of objects.
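To give a flavor of the lighting prediction mentioned above, the simplest building block is Lambertian direct illumination over a DEM: per-cell surface normals from the elevation gradient, dotted with the sun direction. This sketch ignores cast shadows and indirect bounce light, both of which the proposed models would account for; the function and parameter names are illustrative.

```python
import numpy as np

def direct_lighting(dem, cell_size, sun_azimuth_deg, sun_elevation_deg):
    """Lambertian direct-light intensity (cosine of incidence, clamped at 0)
    for each DEM cell. No cast shadows or indirect bounce -- a minimal
    stand-in for the detailed lighting models proposed in the project.
    """
    az, el = np.radians(sun_azimuth_deg), np.radians(sun_elevation_deg)
    # Unit vector pointing toward the sun (x = east, y = north, z = up).
    sun = np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    # Per-cell surface normals derived from the elevation gradient.
    normals = np.stack([-dz_dx, -dz_dy, np.ones_like(dem)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    return np.clip(normals @ sun, 0.0, None)

# Flat terrain lit from 45 degrees above the horizon.
flat = np.zeros((8, 8))
shade = direct_lighting(flat, 1.0, sun_azimuth_deg=90.0, sun_elevation_deg=45.0)
```

For flat terrain the intensity everywhere equals the sine of the sun's elevation; on real DEMs the same computation reveals which slopes face away from the sun, a first-order input to shadow-aware path evaluation.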