
When you send a spacecraft hundreds of millions of miles to Mars, you want it to stick the landing, or at least settle into a safe orbit. And success is never guaranteed: NASA lost two Mars missions in the 1990s. In one case, the 1999 Mars Climate Orbiter, one of the contributing causes was that one team used metric units for a key parameter while the other worked in imperial.
Luckily, with practice NASA is getting bolder and better. In 2004, the twin rovers Spirit and Opportunity hit the surface encased in airbags, bounced and rolled to a stop, and unfolded as planned. Opportunity is still working today and just completed a marathon's worth of driving, while Spirit kept going until 2010, after it got stuck in soft sand.
Curiosity's landing in 2012 was perhaps the craziest yet. The rover was too heavy for airbags, so NASA instead used a rocket-powered descent stage, a sort of jetpack nicknamed the sky crane, that hovered above the surface and lowered the rover on a tether. Once Curiosity's wheels touched down, the stage cut the tether and flew off for a crash landing a safe distance away. The rover arrived safely. But even though it was on target, the landing ellipse was huge: about 12 by 4 miles (20 by 7 kilometers), according to the agency.
This is where a new system called ADAPT (Autonomous Descent and Ascent Powered-flight Testbed) could improve accuracy and save fuel. A collaboration between NASA, Masten Space Systems and the Jet Propulsion Laboratory, it essentially uses real-time imagery of the terrain below to guide a vehicle to a safe landing, something no Mars lander has done before.
The real challenge with landing on Mars is the sheer distance between the planets. A radio signal takes anywhere from roughly 4 to 24 minutes to travel one way between Earth and Mars, so nobody on Earth can control a landing in real time; Curiosity's entire descent took about seven minutes, meaning the landing was over before word of atmospheric entry had even reached Earth. It's up to the robot to guide itself. And for it to land using imagery, the onboard computer has to be smart enough to distinguish safe terrain from hazardous terrain on its own.
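NASA hasn't published ADAPT's hazard-detection algorithms in this kind of detail, but the core idea of rating terrain can be sketched in a few lines. The Python snippet below is a minimal illustration, not the flight software: it scores each cell of an invented elevation map as safe or hazardous using two assumed criteria, local slope and local roughness, with made-up thresholds.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hazard_map(elevation, cell_size_m=1.0, max_slope_deg=10.0, max_roughness_m=0.3):
    """Rate each cell of an elevation map as safe (True) or hazardous (False).

    The slope and roughness thresholds are invented for illustration.
    """
    # Local slope angle from finite differences of the elevation grid.
    d_row, d_col = np.gradient(elevation, cell_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(d_col, d_row)))

    # Local roughness: how far each cell sits from its 3x3 neighborhood mean.
    roughness = np.abs(elevation - uniform_filter(elevation, size=3))

    return (slope_deg < max_slope_deg) & (roughness < max_roughness_m)

# Toy terrain: a flat plain crossed by one rocky, uneven ridge.
rng = np.random.default_rng(0)
terrain = np.zeros((50, 50))
terrain[20:25, :] += rng.uniform(0.0, 2.0, size=(5, 50))

safe = hazard_map(terrain)
print(f"{safe.mean():.0%} of cells rated safe to land on")
```

A real lander would fuse something like this hazard map with its estimated position and remaining fuel to pick a reachable safe spot, but the slope-and-roughness test captures the basic judgment the computer has to make on its own.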
ADAPT is still far from spaceflight, but a couple of test flights on a Masten Xombie rocket here on Earth went well, NASA said in a recent press release. Among other things, the rocket demonstrated that it could change its landing trajectory mid-flight based on what it read in the terrain below. It's cool stuff, and it could have applications here on Earth, too.
Perhaps the easiest application to imagine is using imagery to help UAVs (unmanned aerial vehicles) land. In fact, this has already been demonstrated. The video above is based on research, presented at the IFAC-REDUAS 2013 conference, that helped an uncrewed helicopter land safely by estimating its 3D position relative to the terrain.
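The paper's exact method isn't reproduced here, but the underlying geometry problem, recovering a camera's 3D pose from matched terrain features, is a standard one. As a hedged sketch of the general idea, OpenCV's solvePnP can estimate a camera's position from known 3D landmark coordinates and the pixels where those landmarks appear in an image; every landmark coordinate and camera parameter below is invented for illustration.

```python
import numpy as np
import cv2

# Known 3D positions of four ground landmarks (meters, terrain frame).
# All values here are invented for illustration.
object_points = np.array([
    [ 0.0,  0.0, 0.0],
    [10.0,  0.0, 0.0],
    [10.0, 10.0, 0.0],
    [ 0.0, 10.0, 0.0],
])

# Pixel coordinates where those landmarks were spotted in the camera image.
image_points = np.array([
    [310.0, 260.0],
    [420.0, 255.0],
    [430.0, 370.0],
    [305.0, 365.0],
])

# Simple pinhole model: 800 px focal length, principal point at image center.
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(4)  # assume the image is already undistorted

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    # solvePnP gives the terrain-to-camera transform; invert it to get
    # the camera's (i.e. the aircraft's) position in the terrain frame.
    rotation, _ = cv2.Rodrigues(rvec)
    position = -rotation.T @ tvec
    print("Estimated vehicle position over terrain (m):", position.ravel())
```

In practice the landmarks would come from feature matching against a terrain map rather than a hand-written table, and the estimate would be filtered over time, but the pose recovery step is the same.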
Real-time imagery navigation could be useful elsewhere, too. Imagine a GPS navigator that could revise its directions as it sees obstacles ahead. As you drive into a construction zone, it picks out a safe path and helps you figure out the best way through; at worst, if the road is closed, it sounds a warning. A rough sketch of that rerouting logic follows.
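The camera-based obstacle detection is the speculative part; the rerouting itself is ordinary shortest-path search. Here is a minimal Python sketch over an invented street grid, where "seeing" an obstacle simply means dropping the blocked road segment from consideration:

```python
import heapq

def shortest_path(graph, start, goal, blocked=frozenset()):
    """Dijkstra's algorithm over an adjacency dict, skipping blocked segments."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            edge = frozenset((node, neighbor))
            if edge not in blocked and neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no route left: time to sound that warning

# Hypothetical street grid, with travel times in minutes.
streets = {
    "home":   {"mainSt": 2, "sideSt": 4},
    "mainSt": {"home": 2, "office": 3},
    "sideSt": {"home": 4, "office": 6},
    "office": {"mainSt": 3, "sideSt": 6},
}

print(shortest_path(streets, "home", "office"))  # fastest route, via Main St
# The camera spots construction blocking Main St: reroute on the fly.
print(shortest_path(streets, "home", "office",
                    blocked={frozenset(("mainSt", "office"))}))
```

If no route survives the blockage, the function returns None, which is exactly the "road closed" case where a warning is all the system can offer.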
Perhaps the technology could also find gentler uses, such as guiding people with limited vision around a neighborhood, or helping shoppers identify items by their size and shape. How do you imagine image recognition helping in everyday life?
Top image: Artist's conception of the Curiosity landing on Mars. Credit: NASA/JPL-Caltech