Stick and Gimbal: Handflying in SPAAACE
Do the human roles in space include piloting spacecraft in the traditional sense of maneuvering them via direct control inputs, AKA handflying?
In an old post I said that 'handflying a spaceship is a ding waiting to happen,' alluding to a Progress supply ship that banged into the Mir space station during a Russian test in 1997. But the story is (as usual) a bit more complicated than that.
In Russian practice, handflying has always been strictly an emergency backup. And certainly their experience gave them no reason to change their approach. In the American space program, however, things were different.
Cosmonauts and astronauts were both originally chosen from among test pilots, for the same sensible reason. The basic mission was to test and exercise human capabilities in space, for which you want highly capable people. Familiarity with complex technical systems that go really, really fast was also seen as helpful.
But for institutional and cultural reasons, the early 'Murrican astronauts had much greater influence on how things were done. Mostly it was the whole hot-pilot mystique: Use the Force, Luke!
A scene in The Right Stuff - a title that encapsulates this mystique - conveyed the effects it had on American thinking about space. A German-born rocket scientist describes the prospective occupant of a Mercury capsule as a 'specimen,' but his 'Murrican audience hears it as spaceman, a term richly evocative of Romance.
A related factor might have been the historical accident that the Russians used dogs, most famously Laika, for space research while the Americans used monkeys and apes. Following in the pawprints of Man's Best Friend was one thing. Going boldly where our relatives from the primate house had gone before was a bit more awkward.
In any case, the upshot was that astronauts fiercely resisted being spam in a can, and got their way. Handflying was integral to American human spaceflight from the beginning, and right through the retirement of the Shuttle.
Moreover, it contributed significantly to Americans winning the 1960s moon race. Successful handflying of Gemini spacecraft in rendezvous and docking maneuvers emboldened NASA to choose an Apollo architecture that required rendezvous in lunar orbit, and the savings in mass allowed the whole thing to go up on one Saturn V. Given the mid-60s state of the art, when this decision was made, automated rendezvous and docking at lunar distance was surely a nonstarter.
Fifty years later, most of our space technology is not much different, but automation is another story. In the age of Google Cars, handflying is out of fashion, and new-generation US spacecraft, both Orion and Dragon 2, will reportedly follow the Russians in automating maneuvers, including rendezvous and docking.
So is handflying in space an idea whose time has come and gone? Just behind this question, of course, lurks the much larger one of whether human spaceflight itself is an idea whose time has come and gone. Our voyages of deep space exploration have now reached the Kuiper Belt, and the fringes of interstellar space, without their operating crews ever leaving Pasadena or its terrestrial counterparts.
It may be that at some point we will send 'mission specialists' to the planets without any need to send spacecrew along to fly their ships. For that matter, even if ship-operation spacecrew are needed, their tasks may not include handflying. But for now, let us consider handflying, as one of the classic skills we once expected of professional space travellers.
It is pretty much a given that automated systems can fly routine space maneuvers, including complex ones like rendezvous and docking, more smoothly than human pilots. And probably more safely as well, since robots are less prone to unaccountable lapses that can cause routine operations to go pear shaped.
The first question, and the traditional fallback for human intervention, is when things are not routine, and particularly when they have already gone pear shaped. This is nothing to dismiss lightly. So long as things go well, space lends itself to automation, what with Newton and all that. But once things go awry, from instrument failure to erratic maneuvers by another spacecraft, the ability of machines to easily predict the predictable is less helpful.
Moreover, a large part of contemporary AI is expert systems, essentially encodings of prior human expertise and experience. Expert systems are convenient, cheap substitutes for scarce human experts. But it is less certain that the projected wisdom of skilled pilots who are not on the scene of a particular emergency can or should trump the judgment of a skilled pilot who is. (And that is setting aside the question of who trains the expert system once humans no longer practice the skill.)
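To make 'encoding prior expertise' a bit more concrete, here is a minimal, entirely invented sketch of what such a rule set can look like: a few if/then rules of thumb captured from pilots ahead of time. The conditions, thresholds, and advisories are illustrative assumptions, not drawn from any real flight rule set.

```python
# A hypothetical sketch of an expert system as "encoded prior expertise":
# pre-captured rules of thumb for a final docking approach. All numbers and
# advisories below are invented for illustration only.

def docking_advisory(closing_rate_mps, lateral_error_m, target_stable):
    """Return an advisory string based on pre-encoded rules of thumb."""
    if not target_stable:
        return "HOLD: target attitude not stable, do not close"
    if closing_rate_mps > 0.1:
        return "ABORT: closing rate too high for final approach"
    if lateral_error_m > 0.3:
        return "CORRECT: null lateral offset before proceeding"
    return "PROCEED: within nominal capture corridor"

print(docking_advisory(0.05, 0.1, True))   # PROCEED
print(docking_advisory(0.20, 0.1, True))   # ABORT
```

The experts who wrote those rules are, by definition, not on the scene when the rules fire; the pilot on the scene is.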
Perhaps even more to the point, the purposes for which we go into space are human purposes, and at some point we probably want human judgment involved. In an earlier post I chose a somewhat extreme example, deciding who to rescue if not everyone can be taken. Commenter Brett rightly observed that the case was somewhat unlikely.
But more practical human decision points could easily arise at the scene of an emergency. Suppose a damaged, tumbling spacecraft has injured people aboard in need of emergency medical attention. The rescue ship can break the tumble, a time-consuming process, or perform a somewhat risky maneuver to put medics aboard while the crippled ship is still tumbling. An AI can help weigh the risks, but as Spock might say, cognitive abilities alone are not enough to make that call.
And if the decision is to attempt the maneuver, how is it managed? You probably want AI assistance in performing such a tricky maneuver, but giving verbal instructions would be awfully clumsy. A better alternative is to give the pilot something like a 'smart glove'. The glove learns the pilot's reactions - for example, distinguishing between a random muscular twitch and the beginning of a volitional action - allowing more responsiveness than a bare hand on the joystick could achieve. And if the AI packs up or starts singing 'Daisy', the pilot can disengage it and still fly the ship, even if their spirited steed turns into a carthorse plug.
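As a rough illustration of the twitch-versus-intent idea, here is a minimal sketch, assuming a single-axis stick reading between -1 and 1. Everything in it - the class name, window size, threshold - is a made-up placeholder rather than any real glove or flight-control API; the point is only that an isolated spike gets suppressed while a sustained deflection passes through, and that the filter can be disengaged entirely.

```python
# Hypothetical sketch of a 'smart glove' input filter. A brief, isolated spike
# is treated as a twitch and suppressed; a deflection held across most of the
# recent window is treated as volitional and passed through. With assist off,
# raw stick input goes straight to the thrusters.

from collections import deque

class SmartGloveFilter:
    def __init__(self, window=5, twitch_threshold=0.15, assist=True):
        self.samples = deque(maxlen=window)   # recent stick deflections, -1..1
        self.twitch_threshold = twitch_threshold
        self.assist = assist                  # False = AI disengaged, fly raw

    def command(self, deflection):
        """Return the control command to pass on to the thrusters."""
        if not self.assist:
            return deflection                 # carthorse mode: raw stick input
        self.samples.append(deflection)
        sustained = sum(abs(s) > self.twitch_threshold for s in self.samples)
        if sustained >= len(self.samples) // 2 + 1:
            # deflection held across most of the window: treat as volitional
            return sum(self.samples) / len(self.samples)
        return 0.0                            # isolated spike: treat as a twitch

# Illustrative use: a lone spike is ignored, a held input eventually is not.
glove = SmartGloveFilter()
for x in [0.0, 0.9, 0.0, 0.0, 0.0]:
    print(glove.command(x))   # the isolated 0.9 is filtered out
for x in [0.4, 0.4, 0.4, 0.4, 0.4]:
    print(glove.command(x))   # the sustained 0.4 comes through
```

A real system would presumably learn the window and threshold per pilot rather than hard-coding them; that is the 'learning the pilot's reactions' part of the idea.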
This basic technology is something we are at least very close to having now, if we don't already have it. And it harnesses AI as what I believe it fundamentally is: a human mental enhancer. For routine operations we can step back and let AI handle the job. For non-routine operations the AI helps us to do a demanding job more effectively.
To be sure, the rescue example presupposes that there are humans in space to be rescued. But the basic reason that human spaceflight is so limited, and controversial, is that it is astronomically expensive. If it becomes merely expensive the justification bar will not be set so high, and in some cases the cost of human presence may fall below that of developing and providing a robotic alternative.
All of which still leaves some complex decisions to be made about handflying. If routine operations are automated, how much actual handflying experience do pilots get? And if they mostly sit passively overseeing automated operations, how alert will they be in a sudden crisis? This has already become a problem for highly automated operations such as rail transit systems.
As with handflying, so I suspect with much else, not only in space but here on Earth: AI will change many things, but probably in ways quite different from those imagined in conventional speculation about robots.
Discuss:
I previously wrote about what AIs want, or might not want, and the relationship between human and artificial intelligence. The image of Snoopy comes from a snippet on YouTube.