Monday, July 27, 2015

Stick and Gimbal: Handflying in SPAAACE

Snoopy at the Controls
Do the human roles in space include piloting spacecraft in the traditional sense of maneuvering them via direct control inputs, AKA handflying?

In an old post I said that 'handflying a spaceship is a ding waiting to happen,' alluding to a Progress supply ship that banged into the Mir space station during a Russian test in 1997. But the story is (as usual) a bit more complicated than that.

In Russian practice, handflying has always been strictly an emergency backup. And certainly their experience gave them no reason to change their approach. In the American space program, however, things were different.

Cosmonauts and astronauts were both originally chosen from among test pilots, for the same sensible reason. The basic mission was to test and exercise human capabilities in space, for which you want highly capable people. Familiarity with complex technical systems that go really, really fast was also seen as helpful.

But for institutional and cultural reasons, the early 'Murrican astronauts had much greater influence on how things were done. Mostly it was the whole hot-pilot mystique: Use the Force, Luke!

A scene in The Right Stuff - a title that encapsulates this mystique - conveyed the effects it had on American thinking about space. A German-born rocket scientist describes the prospective occupant of a Mercury capsule as a 'specimen,' but his 'Murrican audience hears it as spaceman, a term richly evocative of Romance.

A related factor might have been the historical accident that the Russians used dogs, most famously Laika, for space research while the Americans used monkeys and apes. Following in the pawprints of Man's Best Friend was one thing. Going boldly where our relatives from the primate house had gone before was a bit more awkward.

In any case, the upshot was that astronauts fiercely resisted being spam in a can, and got their way. Handflying was integral to American human spaceflight from the beginning, and right through the retirement of the Shuttle.

Moreover, it contributed significantly to the Americans winning the 1960s moon race. Successful handflying of Gemini spacecraft in rendezvous and docking maneuvers emboldened NASA to choose an Apollo architecture that required rendezvous in lunar orbit, and the savings in mass allowed the whole thing to go up on one Saturn V. In the mid-60s state of the art, when this decision was made, automated rendezvous and docking at lunar distance was surely a nonstarter.

Fifty years later, most of our space technology is not much different, but automation is obviously a different story. In the age of Google Cars, handflying is out of fashion, and new-generation US spacecraft, both Orion and Dragon 2, will reportedly follow the Russians in automating maneuvers, including rendezvous and docking.

So is handflying in space an idea whose time has come and gone? Just behind this question, of course, lurks the much larger one of whether human spaceflight itself is an idea whose time has come and gone. Our voyages of deep space exploration have now reached the Kuiper Belt, and the fringes of interstellar space, without their operating crews ever leaving Pasadena or its terrestrial counterparts.

It may be that at some point we will send 'mission specialists' to the planets without any need to send spacecrew along to fly their ships. For that matter, even if ship-operation spacecrew are needed, their tasks may not include handflying. But for now, let us consider handflying, as one of the classic skills we once expected of professional space travellers.

It is pretty much a given that automated systems can fly routine space maneuvers, including complex ones like rendezvous and docking, more smoothly than human pilots. And probably more safely as well, since robots are less prone to unaccountable lapses that can cause routine operations to go pear shaped.

The first question, and the traditional fallback for human intervention, is when things are not routine, and particularly when they have already gone pear shaped. This is nothing to dismiss lightly. So long as things go well, space lends itself to automation, what with Newton and all that. But once things go awry, from instrument failure to erratic maneuvers by another spacecraft, the ability of machines to easily predict the predictable is less helpful.

Moreover, a large part of contemporary AI consists of expert systems, essentially encodings of prior human expertise and experience. Expert systems are convenient, cheap substitutes for scarce human experts. But it is less certain that the projected wisdom of skilled pilots who are not on the scene of a particular emergency should or can trump the judgment of a skilled pilot who is on the scene. (And that sets aside the question of who trains the expert system if humans no longer practice the skill.)
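To make that point concrete, here is a toy sketch of what 'encoded prior expertise' amounts to. Everything in it - the rule names, the thresholds, the advice strings - is invented for illustration and drawn from no real flight system; the point is only that the rules were written down by experts long before the emergency, and an unanticipated situation just falls through to the nearest canned answer.

```python
# A toy 'expert system': prior human judgment frozen into fixed rules.
# All names and numbers here are invented for illustration.

def docking_advice(closing_rate_mps, range_m, sensors_ok):
    """Return canned advice encoded long before the emergency."""
    if not sensors_ok:
        # The rules assume working instruments; outside that, they punt.
        return "abort: rules assume working instruments"
    if range_m < 100 and closing_rate_mps > 0.5:
        return "brake: closing too fast inside 100 m"
    if closing_rate_mps <= 0.0:
        return "thrust forward: not closing"
    return "hold: within nominal envelope"

print(docking_advice(0.8, 50, True))   # -> "brake: closing too fast inside 100 m"
print(docking_advice(0.1, 500, True))  # -> "hold: within nominal envelope"
```

A pilot on the scene can weigh facts the rule authors never imagined; the system above can only map them onto the rules it was given.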

Perhaps even more to the point, the purposes for which we go into space are human purposes, and at some point we probably want human judgment involved. In an earlier post I chose a somewhat extreme example, deciding who to rescue if not everyone can be taken. Commenter Brett rightly observed that the case was somewhat unlikely.

But more practical human decision points could easily arise at the scene of an emergency. Suppose a damaged, tumbling spacecraft has injured people aboard in need of emergency medical attention. The rescue ship can break the tumble, a time-consuming process, or perform a somewhat risky maneuver to put medics aboard while the crippled ship is still tumbling. An AI can help weigh the risks, but as Spock might say, cognitive abilities alone are not enough to make that call.

And if the decision is to attempt the maneuver, how is it managed? You probably want AI assistance in performing such a tricky maneuver, but giving verbal instructions would be awfully clumsy. A better alternative is to give the pilot something like a 'smart glove'. The glove learns the pilot's reactions - for example, distinguishing between a random muscular twitch and the beginning of a volitional action, allowing more responsiveness than a bare hand on the joystick could achieve. And if the AI packs up or starts singing 'Daisy' the pilot can disengage it and still fly the ship, even if their spirited steed turns into a carthorse plug.
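A minimal sketch of the input-filtering idea behind such a hypothetical 'smart glove': treat only sustained pressure as a deliberate command, and discard brief spikes as involuntary twitches. The function name, threshold, and units are all invented for illustration; a real glove would presumably learn these per pilot rather than hard-code them.

```python
# Hypothetical 'smart glove' input filter: a reading counts as a deliberate
# command only if it stays above `threshold` for at least `min_sustained`
# consecutive samples. All names and numbers are invented for illustration.

def filter_inputs(samples, threshold=0.3, min_sustained=3):
    """Return the indices where a deliberate input begins."""
    commands = []
    run_start = None
    for i, s in enumerate(samples + [0.0]):  # sentinel closes any final run
        if s > threshold:
            if run_start is None:
                run_start = i       # a candidate input starts here
        else:
            if run_start is not None and i - run_start >= min_sustained:
                commands.append(run_start)  # sustained: accept as volitional
            run_start = None        # too brief: discard as a twitch
    return commands

# A lone spike (twitch) is ignored; a sustained press registers once.
print(filter_inputs([0.0, 0.9, 0.0, 0.5, 0.6, 0.7, 0.8, 0.1]))  # -> [3]
```

The real version would be adaptive rather than fixed-threshold, but the division of labor is the same: the glove cleans up the signal, the pilot supplies the intent.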

This basic technology is something we are at least very close to having now, if we don't already have it. And it harnesses AI as what I believe it fundamentally is: a human mental enhancer. For routine operations we can step back and let AI handle the job. For non-routine operations the AI helps us to do a demanding job more effectively.

To be sure, the rescue example presupposes that there are humans in space to be rescued. But the basic reason that human spaceflight is so limited, and controversial, is that it is astronomically expensive. If it becomes merely expensive the justification bar will not be set so high, and in some cases the cost of human presence may fall below that of developing and providing a robotic alternative.

All of which still leaves some complex decisions to be made about handflying. If routine operations are automated, how much actual handflying experience do pilots get? And if they mostly sit passively overseeing automated operations, how alert will they be in a sudden crisis? This has already become a problem for highly automated operations such as rail transit systems.

As with handflying, so I suspect with much else, not only in space but here on Earth: AI will change many things, but probably in ways quite different from those imagined in conventional speculation about robots.

Discuss:




I previously wrote about what AIs want, or might not want, and the relationship between human and artificial intelligence. The image of Snoopy comes from a snippet on YouTube.

60 comments:

Nyrath said...

Of course in science fiction, the issue is settled by Burnside's Zeroth Law of space combat: Science fiction fans relate more to human beings than to silicon chips.

No SF fan is going to settle for a control panel with a single solitary button, labeled "Do Mission".

However, when I first took a look at handflying, I was surprised to find it boiled down to only three controls: Rotation, Translation, and Thrust (plus indicator instruments).
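[That three-control scheme can be sketched as a simple data structure. The field names and layout below are illustrative only, not taken from any real spacecraft interface:]

```python
# Sketch of the three-control handflying scheme: rotation, translation,
# thrust. Field names are illustrative, not from any real spacecraft.
from dataclasses import dataclass

@dataclass
class HandControls:
    rotation: tuple = (0.0, 0.0, 0.0)     # pitch, yaw, roll rate commands
    translation: tuple = (0.0, 0.0, 0.0)  # x, y, z RCS translation commands
    thrust: float = 0.0                   # main engine throttle, 0..1

# A gentle pitch-up with a third throttle; everything else stays neutral.
stick = HandControls(rotation=(0.1, 0.0, 0.0), thrust=0.3)
print(stick.thrust)  # 0.3
```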

Geoffrey S H said...

It might be a bit of a broad point to make, but the sheer number of jobs that may go to robotics (including many humanities-related jobs, if rumours from certain potentially respectable sources are to be believed) may mean that something has to be done to enable humans to compete with a computer for any job at all. For any drama at all might a human have to think as fast as a computer?

Is this possible? Could the fast-reacting lizard part of our brain be stimulated to do this? I don't want to say no even if the scientists right now say it is not. Every single job I have wanted over the past 10 years might go to a computer in the near future, and so any sf story I write involving those professions might require humans to be able to compete with a computer in a way they cannot now. And that isn't taking into account further future developments.

And if a way to get humans to think as fast as (and faster than) computers, even for a short time before burnout [and be aware of every decision/calculation a computer makes], would allow shifts of humans to get vital tasks done, then the now-slower computers could hum reliably and constantly in the background, keeping navigation/calculations going while the humans rest. You'd get symbiosis rather than replacement.

Otherwise all we are good for is programming. Then some joker gets computers to do that too....

And to those saying it is unlikely that computers will do almost everything: I think that too sometimes, but if I sometimes think the things above as well then I won't get caught out if they do happen. It's possibly handwavium, but handwavium that has been seriously mulled over.

Geoffrey S H said...

Urgh! The grammar above is terrible even after proofreading before posting. Apologies. Possibly a bit tired.

Eth said...

Otherwise all we are good for is programming. Then some joker gets computers to do that too....

But then, we are good at programming/guiding the programming systems. And you don't want to automate those and take humans entirely out of the loop - because, contrary to a hopefully small subset of the singularists, you are not stupid.

But there are other jobs that are still meant for humans.

Art, to begin with.
The example I like is painting. We have machines that are better at painting than us. We had them for more than a century: cameras. And yet, you still see painters. Sure, nowadays some painters massively use computers, up to CGI programs, but they are still artists.
Art, by definition, can't be automated.

Then, social and political jobs. Like the computers in the example above, machines (even advanced AI) are tools, not an end in themselves. You still have people making decisions about how to use them, so you have people in power. And humans being social creatures, you also have people doing social jobs, because in the end, you socialize with humans.

Now, some jokers may automate imitation art generation (in what they consider art), have expert systems take over decisions (based on their previous decisions) and even create social impostors, and get humans completely out of the loop.
But even if that's possible (and it's what the aforementioned subset is aiming for), it's a nightmare scenario, a counter-utopia. That's the automated equivalent of a Soviet nail factory melting nails to make more nails, but at least there the workers had a paid job.

And, as silicon chips blowing each other up are tools, there are still stories to be told about the humans who made the decisions. Pushing the "do mission" button isn't thrilling by itself. But once it is told why the mission was designed in the first place, it can be.

A real-life example is the Nasty Red Button. You know the one. One press, and it will Do Mission, even if the entire missile system is automated. It's just silicon chips mindlessly trying to blow up other silicon chips aboard barely space-capable craft so they can't mindlessly do what they were programmed to do.
And yet, I can't count how many books, movies and other media have been made where pressing (or not pressing) this boring red button is the most dramatic scene, with passionate debates, political intrigues and espionage plots about whether it is a good idea to have said silicon chips in the first place - or even said other silicon chips that are supposed to try and blow the first ones up.
Hell, there are some of the most frightening historical documentaries among them!

And 4X players will tell you how great stories can be told even when your entire fleets are mindless robots.

Not that I don't like stories of crewed spaceships facing the harshness of Murphy's law in irradiated vacuum, far from it.
For the PMF, having a flexible, general-purpose human in case of emergency sounds plausible (assuming the costs of sending said human are not-astronomical enough).
That's kind of what you see in the manga/anime Planetes: space dustman is clearly a job with a high enough complication/mission ratio that sending robots is simply not worth it, though it's not hard to imagine them being sent on the more problematic missions and simple trashcans sent for the rest.

In the PFF, all bets are off. But at this point, what is a human in the first place may have interesting answers anyway.

fro1797 said...

Automation vs. hand flying:
1. When the computer starts saying "Current situation is outside of established parameters", that means it has no idea what to do, so somebody had better take over. Now.
2. When not standing watch (so as to not doze off), pilots should be practicing on the simulator, bringing up the most far fetched scenarios possible, just to be prepared.
3. Computers are precise...humans not so much. They complement each other.
4. Airliners are mostly automated, and the most advanced ones can take off and land by themselves...but no one seriously suggests not having a flight crew in the cockpit.
5. I always thought the reason to have a space program was to be able to go places...like the Moon, Mars, Titan, and all those other interesting places beyond our atmosphere...it's not reasonable to try to steer a robot over treacherous terrain with a minute of light-lag, so you need a human on scene...and I'm willing to bet that the vast majority would not be comfortable with "HAL" being the only pilot...

Ferrell

Brett said...

It's the same problem with aircraft as well. Humans unfortunately aren't that great at sitting back and waiting to jump in when the AI fails. If you're going to give them any role at all in controlling the ship in a time of crisis*, then you'd want to build in periods of time where they have to take active control of the spacecraft and keep it going, along with tons of drilling/simulations. It'd be like requiring that the aircraft go under manual control for segments of time before switching back to auto-pilot.

In fact, if the simulations are good enough you might be able to just use them for training and drilling purposes.

* I'm not certain you'd want to, in spacecraft. It might actually be better with spaceships to put in layers of redundancy for the automated systems controlling the ships, and just have a competent, around-the-clock staff of technicians and engineers aboard constantly doing maintenance and monitoring check-ups so that any problems get caught and fixed before they become too big. It'd be sort of like military ships with round-the-clock staffing.

It wouldn't be a ship full of mission specialists aboard the Solar Dragonfly. It would be mission specialists and the maintenance crew, with the latter outnumbering the former.

Hmm. But then what if something was slightly off at the destination, such that you needed a manual change to get it right? I'm thinking of the LEM docking sequence from Apollo 13.

Brett said...

Had another thought, since the "jobs" issue came up-thread-

That's almost certainly going to be a big aspect of work in the Future of Automation. We build machines to do specific tasks, then integrate those into automated systems under human control, in the same way that a person using a computer is lording over a system of automated programs doing stuff in the background.

In that case, humans will just be pushed higher and higher up atop mountains of automated systems. We'll lord it over Personal Assistant programs that handle our budgets and scheduling, cars which do all the work of getting us where we want to go just off of voice commands, and so forth. The amount of hidden complexity in the economy's technological structure will grow enormously.

Of course, that also means it becomes much more complex and hugely problematic if any of that fails. So then you have tons of jobs for people doing constant maintenance, check-ups, and stress tests so that the system doesn't fail catastrophically.

That sort of fits with the Solar Dragonfly not having a pilot, just maintenance staff and the chief engineer. Piloting the ship would just be one of many tasks integrated into the advanced spacecraft system, and you wouldn't be expected to jump in and manually fly it any more than you would be expected to suddenly jump in and start manually caching files on your computer if an error popped up.

Thucydides said...

For the most part, both passenger jets and fighters have a computer between the joystick and throttle and the actual control surfaces and engines. The pilot "suggests" what he wants to do and the computer interprets the order and activates the controls, ensuring the pilot has not overstressed the airframe.

While this is probably not an issue with most spacecraft, a computer to moderate so you don't overrotate or tumble your spacecraft is probably going to be a common feature.
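[That mediation amounts to clamping the pilot's request to a safe envelope. A toy sketch, with an invented limit, of the idea described above:]

```python
# Toy fly-by-wire mediation: the stick input becomes a *requested* rotation
# rate, and the computer limits it to what the craft can tolerate.
# The limit below is an invented number, for illustration only.

MAX_RATE_DEG_S = 5.0  # invented attitude-control limit, degrees per second

def mediate(requested_rate_deg_s):
    """Clamp the pilot's requested rotation rate to the safe envelope."""
    return max(-MAX_RATE_DEG_S, min(MAX_RATE_DEG_S, requested_rate_deg_s))

print(mediate(12.0))  # 5.0: the computer grants only the safe rate
print(mediate(-3.0))  # -3.0: within limits, passed through unchanged
```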

Still, there is nothing like experience in the cockpit. The experience of the pilot of the so-called Gimli Glider (https://en.wikipedia.org/wiki/Gimli_Glider) as a glider pilot allowed him to bring the airplane to a safe landing despite an unprecedented situation. Similarly, the pilot who brought his A320 to a safe landing on the Hudson River after bird strikes crippled his engines also had thousands of hours in the cockpit, and so had a wealth of tacit knowledge to apply to the situation at hand. I would suspect that you would need a very high-fidelity simulation and thousands of hours in the VR seat to build the same wealth of tacit knowledge to draw on when the SHTF.

Tony said...

Expert systems are supposed to distill the knowledge of an expert into a decision support or even a decision-making tool. There are two fundamental problems with this approach:

1. The software programmer is by definition not an expert in the problem domain, so he doesn't know which questions to ask to elicit the fullest knowledge base from the expert.

2. Any given expert may know about most of a problem domain, but he probably doesn't know everything, sometimes because everything hasn't been discovered. So the fullness of the knowledge base is likely to be compromised by insufficient breadth or depth of information, even with the most highly regarded experts. So multiple experts have to be consulted to get the widest coverage, but often they disagree about critical assumptions or values. This leaves the programmer in a quandary about whether to decide for himself who is right, or to write a bunch of provisional knowledge rules that the user has to choose between at setup time. Of course, this means that the expertise of the system is a function of user or programmer prejudices, not really the knowledge of all of the experts consulted.

Then there is the meta problem of experts being a limited commodity to begin with, with many demands on their time. So one might find an expert system's knowledge base compromised by the limited availability of expert input, even with multiple experts consulted.

On top of all of that, there is the case (and this probably goes all the way back to Babylonian scribes working on clay tablets) where no human is the expert to begin with. For example, my boss wrote one of the first computerized taxi reservation and dispatching systems. He's been working with such systems (and an allied cash and credit management system) ever since. You'd think he knows everything there is to know about dispatching cabs and paying cab drivers, right? Well, there are still days that he emails a programmer with a request that boils down to: "Why are we doing things this way, and how?" The reason is simple. The knowledge base isn't a formal body of assertions about assumptions and data values. It's embedded in the structure of the system itself. When some problem comes up, a solution is generated by a programmer -- sometimes in direct consultation with the boss, but often not -- and enshrined in programming code and database definitions that nobody will ever look at again, unless a related problem emerges. And then nobody knows what is happening until some programmer does some code archeology and figures out what is going on.

Bringing this all back to human or computer piloting, any complex computerized management system is this way. An aerial or space vehicle management system is indeed built in this way. It's an accretion of the corporate knowledge of dozens of subdomain experts and -- more troubling for potential users -- solutions to emergent implementation problems that may or may not have been vetted by an expert. Now, aerospace companies are very big on documentation of programs and programming decision making. But those things are really only relevant for forensic inquiry, after the fact. So the solution is likely to always be control everything that can be controlled with an automated system, but keep a human expert (in a very narrowly defined but critical expertise, like piloting) in the loop to handle unexpected or out of computer comfort zone situations. And, oh yeah, make sure that the human expert can exert full, direct control at any time. Computer mediation of expert inputs can be just as bad as no expert input at all.

Hugh said...

How much human intervention is allowed in unusual circumstances is a continual issue for current day avionics. It is stereotyped as "Boeing vs Airbus" philosophies. Like many stereotypes they aren't really that distinct in real life but the kernel is there: should the flight controls allow the human to do something exceptional?

The "Airbus" philosophy is that humans often make mistakes, so the computer should assume that, say, trying to pull 5Gs in an aircraft only rated for 4G should not be allowed.

The "Boeing" philosophy is that maybe the pilot wants to pull 5Gs because the alternative is flying into a mountain, so allow it.

Rick's post suggests that NASA are on the Boeing side. Anyone know how much the Russians trust their computers?

Hugh said...

Another thought, latency might be a good reason to send human pilots along.

The Mars rovers are controlled from Earth, but how many actual decisions get made per day? I suspect that avoiding an unexpected rock on the surface of Mars is a lot less time sensitive than avoiding an unexpected rock in orbit.

Tony said...

NASA's hands-on philosophy comes with a couple of huge caveats:

Everything is simulated and rehearsed to death on the ground before it is done in space.

The real commander of the mission is the Flight Director in Houston. Pilots still pilot, but they don't make serious command decisions on their own.

WRT the Russians, they use automatic docking technology, but the crew continuously monitors the process, and there are several go/no-go steps where the crew has to give a proceed command. The crew also has manual controls that they can use in a docking emergency. However, in typical Russian fashion, if a Soyuz has to be repositioned to a different docking port on the ISS, they do that almost totally manually.

Tony said...

On interplanetary missions, the crew is going to need more than engineer/technicians. At least one should be a pilot with command experience. When your round-trip communication time to Mission Control can be up to half an hour, it will be necessary to have someone aboard who is in clear authority and has the experience to back that up. The collaborative approach seen in Apollo 13 is not going to work when somebody aboard has to be in command right now.

Brett said...

You will need a larger crew, to be sure - and that means more of a command hierarchy or at least some type of organization of roles and control. On really long missions, you're probably going to need a larger share of the crew dedicated specifically to maintenance and check-ups of the ship, so that nothing goes wrong (replacing the role that Mission Control does now).

@Hugh

It does greatly slow down the rate of travel your rovers can manage, though. Ideally you'd want either real-time control, or at least rovers capable of navigating around driving hazards en route to a destination.

Katzen said...

The main thing with any complex machine is that you will need both human and robotic crew. The trick is getting them to work with each other. I don't see machines doing everything so much as enhancing everything the pilot does. Take for instance a complex spacecraft: when an engine or power plant starts to go awry, a machine might automatically correct it, but without fixing an underlying problem it can't detect. A human, though, might not notice the slight change in power or acceleration. The computer can alert that something is amiss while correcting what it can, and the engineer on duty will start to troubleshoot.

Highlighting problems and balancing systems is probably best left to the computer, which is good at constant monitoring, while fixing and troubleshooting is best left to the human.

Elukka said...

Even LEO taxis have a designated commander. On the Shuttle, for example, the commander could make the choice to perform an abort in case Mission Control couldn't because communications were down.

Rick said...

Not for the first time, not for the last, I ignored the bright red stencil on the top of the can: CAUTION: CONTAINS WORMS.

AI has come to mean two distinct things, only indirectly related. One is 'strong' AI, familiar in SF, meaning roughly 'smart like us' - and very possibly smarter.

The other is AI as it has actually developed in research and industry, which doesn't need to be smart like us, just smart enough and cheap enough to replace human labor in routine tasks. Which is plenty enough to screw with the economy not to mention the whole social underpinnings of capitalism.

No SF fan is going to settle for a control panel with a single solitary button, labeled "Do Mission".

Though as pointed out downthread, a whole subgenre of dramas has been created around the 'Blow Up World' button. That said, the problem is the meta to end all meta, because ultimately we really want people* there, and taking positive actions, for story purposes, but you can hardly admit to that inside the story.

*Sure, you can treat 'strong' AIs as characters, but that is just treating them as people. Which assumes that high-level cognitive capability, 'intelligence', will be accompanied by self-motivated personality. Which I'm not so sure of.

It seems to me that if you write a story about an expert system AI, it is not about what the AI thinks. The AI doesn't think at all; it just crunches data and spits out an answer. You're really writing about the human institution that creates the AI - with all the ways it can go awry. Real-world AI is essentially bureaucracy, automated. (Remember that this can, in many circumstances, be a feature not a bug. There is a reason why people say 'RTFM'.)

For any drama at all might a human have to think as fast as a computer?

I'm doubtful that our nervous system, including the brain, can be driven very much faster. And the attempt could cause serious health damage. But do we really need to be faster, as such? Our strength is not number crunching, but mental leaps, shortcuts, intuition, by whatever name. Since we don't clearly understand it we cannot emulate it, which is why no one has succeeded in automating the art of programming.

the Solar Dragonfly not having a pilot, just maintenance staff and the chief engineer

Automation issues entirely aside, 'flying' a solar dragonfly would be a curious proposition. A pilot once said that flying the B-36 was 'like sitting on your front porch flying your house.' With a solar dragonfly it would be sitting on your porch flying a farm. Maneuvers would be so gradual that they would be 'navigated' rather than 'flown'.

Being able to handfly smaller spacecraft (the ones that do the maneuvering during rendezvous and docking) might be one of the skills an experienced spacecrew member would be expected to have, along with spacewalking.

At least one should be a pilot with command experience

I tend to agree, with the proviso per above that 'pilot' might not be a distinct rating, just a skill that an onboard commander would have. The ship and the mission will determine what priority piloting as such has.

Katzen said...

I think "pilot" might mean different things to different craft. To a small orbiter craft it might be perfectly reasonable to hand fly with a computer giving flight path and such. A large interplanetary craft would be more like piloting a modern day ship with more emphasis on the computer doing the minute adjustments and the pilot navigating the overall course with the help of simulations and computers.

When spacecraft can no longer be multimillion- or billion-dollar one-offs, and standards of reliability and care in each craft go down, a pilot will be necessary no matter what kind of ship it is.

Thucydides said...

@Tony

While Apollo 13 was a collaborative mission, in the end, Jim Lovell was definitely in command of the ship, and indeed it could not be any other way. Gene Kranz and his team on the ground had the resources, but were not "there" on board, so could only provide expert advice. Lovell was the man in charge, and could always override or veto suggestions (and indeed he did sometimes). Having direct knowledge of the situation, as well as his own experience as a naval aviator and astronaut to guide him, gave him the ability to act as an independent commander.

I suspect that this will be the case for decades to come, especially if/when people start going back into space for extended missions such as to NEO asteroids or to Mars, simply because the light lag will make directing things from Mission Control impractical. Future Jim Lovells (or Mark Watneys, after the fictional hero of "The Martian") will need to have the freedom and flexibility to use their own judgement and resources when operating in space.

This is actually an aspect of economic theory described by F.A. Hayek as "The Local Knowledge Problem", which, simply stated, says that information is widely distributed in time and space, and an individual actor can recognize and act on fleeting bits of information far faster than centralized organizations, where information needs to be passed up layers of hierarchy, processed, and instructions issued and passed back down to be acted upon. Delays between gathering information and acting on it can negate the initial observation and make the action moot or counterproductive. We see this in the military when they talk about "getting inside the opponent's OODA Loop" (Observe, Orient, Decide, Act), and in the common observation that more unconstrained free-market systems always outperform more centralized command economies. (Even if the historical examples are not particularly "free market" by our standards, so long as they were more free than the competition, Athens could beat Sparta, Elizabethan England could stand off Habsburg Spain, and the Serenìsima Repùblica Vèneta could be competitive with the Ottoman Empire.)

Tony said...

...just smart enough and cheap enough to replace human labor in routine tasks. Which is plenty enough to screw with the economy not to mention the whole social underpinnings of capitalism.

And not in a way that SF authors and futurists thought. We're not really dis-employing vast swaths of the population. (Though technology-driven unemployment is a problem.) We're turning most employees into semi-skilled labor. Anything that requires human hands we employ hands to do, but the knowledge of how and why to do things is increasingly built into the computer programs. So most people just follow written procedures and follow computer program prompts (my girlfriend -- sorry Tami, but it's true), while a small minority writes the procedures and an equally small minority (me) programs the computers. There's very little room left for craftsmanship, except in premium or boutique products and -- much to the dismay of the software engineering enthusiasts and prophets -- computer programming. (And yes, there's way more craft in computer programming than engineering.)

It seems to me that if you write a story about an expert system AI, it is not about what the AI thinks. The AI doesn't think at all; it just crunches data and spits out an answer. You're really writing about the human institution that creates the AI - with all the ways it can go awry. Real-world AI is essentially bureaucracy, automated. (Remember that this can, in many circumstances, be a feature not a bug. There is a reason why people say 'RTFM'.)

All absolutely true. The software of any system, no matter how simple* or how complex, is a reflection of the values and knowledge of the institution or organization that pays for the program's development.

*Even the standard first program in most programming language manuals -- printing "Hello, World!" to the screen -- is a reflection of societal values. "Hello, World!" seems to me to be a very American sentiment, after all.


I tend to agree, with the proviso per above that 'pilot' might not be a distinct rating, just a skill that an onboard commander would have. The ship and the mission will determine what priority piloting as such has.

The point I was trying to get across is that the non-mission specialist crew out of immediate touch with Mission Control needs a leader who has command experience and a risk assessment and emergency management frame of mind. Pilots, particularly fighter and heavy aircraft pilots, meet these requirements, plus possess a necessary set of mission skills.

Tony said...

While Apollo 13 was a collaborative mission, in the end, Jim Lovell was definitely in command of the ship, and indeed it could not be any other way. Gene Kranz and his team on the ground had the resources, but were not "there" on board, so could only provide expert advice. Lovell was the man in charge, and could always override or veto suggestions (and indeed he did sometimes). Having direct knowledge of the situation, as well as his own experience as a naval aviator and astronaut to guide him, gave him the ability to act as an independent commander.

Totally false.

The Flight Director's job description undermines this point of view: "The flight director may take any action necessary for crew safety and mission success." If that's his true job description -- and it is -- the crew in the spacecraft has to undertake the actions that he directs as much as anybody in Mission Control. Remember, the reason the Capsule Communicator (AKA "CapCom") had to be a qualified astronaut was so that the crew would hear commands not from some engineer, but from a fellow test pilot. But of course CapCom told them exactly what the Flight Director wanted to tell them -- his only real job was to couch commands in terms that would make orders sound more like suggestions.

WRT the flight crew's supposed superior knowledge of their situation, nothing could be further from the truth. While some data values were presented to the crew in the form of analog instruments, the vast majority of flight data was telemetered back to the ground, where the people who could understand and interpret it were. Mission Control told Neil and Buzz to continue with the first lunar landing when the 1201 and 1202 errors manifested. The crew in the LM didn't have the information to know what to do. Likewise, Mission Control had the data and, more importantly, the ability to integrate it into actionable information, to figure out what the problem was and what to do about it.

The Mission Commander was in charge for stuff that only he could do, like pilot the LM to a safe landing. But even then he was still just an agent of the Flight Director. If something came up that convinced the Flight Director that the landing had to be aborted -- and a lot of things could appear in telemetry that the cockpit crew just couldn't see, or couldn't reliably integrate in the time available -- then the Mission Commander would be responsible and accountable for pressing the Abort button. If he doesn't abort, he's wrong, no matter what the outcome is. It was in the mission rules that an abort call from the ground was definitive.

And here's the thing that many people don't get about that -- as test pilots, the Apollo astronauts were used to (and perfectly fine with) that regime. Flight rules were there for good reasons, and space flight was just another flight test in a lot of ways. Mission objectives were just test objectives with a different name. The Flight Director had the same authority over the crew as the test director had over the pilot during flight test. The pilot was in direct control of the machine, but the test was under the control of the guys on the ground.

Tony said...

...the military when they talk about "getting inside the opponent's OODA Loop" (Observe, Orientate, Decide, Act)...

Since you addressed this directly at me, I think I'll allow myself to observe that the day has not yet come when you are qualified to lecture me about Boyd's theories. I was reading this kind of stuff in 1990 in FMFM-1 Warfighting. In a note to Chapter 2:

"Tempo is often associated with a mental process known variously as the “decision cycle,” “OODA loop,” or “Boyd cycle” after John Boyd who pioneered the concept in his lecture, “The Patterns of Conflict.” Boyd identified a four-step mental process: observation, orientation, decision, and action. Boyd theorized that each party to a conflict first observes the situation. On the basis of the observation, he orients; that is, he makes an estimate of the situation. On the basis of the orientation, he makes a decision. Finally, he implements the decision—he acts. Because his action has created a new situation, the process begins anew. Boyd argued that the party who consistently completes the cycle faster gains an advantage that increases with each cycle. His enemy’s reactions become increasingly slower by comparison and therefore less effective until, finally, he is overcome by events. “A Discourse on Winning and Losing: The Patterns of Conflict,” unpublished lecture notes and diagrams, August 1987."

Unfortunately for your thesis, the important point is that one has to observe something meaningful about the situation in order to decide and act. The way NASA had things organized during the Apollo program, and still today, Mission Control was Information Central, not the spacecraft cockpit.

I suspect that this will be the case for decades to come, especially if/when people start going back into space for extended missions such as to NEO asteroids or to Mars, simply because the light lag will make directing things from Mission Control impractical. Future Jim Lovells (or Mark Watneys, after the fictional hero of "The Martian") will need to have the freedom and flexibility to use their own judgement and resources when operating in space.

I agree with the analysis, though not the first name you attach to it, for reasons already explained. But I also think that Clarke got things right in Rendezvous with Rama. Commander Norton was still absolutely and immediately (with communication delays taken into account) responsible to Mission Control for everything he did, but had local tactical control and administrative responsibilities. Think of him -- just like Lovell, to a lesser but still identifiable degree -- as the sergeant in charge of a reconnaissance vehicle, not the captain in command of the company as a whole.

Tony said...

Correction:

A couple of posts above, "Likewise, Mission Control had the data and, more importantly, the ability to integrate it into actionable information, to figure out what the problem was and what to do about it.", meant to read:

"Likewise, with Apollo 13, Mission Control had the data and, more importantly, the ability to integrate it into actionable information, to figure out what the problem was and what to do about it."

Rick said...

"The Local Knowledge Problem"

I think this is applicable to space operations, but in economic life it has not really played out that way since Hayek's day. In industry after industry, big players have muscled into formerly local markets. A century ago this was confined to industries like automaking that involved massive front-end investment. But now marketing muscle is sufficient to trump local knowledge. Organizations may give significant latitude to local managers, but they are still bureaucratic organizations.

We're turning most employees into semi-skilled labor

Yes. Any traditional skillset that amounts to mastering complicated but 'routine' procedures to get a predictable result is a target for AI, which can do the same job, possibly not quite as well, but even more predictably, and cheaper.

The fact that computer programming has been singularly resistant to automation is an odd and interesting fact. (It can't be put down to programmers preventing their own replacement, because that would call for too much of the wrong kind of cleverness.) When software can write better software than geeks can, then we can have a discussion about 'strong' AI, but I know of nothing to suggest that we are even kinda sorta close to any such capability.

Tony said...

The fact that computer programming has been singularly resistant to automation is an odd and interesting fact. (It can't be put down to programmers preventing their own replacement, because that would call for too much of the wrong kind of cleverness.) When software can write better software than geeks can, then we can have a discussion about 'strong' AI, but I know of nothing to suggest that we are even kinda sorta close to any such capability.

Well...the automation exists in the first place because of computer programming and programmers. As ol' Brother Neiblas used to ask us when discussing the problem of evil: can God make a rock he can't lift? And programmers are the gods of their software creations. (Not necessarily because we want to be, either.)

Self-aware AI would be, I think, that rock, but what we do now can't be automated, because it involves acts of imagination and creation.

Having said that, programming nowadays isn't what it was just ten years ago, much less twenty or thirty. A lot of what we do is in fact highly automated, in the sense that modern development environments come with gigabytes of pre-written code for common program items and features that you just plug into the program, setting a few properties; the code within the provided libraries takes care of the rest. Even while you're writing code, the compiler is constantly running in the background, highlighting your syntax errors for you. (Logic errors are your own lookout.) Very convenient, and a real force multiplier for productivity. So even the proverbial "code monkey" -- the guy or gal who did the drudge work of implementing the routine 90% of somebody else's design -- is being automated out of existence. Software developers have to be able to design the whole system from top to bottom and implement it, because there's just no more room left for routiners.

Rick said...

Software developers have to be able to design the whole system from top to bottom and implement it, because there's just no more room left for routiners

Thinking over the fairly simple sims and quasi-games I have written, the conceptual design (and debugging thereof) was almost all of the real work. In principle, I suppose, I could have designed in some abstractified form, like a flow chart, but at that point it is hardly more work - perhaps less work - to just write the thing in C, and have blocks of code I could run and test.

The way NASA had things organized during the Apollo program, and still today, Mission Control was Information Central, not the spacecraft cockpit.

Thinking about it, this illustrates how laughably far we are from truly robotic space flight. Just thinking of the big Mission Control center in Houston, it must have on the order of a hundred workstations, which in effect is the 'bridge crew' requirement for an Apollo mission. Or about 10x the size of the Trek bridge crew, though admittedly Mission Control includes the equivalent of Scotty's engine room control station, etc.

And in spite of our progress in cybernetics, I doubt that Mission Control staffing could be reduced by very much, because the constraint isn't instrument displays, but expert-level evaluation of what instrument readings may imply.

So, loosely thinking, we would need to 'streamline' space operations by an order of magnitude to reach the point where a large spacecraft might have an onboard Mission Control in operational charge.

Jim Baerg said...

Rick: "So, loosely thinking we would need to 'streamline' space operations by an order of magnitude to reach the point where a large spacecraft might have an onboard Mission Control in operational charge."

Aircraft currently have a mixture of on-board & on-ground control. Pilots are expected to follow the instructions of air traffic controllers, but still have a lot of responsibility & power to do what they consider necessary for the safety of the aircraft. Someone who has actually flown can say more about where the boundaries are.

I suppose we will want to go part way on putting Mission Control on the spacecraft as communication delays get longer.

fro1797 said...

I'm thinking that any manned mission beyond Mars would need Mission Control on board the spacecraft, with Ground Control being a source of non-time-critical information and major, but routine, mission milestone type decisions (go for orbit insertion on schedule yes/no, dispatch lander yes/no, commence burn for return trajectory on schedule, etc...). It may be that the senior officer on board a spacecraft on a mission to the outer planets might be the Flight Director, with both the Spacecraft and Mission Commanders deferring to this individual.

Ferrell

Elukka said...

I think the size of mission control now isn't necessarily directly applicable to on-board mission control (bridge crew). The limitations are different. Minimizing the number of crew is a big factor on a spacecraft, but on the ground it's not; you can always just build a bigger room. So, theoretically -- I don't know if this is actually the case -- ground-based mission control may have more people than strictly necessary. I'm not implying anyone there is extraneous, but if having a hundred people there is useful, there's no particularly pressing reason not to have them.

It feels a bit like attempting to extrapolate the specs of space nuclear reactors from ground-based reactors. It just doesn't work because the design constraints are so different.

Tony said...

Thinking over the fairly simple sims and quasi-games I have written, the conceptual design (and debugging thereof) was almost all of the real work. In principle, I suppose, I could have designed in some abstractified form, like a flow chart, but at that point it is hardly more work - perhaps less work - to just write the thing in C, and have blocks of code I could run and test.

I apologize if I gave the impression that the current software process is so formalized. In large shops, and even in some medium-sized ones, there is indeed a lot of formal design process and documentation. But in numerous small shops, like the ones I've always worked in, the process is much more informal, and much more like what you described.

By "design", I'm not necessarily talking about a formal intellectual process. I'm talking about knowing how to translate a set of specifications, often as informal as a suggestion written in a short email, into a working software system or module. It's surprising how many people with a Computer Science education can't really do that very well.

In terms of concrete implementation, what I do, and what most software developers do at least part of the time, is start with a basic set of elements and experiment their way to a working product. I can tell from the above quote that Rick at least is familiar with the process. But for those of you who aren't, what one does is start with an idea and write a basic framework of code that constitutes a minimal running program. Run it to see that it works as expected. Then one adds features, like database reading and writing. Run it to see if it works as expected. Then one might start writing more detailed procedures, like calculating statistics and organizing the output as an Excel spreadsheet. Run it to see if it works as expected. Test and repeat until you have all of the functionality that you need.

At no point in the process do you ever really formalize what you're doing. Your design, as it were, is your sense of how these kinds of things should work. That's why I think that software development, in the 21st Century anyway, is way more craft and a lot less engineering.

----------------------------------------

WRT Mission Control as an entity, and where it should be located, I think we've conclusively demonstrated that at least out to the orbit of Ceres -- which means for at least the next century, possibly longer -- we can control spacecraft both efficiently and well with a ground-based organization. (Actually we can control out past Pluto, if we expect only a minimal amount of maneuvering.) Even when one adds a crew to the equation, for most purposes things just don't happen so fast in space that Mission Control is excluded from the control loop by comm delays. For example, navigation will still best be managed by a combination of signal analysis (it's amazing how much position and vector data you can get out of Doppler) and spacecraft sensor data, integrated on the ground. And, except for stuff that is so far out of round that it causes the audible alarms to go off in the cabin, spacecraft system data is still going to be best handled and responded to by a large ground organization, with all of its available skilled manpower and infrastructure.

Gregory Johnson said...

I have a differing viewpoint of Tony's comment: "WRT Mission Control as an entity, and where it should be located, I think we've conclusively demonstrated that at least out to the orbit of Ceres -- which means for at least the next century, possibly longer -- we can control spacecraft both efficiently and well with a ground-based organization." -- or perhaps an alternate agreement. I agree that we "CAN", but depending on one's view of the PMF, it might still not make sense that we "SHOULD". How many missions are you running at once? If it is a few tens, and you have a well-funded space agency, no problem. If you have lots more activity (but still with less than 1,000 space crew total, potentially), then you run into scaling issues: a hundred people for ONE mission control is doable. If you have to have 10,000 people for ONE HUNDRED Mission Controls, you get funding problems. So, presuming that you have sufficient resources for a large, ground-based Mission Control, then I agree fully with Tony's point. But if you get private missions that run with a much smaller mission control, you might start asking why not just send them along.

Today, with our limited-run rockets, often launched by their builders in conjunction with a well-funded client, you don't get into space without a mission control. If rockets are an order of magnitude cheaper in the PMF, you might not have the same luxury of staffing a Mission Control even though it would be really nice to have one.

Rick said...

I'm somewhat inclined to split the difference, moving some of Mission Control aboard the spacecraft, once these are large enough to carry a control team, while also calling on ground support, which as Elukka notes is a whole lot easier to provide in logistical terms.

On the flip side, if you have a hundred human missions underway, their combined Mission Control teams might well reach the 10,000 range. What is the total number of FAA air controllers and other 'mission control' personnel in commercial aviation? Probably on that same order. I just checked: about 15,000 controllers, who handle ~30,000 commercial flights per day (plus about twice as many general-aviation flights). During peak traffic hours, 5,000-6,000 commercial planes are in the air over the US at a given time.

Translating that into 'equivalent missions' could get complicated, but it gives a general idea of what might prevail at some future date, if space operations can eventually be made more or less comparable to air operations. Which is sort of implied by my $1 million/ton rule of thumb, but probably would be achieved - if at all - only at the far end of the plausible midfuture.

Tony said...

I think there are two points to be made here:

1. There are already numerous Mission Controls in operation across the globe, including all of the various government, academic, and commercial missions going on all of the time. More manned space exploration -- and by that I mean ten or twenty times more -- might only double the needed capacity.

2. There's no reason why one mission control facility couldn't handle multiple missions at the same time. Certainly the test, training, and in-flight problem-solving capabilities of a large facility like the Johnson Center could be shared between many missions. Even the mission-specific facilities could probably be manned by reduced-size shifts most of the time, with full attendance only necessary for critical mission phases.

Michael said...

I like the analogy likening modern AIs to bulldozers. You hint at this in the article, but I think there are a lot of interesting advancements to be made in the coming years regarding feedback from computerized control systems. One of the great things about mechanical control systems is that they provide an amazing amount of information to the user (after a given level of experience). The most readily available example I can think of is a manual transmission automobile (and to a lesser degree an automatic). A driver experienced with the vehicle gets all kinds of input about what is happening in the car from vibrations, sounds, etc. It's useful information, too, because it's transmitted in a way that doesn't take the driver's attention off the primary task (driving), and it's information that is easily assimilated.

Computer-controlled but user-commanded vehicles, such as modern jet fighters, have to convey this information in useful ways that don't overwhelm the user with so much information as to be useless. The same is almost certainly true of hand-flown spacecraft, especially during actions with complex consequences. A gunship controller can't be deciphering the pressure readout in dorsal LOX pump 2 when she's supposed to be inspecting a cargo shuttle on an unfiled flight path that takes it far too close to the Congressman's shuttle. Visual bandwidth gets saturated so quickly with humans.

fro1797 said...

The USAF actually felt that the F-15 had about as much information as a pilot could handle and might even be too much in combat, so the F-22 has a much simpler cockpit layout. It still gives you all the info you need, but not all at once. It displays critical things first, like who is shooting at you, how much ammo or missiles you got left, and if an important piece of your aircraft is about to fail; then it lets you know whatever else you want to know. I'd think that combat spacecraft would have much the same type of information system. Enhance, not overwhelm.

Ferrell

Thucydides said...

Two points WRT Mission control and the Local Knowledge Problem.

While Mission Control has access to far more resources than the on-board crew did (or even does now), they are still at the far end of the line, and have to take, assimilate, interpret and act on the data, then send instructions back to the spacecraft. In ideal circumstances, the crew is fine with this, and Mission Control obviously does have the ability to drill down far deeper than they can. However, the crew are in the spacecraft, so some things will be immediately apparent to them, and they might be able to respond far faster than distant Mission Control. On board the spacecraft, there is also the issue of what the crew will actually do. The crew of Apollo 13 did use their own initiative and judgement to deviate from (or even refuse to do) Mission Control's plan, and ultimately there was little the flight surgeons or Gene Kranz could actually do about that; the spaceship was hundreds of thousands of miles away in space. The book Dragonfly: NASA and the Crisis Aboard Mir by Bryan Burrough is another example of how crews in spacecraft can behave in ways contrary to what Mission Control may deem optimal. Unless Mission Control actually controls every aspect of the ship the way JPL controls Mars rovers, the crew really does have the last word.

On a larger scale, Rick mentioned that the growth of massive corporations negates the Local Knowledge Problem, but I will disagree. Back in the late '80s or early '90s, I started hearing about an idea called "intrapreneurship", where corporations were devolving things down to units within the corporation in order to foster the ability to see and capture ideas without engaging the corporate bureaucracy. More recently, corporations like Apple, Google and Microsoft simply wait for entrepreneurs to come up with innovative new ideas and then purchase the companies and intellectual property. In essence, both ideas are ways for large corporations to "harvest" innovative solutions to fleetingly glimpsed opportunities. All that has really changed is that the mechanism for exploiting local knowledge has evolved in ways F.A. Hayek probably never envisioned.

M. D. Van Norman said...

Consider also that in the plausible far future, when humans would presumably be traveling and working routinely in interplanetary space, control of spacecraft systems could well be managed by some sort of cybernetic interface—in other words, controlled by thought. In such a scenario, manual controls would be the very last layer of redundancy. However, with sufficiently advanced navigation and flight-control systems, it would make little sense for humans to take direct control of spacecraft operations in most if not all circumstances.

We’re already starting to see these arguments being made in favor of self-driving cars, which will generally operate in much more chaotic environments than interplanetary spacecraft.

Tony said...

While Mission Control has access to far more resources than the on-board crew did (or even does now), they are still at the far end of the line, and have to take, assimilate, interpret and act on the data, then send instructions back to the spacecraft. In ideal circumstances, the crew is fine with this, and Mission Control obviously does have the ability to drill down far deeper than they can. However, the crew are in the spacecraft, so some things will be immediately apparent to them, and they might be able to respond far faster than distant Mission Control. On board the spacecraft, there is also the issue of what the crew will actually do. The crew of Apollo 13 did use their own initiative and judgement to deviate from (or even refuse to do) Mission Control's plan, and ultimately there was little the flight surgeons or Gene Kranz could actually do about that; the spaceship was hundreds of thousands of miles away in space. The book Dragonfly: NASA and the Crisis Aboard Mir by Bryan Burrough is another example of how crews in spacecraft can behave in ways contrary to what Mission Control may deem optimal. Unless Mission Control actually controls every aspect of the ship the way JPL controls Mars rovers, the crew really does have the last word.

The flight crew of Apollo 13 engaged in small rebellions, but they didn't dare refuse Mission Control navigation, system management, or life support instructions. IOW, they didn't dare refuse the instructions that saved their lives. Their local knowledge just wasn't good enough.

The same can be said of Mir, or ISS. Crews go about their daily business and do sometimes push back, or even just do their own thing. But when it comes to spacecraft systems and operations, their local knowledge is just insufficient. Mission Control decides when the complex is reoriented or maneuvered, how life support systems are managed, when other major activities are undertaken, and how they are to be accomplished.

There's no reason to believe future interplanetary flights would be any different. Spacecraft are highly complex dynamic systems. The capabilities necessary to monitor and control those systems are equally complex and extensive. It's just a fact of life that most of those capabilities can't make it into the mass budget of the spacecraft. If it helps you understand this better, think of mission control as the crew and systems that are absolutely necessary for mission accomplishment and flight crew survival, but which couldn't be taken along. And, even at interplanetary distances, most spacecraft system events are so slow in manifesting that the ground control establishment is just much better equipped to analyze and formulate safe and effective responses to them.

Tony said...

On a larger scale, Rick mentioned that the growth of massive corporations negates the Local Knowledge Problem, but I will disagree. Back in the late '80s or early '90s, I started hearing about an idea called "intrapreneurship", where corporations were devolving things down to units within the corporation in order to foster the ability to see and capture ideas without engaging the corporate bureaucracy. More recently, corporations like Apple, Google and Microsoft simply wait for entrepreneurs to come up with innovative new ideas and then purchase the companies and intellectual property. In essence, both ideas are ways for large corporations to "harvest" innovative solutions to fleetingly glimpsed opportunities. All that has really changed is that the mechanism for exploiting local knowledge has evolved in ways F.A. Hayek probably never envisioned.

What Rick was getting at is that large corporations make local knowledge irrelevant in a lot of cases. When you make the thing that everyone wants, like pickup trucks, or run a profitable system of food service, like McDonald's, local knowledge becomes secondary to the power of carefully cultivated and massively supported ideas. Now, there is room for local variation. For example, in Muslim countries, McDonald's doesn't serve pork. Where I live, if you want a pickup truck -- any make or model -- you're buying 4WD, unless you want to make a special order. Those are both local variations on the overall product. But they are still well-thought-out variations that support the overall brand and organizational goal. These variations are no different -- and no more significant -- than the variation in design you might make between two spacecraft, one meant to bimble around in Earth orbit, and one meant to undertake an interplanetary mission. Neither says anything about local operational autonomy, just what variations in design and implementation are necessary to operate in a known local environment.

Now, having said all of that, there is room at the local franchisee or office level to make decisions regarding purely local administrative matters, or to handle emergencies that can't be submitted to the home office with time to get a useful resolution. But here again the corporate system intervenes, defining the scope of local autonomy, and providing rules governing just when and how that autonomy can be exercised. To exceed that scope or disregard the rules can be career suicide, just like not listening to Mission Control can be personal suicide.

Tony said...

Consider also that in the plausible far future, when humans would presumably be traveling and working routinely in interplanetary space, control of spacecraft systems could well be managed by some sort of cybernetic interface—in other words, controlled by thought. In such a scenario, manual controls would be the very last layer of redundancy. However, with sufficiently advanced navigation and flight-control systems, it would make little sense for humans to take direct control of spacecraft operations in most if not all circumstances.

We’re already starting to see these arguments being made in favor of self-driving cars, which will generally operate in much more chaotic environments than interplanetary spacecraft.


The environment of a motor vehicle changes rapidly from moment to moment, but the vehicle itself, and what it can do in that environment, is very simple compared to a spacecraft.

Controlling spacecraft by thought turns out to be a very bad idea, because very few things in space need to be done right away. Those things that do need to be done right away generally involve manual labor, or at least highly skilled and exhaustively trained input.

victor wooddell said...

Thank you Rick for starting the site up again. I remember the day when you could come to this place and see 300 comments (although this is the first time I have ever posted a comment here). I firmly believe that it will get there once again. Just give it some time.

This particular conversation is interesting, because it turns out that handflying vs. automation is a systems problem. The decision of what flight operations to automate, what to leave to the on-board crew, and what should be managed by a flight control back at the point of origin will depend upon specific circumstances. There is an entire spectrum of options to choose from, and which will make the most sense in a particular case will primarily be a function of the mission.

Which connects to an interesting point that was raised in one of Rick's earlier threads: what is the economic incentive for sending humans at all? It's expensive and dangerous to send people out there -- a cost factor that increases significantly with distance from Earth. There may very well be decisions that only an on board pilot could make, in the heat of the moment, using information only he could know -- but unless the mission is such that it's intrinsically worth risking his life over, that may not matter. New Horizons might have run into a rock, but that's no reason to send a human to Pluto -- it's a reason to make space probes cheap enough to replace them if we have to. This may seem counter-intuitive: I'm suggesting that the farther from Earth you send something, the less it makes sense to include a human crew. Conversely, the closer the destination is, the easier it is to justify the cost. Craft designed for close orbital operations are highly likely to be manned, just as Rick suggested.

Tony said...

Steven Squyres, the principal investigator for the Mars Exploration Rover program, is very fond of pointing out that what it takes a day to do with a remotely controlled rover, a trained geologist could do in five minutes. This is where local presence and knowledge is actually an advantage -- exploration, not spaceship flying.

victor wooddell said...

In regards to resource management, there is often a trade-off between time and money. You can save money spent on an activity by investing more time; or you can save time by investing more money (should you build your own home addition, or pay someone else?). The time saved by sending humans to Mars (or beyond) is clear and unambiguous, but what of the cost?

Tony said...

It's not just the time saved, it's also expanding the scope of direct human experience. Since that's kind of the whole point anyway, the cost is acceptable for sending at least several humans every few years.

victor wooddell said...

Acceptable to whom? To the people paying for it? If that were the case, we would still be sending manned missions beyond Earth orbit. The general public seems to want a stronger payoff before they will support that side of NASA's mission.

Tony said...

In case you missed it, US space policy is to send humans beyond Earth orbit. The US just isn't moving at a Cold War money-means-nothing pace. Would the Space Cadets love to see an Apollo for Mars (TM)? Yep. Is that realistic right now? Nope.

victor wooddell said...
This comment has been removed by the author.
victor wooddell said...

Sorry to be a cynic, but I'll believe it when I see it. I'm pretty skeptical regarding the economic incentives to send numbers of people into space. With one exception, I don't believe they are there. The one exception would be if life were discovered somewhere in the Solar System. That would be an absolute game changer. Short of that, I'm not sanguine.

Rick said...

Welcome to new commenters! (And longtime readers!)

what it takes a day to do with a remotely controlled rover, a trained geologist could do in five minutes

Agree in principle (no surprise!), but as usual it is more complicated in practice. Mars I, when we send it, probably will not land where the geologist would like to. It will land on a nice, boring flat plain, chosen for maximum safety in doing something intrinsically dangerous, landing on a planet for the first time. Only on later missions will they risk landing at more interesting but riskier locations.

I'm working on a front page post about space exploration - a subject I've barely ever touched on here. The (tentative) TL;DR version is that it will be both gradual and expensive, because every mission is breaking new ground technically and operationally. Over time we may learn how to streamline operations, but the line between 'streamlining' and cutting corners is a delicate one. The early missions, at least, will be very expensive, which means fairly few and far between.

The general public may only have a hazy notion of what we are, or aren't, doing in space. I recall reading a blog post by (IIRC) an engineering prof, many of whose students assumed that the ISS was off somewhere in translunar space. Which though incorrect is not entirely inappropriate, since the ISS is basically a training mission for long duration spaceflight.

Gregory Johnson said...

Thinking about Tony's comments, he presents a very different possible PMF -- one that may not be Rocketpunk, but has its own interesting consequences.

Your interplanetary craft is flown by systems managers on the ground for the entire mission; the only reason you have a pilot aboard is in case one of the automatic systems needs to be overridden during a docking maneuver. This means the ship doesn't have a "commanding officer" but rather a "department head" who organizes the payload and mission specialists aboard and makes sure they get fairly allocated experiment space. This ties in with keeping a "short leash" on weapons of mass destruction (propulsion systems): if you are a civilian operation, you don't get control over the ship. On the plus side... this means you CAN have minuscule crews of 4 or 5, who will die horribly if something goes seriously wrong, and who don't need to bother with a watch rotation because the ground will wake them up if something happens.

A military craft, if such exist, may have a different approach (or may not if encryption remains secure), but it allows for a fascinatingly different sort of mission profile. And in this case, a police-like patrol would fall under the more civilian model, with the ship taking them WHERE they need to be, and the people aboard simply having to be ready to board their targets when they get there.

victor wooddell said...

"I'm working on a front page post about space exploration"

Ohhhh -- I can't wait for that! I'm probably going to be the devil's advocate though, because I don't believe that in this political environment the public will pay for manned space exploration. With the one exception that I mentioned.

OTOH- Given the premise of manned routine interplanetary space flight, then the ideas Tony and Gregory are discussing sound like an exceptionally cost-effective way to go about it.

Tony said...

Sorry to be a cynic, but I'll believe it when I see it. I'm pretty skeptical regarding the economic incentives to send numbers of people into space.

I don't think I ever said that there was an economic incentive. The incentive is more focused knowledge acquisition, and the expansion of the human experience. IMO that incentive does, to the extent of several explorers every few years, counterbalance the cost.

Tony said...

Agree in principle (no surprise!), but as usual it is more complicated in practice. Mars I, when we send it, probably will not land where the geologist would like to.

The Mars Exploration Rovers were put down in some pretty unambitious terrain. Look at all of the science they've provided. And that's the kind of science Squyres was talking about.

victor wooddell said...

For those of you who are interested, and because Rick said he might start a thread on the subject, I thought I would share some information on manned vs. unmanned space exploration that I found.

Here's a study by MIT on the benefits of manned spaceflight. Here's the best justification they could come up with: "We define primary objectives of human spaceflight as those that can only be accomplished through the physical presence of human beings, have benefits that exceed the opportunity costs, and are worthy of significant risk to human life. These include exploration, national pride, and international prestige and leadership." I'm not certain I agree that the objectives they list meet their own criteria.

Here are a variety of other informed opinions on the matter: Is Manned Spaceflight Obsolete?. OK -- on the one hand, this article provides the best comparison of the two approaches I have found, and explains the one rationale for sending humans that might convince me: the search for life on Mars. While I find the arguments in the "Humans Only" column unconvincing, I might be argued into a "Mixed Mission" -- in other words, sending a couple of humans along with several dozen robot probes, leaving the humans in orbit while the probes search the surface (and subsurface). The reason I might be argued into this is because, although finding life on Mars is probably a long shot, if we did find it there, the implications back here on Earth would be incalculable. The science of biology would leap ahead by an entire paradigm.

Another set of articles comparing the advantages and disadvantages of manned space flight: The Con Side: Manned vs. Unmanned Space Exploration (Part 1), The Pro Side: Manned vs. Unmanned Space Exploration (Part 2).

While a little off topic, this relates to the extent that if manned space exploration is too expensive and impractical, it implies that the fewer people you send to accomplish the mission the better, which strengthens the "Anti Hand-flying" argument.

Tony said...

Exploration and national and international standing have always been the only enduring reasons for human space flight. You're not telling us anything we don't already know.

I have to ask, did you actually read the pro and con articles you posted links to? If you didn't, you should -- the con article refers to the James Webb Space Telescope as the "Jack Webb Space Telescope". If you did read the article and posted it anyway, shouldn't one seriously question your knowledge of the subject?

victor wooddell said...

You should seriously question my knowledge (and everyone else's) regardless. As we should question yours. That's how understanding is advanced, after all. I did read the articles, I did not catch that particular error, but now that it has been pointed out to me, I'm not certain that it invalidates the rest of the arguments. In any case, I don't think we can assume what everyone knows.

Tony said...

You should seriously question my knowledge (and everyone else's) regardless. As we should question yours.

ISWYDT

I did read the articles, I did not catch that particular error, but now that it has been pointed out to me, I'm not certain that it invalidates the rest of the arguments.

Not trying to invalidate any of the arguments, on either side.

In any case, I don't think we can assume what everyone knows.

Actually, that's one thing we can safely assume. There's definitely no orthodoxy here, but everyone that regularly contributes has made up his mind about the value of human space exploration. Not one of them is so shallow as to have done so without thinking things through on at least the level expressed in your links. FWIW. Maybe we should all meditate on that a little while.

victor wooddell said...

Tony -- we pay people to be journalists so that they can do the research the rest of us don't have the time or energy for. They do the interviews, look up the facts, and write the articles so that less informed people like myself can read them and improve our knowledge. If I lack the expertise to properly critique these articles, then what better option do I have than to share them with others, such as yourself, who might be able to educate me? If you are aware of flaws in the arguments presented, I wish you would share them, so I could benefit from the insight.

Tony said...

Tony -- we pay people to be journalists so that they can do the research the rest of us don't have the time or energy for.

I don't. I pay scientists and historians (and some journalists) to do good work by buying their long form works, while studiously avoiding short form articles found in periodicals and on the web. IOW, what is your investment in the subject? What are you willing to invest that you haven't already?

If you want to become more invested, I'm sure there are many here that will help you with suggestions for study. But start with very focused questions, because highly invested people really can't imagine where to tell much less invested people to start. What is obvious to us is probably not obvious to you. For example, I could tell you to get yourself a copy of Bate and learn something about orbital mechanics, or spend several hours a week on astronautix.com (AKA Encyclopedia Astronautica), learning about the technical history of space flight. But I don't know if your math pay grade is up to Bate, or if your general technical background is up to the prerequisites assumed by the compiler of Encyclopedia Astronautica.

The one place I can tell you to access that you may find something of interest is Atomic Rockets. It's a web site designed to be a resource for prospective science fiction writers that want to get space right. But you still have to be comfortable with math, physics, and technology. Because the creator of that site is also a highly invested person, and he pretty much assumes you're at least determined to become invested.

victor wooddell said...

I understand how valuable your time is. Thanks for the tip.