Wednesday, September 26, 2012

Endeavour's Passing

Shuttle Endeavour above the Golden Gate Bridge

I never got to see any of the Shuttles during their active service as spacecraft. Planned West Coast military launches from Vandenberg were scrubbed after the Challenger loss. I did hear the double sonic boom a couple of times when Shuttles crossed the coast en route to landings at Edwards AFB.

And only by sheerest luck did I end up seeing Endeavour on its final piggyback flight last week: Paula happened to turn on local TV news half an hour or so before it overflew San Francisco en route to retirement in Southern California.

It was an impressive sight. But like the sight of USS Iowa passing under the same bridge, it was a somewhat melancholy occasion. What in my early adulthood was the spaceship of the future has become a spaceship of the past.

Will we see its like again?

The Shuttle program was star-crossed in multiple ways. Thus its experience does not provide a 'fair' test of reusable spacecraft. This is the good news.

First of all, it conflated the roles of experimental prototype and operational vehicle: a beta pushed into production. Its development budget was squeezed, compromising the design, which was compromised further by the demand for an enormous payload capacity.

Given all these fundamental shortcomings it is remarkable that it succeeded at all. It surely cost far more to operate over its service career than either a capsule atop a conventional rocket or a smaller, fully reusable spacecraft, refined from a prototype, would have cost - not to mention the greatest and most needless cost, two large crews. Spaceflight is dangerous, but the points of failure for both Challenger and Columbia were direct results of the flawed development process.

We cannot say how much a more robust Shuttle would have cost to operate, or how safe it would have been. All we can say is that it would have been cheaper and safer than the ones we actually flew.

Unfortunately, we can also say that a more robust Shuttle would have been - would still be - horrendously expensive to develop. The projected development cost of the original Shuttle design, circa 1969 - before the compromises mentioned above - was on the order of $10 billion. This is equivalent to $60 billion in present-day dollars. Hear the deafening sound of wallets snapping shut.

And it gets worse. Assume a 30-year service lifetime, with monthly launches - less than hoped for, but a lot more human spaceflights than we have actually flown. The apportioned development cost - ignoring interest and such - thus comes to about $170 million per mission. Remember, this does not include any of the costs of actually flying the missions, or training the astronauts, or anything else: It is just the development cost leading up to the first operational flight.
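That apportionment is easy to check. A back-of-the-envelope sketch, using the rough figures quoted in this post (the round numbers and variable names are mine, not official program data):

```python
# Back-of-the-envelope apportionment of Shuttle development cost.
# Figures are the rough estimates quoted in this post, not official numbers.

dev_cost = 60e9          # ~$10B circa 1969, inflated to ~$60B present-day dollars
years = 30               # assumed service lifetime
launches_per_year = 12   # assumed monthly launch cadence

missions = years * launches_per_year    # 360 missions over the service life
per_mission = dev_cost / missions       # apportioned development cost per flight

print(f"{missions} missions, ~${per_mission / 1e6:.0f} million each")
# -> 360 missions, ~$167 million each
```

Round it off and you get the roughly $170 million per mission cited above - before a single operational dollar is spent.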

There is also a line of argument, all too credible, that a truly viable reusable spacecraft - one that is cheaper in the long run than expendable rockets - is just not attainable at our tech level. We know a lot about building large, lightweight structures, along with powerful engines able to drive them into the upper atmosphere or even into space. We can do significantly better at these things now than we could fifty years ago, but not dramatically better - an indication that our technology in these areas is pretty mature.

But getting into space is so intrinsically difficult that our normal technique involves large, expendable boosters or lower stages. Payloads are, at most, a few percent of launch weight. And the problem for reusable spacecraft is that they must carry heavy fittings, such as heat shields, along with wings and landing gear, that their expendable counterparts can do without.
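That "few percent" figure falls straight out of the rocket equation. A minimal sketch with assumed round numbers (roughly 9.4 km/s effective delta-v to low orbit including gravity and drag losses, and a hydrogen/oxygen exhaust velocity of about 4.4 km/s - illustrative values, not figures from this post):

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1), so the
# liftoff-to-burnout mass ratio is m0 / m1 = exp(delta_v / v_e).
delta_v = 9400.0   # m/s, assumed effective delta-v to low orbit
v_e = 4400.0       # m/s, assumed hydrolox exhaust velocity

mass_ratio = math.exp(delta_v / v_e)   # liftoff mass over burnout mass
dry_fraction = 1.0 / mass_ratio        # everything that isn't propellant

print(f"mass ratio {mass_ratio:.1f}, "
      f"dry mass {100 * dry_fraction:.0f}% of liftoff weight")
```

Under these assumptions only about 12 percent of liftoff weight is anything other than propellant - and tanks, engines, and structure eat most of that, leaving payload in the low single digits. Every kilogram of heat shield, wing, or landing gear comes straight out of that sliver.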

A simple design burdened with these heavy fittings probably couldn't reach orbit at all. But a design refined to the point that it can reach orbit is liable to be so extreme in its specifications that it requires extensive tear-down and inspection, and perhaps refurbishment, after every flight. Which defeats the whole point of being 'reusable.'

In all of this there is a glimmer of hope: We are not dealing here with 'cold equations' but with devils in the details, and the lines between not-quite-feasible and just-feasible are pretty fine. And even as the Shuttle rode off into the west, the US began operating another spaceplane, the X-37B.

This is in no sense a 'Shuttle replacement.' It is very much smaller, launched atop a conventional expendable rocket, and it is unmanned. It is also a classified Air Force project - even though it began at NASA and passed through DARPA's hands - meaning that not much is being said about either its performance or its missions. But it may well be more operationally robust than the Shuttle - in particular, safer during re-entry.

Incremental progress in mature technologies is glacially slow compared to the Moore's Law-style progress seen in tech revolutions. But in the course of this century we might (or might not!) gradually develop our launch capabilities to a level approaching what the Shuttle once hoped to achieve.

After which, things could get interesting.


Note: A recent, truly awesome XKCD comic has a relevant comment on space rocketry. You will have to look ... carefully ... to find it.

Another Note: Unrelated to this post, but a blog reader has done the service of compiling my Planetary Climate Sim into Linux and Win32 executables.

The sim itself is designed primarily to test the effects on an Earthlike planet's climate of greater orbital eccentricity or different axial tilt. It also has some settings for different average locations within the habitable zone, greenhouse gas composition, differing proportions of ocean and land surface, and so on. But these things are far more complex, and pretty much above my pay grade.

I don't warranty the results! And I haven't tested the Linux and Win32 versions at all - let me know how they work! I'll pass any bug reports along to the contributor.

The image of Endeavour and its 747 carrier above the Golden Gate Bridge comes, a bit paradoxically, via the Baltimore Sun.

Friday, September 14, 2012

Artificial Intelligence, Human Intelligence

Filming Jeopardy!

The comments on a recent post about science fiction, A Literature of the Possible, included a considerable subthread about HAL, the ship's computer in 2001: A Space Odyssey. The discussion centered on HAL's literary role, and whether its function in the story could have been played by either a human mutineer or something - such as a shark - not claimed or expected to have anything like 'intelligence' in the human sense of the term.

Here I will take the discussion in another direction, going boldly where many have gone before, including my own blog post on What Do AIs Want? - along with several earlier comment threads, which I am too lazy to dig out.

In that post, I discussed SF tropes regarding AI and its response to humans, starting with the ever-popular insane malevolence and ending with:
It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.

Of all the possibilities regarding AI, this may be the one we least expected beforehand. So of course inevitably - thus far, at least - it seems to be the one that is coming true.

This was not expected because the only high-level cognitive intelligence we had to go by was our own. So we imagined artificial intelligence to be, at the least, similarly mysterious. This was the case long before computers or robotics were even imagined.

Roger Bacon was said* to have once created a brazen head, capable of speech and, implicitly, of thought. 'Time is,' said the head. A little later, 'Time was.' Still ignored (by Bacon's stupid servant, Miles), the head finally said 'Time is past!' - and shattered into pieces.

AI as a concept has been making heads explode ever since. Bacon's brazen head, as described, was not out to kill or enslave anyone. But it was enigmatic. You don't know quite what it was thinking, or feeling, any more than you knew what HAL was thinking and feeling behind that unblinking red eye.

What we took for granted is that the AI would indeed be thinking, in some way at least broadly comparable to our thinking. And it would likewise be feeling ... something.

The ability to play chess well was long taken as a touchstone of 'real' AI. At one point, HAL mentions playing chess with Dave. Even in the early 80s, the supercomputer in WarGames, after learning from its sim runs that nuclear war is unwinnable, suggests playing 'a nice game of chess.'

I even remember an explanation of why chess made such a good benchmark: Because the brute-force solution of playing out all permutations to find out which ones worked would require a million trillion zillion moves. Even an IBM computer could never run them out, certainly not fast enough for a chess match. So for a computer to play grandmaster-level chess it would have to do whatever human grandmasters do. QED!

It did not work out that way, and QE wasn't D'd. As it turned out, of course, crunching through permutations by brute force is exactly how IBM's Deep Blue beat Garry Kasparov in 1997.**
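'Crunching through permutations' here means exhaustive game-tree search: at its heart, plain minimax. A toy sketch on a simple take-away game shows the idea (illustrative only - a real chess engine adds alpha-beta pruning, hand-tuned evaluation functions, and vastly more hardware, but the core is the same brute-force search):

```python
# Brute-force game-tree search on a toy take-away game: players alternately
# remove 1-3 stones from a pile; whoever takes the last stone wins. The
# search simply tries every line of play to the end -- no insight required.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move can force a win from this position."""
    if pile == 0:
        return False  # no stones left: the previous player took the last one
    # A position is winning if some legal move leaves the opponent losing.
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

# Known result: piles divisible by 4 are losses for the player to move.
print([n for n in range(1, 13) if not wins(n)])   # -> [4, 8, 12]
```

The search "discovers" the multiple-of-four pattern without ever representing it - exactly the sense in which Deep Blue played grandmaster chess without doing whatever grandmasters do.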

We still have only a hazy notion of how human grandmasters play chess. But by way of intriguing insights, I have read (possibly in a blog comment here) that while powerful chess computers can beat any 'unassisted' human player, chess masters using simple commercial chess programs can beat any computer playing on its own, without a human partner.

In any case, most AI work today is not at all about trying to emulate the still-mysterious workings of the human mind. It is about amplifying human intelligence - that of skilled programmers and subject-matter experts - by harnessing it to the brute-force power of computers.

This does not rule out the possibility that work on brain structure and psychology - perhaps amplified by expert systems - may yet unlock the mystery of human thought and lead to machines that really do think. But there is no particular reason to anticipate this in the plausible midfuture. Particularly since such work does not have the clear practical applications that expert systems do.

All of which casts doubt on some familiar SF tropes. Not all of them! An expert-system political machine (in the literal sense!) might amplify and implement its designers' scheme to conquer and enslave humanity. And if they were careless in algorithm design, it might fail to exclude them from enslavement.

But it will do so with no ambition or passion of its own - only by implementing its designers' schemes. And human freedom fighters won't defeat it by persuading it to adopt the ideas of John Locke or the Buddha. If they defeat it, they will do so by identifying limits or errors in its implementation (perhaps using their own expert systems to reverse-engineer these), and posing problems it cannot efficiently solve.

Faust lives. Frankenstein's monster, not so much.


* In a play by Elizabethan playwright Robert Greene. Greene also made a snide remark about William Shakespeare, which has the distinction of being the first notice of Will's presence in the London theatrical scene.

** Disclosure: I have an ongoing work gig on behalf of IBM. But in this case I don't have to cop to a shameless pander - Big Blue's work on AI truly is impressive.

The image of Jeopardy! comes from this Flickr page.

Monday, September 3, 2012

All In a (Future) Day's Work

Today is Labor Day in the US, a holiday we evidently borrowed from Canada, and ultimately from Australia. In the linked post I talked about working in space. Here, as you might guess from the title, we'll consider the future of work, whatever planet it is performed on, including none.

The past is prologue: For most of the last ten thousand years, extending to the origins of Labor Day in the century before last, work was largely synonymous with agricultural labor. And, all too largely, work was associated with more or less naked forms of exploitation - sharecropping, rack-rent, serfdom, and so on down to outright chattel slavery.

Agrarian Age exploitation had effects beyond the purely economic: Think of medieval villeinage (a form of serfdom) and the etymology of 'villain.'

Exploitation was often less intensive in environments where agriculture was so marginal that not much could be squeezed out of the peasants. Thus mountainous uplands and other rugged environments were often associated with both poverty (even the lords were poor) and freedom. Montani Semper Liberi, goes the motto of West Virginia: Mountaineers are always free. Much of the Western conception of freedom is rooted in this tradition. Thomas Jefferson might be a plantation slaveowner, but his ideas were built around independent small farmers.

Cities, with their more complex market-based economies, fostered a different sort of freedom. In the medieval German usage, Stadtluft macht frei: City air makes one free. Urban freedom had much less to do with economic equality, and much more to do with a dynamic balance of power between money and labor interests.

Markets in themselves are inherently oligarchic: one florin, one vote. But an alliance of quasi-monarchical state interest and a populist interest can push back against the oligarchs. Machiavelli was the first to notice that the 'conflict of the orders,' so long as it did not get out of hand, could be a positive basis for freedom.

His ideas - with his name filed off - contributed more than Jefferson's to the theory of the US Constitution: Compare its strong federal government with the weak central institutions of the Articles of Confederation.

In the Agrarian Age, both mountain freedom and urban freedom were special cases. Exploitation was the norm, embodied in latifundia, manor, and plantation. In the industrial age - which is essentially urban - mountain freedom is even more marginalized, but urban freedom has become widespread. Indeed it has become rather normative, even if often honored in the breach.

Well, that turned out to be a rather lengthy prologue. Now, what of work in the future?

One possibility, which has sometimes come up in comment threads here, is that technology will lead us to a post-scarcity future. Economists will say there can be no such thing, because human desires are limitless. But we still come from the primate house, with some basic physical needs and comforts. Once we have ample food, we don't want more of it - we want instead some combination of tastier, more convenient, and more appealing to our vanity.

The higher the productivity level, the more things come down to vanity. Whether or not it is technologically feasible, we can at least imagine a world where basic physical comforts are so readily provided as to be nearly free. But in this same world there is almost no demand for productive labor, and it is not quite clear how this nearly free stuff gets distributed - let alone the higher pleasures.

This post-scarcity economy is often imagined as hobby-driven, with people 'working' for the sake of self-satisfaction, the way I write this blog.

But the economy could equally well be a crass, somewhat creepy mix of celebrity culture and Thorstein Veblen. A world where those in a position to do so hire everyone else, at nominal wages, as personal servants - not to do any work that even cheaper robots couldn't do just as well, but for the sheer ego gratification of showing off how many servants they have. This too is primate-house behavior.

Or the post-scarcity economy could be a mix of both.

But a post-scarcity economy is, alas, scarcely a given. Technological progress tends to come in leaps, followed by longer periods of maturity with only gradual, even glacial progress. So another possibility, at least for the midfuture, is a decelerando, a technological world that stabilizes at a level somewhat higher than today's, but only somewhat.

Such a world has some rather sobering implications. It will not be a world without work, because a robot capable of doing, say, restaurant kitchen work (and doing it more cheaply than Latino immigrants) is a doubtful proposition. But it may be a world without very much high-paying 'good' work, especially if the oligarchic tendency of markets is unchecked by political 'countervailing power.'

In such conditions, the upward concentration of wealth means that investors will have plenty of money. But given a low rate of innovation, due to largely mature technology, they will struggle to find profitable places to invest it.

Their desperate search for high rates of return will fuel asset bubbles, from Impressionist artwork to real estate. And the financial shocks from successive popping of these bubbles will leave the economy sputtering, performing below even its constrained technological potential.

If this world sounds rather familiar, it should - it is arguably the world that has been emerging, at least for the already-industrialized economies, in the last decades of the 20th century and the beginning of the 21st. We still have plenty of innovation in some industries - especially 'tech' in the sense of computer-related - but unlike mid-century tech progress it is not creating all that many jobs.

If the decelerando scenario is correct, this will become more and more the case.

All of which, in an urbanized, post-industrial future, could be fuel for a new 'conflict of the orders.' Which could take catastrophic forms. Or, channeled into political rather than violent conflict, might well take the positive, freedom-generating form outlined by Old Nick Machiavelli.


The image of Carolingian peasants comes from a blog about the history of cooking and food.