February 28, 2002:

The Wall Street Journal for February 26, 2002 has a front-page story on the way that the major record companies dump tens of millions of dollars into singers and bands, hoping to generate the next major hit. (The article itself is not available except to subscribers.) Of course, in trying to create the next Britney Spears, they usually fail, because there already is a Britney Spears, and undistinguished clones of blockbuster acts have no reason to succeed. Duhh.

Yet they try, and when they fail, they recoup their losses by trapping successful acts in contracts that siphon off most of what the successful acts earn to pay for all the failures the labels attempt to promote to stardom. In most other contexts, this would be called thievery and would be prosecuted, but Big Media seems to live by a different set of rules.

As with most truly huge corporations, the labels prefer to control the market rather than compete. It struck me, reading the WSJ article, that they never actually listen to obscure acts that people seem to like. They much prefer to take a wholly artificial act and spend money on it until it either explodes into a megahit or completely craters. Why? They control what they create. If they allow acts to rise up from obscurity to stardom on their own, they will be in an inferior bargaining position relative to those acts, and big companies hate that. Big book publishing is very much the same way these days: Everybody wants to create the new Harry Potter, even though there already is a Harry Potter, and the Next Big Thing almost by definition cannot be anticipated.

Interestingly, the labels' job is made more difficult because radio station ownership has consolidated into a very few megacorporations, which only want to play hits, and not obscure local bands who might get some legs if they just got some exposure. Years ago, FCC rules forced radio and TV stations to be locally owned, so local bands and sometimes just weird, off-the-wall stuff got some exposure, allowing it to have a chance in the market of public preference. No more. These days the labels send demo CDs of new acts to the stations, which play the lead cut twice, and if the fans don't storm the gates instantly demanding more, they shitcan the CD and go on to the next one. Local acts need not apply.

There's also the problem of manhandling the record distribution channel...but I've gone on long enough for one day. My point? The record labels' problems are largely of their own making. Sales are falling because they're trying to sell us the same old stuff, and we already have enough of that. (I simply don't listen to pop music anymore.) Blaming it on Napster is naive in the extreme.
February 27, 2002:

I know that a lot of people who used to read Visual Developer Magazine read my diary here, so it's appropriate to mention that the two people most responsible for the artwork side of the magazine are available for freelance projects, and I give both of them my unhesitating endorsement. Gary Smith is the man who drew every one of VDM's startlingly original covers (plus a few from the last months of the PC Techniques era) and has done a great deal of work in other high-tech areas, particularly with respect to transportation. His degree is in industrial design, and he's worked for fifteen years on the automotive design staff at General Motors. He operates his own design firm, Performance Design, and you should check out his art portfolio there for the sheer beauty of the material he's worked on. In addition to VDM covers, he did a great many covers for Coriolis books, and also designs Web sites, posters, product brochures, and virtually anything else requiring high-precision artwork. "Breathtaking" somehow doesn't quite capture it—go see for yourself!

Visual Developer's interior layout/design and technical art were in the hands of Kris Sotelo, a very young and very talented designer who has worked at Coriolis for five years now, and managed production of the magazine for most of her first three. (VDM's been gone almost two years now, which is hard for me to believe.) Kris is leaving Coriolis shortly to begin freelance design work, particularly for Web sites. The Coriolis Group's Web site is almost entirely her work—see also her personal portfolio site. Kris is equally comfortable with both print design and Web design, and handled the logistics of creating a 96-page magazine every two months without ever dropping a bit—and doing virtually all of the work herself. She also created numerous marketing pieces for Coriolis: catalogs and ads and other creative odds and ends. Contact both Kris and Gary through their Web site links given here. They're both brilliant, and it was an honor to have worked with them in VDM's glory years.
February 26, 2002:

I've tried to predict the future of technology ever since I discovered I loved it. I did better at some times than others; for example, in 1973 I predicted that voice synthesis would be hard and voice recognition would be easy, yikes.

In 1982, however, I think I struck gold. In that year I began working out an SF novel, to be set in the year 2047. (In 1985, having seen the headlong forward rush of computer technology, I moved the date up to 2027.) I tinkered with it for five or six years before abandoning it as unworkable, but I wrote almost 20,000 words along the way, and set down in those words was my vision of the future of computing. Note that this vision wasn't the main theme of the story, but simply the technological and cultural background against which the story took place.

The story itself was about a middle-aged agent for the descendant of the FBI, trying to track down a rogue computer program that is wreaking havoc around the world. He suspects (and soon knows) that the program is in fact an old experiment of his: an attempt to copy his own mind into silicon. He succeeded, but the copy he made of his mental patterns evolved in rational power but not moral power, and he finds that it has in fact devolved into something almost completely selfish and destructive.

It didn't work as a story, for numerous reasons, chief of which was that I was attempting to imitate a popular style—cyberpunk—that I was ill-suited to write. Cyber I can do; punk is forever beyond me. So let that part of it go, and consider my prediction of the state of computing in the year 2027, keeping in mind that this vision dates back to 1984:

Virtually everyone has a computer on their person. Some are built into jewelry, some into hats. Most are fat lapel pins. Each computer—called a jiminy—speaks and "understands" a language called Structured English, and has an infrared transceiver built into it. On command your jiminy acts as a telephone, connecting for voice traffic with other people's jiminies. That was no stretch of imagination, really; briefcase-sized cell phones existed back then. However, jiminies aren't just phones. Your jiminy keeps track of where it is, and "talks" to other jiminies as you pass people in offices and on the street, taking part in a virtual bulletin-board system, asking and answering questions, looking for things (anybody got that rare Association album, "Stop Your Motor," and willing to sell?) and people (lunchtime is in an hour—any robotics hobbyists here want to run down to Wendy's for a burger?). This freeform infrared virtual community I called the Infranet. Dumb infrared nodes posted in rooms and public places connected to fiber optic telecomm cable handled longer-haul communications as required. This separate network I called Plasmanet, which was much more like today's Internet and modeled on ARPANet, which I was using at that time through my job at Xerox.

So let me tote up what I predicted in 1984: Wearable computers. Combination cell phones/PDAs. VOIP. Computer viruses that mutate and evolve. Ubiquitous mobile computing. Peer-to-peer. Real-time voice comprehension—though in truth everybody assumed that capability would arrive someday. I failed to anticipate GPS (that system still astonishes me sometimes) but my system for geolocation, which depended on tiny infrared transponders encoded with their location, literally glued or nailed everywhere to power poles and the sides of buildings, had tremendous resolution, down to fractions of an inch. We're much of the way toward my 1984 vision already, and I'm sure by 2027 it'll be old hat. I'm also sure that had I finished the novel I'd be much more famous now, sigh.

I published two items about jiminies in PC Techniques, and I just OCRed and posted them in the PC Techniques/VDM archive. One was a fragment from the novel, and the other a summary of the idea itself. I'm not going back to that novel, but I'm glad to see "the future" (that is, the present) evolving along the general lines that I foresaw almost twenty years ago.
February 25, 2002:

A reader reminded me that I make regular disparaging comments about opera without ever explaining why. Well, that's a tough one. The old gustibus explanation comes into play, but in truth, my feeling is that opera is abuse of the human voice. Opera isn't singing so much as shouting while attempting to hold to a single note. "Attempting" is a good word here—the operatic warble that makes me want to run for the exits is what happens when you push so much air through your vocal cords that they can't hold a steady note.

This was once necessary. Prior to electronic amplification, there was nothing but the force of the performer's own voice to carry the music to the far ends of cavernous European opera halls. We have electronics now, and the clear, steady notes that the human voice can produce (if not at deafening volumes) can now fill any hall we can build. So why stick with opera? I guess it's nostalgia; kind of like taking horse-cart rides around downtown just to sit in a drafty open cart and fantasize about how wonderful it was when horses dropped hundreds of tons of crap on city streets every day. Certainly it's not for the beauty of the singing. Yeah, I know, I don't "get" it...and praise God for small blessings!
February 24, 2002:

Realizing that I did not have machine-readable copies of many of my early PC Techniques editorials led me to go hunting for a decent OCR package, and what I found is decent indeed. FineReader Sprint is a $20 marvel, and for casual OCR work I've never seen anything like it. A dumbed-down version of a Caere package came with my first scanner, but it was miserable to use and had a great deal of trouble with any but the plainest of fonts. FineReader, while technically OCR (optical character recognition), is actually optical word recognition. It recognizes characters, of course, but it also has a built-in dictionary that it uses to confirm its "hunches" about character patterns in a processed file. In the preview window it color-codes words according to how sure it is that each word is a real word.
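
The trick is easy to demonstrate. Here's a minimal sketch in Python (a toy of my own devising; FineReader's actual internals are proprietary and surely far more sophisticated) of how a dictionary can confirm a shape-level "hunch," using the R-versus-E confusion I mention below:

```python
# Toy "optical word recognition": shape matching is simulated by a table of
# lookalike characters; a candidate spelling found in the dictionary outranks
# one that isn't, and the result gets a confidence tag for color-coding.
from itertools import product

DICTIONARY = {"rome", "role", "dome", "house"}      # stand-in word list
LOOKALIKES = {"R": "RE", "E": "ER", "o": "oc"}      # toy confusion sets

def best_reading(raw_word: str) -> tuple[str, str]:
    """Return (word, confidence): 'sure' if exactly one candidate spelling
    appears in the dictionary, 'unsure' otherwise."""
    choices = [LOOKALIKES.get(ch, ch) for ch in raw_word]
    candidates = {"".join(chars) for chars in product(*choices)}
    hits = [w for w in candidates if w.lower() in DICTIONARY]
    if len(hits) == 1:
        return hits[0], "sure"      # color it one way in the preview window
    return raw_word, "unsure"       # color it another; let the human check

print(best_reading("Eome"))   # -> ('Rome', 'sure')
```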

I tested it both on modern text (my editorials in PC Techniques) and on an ancient book published in 1875, an old history of Old Catholicism, yellowed and spotted and printed with tiny type in a very archaic font. The editorials it sucked in with virtually 100% accuracy, and did it so quickly it made me gasp. But what really impressed me was its facility with old type on yellowed paper. I scanned about ten pages out of the old book, and not only did FineReader pick up all the text, it picked up the idiosyncratic footnotes as well, in their 4-point microtype. It had a little trouble with the archaic diphthongs (things like the Œ in "Œcumenical") and for some unknown reason it tended to see the capital R characters as E's, turning Rome into Eome. Still, that's worst case—when was the last time you had to scan/OCR a 125-year-old book?

The product can push text onto the clipboard, or invoke a word processor and transfer text directly into it. Its "big brother" product FineReader Pro can actually preserve the layout of a scanned document. I haven't tried this version yet (don't need that feature for the moment) but it exists, albeit for $100. FineReader Sprint has dictionaries for 55 different languages, including odd ones like Finnish and Flemish, as well as programming languages like Java and Pascal, heh. It even does Latin, of all things, so scanning my old Latin-era Catholic missals would be possible, if I ever needed to do that. (They're already up on the Web somewhere, proving that some people are even nuttier than me.) But it solved my immediate problem—getting my old editorials into HTML—and may prove useful in other ways as well. Highly recommended.
February 23, 2002:

It's interesting sometimes how entirely separate things I'm reading simultaneously mesh, sometimes in peculiar ways. So it is with one additional article in the current Atlantic and a nice little book that Carol gave me as a Valentine's Day gift. The book is Paul Woodruff's Reverence: Renewing a Forgotten Virtue, and the article is The Apocalypse of Adolescence, which has been posted on the Atlantic's site and can be read without charge.

The article isn't the Atlantic's best work, but it's probably worth reading. The gist of the piece is that we are seeing more and more young teen males who kill utterly without remorse, often in horrific ways. These are the boys cut from the same cloth as the Columbine killers, completely cut loose from what most of us would consider ordinary human emotions and behavior. The article fails for me because it throws up its hands and does not even pretend to guess what happened to generate this army of chaos—and I think it does that because the author cannot face the truth: Our culture did it; specifically, our media culture, more or less created by the American liberal wing. I hate to say things like this, lest people think I'm a right winger, but culture matters, and our culture is hurting the young. The right says it's the sex, and the left says it's the violence (yet our supposedly liberal Hollywood continues to glorify violence) and as usual I say, a pox on both houses. The problem goes far deeper than either sex or violence. The problem, at the bottom of it, is one of reverence.

Nobody says this because so few of us understand reverence anymore. Say "reverence" and people imagine someone standing in front of a statue in church with head bowed. True enough...but that's not what reverence is—it's just one of many ways that reverence is expressed. And so we come to Woodruff's book, which is a sophisticated discussion of the true nature of reverence and how it shapes human behavior. Woodruff is a philosopher and an academic, so the writing is stuffier than I'd like, but he nails the topic dead-center: Reverence is our capacity to be moved by things beyond ourselves: in size, in importance, in age, in understanding. It's difficult to summarize Woodruff's point in just a sentence or two—if it matters to you, please read the book. What I want to emphasize here is that our culture hates reverence, reverence in any form, expressed in any way. A mild touch of this disease is called cynicism, which is embraced most readily by spiritual cowards afraid to allow themselves to be affected by the world around them. Our culture has gone far beyond cynicism, to base nihilism, which is simply the attitude that no one and nothing at all matters in any way. Pour that into the ears of a vulnerable fifteen-year-old long enough, and you run the strong risk of producing a being who lives only to destroy. Like the Atlantic article, I can propose no clear path out of this cultural bind we've gotten ourselves into, but unlike the article, I will stand up and accuse TV, rap, video games, and the multitude of nihilistic flavors of rock music (among many other things) of pushing young minds in the deadly direction of nihilism.
February 21, 2002:

Another item from reader Dennis Peterson: He sent along a comment from a colleague of his, on my entry for February 18. This from Wendy Wright:

This great population possibility would certainly explain why the Indians were determined to use every piece of an animal, and draw only what could be used. It would be absolutely necessary when trying to live in harmony with such large populations to cope with. It makes more sense to me than the Indians doing this sort of thing in such times of plenty as when the colonists came over.

In other words, the Indians would not have developed such abstemious habits had they been few and the animal herds immense, as most historians have long taught was the case in the Americas since time immemorial. Makes sense to me. Contrast this situation to what happened in Siberia and central Asia 10,000 or so years ago, when the great woolly mammoths were hunted to extinction by early humans. We have found entire towns built with nothing but mammoth bones, indicating that once humans settled in that area, they lit into the mammoth population and didn't let up until the mammoths were gone. While mammoths were plentiful, primitive Asians hunted with abandon. Somehow, aboriginal Americans avoided wiping out the bison and other big ungulates, which implies considerable sophistication as stewards of the wild—and also implies the need to be sophisticated, in order to feed and clothe a large and settled population.
February 20, 2002:

Reader Dennis Peterson put me onto a new Slash-style aggregator: Kuro5hin (no clue how to pronounce it, though it's abbreviated as "K5") which is nominally about technology and culture, but actually ranges a lot more broadly than that. An early read shows a lot of meat there, as typified by this discussion about e-books and the traditional publishing industry. This is a matter of crucial interest to me, as I am laying the foundations of a new publishing company that will focus on e-books, in concert with print-on-demand technology. What I have sensed (and what the Kuro5hin piece bears out) is that the Big Guys won't play until they have some sort of technological insurance that they can't lose. And what that means, of course, is that little guys without all that ego and paranoia can jump in and make some headway without being trumped by the global publishing conglomerates.

I'm not willing to spill the details just yet, but keep reading, and as things firm up you'll see it all here.
February 19, 2002:

People pondering the almost instantaneous extinction of North American Indians (see yesterday's entry) wonder how our diseases could work so rapidly on them. Neither the Europeans nor the Indians had a germ theory of disease, and the Indians would often gather as extended families to "see off" the dying, picking up their pathogens as they did so. (Europeans had a nascent sense that the dying should be avoided, and had by 1500 learned to quarantine smallpox victims.) The fact that the Indians had no immunity at all to the diseases should surprise no one, since the populations had been almost completely separate for 10,000 years. True, the Scandinavians touched down on North American shores circa 1000, but pathogens don't live easily outside the body in non-tropical climates, and the Scandinavians didn't mix much with the locals. The Spanish, on the other hand, worked in warmer climates and journeyed primarily to pillage the locals of gold and other valuables, so the deadly equations were nearly optimal.

A more interesting question is why there were no similarly deadly diseases brought back from the New World to ravage the Old. Indians were brought back to Europe as slaves, servants, and curiosities almost immediately, and nothing terrible happened to Europe as a result of these visitations. Supposedly syphilis originated on these shores, but syphilis is nowhere near as deadly as smallpox and the various nastier versions of swine flu. My theory (touched on lightly in the article in The Atlantic mentioned yesterday) is that the Indians didn't domesticate many animals. They hunted wild meat but didn't have barns full of bison, cattle, or (especially) pigs. Pigs seem to harbor pathogens that jump easily to humanity, and once Europe introduced pigs to America, it was just tossing fuel on an already deadly fire. This is a fascinating issue, and for more on it I recommend the beautifully written Guns, Germs, and Steel, by Jared Diamond. Our close partnerships with animals have often been a very mixed bag, and the downside has not always fallen to the animals.
February 18, 2002:

The cover story of the latest Atlantic presents a startling idea that's gaining currency among archeologists: That the number of aboriginal Americans pre-1500 was tremendously greater than previously thought, perhaps greater than that of Europe. We have come to think of North America as having been a vast wilderness, because when colonists from Northern Europe (England, France, and Holland) began settling along the Atlantic coast in great numbers circa 1600, there was almost nobody there. The reason for this may be grim indeed: That by 1600, diseases brought by the Spanish in the years 1490-1520 had already wiped out 95% of the indigenous population. In that single century, much of North America had returned to wilderness, and by 1700, when the Midwest began to be settled, huge swaths of new forest had grown, and the population of large animals (like bison, elk, and mule deer) had swelled almost unbelievably. It looked like a primordial paradise, mostly untouched by human hands except for the few and gentle hands of the aboriginal Americans we call Indians.

Not so. The Indians were not gentle so much as wise, and there were huge numbers of them. They had apparently taken almost complete control of the North American ecosphere through careful and coordinated hunting, farming, and coppicing (tending and near-surgical thinning of forests). The land served their needs, and they somehow discovered the secrets of sustainable agriculture and animal husbandry on a scale unknown anywhere else in the world, at any time period of which we are aware. The massive herds of elk, deer, and bison/buffalo we discovered in the 1700s were created by the simple fact that the Indians were no longer there to thin them.

Environmentalists, of course, are very much of two minds about this. They had championed the Indians for keeping their hands off the ecosphere, when in fact the North American ecosphere pretty much totally served the needs of North American humanity by 1400. So expect some squawking and accusations from the deep greenies, who hate humanity and would much rather none of us were here. Me, I think we need to look much harder at how they did what they did. Much could be learned about sustainable agriculture and ways of living.

This sort of article, by the way, is precisely the reason I read The Atlantic. More than any other magazine I've ever read, it obeys my command to tell me something I don't already know. Powerfully recommended. You can subscribe through Amazon. (Note that they don't post all their articles to their Web site, and certainly not their current cover stories. This is probably one reason they're still with us. Good content is worth paying for, and I'd certainly rather have it on paper anyway!)
February 17, 2002:

It's time for a new main PC here, and I was surprised when I went to order my new Dell (which should arrive shortly) that you just can't get them with Iomega Zip 100 drives anymore. I was considering moving up to Zip 250s, but apparently I don't have a choice if I want to continue my long-standing (since 1985) tradition of keeping my data files on cartridges. It's the 250s or nothing. So I chose the 250s, and I sure hope I'm doing the right thing.

What I haven't quite figured out yet is the difference between the conventional Zip 250 cartridge (which is externally identical to the Zip 100 cartridge) and something called the U250, a 250 MB cartridge that is physically compatible with 250MB Zip drives but is round on one side rather than square. (See the photo at left.)

My first cartridge drive was a twin Bernoulli Box, bought in 1985 and used relentlessly until the ticking finally got to me (not to mention the cafeteria-tray size of the cartridges!) and made me jump to SyQuest 5 1/4" 44 MB cartridges in 1990. I got burned in the mid-90s by adopting SyQuest's 135 MB EZ-Drive technology, which never worked well and eventually put its parent company in the grave. I backslid from there to Zip 100s, and never had a lick of trouble with the cartridges, though one of the drives died on me two years ago. I see almost nothing about the Zip U250 cartridge in the press, and hope it's not another needlessly different EZ-Drive sort of thing. I guess we'll see.
February 16, 2002:
Do I have any readers in Colorado Springs? If so, please drop me a note. I'd like to talk to someone about the city and what it's like.
February 15, 2002:

I was playing one of my Survivor CDs today, and happened to be kicked back in my chair here when I heard Jim Peterik singing:

When it's coming from the heart
All the people sing along,
It's the man behind the music,
It's the singer, not the song!

This from Peterik's tune, "It's the Singer, Not the Song." Well, Jim's got it entirely backwards: It's the song, stupid. Singers are basically interchangeable. I have, at last count, 18 covers of the old pop tune, "Funny How Love Can Be." I like them all, though some are more expertly performed than others. (And one is sung in Italian.) The song just appeals to me, and it matters very little who sings it.

I can forgive Peterik for making the mistake, because he's a very talented songwriter who happens to perform his own material. When it's Peterik singing, it is the man behind the music—because in this case, the man behind the music wrote the music. Survivor CDs are marvelous, because almost every song is a winner, and I can only ascribe that to the talents of Jim Peterik and his co-composer Frankie Sullivan.

Most music CDs aren't like that. Doing a cover of somebody else's song is expensive, so the labels encourage non-songwriters to fill out their CDs with dull or even awful stuff they wrote themselves, when (as is so often the case) as songwriters they're basically one-hit wonders. The trashy nature of most music CDs is a major factor encouraging people to steal audio tracks. A CD full of good songs is worth $17. A CD with one good song and a lot of trash is worth way less, and since we're not dickering here, the inevitable result is Napster.

I've made that point before, and I won't belabor it further. I will list a few artist/composers whose CDs are almost all winners—emphasis on few, because that's all there is:
  • Survivor
  • The Association
  • The Beatles
  • Mary Chapin Carpenter
  • Bruce Hornsby & The Range
And that's all I could name off the top of my head! There are other composer/performers who do well (The Carpenters and Simon & Garfunkel come to mind, with maybe James Taylor) but they do other people's songs as well so their CDs don't serve to illustrate my point: That it's the song, not the singer. The economics of the record industry would like to convince us otherwise (and I'll bet there's some singer ego in there somewhere) but that doesn't make it any less true.
February 14, 2002:
As Lent begins in earnest, a Lent factoid: During the Middle Ages, beavers were hunted almost to extinction in England, because the Church at that time did not consider beavers to be "flesh meat" (as it was said then) but rather a category akin to fish. So you could eat beaver on Friday and not go to Hell—which was not a good thing if you happened to be an English beaver.
February 13, 2002:

Ash Wednesday. I had completely forgotten, but then again, Lent isn't a big thing for me. I was in Home Depot with a carry-basket full of oddments when I saw a middle-aged woman with a black cross smudged on her forehead, and it really caught me up short: "Remember, Man, that thou art dust, and unto dust thou shalt return." Talk about a Catholic downer.

Lent has always been kind of a puzzle to me, and an irritating one at that. Why? In practical terms, Lent is about self-imposed suffering, whether that's what it says on the theological label or not. We're often told by the Church that it's about humility before God. Humility? Hah. People who give things up on their own initiative are almost always mighty pleased with themselves for their sacrifice. It's an ego thing, like the monks who, during Lent, cut back on their food to the point of starvation, literally attempting to "fast one another under the table." (Generally, they succeed.) If it's important to give something up for Lent, it's probably important enough to give up forever, like cigarettes or reefer.

It's well-known that I have little patience with asceticism, for entirely that reason: I have almost never met an ascetic for whom self-sacrifice was not a point of obvious pride, which (in my view) completely negates any spiritual gain one might accrue from the sacrifice. Health nuts share in that vice: Nobody ever seems to be a vegetarian in silence, privately, in his or her own domicile—one must make as much noise about it as possible, so that everyone will know how morally superior one is for not eating meat. (This quite apart from making dinner parties as miserable as possible by throwing tantrums over the menu. Do these people have any idea how completely self-centered they're being?)

It's better, I think, to live a generous (toward others) and modest (toward oneself) life all year round. God made gumdrops too, and he made them to be enjoyed, just as he made all the rest of this wonderful creation. (The challenge in living in our extravagant universe lies mostly in not overdoing it, be it enjoying gumdrops or whatever.) I'm still trying to decide what Lent is really about, but in the meantime: If you're going to give stuff up for Lent, don't talk about it!
February 12, 2002:

It's out! Asimov's Science Fiction Magazine for April 2002 arrived in the mail today, and therewith I celebrate a return to Asimov's after a 21-year absence, and to SF magazines as a whole after 19. I didn't have the cover story, though I didn't really expect to, but it's mighty fine to be back in print again.

The story, again, is "Drumlin Boiler," which I wrote in the fall of 2000 and sold to Asimov's a little less than a year ago. This is not a fast-turnaround business like computer publishing, sigh. "Drumlin Boiler" is set in the same universe as my unpublished novel, The Cunning Blood, but is only thinly connected: A starship turns up missing in the middle of the 22nd century, an event which gives the heavy in The Cunning Blood the idea to fake similar vanishings and thus steal several starships. Well, "Drumlin Boiler" is set on the planet where the passengers of the genuinely missing starship find themselves marooned. The very Earthlike planet is scattered with hundreds of thousands of strange shrine-like structures, each consisting of a 6-foot stone bowl of silver dust and two black stone pillars. If you tap out a pattern of 256 taps on the two pillars, something materializes in the silver dust. Simple patterns make simple things like axes, rope, wheels, gears, and so on, though of an extremely light and strong metal, or diamond for bottles and optics. Complex patterns generate, well, "thingies" whose function is obscure. Somewhere in the middle is the interesting part, heh—and there are 1.56 × 10^77 different possibilities, a cool million for every atom in the observable universe. Go pick it up—it's lots of fun, and is the start of what I hope will be a lot of stories set in that universe and on the Drumlins world.
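
For those inclined to check the arithmetic: assuming each of the 256 taps is a simple binary choice between the two pillars (my shorthand here, not necessarily the story's exact scheme), the count comes straight out of Python:

```python
# Distinct 256-tap patterns, assuming each tap picks one of two pillars.
patterns = 2 ** 256
print(f"{float(patterns):.3e}")   # about 1.2e+77, the order of magnitude quoted
```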
February 10, 2002:

Wow. It really was Sheena, Queen of the Jungle. Heh.

Back in 1986, I did a lot of travelling for Ziff-Davis' PC Tech Journal, and I was on the road to meet advertisers. I don't remember entirely where I was, but it might have been Atlanta. I had just checked in at a hotel, which was bustling with the national convention of the Friendly Ice Cream chain. I was waiting for an elevator, and waiting with me was a grandmotherly woman in her sixties, who was looking at me and smiling broadly.

Not wishing to appear standoffish, I tried to make conversation. "So, are you with the Friendly convention?" I asked.

Her smile grew even broader. She had doubtless been a very beautiful woman in her time, but she'd been out in the sun too much and it had taken its toll. "I'm not with any convention," she replied, "but I'm friendly."

Ulp. During my one business trip to Europe I had learned never to smile at women who were leaning against buildings. I was a little befuddled at how to respond, and in the awkward silence she now said something even odder: "I used to be Sheena, Queen of the Jungle."

Right. And in my off-hours I was the Pope. She went on, introducing herself as Irish McCalla, and explaining that she had played the part of Sheena in the low-budget TV adventure series in the mid-1950s. Never the vid kid, I hadn't ever seen the show, but the numbers were about right (that era and her age) so I mumbled approval and kind of wished the elevator would hurry up before she asked to buy me a drink.

I needn't have worried; we got on the elevator and she babbled happily about swinging from vines until she got off on the 11th floor, and I never saw her again, though I've told this story half a million times since, wondering whether she was just a slightly loopy granny who was angling for some attention, or if she really had been Sheena, Queen of the TV Jungle. Now in this morning's Arizona Republic I read the obituary of Irish McCalla, age 75, who had played Sheena, Queen of the Jungle in a 1950's TV show. She had lived in Prescott, Arizona—but more to the point, I really had met Sheena in an Atlanta elevator in 1986.

I guess it says something about fame and about images, and about other things that I can't quite define. Suppose I had met a woman who claimed to be Playboy's Miss November 1956. I'm sure there are now lots of grandmotherly Playmates out there who bared it all in the 1950s. Does anyone believe them? And what a conversation opener!

Rest in peace, dear lady. I'm sorry I doubted you. Fame and image are peculiar things sometimes.
February 9, 2002:

I was out in the garage this afternoon, oiling my lathe and putting stuff away in preparation for some major machine work on my new equatorial mount, and found a small box that had been sealed since we moved into this house at the end of 1993. I opened it up, and, wow! It was my button and badge collection, plus a scattering of miscellaneous trade show tchotchkes (or however that's spelled...Lizz, help!) that I somehow couldn't part with, even though they seem ridiculous now.

What may be even more ridiculous is that I apparently retained the press badge for every trade show I had ever attended up to 1993, which was quite a stack, yellowing pastel ribbons proclaiming PRESS and all. Does anybody even remember NCC or Interface? The clip-on holders had begun decomposing from the heat (which gets intense in the garage at times in the summer when I choose not to turn on the AC) and many of the badges themselves were curling up like potato chips. Into the trash they all went. Also incongruously present were 6000 yen in Japanese paper money, leftovers from my trip to Tokyo in 1981. (Not sure, but I think that will buy almost half a hamburger.) I may laminate the 5000 yen bill (which is quite elegant) and use it for a bookmark.

But the buttons—now, they were cool. Note the Apple button above at left, stemming from Apple's attempt to gain market share through litigation in the late 80s, which almost killed them back then and may finish the job yet. The No Copy Protection button was given out by the Capital PC User Group in the mid-80s. We thought we won that war, didn't we? Heh. Never to be ordinary, Borland's elegant button remains one of my favorites. Be there or be square!

And finally, do any of you recognize the green button at left? I suspect it's quite rare, and I am privileged to have met and spoken with the remarkable man it represents. If you recognize it, drop me an email—I'd be curious to know how well he's known outside of his obvious fan community.

Other buttons celebrate many forgotten firms and products: The Quadram EGA, Avatex modems (2400 baud!) and other things less clear, like the button with the thumbprint and the simple legend, "Touch me!" I don't go to as many trade shows these days. The industry just isn't as diverse and interesting as it was in the crazy decade from 1981-1991. What remains goes into a smaller box, minus the badges and ribbons, and most of the tchotchkes, and perhaps when somebody does a cultural retrospective someday on what I call the IBM PC era they will emerge to delight an audience who may not ever quite understand the exuberant and endlessly creative wackiness of that delightful time.
February 8, 2002:

While sitting up here in my big comfy leather chair in the middle of the night last night, trying to get myself bored enough to sleep, it occurred to me that people almost universally associate the existence of God with the existence of an afterlife. It's a natural thing to do, considering that almost nobody but the world's religions will even recognize the possibility of an afterlife. Nobody else is advertising the product, so God gets the brand by default.

But it's an interesting speculation that there are four distinct combinatorial possibilities, as shown in the little figure here. There could be God and an afterlife (as the religious hold), no God and no afterlife (as almost everybody else holds), and two other odd possibilities: God but no afterlife (as a handful of modern reductionist religionists hold), and the odd one indeed, that of an afterlife but no God.

I'm surprised that the atheists reject that one outright. Considering what we've learned about higher dimensions, complexity, chaos theory, and quantum physics, you could make a strong argument for the possibility that extremely complex systems (like the human mind) require more spatial dimensions than three. And while the human mind may be a phenomenon that emerges from a three-dimensional nervous system, if it truly does occupy a fourth spatial dimension it may not require its three-dimensional mold once it reaches a certain level of complexity. When the mold dies, the mind may cut loose and continue its existence (and, one would hope, its growth and education) in four-space.

Modern scientific types seem extremely reluctant to speculate on the existence and nature of higher dimensions, which is a monumental failure of imagination. Current thought is that if such dimensions exist, they must be rolled up (down?) to the Planck length (verrrrrrrrrry tiny) which is physicist doublespeak for non-existence, since I read the physics as suggesting that something at the Planck length ceases to be reliably real. Why does the fourth dimension scare physicists so much? Why can they never explain how we could detect a fourth (or higher) dimension if it existed? What's really going on here? A certain bald-headed middle-aged fourth-dimension groupie wants to know.

I finally got to sleep, by the way. And me, I'll take God with the afterlife. Someday, we'll all know.
February 7, 2002:

I see the reports in the media more and more all the time: People are dropping broadband Net connections right and left. What's wrong? Nobody in the media seems to know, and most reporters blame the lack of streaming video or similar ridiculosities. It's way simpler than that: Broadband doesn't make the Web a great deal faster.

I've been regularly testing the downlink speed of my own broadband connection since I discovered the DSLReports site. (See my February 4, 2002 entry.) Except in the late evening, my connection generally runs at the astonishing downlink rate of 2 Mbps (two megabits per second!) and sometimes peaks at greater speeds than that; at 1:30 PM yesterday it hit 2.3 Mbps. Nonetheless, Web pages come down just as slowly as ever—in fact, when I was using my nephews' 56K dialup connection back around Christmastime, I felt pages were coming in about as quickly as they do back here at home, at 2 Mbps.

How could this be possible? It doesn't take a degree to figure it out. If the servers are overloaded, they won't serve up packets as quickly as the downlink to the user can pass them. And a lot of the pokiness of the Web is in fact due to overloaded banner ad servers. Banner ads aren't generally hosted on the sites where you see them, but somewhere else, often on dedicated ad hosting and management sites that have to serve tens of thousands of simultaneous requests for ad bitmaps. Because ad links generally appear early in a page's HTML, you often wait for the ads before much else on the page comes down.
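
A back-of-envelope model makes the point. This little Python sketch (all the numbers are invented for illustration, not measured) charges each page resource its server latency plus its size divided by the slower of the server and the local link. Fatten the link all you want; the overloaded servers still set the pace:

```python
# Toy page-load model. Each resource takes: server latency + size / min(link, server).
# All figures are invented for illustration, not measurements.

def load_time(resources, link_bps):
    total = 0.0
    for latency_s, size_bits, server_bps in resources:
        total += latency_s + size_bits / min(link_bps, server_bps)
    return total

PAGE = [
    (0.5, 240_000, 40_000),      # HTML from a busy server
    (3.0,  60_000, 20_000),      # banner ad from an overloaded ad host
    (0.5, 400_000, 48_000),      # page images, also server-limited
    (0.1, 160_000, 2_000_000),   # one small file from a healthy, fast server
]

for name, link_bps in [("56K dialup", 56_000), ("2 Mbps broadband", 2_000_000)]:
    print(f"{name}: {load_time(PAGE, link_bps):.1f} seconds")
# 56K dialup:       ~24.3 seconds
# 2 Mbps broadband: ~21.5 seconds -- 35X the bandwidth, barely any faster
```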

You can speed up Web pages a little by installing banner ad and popup killers, but the worst of the problem is just that the Web is very popular and most servers don't have the cycles (or the outbound bandwidth) to serve the number of requests they receive. I find it interesting that Usenet and peer-to-peer data transfers are extremely quick for much larger files than your typical Web page contains.

None of this is rocket science, but you still wonder why nobody in the media has figured it out. If the Web doesn't appear to be any faster on a $60/month broadband connection than on a $15/month dialup, who's gonna wanna pay $60? Guys, read my lips: The bottlenecks are elsewhere.
February 6, 2002:

People regularly ask me: How is Aardmarks coming? (See my entry for January 8, 2001, for a quick description if you haven't heard of Aardmarks before.) It might be time here for a quick recap. I have "finished" two prototypes, such that a prototype can be said to be finished. I use Prototype 2 on a daily basis to manage my own bookmarks, and it's actually quite effective. I never turned it loose to the public because it's too full of prototype cruft: Half-finished and abandoned forms, counterintuitive menu arrangements, and third-party components that are, well, something less than robust. I used the prototypes to decide how I wanted to do things; to learn what worked and what didn't, and how well.

I've taken the opportunity to abandon both prototypes and start over from scratch. I say "abandon" too lightly; much of the code in Prototype 2 will be brought over almost without change, as will most of the custom dialogs. I have also managed to eliminate over half of the third-party component libraries that I was using before. Some just didn't work right (mostly the freebies) but with others (like ABC) it was more a matter of using one component out of a library of 200, which seems absurd. Orpheus 4 made it possible to cut numerous libraries out of my component palette, and moving to Delphi 6 for development forced me to upgrade some of the binaries-only commercial libraries I was using, like DBISAM. This was a good thing, overall...some of those components have been following me since 1998.

Starting from scratch has allowed me to re-think the user interaction model completely, and although I haven't changed it radically, I've simplified it a lot based on my own experience in using it for close to a year. I've knocked out the half-finished screens, and the first public release will not contain the planned email and news client screens. Maybe someday...they weren't core to the challenge. I've also been able to untangle some of the ad hoc code logic that "just grew" as I gradually learned how I wanted it to work. I'm cleanly and sharply separating the several major subsystems so it'll be easier to understand—and evolve—in the future. (My guess is that most of the minor instabilities will also vanish in the process.)
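
For the curious, here's the flavor of separation I'm after, sketched in Python rather than Delphi, with names that are purely hypothetical (Aardmarks' real units don't look like this): the rest of the program talks to an abstract bookmark store, never to the database engine itself, so the engine can be swapped without disturbing anything above it.

```python
# Hypothetical illustration of subsystem separation: the UI layer sees only
# this narrow interface, never the storage engine behind it.
from abc import ABC, abstractmethod

class BookmarkStore(ABC):
    @abstractmethod
    def add(self, title: str, url: str) -> None: ...
    @abstractmethod
    def find(self, text: str) -> list[tuple[str, str]]: ...

class MemoryStore(BookmarkStore):
    """In-memory stand-in; a database-backed store could replace it wholesale."""
    def __init__(self) -> None:
        self._rows: list[tuple[str, str]] = []
    def add(self, title: str, url: str) -> None:
        self._rows.append((title, url))
    def find(self, text: str) -> list[tuple[str, str]]:
        return [row for row in self._rows if text.lower() in row[0].lower()]

store: BookmarkStore = MemoryStore()   # the only line that names the engine
store.add("DSLReports", "http://www.dslreports.com")
print(store.find("dsl"))               # [('DSLReports', 'http://www.dslreports.com')]
```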

I don't know how long it will take to bring the Aardmarks Alpha (as I'm calling it—this is the Real Thing) to a point where I'll turn it loose. A few months, fersure. But I'll keep you posted along the way, and when it's done—or at least non-toxic—you'll be able to download it here.
February 5, 2002:

Aggregators are popping up everywhere. (See my entry for February 8, 2001 for why this is an excellent thing.) I just discovered Tablet PC Talk, a new news aggregator for the nascent Tablet PC industry. I'm watching the Tablet closely, mostly because I think it has implications for electronic publishing. It's unclear how ubiquitous tablets are going to become, and I doubt we'll know until they've been on the market for a year or two and we discover how useful they really are. Certainly as an ebook platform they have a lot of potential, especially compared to other things currently being sold as "ebook readers."

The site is still a little thin on content, but the FAQ and the news items are worth a visit if you have any interest at all in tablet computing. Interestingly, the site is not a slash site, but really should be. Slash really has the aggregator function nailed, and I wish I could mount one, just for the fun of having one to tinker with. (My ISP plan doesn't allow for anything that complex, sigh.)
February 4, 2002:

My good friend Pat Thurman put me on to the DSLReports site, which in addition to being a good news aggregator on the broadband Net access topic, also hosts a number of extremely useful tools for measuring what your Net connection actually gives you.

Most significant among these is a speed test that can tell you how many bits per second you're getting on both uplink and downlink. What a lot of people don't realize about their connections is that they are sharing bandwidth, and thus the more people moving data through a particular ISP, the slower any particular person's data will move. DSLReports certainly drove this home: sampling my link every half hour or so, I watched its speeds vary wildly over the course of an evening. Predictably, there was a broad slowing trend as the evening progressed, as people finished dinner and sat down at their PCs to poke around the Net. When I packed it in last night about ten PM, it was going as slowly as it had gone all evening.
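
If you'd rather spot-check a connection yourself, the principle behind such speed tests is nothing exotic: time the download of a file of known size and divide. A minimal sketch in Python (the URL is a placeholder; point it at any large static file):

```python
# Crude downlink speed test: time a download, report megabits per second.
import time
import urllib.request

URL = "http://example.com/bigfile.bin"   # placeholder; substitute a real file

def measure_mbps(url: str) -> float:
    start = time.time()
    with urllib.request.urlopen(url) as response:
        data = response.read()
    elapsed = time.time() - start
    return len(data) * 8 / elapsed / 1_000_000

print(f"{measure_mbps(URL):.2f} Mbps")
```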

Even at the office, where we have a dedicated T1 line, speeds vary. This morning I watched the speeds increase as people finished processing their email, and now, closing in on lunchtime, things have gotten as brisk as I've seen them so far. This is useful; if I need to download anything big, DSLReports gives me some idea of the best times of the day to make the attempt.

The other thing I've learned from DSLReports is that when Web sites are slow, it's not my connection: even when I get a reported 1.4 Mbps (as I did early last evening) the sites come down as slowly as ever. The real bottleneck on the Web is on the server side, and not (as most people believe) in the speed of everyone's Net connections.
February 3, 2002:

I should have just gone and looked: Ted Nelson's home page (see my two previous entries) is easy enough to find, and yup, Project Xanadu is still a going concern, and a better and more interesting thing than I remembered it being. Definitely go take a look. No clues anywhere, however, as to why Computer Lib/Dream Machines is not in print. Why is Ted Nelson so utterly unremarked and invisible these days?

Oh, and while I'm looking for things to pad out a short entry, take note that I've posted a bunch more of my old editorials and idea pieces from Visual Developer Magazine over where they all are. More coming as I find the time to format them as HTML. I just wish I had them all on disk, but most of the really ancient stuff from the PC Techniques era I don't have except in the magazines themselves. I guess it's time to buy a good OCR package...
February 2, 2002:

I was astonished to see that Ted Nelson's Computer Lib/Dream Machines (see yesterday's entry) is out of print—and used copies are selling for as much as $375! That's why I didn't give you a hyperlink to Amazon, even though I think the book is a must-read for anyone who wants to understand how the personal computing idea came to be what it is.

I can only assume that Ted (who as best I know is still alive) doesn't want it to remain in print, which brings up a question I've pondered from time to time: Should an author be able to suppress a work of his once it has already been in print?

I've finally decided that no one—not even its author—should have the right to keep a work out of print once it's been published. Authors should always be paid for publication, but that's a different matter. If it was good enough to be in the public eye once, it should be good enough to remain there forever, generating revenue until it passes into the public domain.

One possible solution to this issue is to make copyright licensing pre-emptive. When the original publisher chooses to place a work out of print, that publisher must post a notice somewhere, allowing other publishers to bid for the right to publish and pay royalties on the work. If no one chooses to republish the work, fine. But there should always be an option for someone to publish it, and the author relinquishes his or her right to keep a work under wraps as soon as it is originally published. This balances the author's right to be paid for the work against the public's right of ongoing access to work protected by the law and the courts.
February 1, 2002:

Some of my older readers may have been around when Ted Nelson's seminal book Computer Lib/Dream Machines was seen as a kind of computing bible and guidebook to the future (we're talking mid-70s here) and some of my younger readers may never even have heard of Ted. He is/was a visionary who predicted a lot of interesting things, including (though he may not think of it that way) the Web. Ted's Big Vision was something called Xanadu, which was a global hypertext network that anyone could contribute to. He had people tinkering with the Xanadu idea for almost 20 years, but the project was for the most part kept secret, and when the Web turned up independently in 1994, Xanadu was pretty much pushed off the stage of public attention.

The two crucial differences between Xanadu and the Web may have kept Xanadu from realization: 1) It allowed anyone to annotate a Xanadu page, via a hyperlinked "sticky note" sort of interface; and 2) it had elaborate machinery to retain copyright and control access to Xanadu pages, allowing page authors to charge for access to their material. Item 1 might seem madness today, but back then, when most people saw Xanadu as a sort of free-market ARPANet with an average IQ of 175, it was at least conceivable. Item 2, I think, was the technical nut they couldn't crack, and when the free Web appeared, it trained people to think of online content as somehow by definition free.

I don't know what ultimately happened to Xanadu or (for that matter) Ted Nelson. But there is something that comes about as close to what I understood as Xanadu as anything else I've seen: Wiki. In a nutshell, Wiki is a Web-based content environment in which anyone can modify the pages as they choose, up to and including the power to delete other people's material on other people's pages. There's no content rights management. In perusing some of the Wiki material, I'm struck by the degree of courtesy (which doesn't mean a lack of banality) compared to other online forums I've had the fortune or misfortune to visit. Somehow it seems to work—though I do see that there are both moderated and password-protected Wiki pages, so like everything else in this world, it works...to a degree. The noise level is higher than they seem to think, and useful material is spread thin and hard to find. But hey, given the tendency of Net people to flame and act selfishly and irrationally, I'm astonished that it exists at all. (One intriguing reason they give for Wiki's success is that it is by design difficult to use. People with short attention spans get frustrated and go elsewhere, heh.)
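
To show how little machinery the core idea actually requires, here's a toy wiki engine sketched in Python (my own illustration, nothing to do with Wiki's real code): one in-memory page table, and anybody who can reach the server can read or overwrite any page. That's the entire rights-management story:

```python
# A toy wiki: any visitor can read or overwrite any page. No accounts, no locks.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

PAGES = {"FrontPage": "Welcome. Edit me."}   # page name -> text, in memory

FORM = """<html><body><h1>{name}</h1><pre>{text}</pre>
<form method="POST"><textarea name="text">{text}</textarea>
<input type="submit" value="Save"></form></body></html>"""

class WikiHandler(BaseHTTPRequestHandler):
    def _page(self):
        return self.path.strip("/") or "FrontPage"

    def do_GET(self):
        name = self._page()
        body = FORM.format(name=name, text=PAGES.get(name, ""))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        fields = parse_qs(self.rfile.read(length).decode())
        PAGES[self._page()] = fields.get("text", [""])[0]  # anyone may overwrite
        self.send_response(303)
        self.send_header("Location", self.path)
        self.end_headers()

HTTPServer(("localhost", 8000), WikiHandler).serve_forever()
```

Everything real wikis bolt on afterward (histories, moderation, passwords) amounts to an argument with that one overwriting line.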

Wiki is written in Erlang, a programming language that came out of Ericsson in Sweden, and is about par for house-written languages that are insufficiently different from Pascal or C to be worth the bother. At least it's free and open-source. Anyway, Wiki is an Interesting Thing and might be worth a look, as one of the Web's best-kept secrets.