Wednesday, October 31, 2007


The Wii Virtual Console is a brilliant invention. Leaving aside its immense appeal (and thus marketability) to old-school gamers, it works to make enforcement of copyright easier.

First, the legal framework. Section 106 of the Copyright Act delineates the exclusive rights of copyright owners:
Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:

(1) to reproduce the copyrighted work in copies or phonorecords;

(2) to prepare derivative works based upon the copyrighted work;

(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;

(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;

(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and

(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.
I'll bet the observant among you are wondering what's going on with that "Subject to sections 107 through 122" business. Well, as it happens, section 107 is the important exception to the exclusive rights discussed here:
Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include--

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.

The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors.
What does this have to do with the Virtual Console? Plenty. The Virtual Console allows owners of the Wii to purchase, from an online shop, a copy of a game. That game is downloaded and available to play on the Wii. The Wii acts as an emulator and the downloads are the ROMs for this emulator. Thus, in effect, the Virtual Console is a licensing market for copies of certain old games, and for most of those old games, new physical copies are going to be extremely difficult, if not impossible, to find.

So the Virtual Console is an emulator. What does that have to do with copyright? Emulators have been around for a while, and video game publishers have never liked them. An emulator like, say, UberNES, can open and play certain files, called ROMs, that have the data contained on a video game cartridge, but in a format suitable for a personal computer. Combine emulator and ROM, and you can play a NES game on your computer. The problem (I hope you saw this coming) is that the companies who created the video games of which the ROMs are copies do not authorize this kind of use. The ROM infringes their copyright, to use the legal language. ROM users counter with some sort of “fair use” argument, and the key to that argument is the fourth fair use factor* - the effect of the use on the potential market for or value of the copyrighted work. Because the games of which ROMs were copies were so old, the video game companies weren’t releasing any new ones, and the effect of ROMs on the market was probably precisely nothing (with some exceptions, noted below). Even if the video game makers had gone after ROMs, they might not have had a case.

There are ways around this fair use exception, of course. For one, you could start selling new versions of the old games, as Nintendo did when it released its “Super Mario Advance” games for the Game Boy Advance. The games were, in most respects, just translations of old NES and SNES games into GBA cartridges, although some tweaks were made, some things added, some things unfortunately left out (like the “Fuzzy” graphics distortions in Yoshi’s Island, which were apparently impossible for the GBA to process). Now there’s a market for the old games, albeit in a slightly different form, but this substantially weakens the fair use rationale. In fact, it probably destroys it.

The Virtual Console is a systematic means of preserving copyright by undermining the fair use argument of ROM users. Because Nintendo has created a market for old games, even games that are twenty years old, fair use disappears. Because Nintendo was doing this even with the GBA games (and even earlier – anyone remember Super Mario All-Stars?), as I noted above, it’s just possible that this re-releasing of old games was part of a conscious effort not only to exploit the lucrative market of selling old games for nostalgia, but also to secure the copyright in those old games and prevent third parties from producing versions of the games themselves to distribute with no profit to Nintendo.

This dual benefit from the Virtual Console is ingenious. I can’t imagine the costs of maintaining the Virtual Console store are any more than, and are probably much less than, the costs of physically producing new NES systems and NES cartridges (and other systems and games), so it’s very cost-effective to boot.

I want to talk about one more thing – the license market created by the Virtual Console. There is a circularity here that infects fair use analysis generally. That is, if something is a fair use, then there is no need to pay a license fee to the copyright holder in order to continue such fair use. But if the copyright holder does create a license market, then that feeds into the fourth factor’s “effect on the market” and tilts that factor toward the copyright holder. But wait – if the use is fair, then it shouldn’t matter that the copyright holder is charging people a license fee for that fair use; the effect on the market is illusory, because the license market is illusory. People who pay the license fee are being cheated, because the use is fair in the first place; but then, that people are paying and that the holder is charging is evidence that the fee is necessary, and the people paying aren’t being cheated. What’s going on here?

Simple – the copyright holder, by creating a license market even for things that are arguably fair uses, gets a great deal of control over what counts as a fair use. Offer to license all manner of uses, and argue that any use not licensed, even if normally a fair use, fails the fourth factor. It's a glitch in copyright law, one that copyright holders can exploit.

I strongly suspect that Nintendo had copyright in mind when creating the Virtual Console. If so, bravo. If not, nice windfall anyway, guys.

*The fourth factor is the most important one. I didn’t say it, the statute doesn’t say it, but the Supreme Court said it (O’Connor for-the-lose), so that’s what it is. Sorry, I don’t make the law.

Tuesday, October 30, 2007

Beating the AI Drum

Having received I Am a Strange Loop for my birthday, I have been reading it with delight. I hope to finish it either today or tomorrow, and as it deserves a full review, I expect to be giving it one shortly. However, one little (or is it?) point of disagreement has popped up.

Douglas Hofstadter, author of that book and (more famously) of Gödel, Escher, Bach: An Eternal Golden Braid, speaks at some length about the topic of artificial intelligence. In Herr Hofstadter's view, there is no metaphysical bar to machines' being able to think, though machine thought may not be precisely similar to human thought. In any case, anyone who has read his books gets the sense that he believes AI to be inevitable.

Compare John Searle; Hofstadter does it, so we might as well do it, too. Searle's objection to AI is that machines cannot possibly think like living organisms think. I think Searle may have a point - the only "machines" in our experience that are capable of thought are biologically constituted. It is not a necessary inference that only biological organisms have the physical structure needed to think; then again, the inference is appealing. It is possible that only an organism can think, and that our experience of thinking things, all of them organisms, reflects a general prohibition on machine intelligence.

Hofstadter attacks that idea and attacks Searle personally, rather viciously, in his latest book. This is what I mean:
[John Searle] has gotten a lot of mileage out of the fact that a Turing machine is an abstract machine, and therefore could, in principle, be built out of any materials whatsoever. In a ploy that, in my opinion, should fool only third-graders but that unfortunately takes in great multitudes of his professional colleagues, he pokes merciless fun at the idea that thinking could ever be implemented in a system made of such far-fetched physical substrates as toilet paper and pebbles...
What Hofstadter is saying is that a Turing machine could be constructed of anything; Searle recognizes that toilet paper and pebbles could provide the tape and on/off indicator for cells of a Turing machine; and Searle imputes to AI fanatics the absurd claim that toilet paper can (or could) think.
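The substrate-independence point is easy to make concrete. Here is a minimal sketch (mine, not Hofstadter's or Searle's) of a Turing machine simulator in Python, in which the tape symbols are arbitrary objects - here, the strings "pebble" and "blank" - precisely because nothing in the machine's abstract definition cares what the symbols are physically made of:

```python
# Minimal Turing machine simulator. The point of the sketch: the machine's
# definition is purely abstract -- the tape symbols here are the strings
# "pebble" and "blank", but any objects (or any physical tokens) would do.
# This is an illustrative example, not from Hofstadter or Searle.

def run_turing_machine(rules, tape, state, halt_state, max_steps=1000):
    """rules: dict mapping (state, symbol) -> (new_state, new_symbol, move).
    tape: dict mapping cell index -> symbol; absent cells read as "blank"."""
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, "blank")
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return tape

# A trivial machine that walks right, clearing away pebbles
# until it reaches an empty cell, then halts.
rules = {
    ("scan", "pebble"): ("scan", "blank", "R"),
    ("scan", "blank"): ("halt", "blank", "R"),
}
tape = {0: "pebble", 1: "pebble", 2: "pebble"}
print(run_turing_machine(rules, tape, "scan", "halt"))
```

Whether such a contraption, built literally out of pebbles, would *think* is exactly what Searle and Hofstadter are arguing about; the code only shows that the formal machinery imposes no constraint on the substrate.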

Such successes give the lie to the tired dogma endlessly repeated by John Searle that computers are forever doomed to mere "simulation" of the processes of life.
Well, all right; but who's the dogmatist here? Here is one dogma: "Computers can never think in the way that human beings think, because the components of computers are not suitable to thought." But here's another: "Computers can think just like humans, because the components of a computer are not different, in any way essential for thought, from the components of the human brain." Reflect on that a bit - it's just as dogmatic to assume that the cells of the human brain are irrelevant to thought as to say that they are necessary for thought. Perhaps computers can think, but Douglas Hofstadter and other AI proponents have just as strong a burden as their opponents to prove their point. People assume that the human brain is a type of computer. Then a circuit-based computer, obviously being a computer as well, ought to be able to think in a substantially similar way to the human computer. Sure; if the brain is a computer. Let's see the proof of that dogma first.

Searle may or may not be correct, but he has an interesting point. Organic compounds may have a special suitability to being the physical substrate of the abstract activity of thought. If so, reproducing that activity in a different medium may be impossible, or extremely difficult. We should not blithely expect it to happen. Further, given the demands of evolution, we should not be surprised if the configuration of our brains is the best possible (the only possible?) way to produce thought and consciousness.

Monday, October 29, 2007

No. I (most likely) will NOT sue you.

As have my blogging compatriots in the past year or so, I will soon (hopefully!) embark on the magical journey known as law school. This information has been, for the most part, well received by friends and family. As one family member so delicately put it, “That would be a damn sight better than whatever the hell it is you’re doing with your life now.”

So, yeah. Languishing in my lower-level support role for an undergraduate institution has done almost as much to jade me as my own undergraduate experience did; the main differences are that the chorus of "told-you-so's" about my pure mathematics major has replaced the chorus of "why's," and that I now draw a steady, if diminutive, paycheck.

It hasn’t all been wine and roses from the peanut gallery, however. Several coworkers and family members still question this decision of mine. Most hauntingly, they ask me if I plan to sue them in a few years, presumably with a J.D. in my back pocket and a fist full of promissory notes. Despite my assurances that this is not the case (“What do you have I could take from you?” still never seems to settle said rabble-rousers), this persistent image of a nitroglycerin vial of a lawyer, ready to explode lawsuits all over your face at the slightest provocation, has been, well, persistent.*

So, thus begins today’s learning. Listen up. Lawyers are not all power-grabbing, fee-charging, ethics-lacking, wallet-raping scumbag ratfinks. Sure, a few are (just look at John Edwards [the doofus who pretends to care about anything and is pretending to run for POTUS {ahoy! Google hits galore on this}] for an example of a crook who never saw a pseudo-tort he didn’t like), but this sort of practice, though often highly visible, is relatively rare.

Basically, a lawyer is simply someone who is qualified (certified via the state bar association) to advise people on legal matters. So, when you ask your buddy, who is a 2L, about some jerk at a catering company who is suddenly charging you more for the soup than your contract said he would, he is not allowed to give you advice on potential legal remedies. Such advice would be, ummm…, “under the table” I suppose, and not “advice” in the legal sense of the word (every word has a “legal” definition which can “be” accentuated by “putting” quotes ““”” around “it”).

Some lawyers do practice in part in the courtroom representing clients, but there is so much more to the legal field than this. If you want to buy a new house, for example, lawyers are involved on behalf of both parties to ensure that the transfer of the real property is done correctly and in accordance with any relevant statutes. Wouldn’t you hate to give someone a couple hundred thousand for a house, only to discover later that you have no claim of ownership of the property? Yeah, that would suck.

If you are a clever fellow, and create a widget that replaces several older gadgets, you would like to protect your invention and make people pay to use or produce it. Enter patent lawyers. It is not often you see one of these gentlemen (or ladies, why not?) in the courtroom, for rarely is there a reason. In fact, Calvin’s father from Calvin & Hobbes was a patent attorney. You never saw him in a courtroom, and you know you read every single one of Watterson’s masterpieces (right?). (Yes yes yes, I know, this is a very simple exposition. For the purposes of the topic at hand, however, it is illustrative of the falsehood of the perception of the prevalence of the ambulance-chasing tort-happy type of lawyer. Or something.)

Anyway, lawyers assist businesses by helping ensure compliance with relevant statutes, assist in the drafting of contracts, help people get married or divorced, and make sure that the obscene amount of money you fork over to the government on April 15 is correct (even if wrong in a different sense of the word).

Really, I guess that’s it. No neat little ribbon to tie this mess up at the end (do I seem out of practice to you too?). A J.D. is a very beneficial degree to have today, even if the practice of law is not your ultimate goal. It is required of judges and is a de facto qualification for most public offices. So, as I take off on the strange journey of law school, I do expect that real doors will be opened for me (not the doors of a math degree in Pennsylvania, which lie on the y-axis in the complex plane, if you get my drift [if not, comment, and I will put that degree of mine to work after all]). So, no, I won’t sue you for no reason, and no, the world doesn’t have enough lawyers (just enough John Edwardses).

*Don’t worry, lawyer jokes are still funny. And I used an asterisk just like all the other authors here. I am cool too!

Friday, October 26, 2007

Very Observant

Ah, Friday. Tomorrow, my odometer turns over and I will be visiting this fine American establishment. Thoughts of taxation, business organizations, intellectual property, civil procedure, and telecommunications law will be washed away in a relentless tide of hops and barley. Before the weekend, I think it only appropriate to send the blog off with a long exercise in pure intellection. I know everyone loves that.

I’ve been reading this and reached the selections from Aristotle’s Physics. Of course Aristotelian physics is outdated for a number of reasons. First, much of the Physics itself is devoted to inquiries that are not considered part of the branch of learning called physics today. Discussion of predication, substance, and the four causes is proper to the philosophy class, not the physics laboratory, although Aristotle obviously did not make the distinction. Division among the sciences and between the sciences and non-scientific fields is a matter of judgment, but here we have some pretty clear metaphysics skulking around in a physical work:

If nothing else besides soul, and more specifically the understanding in the soul, is naturally capable of counting, then it is impossible for there to be time if there is no soul. All there would be would be the subject of time, if, that is to say, it is possible for there to be motion without soul. Before and after belong to motion, and time is these insofar as they are countable.

However, the difference between physics then and physics now is arguably nothing more than a disagreement about classification, and, ultimately, about names. Beyond the nominal differences, Aristotelian physics is simply substantively wrong in many ways. For instance, according to Aristotle, heavier objects fall more quickly than lighter objects. Of course, Galileo proved that to be wrong.

Ah, Galileo! I’m getting ahead of myself. Aristotle’s physics is more than a mere curiosity (compare, for instance, Thales, who made the grand claim that all the world is water; his physics, such as it is, is understandably not a hot topic of debate*). Aristotle’s works were preserved by Muslim scholars and reached the Christian world, whereas the works of the Atomists, whose physics was adopted by the Epicureans, were not received until much later. For the medieval period, then, Aristotle was the ancient authority on physics. St. Thomas Aquinas, who could properly be called the Aristotle of Catholicism (both for the breadth of his works and for his reliance on Aristotle), followed Aristotle to a large extent when writing on worldly matters. The official physics of the Catholic Church was a modification of Aristotle’s, and this had a profound effect on the development of science. First, it diverted all physical inquiry into Aristotelian channels; that is, whatever investigation of physics happened was likely to be viewed in light of the principles set forth by Aristotle. Second, it stifled any physical inquiry that could not (or could not easily) be reconciled with the Aristotelian worldview. Galileo directly challenged this worldview and faced the Inquisition for it, but the earlier influence (between Aquinas and Galileo) was important, though more subtle. Those who investigated physics used Aristotle’s language, concepts, and even empirical facts found in his work. If a concept could not be squared with that tradition, it had a hard time taking hold in the medieval mind.

The difficulty of overcoming this dogmatism and the intellectual suppression that resulted have been much decried, and I need not add my voice to that din. Some who criticize the Church, however, are absolutely mistaken about what was “wrong” in the entire affair. It is argued that Galileo, Copernicus, Newton, and scientific pioneers like them were methodologically empiricist and were struggling against a dogmatic institution that elevated authority above observation in evaluating truth. This is plainly wrong. Aristotle made many observations, especially about biology, and his works were not the result of a priori speculation from abstract principles. Aristotle did what we might now term “field work” and derived a great many of his physical and metaphysical principles empirically. He was confused about some things, wrong about some things, drew erroneous conclusions about some things; he didn’t just make things up, though, or take the word of experts in all cases.

Contrast Atomism, the best-developed physical theory besides Aristotelianism. Atomists posited that tiny, invisible objects of varying shape, by moving against each other in great numbers, accounted for all the variety of physical phenomena that we could observe. This idea is profoundly anti-empiricist; for one thing, atoms are invisible because they are too small to see. However, these minute particles, operating by blind causality, make up the whole world, including the apparently purposive movements of living organisms. On the one hand, then, is Aristotle, explaining what seems to happen by appealing to sense experience. On the other hand, the Atomists claim that the fundamental nature of reality is closed off from observation and can be described only by abstract thought. Again, for Aristotle, living things have parts that work in harmony because they have souls which order the parts; for Atomists, an organism is essentially a chance arrangement of small particles whose collisions fortunately produce something called life.

Yet Atomism was more fruitful for scientific development. Obviously, then, something other than strict adherence to empiricism is necessary for science to advance; and, in fact, empiricism can sometimes work against science. This is true because mere observation, by itself, cannot serve the mind; the mind must supply something to the observation in order to discover the significance of the phenomena. Atomism was a brilliant idea that could not have come about without a productive effort of thought; observation of a thousand falling stones would not create, of itself, the idea that reality is atoms moving in the void.

Certainly observation is needed to correct far-flung speculation. Observe (sorry) the rise of string theory in contemporary physics. From what I understand, string theory has no theoretical or empirical foundation (I have assured myself a steady stream of hate-mail for saying this. Wait for it.) but it’s still taken seriously by an alarming number of people. Again, as I understand, most serious physicists do not find it to be a viable alternative to the quantum mechanics/relativity dualism in contemporary physics. String theory is clever, startlingly deep, unified, and without evidentiary basis. It’s probably theoretically inadequate as well (except in the extremely broad sense that it explains reality in a consistent way, unlike the theories we have now), but if someone could at least show some phenomenon that makes string theory likely, it’d be standing on firmer ground. When theory distances itself so much from fact that reality is not even consulted when building the world-system, empirical observation is sorely needed.

But observation alone will not do. To paraphrase Kant: we must approach the world not as students before a teacher, but as judges questioning a witness. Passive observation takes in all the data of experience but has no mechanism for sifting that data; in fact, it fails even before all the data are received because everything is received indiscriminately, with no prioritization and preliminary standards for what is to be observed with greater attention, what to be ignored, and what patterns to expect. We must know at least something about what we are looking for to find anything at all. Obviously there must be a balance between utter passivity and merely token empiricism with already-determined results. On the one hand, nothing will be found; on the other, what is found will be precisely what was predicted no matter what. Neither extreme provides a suitable frame of mind for using observation to enlarge understanding.

What such reasoning leads to is the idea that the scientifically-inclined person needs a suitable heuristic when approaching the world. As choosing the suitable heuristic is at least partly a philosophical problem, philosophy will never cease to be important for scientific development. One of the problems with a heuristic, however, is that people may begin to regard an especially fruitful and well-established one as a constitutive concept, when heuristics are by definition regulative. I’ve spoken about the difference before; those curious can search in this blog for that earlier entry. When the heuristic becomes a constitutive concept, it becomes a dogma, and its value is elevated from being a mere catalyst for discovering truth to being an expression of truth itself. The history of major advances in thought is the history of fundamental changes in ways of thinking, replacing old heuristics with new. Aristotle’s teleology-based philosophy of science had to give way to the Scientific Revolution’s mechanistic view. It would be worthwhile, in a future entry, to examine closely regulative concepts in the history of science. I have especially in mind Kant’s Metaphysical Foundations of Natural Science; but that must wait. Another question is the extent to which the mechanical theory of nature has become its own dogma – Richard Dawkins being the author of our century’s mechanistic canon. These threads diverge sufficiently from the topic I was discussing that they must wait for later exposition.

My own assumption whenever doing philosophy of science is that we are better off assuming that our meta-science is regulative and not constitutive. Many in the past, and many still today, probably think that certain aspects of meta-science are immutably fixed; further, that we know they are fixed, and which aspects are fixed and which merely variable for descriptive convenience. There is a meta-meta-scientific question here – is the human understanding of science fundamentally based on heuristics or fundamentally realist? Because, as should be clear, the idea that one heuristic is better than another is meta-science, whereas the question I just expressed is about whether one-heuristic-replacing-another is what meta-science is about.

Well, that’s far off course. It’s interesting, though. Perhaps I should say more about that later (and now I have a laundry list of blog topics to finish). Perhaps I should obliterate the questions with booze forthwith.

* But the traditional interpretation of his metaphysics has been challenged lately. In the Cambridge Companion to Early Greek Philosophy, Keimpe Algra contends that Thales’ claim that “all is from water” is capable of two interpretations. The traditional interpretation is that water is the material substrate of the universe. A better interpretation, says this scholar, is that all the matter in the world originally was water, but that it has been changed. It is incorrect to impute to Thales the belief that, at its essence, every existing thing is water in some form or another. Instead, water was the original constituent of the universe, but things have since changed. Aristotle seems to have held the traditional view, and in light of his importance for our understanding of the Presocratics, may in fact have been the originator of the traditional view.

Right on the money

In all the media publicity that surrounds the Supreme Court whenever it evaluates the constitutionality of a federal or state statute, the Court's more mundane role is often overlooked. Many people seem to forget that the Court spends a great deal of time simply clarifying federal statutes to resolve circuit splits or streamline court proceedings. However, one such case this term probably says more about the state of the union than any constitutional review case.

In July 2004, Humberto Cuellar was driving through Texas toward Mexico when he was stopped on suspicion of driving while intoxicated. While talking with Cuellar, the officer noticed bulges in his pockets, so she performed a Terry pat-down. The bulges turned out to be rolls of $10 and $20 bills that smelled of marijuana. The car was searched, and $83,000 in U.S. currency was found beneath the back floorboard. Cuellar was charged with and convicted of money laundering and sentenced to 78 months in prison. The conviction was ultimately upheld by an en banc review of the Fifth Circuit, and Cuellar has now appealed to the Supreme Court.

There's no constitutional issue here, so all the Court is doing is interpreting the relevant statute. In this case, the federal statute defines money laundering as transportation designed to “conceal or disguise the nature, the location, the source, the ownership, or the control” of the proceeds of illegal activity. Short of violating the Constitution in some way, Congress can define money laundering as it sees fit. As is frequently the case, though, Congress has not clearly defined the matter, so the highest court in the land must figure out if "conceal" means physically hiding money or if it requires creating the appearance of legitimacy (as it does in the common understanding). In the grand scheme of things, this is pedestrian statutory interpretation and not too exciting.

This case does say something about American society and the War on Drugs, though. It has now reached the point that carrying a large amount of cash is a crime. Just carrying the cash can result in the money being seized and a court hearing where the possessor must prove by a preponderance of the evidence that the money is legitimate (see this civil asset forfeiture case for an example). When cash is coupled with erratic travel plans and contradictory justifications, it's money laundering and can result in 20 years in prison.

Cuellar is probably guilty of trying to smuggle money to Mexico so it could later be laundered. It only takes a light perusal of the facts to come to that conclusion. However, taking over $10,000 from the country without declaring it is already a crime (and at a minimum, the government will likely seize all the money). Should he be guilty of money laundering, though? Should innocently possessing large amounts of cash be grounds for seizing it and forcing the owner to prove its innocence? Should suspiciously possessing cash be grounds for up to 20 years in prison? Some questions to ponder as the Supreme Court quietly considers a routine statutory matter.

Thursday, October 25, 2007

Hating Mercenaries

Blackwater is in trouble. Yes, I'm coming late to this story. But there's just been so much going on - new breakfast items, gay wizards, gay Germans, and various other minutiae have occupied the attention of me and my fellow Skeptics. As usual, however, my perspective is sufficiently deeper than that offered by contemporaneous reports that I can justify considerable delay. I am sure I will not disappoint.

There is something deeper in the Blackwater story than mere popular disgust at the firm's collateral-damage-intensive tactics. (Spot the understatement). Americans dislike mercenaries. Calling the Blackwater men "mercenaries" effectively ends conversation on the matter, actually, because the term has such distasteful connotations for us. Calling Blackwater a "private security firm" at least leaves its utility open to debate. Many do not seem to realize that Blackwater ranks are made up of, well, former U.S. military men. Check out the site:
Minimum of 8 years of active US military experience (National Guard & reserve time does not count) and qualified in Special Operations Forces (Navy: SEAL, Army: Special Forces, Ranger, Marine: Force Recon, Air Force CCT, PJ)
So it's not as if these men haven't done considerable time in public service.

As I said, Americans dislike mercenaries. Many people trace this animus to King George III's use of Hessian mercenaries against the colonists in our Revolutionary War. I don't doubt that such soldiers were viewed with disfavor - on the one side, you have colonials fighting for their freedom, and on the other, hired goons fighting against foreigners, for foreigners, for no other reason than that there is money to be made. This view is valid, but a bit short-sighted. Mercenaries, and the powers that hire them, have been hated in Western culture at least since the Roman Republic. Recall that Rome fought three wars with Carthage for domination of the western Mediterranean. Carthage was rich, having originally been founded as a colony of the famed sea-trading Phoenicians. Because Carthage was wealthy, it could hire mercenaries to fight for it, and it was known for fielding large numbers of such mercenaries in defense of its interests. The Romans, on the other hand, fought for Rome because it was their duty to do so. This account is somewhat inaccurate, because the Romans certainly were motivated in part by profit in trying to eliminate their Carthaginian rivals, but that issue is virtually irrelevant. What is relevant is that, in the Western mind, soldiers fighting for honor, for religion, for duty, or, more generally, for some values, were considered morally superior to those fighting merely for profit. Fighting for profit has come to be pretty widely regarded as evil - even the obnoxious "No Blood for Oil" slogan has its origins in our collective hatred of the war profiteer.

Examining the issue more closely, since we now know its history better, we can see that distrust of mercenaries has good practical and moral justification. If the mercenary fights for money, then if your enemy has more money, you can reasonably expect the mercenary to switch sides. Mercenaries that take payment from both sides are a practical threat as well, wasting money and profiting at the expense of the commonwealth. And as our ethics have developed to the point that even lawful killing (that done by the State pursuant to due process convictions of criminals; that done by the military in the prosecution of a just war; and private self-defense) must have the purest motives to be justifiable, the mercenary strikes us as a person who can never justify his killing, and who might, if the price is high enough, kill anyone. The rules of war are difficult to express; the mercenary makes those rules even more indistinct. This line-blurring does not sit well with civilized people in the 21st century.

As I hope you've noticed, this discussion was abstract, said little about whether Blackwater ought to continue to provide security for us in Iraq or elsewhere, and provided no concrete judgment on the suitability of the use of mercenaries in modern warfare. I thought, to the contrary, that this historical and philosophical view of the matter was necessary, in order to uncover all the ideological assumptions behind the term "mercenary" and to locate the discussion ethically.

One more thing: $550 a day, these men earn. What does that say about everything?

Wednesday, October 24, 2007

Sodomy for Kids

Some fictional character has come out of the closet, posthumously, through his creator. I had a hard time writing the sentence that would precisely describe this story, but there it is. If there’s any incoherence in it, any baffling absurdity, I assure you that I am entirely innocent – the author started the absurdity herself.
British author J.K. Rowling has outed one of the main characters of her best-selling Harry Potter series, telling fans in New York that the wizard Albus Dumbledore, head of Hogwarts school, is gay.
I can remember hearing people say that the Harry Potter books were not just for kids. They were sufficiently interesting and mature that adults could enjoy them, too. Well, apparently, the books actually portrayed the struggle of a homosexual surrounded by hundreds of little boys and girls at a boarding school. So the books were not meant for kids at all!

The reaction was predictable - this is 2007, remember?
The audience reportedly fell silent after the admission -- then erupted into applause.
But some of the commenters on this item seemed to think Ms. Rowling an incredibly brave person:
Very brave of Rowling I suppose
Er, well, I don't see how, but perhaps I am just an evil reactionary. First, there's the issue of all that utterly predictable applause Ms. Rowling got when she made this revelation. If you say something or do something which you know your audience will lap up happily, then your bravery is very attenuated indeed. That is a polite way of saying that the commenter is an idiot, and that what she did wasn't brave in the least. Second, people typically ascribe bravery to the coming-out act because an outed homosexual will, presumably, be subject to discrimination, mockery, and general social discomfort. By telling the truth about himself even when it has painful consequences, he is doing the right thing and being brave by doing it. Ms. Rowling is outing someone else; actually, she's outing a fictional character. What could possibly be brave about doing that? It stretches the whole origin of the "it's brave to out yourself" idea, which is itself, in the year 2007, somewhat dubious. The only possible way this makes Ms. Rowling brave is if she always "knew" that Dumbledore was gay, and partook vicariously in his struggle to conceal it. Well, then she's psychotic. Dumbledore isn't real. In fact, I'm told he's actually killed sometime in the series of books, so that it's more correct to say "Dumbledore was gay" than "Dumbledore is gay." It's most correct to say "This is a fictional universe and I'm not going to discuss these stupid, one-dimensional characters as if they were in any way real."

Here's another bizarre comment, and note that it's from a Canadian, so you can't blame stupidity solely on Americans:
Good for Ms. Rowling. She's always made peoples skin colour, religion, etc. secondary to the characters actions, so is merely doing the same with one of her characters sexuality.
Oh, I see. She made his sexuality secondary by announcing it publicly and making a to-do about it. They must have different rules of inference up north, and logic must be a fascinating subject.

Oh, did I say earlier that the characters were one-dimensional?
"This is even more awesome because it adds another layer to Dumbledore's character, which is already so rich and complicated. I hope he got over Grindlevald (sic) and fell in love again," wrote Amanda.
I guess he's one-dimensional with an extra layer of "likes men" now. He was already rich and complicated, though, so this extra layer may just make the books impossible for most to understand. Only the wise few will be able to explain that a kid finds out he's a wizard, then goes to school. Whew, that was complicated. What's with the "sic"? Poor Amanda misspelled a fake wizard's name, and we have to embarrass her for it? For shame, Canada.

Tuesday, October 23, 2007

920 calories? I'll take two, please.

Hardee's has created another menu item that proves it, not Burger King, is the true royalty of fast food: the Country Breakfast Burrito. This remarkable burrito contains "two egg omelets filled with bacon, sausage, diced ham, cheddar cheese, hash browns and sausage gravy, all wrapped inside a flour tortilla." The perceptive reader will note that it appears to be the sort of large breakfast available at any sit-down restaurant, except it is not delivered on a plate by a surly waitress. As with any breakfast filled with meat, cheese, and eggs, it is not light fare: 920 calories and 60 grams of fat.

While Hardee's appears to be reaping some publicity benefits from the sheer spectacle of this item, it was introduced due to market pressures. According to Brad Haley, the head of marketing of Hardee's, there was customer demand for a single breakfast item that would be filling enough so the customer wouldn't have to buy two items. Sounds like a triumph of capitalism: the customers want something, so a company seeks to gain their business by offering something that satisfies the demand.* Hardee's even publishes the nutritional content of the item so there is no danger of an unassuming consumer buying the wrong product.

Of course, there has to be a killjoy. Jayne Hurley, from the Collection of Assholes Center for Science in the Public Interest, claims the "country breakfast bomb" is a "lousy invention" that "represents half a day's calories and a full day's worth of saturated fat and salt." She is certainly entitled to her incorrect opinion, and if this were a sane universe, one could simply ignore her. However, she is part of the organization that hates freedom and customer choice so much that it has sued Burger King, KFC, and Frito-Lay. As such, public derision is necessary.

First, calling it the "country breakfast bomb" is just asinine. It's a sit-down breakfast in a tortilla, not a weapon. RDX is an explosive; bacon and eggs are not. It seems elementary, but these are the geniuses that sued Betty Crocker because the dry carrot cake mix didn't contain enough carrots. Maybe some large pictures in crayon would help them understand what a real bomb is.

Second, it's clearly not a "lousy invention." It was introduced in response to customer demand, which means it's the sort of invention that the free market produces regularly. Hardee's isn't forcing anyone to buy it (unlike a lousy government program), and consumers are free to take their business elsewhere if this item offends them. While "lousy invention" sounds more objective than "I don't like it," Ms. Hurley obviously meant the latter.

Next, the burrito only "represents half a day's calories" if one is completely sedentary and must eat less than 2,000 calories a day to avoid gaining weight. However, there are many people who have been known to get off their asses occasionally and thus require a greater number of calories to maintain their health. For them, this burrito could represent a reasonable breakfast.
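A back-of-envelope sketch of that arithmetic (the daily calorie requirements below are rough illustrative assumptions of my own, not figures from Hardee's or CSPI):

```python
# What fraction of a day's calories does the burrito represent?
# Daily-requirement figures are rough illustrative assumptions.
BURRITO_CALORIES = 920

daily_needs = {
    "sedentary": 2000,        # the baseline CSPI implicitly assumes
    "moderately active": 2500,
    "heavy training": 3500,   # e.g., a long bike/run workout
}

for lifestyle, calories in daily_needs.items():
    share = BURRITO_CALORIES / calories
    print(f"{lifestyle}: {share:.0%} of the day's calories")
```

Only under the sedentary assumption does the burrito approach "half a day's calories"; for the heavily active, it's closer to a quarter.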

Finally, she's correct that the fat content is high, but there is nothing inherently wrong with calories derived from fat; in fact, for those that need the calories, fat calories are probably better than those derived from carbohydrates for blood sugar reasons. This breakfast could actually be healthier than a stack of whole-grain pancakes. Alert the media! Oh wait, that's only done when panicking about a new fast-food item, not when using common sense.

Sadly, these nannies won't be happy until the federal government mandates a diet of sweet potatoes, broccoli, butternut squash, and spinach. I will do my best to fight them by patronizing companies that meet customer demand and enjoying every bite of delicious, fat-filled food. Hey CSPI, my bike/run workout tomorrow morning is going to burn over 2,000 calories. Guess where I'll be eating breakfast?

*The price is another triumph of capitalism: $2.69. Nearly 1,000 calories (including lots of high-energy fat and protein) for under three dollars. Considering how much time and energy it took 100 years ago just to survive, this is a testament to American progress.

Monday, October 22, 2007

Raving Germanophobia*

Mr. Michael Knox Beran really, really dislikes Prussia. I'm stunned that anyone in 2007 can even manage hatred toward Otto von Bismarck. I wonder if Mr. Beran grew up in Alsace, or has Austrian blood, or something. Maybe he sat on one of those spiked helmets as a child? Whatever the reason, this is truly heartfelt hatred on display.
Yet Otto von Bismarck was in his own way a character quite as demoniacal as Gottfried, only he enacted his Walpurgisnacht yearnings on a much grander scale.
Well, what did Gottfried, Otto's great-great-grandson, do, and let's compare the men:
The count, who was 44, died of what the newspapers reported as one of the largest cocaine overdoses on record; the pathologist said that the amount of cocaine in Gottfried’s blood was “the highest” he had “ever seen.”
A post-mortem revealed that he had consumed morphine as well as cocaine, that he had a damaged liver, and that he suffered from Hepatitis B, Hepatitis C, and an HIV infection.
in August 2006 a reveler at what the coroner described as a “gay orgy” in the Count’s flat perished when he plunged from a rooftop garden.
On the one hand, gay orgies, massive infections, damaged organs, drug use at Keith Richards levels; on the other hand:
The Iron Chancellor fantasized that he was a bomb; confessed himself capable of lying awake through a whole night “hating”; and spoke of the “brutal sensuality” and “depraved fantasy” that led him “so close to the greatest sins.”
Er, right. Apparently Otto imagined doing some awful things, which, I think we can all agree, is precisely the same as doing horrible things. Oh yes, on that point only a fool could disagree.

Otto von Bismarck was instrumental in the political and military rise of Prussia and the unification of the German states. You see, after the dissolution of the Holy Roman Empire, Germany didn't exist. Separate states existed, relatively weak when compared against the powerful empires of the early 19th century: France, Austria, and Russia being just the most uncomfortably close of them. When Napoleon failed to take over Europe, the great powers feared that one of them would become too powerful again, especially France (yes, people were once afraid of France! What a time to have been alive.), so they took a number of precautions. One precaution was making sure that Austria effectively controlled the German states, so they could be used as a shield to stop any other powers from getting out of line. Prussia, the largest, most culturally and intellectually developed, and most ambitious of the German states, was rather annoyed at having its fellow Germans under foreign rule. In a series of wars in the middle of the 19th century, culminating in the Austro-Prussian War, Prussia took more and more territory, humiliated the Austrians, and united the German states. Bismarck was one of the big movers behind this.

Quite a different picture from what Mr. Beran paints:
he struck at the heart of the dream of the middle and professional classes in Germany when he set out to crush the free institutions they were busily assembling.
Prussia was always in danger of becoming another highway for an ambitious leader trying to invade Russia. Further, the state of Europe was simply insulting to the Germans - they were pawns, doing nothing more than providing a shield for Austria. Otto offered independence and German unity. I see parallels in the colonial struggle against English rule in the late 18th century, but it's a bit of a stretch, so I won't press the analogy. I, for one, don't find moral equivalence to be a virtue.
his governing philosophy, which combined blood-and-soil nationalism with coercive social-welfare programs, prepared the way for a host of unsavory creeds, most notably the National Socialism of Adolf Hitler.
Look, this is positively Randian. Kant did not create Hitler; nor did Bismarck. This conclusion is dishonest and offensive. Bismarck instigated wars, sure; but those wars were designed to assert, and had the effect of asserting, Prussian (and later German) power against empires who would have consumed Germany otherwise. Bismarck never went to the lengths of a Napoleon to take over all Europe; he fought just enough to secure the unity of Germany and annex territories on its borders, territory that arguably was German by right. So let's drop this phony connection.

Hiding behind this article might be a conflation of Prussia circa 1870 with Germany circa 1914. I don't know if the author has made this mistake, because he may just hate Prussia on the merits, but it's worthwhile to compare the two. The wars Prussia fought were, as I said, done mostly for the sake of unification and independence. But when Kaiser Wilhelm II took the throne, he (and some in the German elite) wanted more. They wanted, among other things, to get into the colonial game, and they acquired that impulse rather late, so that any colonial ambitions would inevitably lead to direct conflict with other European powers. And Bernhardi and his ilk were even more ambitious - they saw it as Germany's destiny to take over Europe. The warmongering and bogus racial theories of the Nazis were beginning to take shape in the German mind around this time, and it would not be a logical leap to connect that era's way of thinking with the Nazis'.

No one said Germany was perfect. Nor was Prussia. But let's not find Hitler in every little corner of German history.

* "-phobia" is a terrible and incorrect way to express hatred, but there it is. Just like homophobes don't fear things that are the same, Germanophobes don't fear what is German. But I can use only the words we've got.

Sunday, October 21, 2007


I try not to update during the weekends, when no one is reading (right, right, no one reads anyway), but this is too short to waste a weekday on. At a moot court oral argument yesterday morning, one of my opposing counsel contended that the religious organization he represented was not really controversial. After all, in his words, it wasn't like a group "discussing Kantian ethics."

That's harsh. Imagine the bitter controversy engendered by having people discuss Kantian ethics in public schools! A school wouldn't even be able to get insurance after the brawl that would cause.

Friday, October 19, 2007

Strange Times

I got a roll of quarters from the bank today. One of them was Canadian. My first reaction was to be furious. Now that I think of it, though, maybe I should give the bank back the valuable property they gave me by accident?

Thursday, October 18, 2007

Mike Nelson, Infringer?

Michael J. Nelson, writer and host of Mystery Science Theater 3000, and all-around funny man, has a new project called RiffTrax. You see, when MST3K ended, many of us, clamoring from our parents' basements, wished that the amusement could go on. But no television network would pick up MST3K; what to do? Mike had two ideas, one awful, and one RiffTrax. His first idea was to sell DVDs of movies with his humorous commentary on them. This is, sadly, grossly violative of copyright. RiffTrax proposes to sell merely the funny audio commentary to consumers who themselves have to provide the movie. Theoretically, this seems to work very well - it does not cut into the revenues of movie studios at all, because people still have to buy the movies, and, in fact, it arguably makes the movies more attractive on the market because there is another application to which they can be put. This all sounds perfect for everyone.

Well, here's the problem. I should preface what I am about to say by warning the reader that it is speculative, that I don't actually think Mike Nelson is doing anything wrong, and that he would probably win if a lawsuit were brought against him. Now to it. When MST3K was running, the producers had licenses from the filmmakers of the movies mocked. This was cheap and easy because the movies were so awful (and that was the point) that the owners of the copyright simply could not squeeze much out of licensees - the market wouldn't bear it. I cannot imagine that, for instance, Manos: The Hands of Fate is a valuable movie to license. Getting a license for a RiffTrax movie may be impossible as a practical matter, because many of the films are extremely popular blockbusters. Further, the directors of those movies, quite assured of their films' artistic merit, would balk at the idea of allowing someone to mock mercilessly what are their greatest achievements.

So Mike does not have licenses and has no intention of trying to get licenses. Because RiffTrax does not sell the movies at all, though, it might seem like no license is needed. Here is where the speculation begins. The audio commentaries are themselves expression in a fixed medium. The commentaries are intelligible only when played in precise conjunction with separate works whose copyrights are held by entirely distinct entities. Do these facts make RiffTrax derivative works? The copyright owner has the exclusive right to make derivative works. Mike Nelson is not the copyright owner. If I add the premise that RiffTrax are derivative works, then we come to a disturbing conclusion - Mike Nelson is a copyright infringer.

There is a separate issue. Even if RiffTrax are not infringing, they may encourage infringing uses by the consumers. Certainly Mike Nelson realized that he could not produce an MST3K-like program without a license. But what happens when the end user plays a DVD and a RiffTrax at the same time? Besides not having those delightful little shadows at the bottom of the screen, he is replicating the MST3K experience without a license.

I sincerely hope nothing comes of this, but I know that money talks in the legal world, and the threat even of a frivolous lawsuit can end many a legal activity. I would also appreciate any comments by people more informed than I am who may be able to explain things better than I can.

Wednesday, October 17, 2007

A "Silver Tsunami"

It appears that the first baby boomer recently applied for social security retirement benefits. Ms. Casey-Kirschling is the first of 80 million such individuals, and thanks to them, social security will pay more in benefits than it receives in taxes by 2017, and it will exhaust the trust fund by 2041. However, according to Social Security Commissioner Michael Astrue, "there is no reason to have any immediate panic" because "I and most people who are really familiar with the situation are confident that . . . Social Security will be there for future generations." Bush's privatization plan is currently buried in the same vicinity as Jimmy Hoffa, but Commissioner Astrue is confident that something will be done. I know I feel better.

Where the Reuters article is mostly sunshine and rainbows, this one is a bit more sobering. That "trust fund" supposedly created when Congress spent all the surplus on other things? Just a pile of IOUs. Actually, it's not even a collection of IOUs, since an IOU to oneself is really just a reminder. It gets better, too. The U.S. currently has 3.3 workers paying into social security for each retiree drawing benefits, but it's expected to drop to 2 workers per retiree around 2030. And let's not forget Medicare, which is expected to start running a deficit in 2010 and exhaust its "trust fund" by 2018, and whose financial liability is six (6) times greater than social security's.

Of course, there's an easy solution: cut benefits. Let's see a show of hands, who believes that the Me Generation will vote for politicians who promise to cut benefits? Yeah. So, if benefits are kept at their current level, Congress has three options to raise that money: raise taxes, print money, or borrow from foreign creditors. Of course, the second option doesn't really work since social security benefits are tied to inflation, so running the printing presses overtime is treading water, at best (nevermind the effect of inflation on the rest of the economy). The third option isn't going to cover all the shortfall for two reasons. First, many Americans are already upset about the amount owed to foreign creditors, so any politician seeking to increase that amount is going to have a rough time of it. Second, foreign creditors have to be willing to buy, and given the amount of debt the U.S. already has and some of the frightening long-term economic predictions, it's far from guaranteed that any will wish to buy more U.S. debt. Therefore, the choice is obvious: raise taxes.

Higher taxes will hurt the victims of this Ponzi scheme, Generation X and those that follow, in two ways. First, they will never see a dime of money from social security, so they will have to build their retirement funds on their own. Second, all that money flowing to the baby boomers will reduce the amount they can save for their own retirement. They will have a great deal of money taken from them (won't it be fun when the $1050 a month to a retiree is shouldered by two workers!), they won't get a return, and it will be difficult to save for retirement. A silver tsunami? To Generation X, Y, and beyond, it will be a never-ending golden shower.
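To make the per-worker burden concrete (using the $1,050 monthly benefit and the worker-to-retiree ratios mentioned above; these are rounded illustrative figures, not official actuarial numbers), a quick sketch:

```python
# Rough monthly cost to each worker of funding one retiree's benefit,
# using the post's rounded figures (not official SSA actuarial data).
MONTHLY_BENEFIT = 1050.0

# Workers paying in per retiree drawing benefits, by (approximate) year.
ratios = {2007: 3.3, 2030: 2.0}

for year, workers in ratios.items():
    per_worker = MONTHLY_BENEFIT / workers
    print(f"{year}: ${per_worker:.2f} per worker per month")
```

At 3.3 workers per retiree, each worker's share is roughly $318 a month; at 2 workers per retiree, it jumps to $525.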

Tuesday, October 16, 2007

Supremely Unintended

I would like to draw your attention, if I may, to this Supreme Court opinion. I should explain a bit of the procedure involved, because it might be difficult to understand what this opinion is unless you know a little about the law.

People who would like to have their cases heard by the United States Supreme Court must (generally) file a petition for a writ of certiorari ("cert" for short). If the writ is granted, the parties brief the cases, submit those briefs to the Court, and an oral argument is scheduled. Cert is granted when four of the nine justices on the Court agree that the case is important enough for them to hear. When three or fewer justices vote to hear the case, cert is denied. The denial of cert will come out in an order, often in a large list of other petitions which were denied. Usually nothing more is written than "the petitions for a writ of certiorari are denied." However, a justice who feels adamantly that cert ought to have been granted can, and occasionally (though it's fairly rare) does, write a dissent from the denial of cert. Bizarrely enough, in some cases, a justice who voted against cert can write a concurring opinion in the denial of cert. What we have here, from Justice Breyer, is a dissent from a denial of cert.

The facts and main issue are as follows:
Joe Clarence Smith, petitioner in this case, was first sentenced to death 30 years ago. Due to constitutional error, the Arizona courts in 1979 set this first sentencing aside. Smith was again sentenced to death later that year. Due to ineffective assistance of counsel, the federal courts in 1999 set this second sentencing aside. Smith was again sentenced to death in 2004. He now argues that the Federal Constitution’s prohibition against cruel and unusual punishments forbids his execution more than 30 years after he was initially convicted.
What vexes Justice Breyer is the thought that executing someone thirty years after his conviction is arguably unusual; and holding someone on death row for thirty years is cruel. Well, if both are so, why have they come about in this case? A man who should have been lawfully executed decades ago has been able to avoid execution because of overzealous courts and an appeals procedure that delays, even thwarts, justice. Perhaps, in order to avoid violating the Eighth Amendment's prohibition on cruel and unusual punishment, we ought to do away with such appeals. Is that more just?

Unintentionally, this liberal justice is pressing a dilemma upon us - either stop executing those who have been sentenced to death, or sharply curtail judicial review of capital sentences. Probably thinking that the second option is abhorrent, Justice Breyer thinks that the logical outcome of Supreme Court review would be to stop executing people such as Mr. Smith. I wonder how most people who don't wear black robes would feel about the choice?

Insomniac Ramble

I suppose, for lack of a better idea, I ought to say something about this "call to blogs."
On October 15th, bloggers around the web will unite to put a single important issue on everyone’s mind - the environment. Every blogger will post about the environment in their own way and relating to their own topic.
It's a good thing we have a day set aside for this, so that we can put that overlooked environmental issue on everyone's mind - everyone's mind not already gagging with a surfeit of environmental drum-beating, or anything, right? And, apparently in an attempt to absolve bloggers of their responsibility to discuss the environment, the site predicts that bloggers will post "relating to their own topic." Oh, all right. So I don't have to blog about the environment, the topic you suggested, to participate. That seems surpassing strange.

By the time I finish this entry, it will be a day late for the Blog Action Day, and I can only hope the environment isn't totally ignored again (like it usually is, before and after Blog Action Day). I think we should take this time to celebrate just a few heroes of the environmental movement:

Mao Zedong - generous estimates claim that he permanently eliminated as many as 100 million sources of carbon dioxide, a greenhouse gas

Rachel Carson - in the cost/benefit analysis Carson thrust upon the world, animals were by far more worthy of life than third-world humans. It takes courage to stand up against oppressed ethnic minorities in favor of fish, and for that, we applaud her.

Al Gore - by lying about data and scaring people with dire, unfounded predictions, Al Gore has stood firmly against the assault on reason. Nostradamus himself could not match the relentless logic and clear thought of the former vice president.

open borders supporters - by advocating massive increases in U.S. population, these people propose to add tens of millions of disproportionately-intense consumers of scarce world resources to the First World. If these people were forced to remain in their native countries, they would live simpler lives and put less strain on the environment, but in this country, they can take advantage of discounted education to learn about how badly off the environment is, turning them, at least, into guilty consumers, rather than shameless consumers (one hopes!).

Keep up the good work, fellas.

Friday, October 12, 2007

Chicken Little Honored

Well, that's rather embarrassing, huh?
Al Gore won the Nobel Peace Prize today for warning the world about the dangers of global warming, and leading the campaign to persuade governments and individuals to reduce their reliance on fossil fuels.
Thank goodness Al Gore is here to let us know about the dangers of global warming - the entire issue was threatening to evade all notice, until he came along. Thank goodness, as I said.

This is what I mean by "embarrassing."
Al Gore’s award-winning climate change documentary was littered with nine inconvenient untruths, a judge ruled yesterday.
It's sort of strange that a judge would rule on truth, but then, they must do things differently across the pond. Over here, while we occasionally get even a Supreme Court justice to cite "reason and truth" when his clerk isn't able to find any case law to support his point (Mapp v. Ohio, 367 U.S. 643, 660 (1961)), we usually have to settle for having scientists decide scientific questions. I suppose you could say that expert testimony puts science in the courtroom, but for a limited purpose - for establishing facts relevant to the case, and to the standards of the court, not for establishing objective truth.

I digress. The Nobel committee's choice of recipients should be embarrassing, but I can't imagine it's going to cause any more shame than the previous choices.

Kofi Annan? Jimmy Carter? Yasser Arafat?

Ouch. That has to sting. I think Genghis Khan should be nominated for a prize; can you do that?

Thursday, October 11, 2007

Strange Life

I had a dream last night that a spot on the Supreme Court opened up and Ted Olson got nominated for the position. I don't want to say whether I thought it was a good thing. I just want to put that out there, for anyone who wonders what is really going on in my head.

Have fun with that, psychology!

Monday, October 08, 2007

Shrugged off

Like Vernunft, my exposure to Rand has been less than satisfactory. Unlike him, though, I managed to avoid her until I was in college. I don't recall hearing of her or her philosophy while young, and none of my high school courses involved one of her books (from what I understand, many people read The Fountainhead in a high school course). I remember hearing about Atlas Shrugged in high school, but only to the extent of admiring the title.

My first major exposure occurred during my junior year of undergrad. I had finally begun to self-identify as a libertarian (which, in retrospect, was a place I had been headed towards for some time), and I kept hearing about Rand from other libertarians. The majority of them disliked her and her books, but claimed the books were important anyway. The Objectivists in the libertarian groups asserted that Atlas Shrugged was her greatest work, so I decided to see what the fuss was about.

As it turned out, the fuss was about an overly long book filled with unsympathetic characters, wooden dialogue, gold fetishism, and ridiculous happenings in settings where characters proclaimed the existence of an objective reality. Every character was identifiable as either Good or Bad, and any character that wasn't pure Good was a moocher and anti-life. To top it off, there's the infamous John Galt speech, which, if actually delivered, would take three hours. In short, not a good book by any account.

However, I could see the value of some of her ideas. At that point, I had not encountered anyone who claimed selfishness was morally good, and her belief that capitalism was the only economic system compatible with human freedom was refreshing (especially when surrounded by college-age lefties). I decided to read some of her philosophy in the hope she was a better philosopher than novelist.

I read a number of her philosophical tracts available on the internet, and I reached several conclusions. First, she never actually read anything written by Kant. Her attacks on him were nonsensical, either because she was attacking something that actually supported her position or because she was attacking something he never claimed (many of her critiques seemed to address points made by Hume, not Kant). Second, she borrowed heavily from Nietzsche. While understandable to the extent they were both egoists, there's something very amusing about a person incorporating Nietzsche's work when trying to prove the existence of objective morality. Finally, the reason most philosophers ignore her work is that it's contradictory and incoherent.

On that last point, I decided to make one more attempt to understand her philosophy by attending some Objectivist meetings. I hoped that adherents to her philosophy could clarify her beliefs. Instead, I found people who refused to acknowledge any shortcomings in her work, who claimed there was a massive philosophical conspiracy against her by people afraid of the truth, and who doubted quantum mechanics because it conflicted with the notion of an objective reality. They even believed Rand was correct in proclaiming the existence of objective aesthetics. I attended several meetings, but it was no use. They were devoted to Rand, and any questioning of her works meant I was too dim to truly grasp the material.

From talking to other libertarians, apparently my experience is fairly common. In some ways, it's a shame. Libertarianism does owe Rand a debt for attempting to create a moral underpinning for minarchism, but it's hard to rationally explain that to the average individual whose only exposure to Rand usually involves an asshole Objectivist (which is a somewhat redundant description). Like many true believers, the Objectivists have managed to drive away those interested in rationally discussing ideas and thereby marginalize the object of their worship.

I do disagree with Vernunft on the marginalization of Rand by university professors, though. First, it gives the Objectivists material for their claims of an anti-Rand conspiracy. Second, I think her work would be ideal material for critique by philosophy classes. Evaluating a flawed philosophy seems like it would help develop the critical thinking skills that higher education is supposed to instill in students.

Then again, when I consider the critical thinking I encounter in many educated people, assigning Rand to undergrads would probably convert them to Objectivism. Maybe ignoring her is for the best.

Law School, Again

Let me explain law school to you.

You may get an assignment to write a brief, then to argue, at some later date, in front of a panel of mock judges, advocating that brief. Be very careful to know exactly what is expected of you in this brief. Otherwise, you might have wasted many hours of your life on precisely the wrong things, like...this friend of mine.

The assignment will come with an appellate record - in the case of my friend, a trial order (with facts), an appellate opinion, a concurrence in that opinion, a dissent from that opinion, and, finally, a grant of writ of certiorari to the United States Supreme Court. My friend's role, then, was to write a brief to be submitted to SCOTUS.

Here is the mistake my friend made. The record contains a discussion, at two levels of the federal courts, of various cases on the two issues relevant to the case. My friend teamed up with another friend, and they took an issue apiece, as is appropriate for this team moot court competition. These two friends, thinking that they had better make the best argument possible, researched the cases in the record, and found other relevant cases by spending varying amounts of time on Westlaw. Roughly twenty-four hours before the finished brief was due, they started the minor things - checking citations, writing the non-substantive parts of the brief, making sure indentations and headings were all right, checking spacing, and all these other seemingly irrelevant things. Twenty-four hours seemed like a good amount of time to set aside for this, although it took over twelve hours to actually fix everything, get everything right, and print off fifteen (yes!) copies (plus, bizarrely, but per requirements, an electronic file on a 3.5" floppy disk. Do they still sell these? Yes, these friends found out, they actually do.). Having put a good deal of time into the minutiae of the brief, but not enough time to recheck everything for utter accuracy, these friends turned in what was a good technical brief with an excellent argument section - twenty pages, single-spaced, in the argument section alone.

The brief was graded and returned. The grading form was roughly three pages - my friend is too lazy to check right now. The substantive grade was determined on three lines; the technical grade was determined on almost three full pages, containing almost innumerable factors. Apparently, to my friends' surprise, the substantive discussion would have been perfectly fulfilled had they included no more than those cases, statutes, and Constitutional provisions contained in the record. The extra work on Westlaw was purely wasted. Perversely, the technical details were paramount, and more care had probably been taken by the graders in checking the table of authorities for the right number of trailing periods after case citations and before the corresponding page numbers (my friend is not making this up - points were deducted viciously in the table of authorities) than in actually reading the argument.

My friends have the first round of oral argument tomorrow. If the brief grade is anything to go by, the judges are more likely to ask, "Why is the short form citation of Widmar on page 7 of your brief so sloppy?" than "Do you dispute opposing counsel's assertion that Widmar and its progeny provide a compelling argument that the forum in this case was public?"

That's law school. If you want substance, join another field. If you like to spend more time checking the form of citations than actually reading the case cited, you'll fit right in. Welcome to law school - a position on law review is waiting.


Friday, October 05, 2007

Dull Shrug

Apparently Atlas Shrugged was published on October 10th, 1957, making next Wednesday its fiftieth anniversary.

I suppose I should mark the occasion. And I will give it what marking it deserves.

Now, I can't offer much commentary on the historical reception of the novel. It will be fifty when I am still twenty-five, making me, bizarrely, half its age as of next week. What I propose to offer, instead, is the history of my exposure to Ayn Rand, her works, her ideas, and her disciples. The road to disdain is twisted indeed.

I first heard of Rand in grade school. I can't remember the circumstances at all. I think my mother mentioned something about Rand's being conservative. I learned, perhaps from my mother as well, that Rand's work embodied an ideology of radical individualism. Well, this appealed to me quite a bit. I was already a heartless conservative and wished that people would do for themselves, and stop bothering others. Given that my entire existence was, at the time, fully supported by two people who worked, and that I was not working myself, I was probably not being very intellectually consistent. Well, whatever; work was in the future. For now, I knew little of Ayn Rand, but what I knew, I liked. Here was a woman unafraid to tell the moochers to stop mooching.

My next major encounter with Rand came during undergrad. I was reading back issues of National Review - 1950s to 1980s, in fact. It was in those pages that I became aware of the internecine feud between the Objectivists, on the one hand, and Whittaker Chambers, on the other, representing a classical (and theistic) conservatism. Rand was an atheist; this did not sit well with me. I still thought, though, that her basic ideas must be right, and that she just must be wrong about God.

Now I was in for a surprise. Later in my undergraduate career, I came upon certain Objectivist forums. Let me outline the exchange:

I: "I think Ayn Rand had some good ideas."

They: "Of course. She was always right."

I: "And I say this as a more-or-less Kantian ethicist and epistemologist."

They: "HERETIC!"

That's not far off the truth. Ayn Rand's moral philosophy, which she seems to have bequeathed unmaimed to her intellectual heirs, is a confused and confusing mess. On the one hand, Rand was clearly, undoubtedly influenced by Immanuel Kant in her insistence that objective moral standards, discernible by reason alone, were the sole possible basis for morally sound behavior. Were you to pass the works of Kant and of Rand on this point through some online translators a few times, in fact, it would not take long until they came out identical. Objectivism, at least as far as morality goes, is clearly Kantian.

But Rand never acknowledged her debt to Kant. In fact, as you may know, what I just said is as much an understatement as Justice Scalia's describing the Telecommunications Act of 1996 as "not a model of clarity." Rand despised Kant, and ascribed to him the blame for the totalitarian regimes of the 20th century, for the murder of tens of millions of people, and for an ideology of nihilism which had infected Europe's intellectual elite. Kant, of course, is innocent of such charges, which rest on two misunderstandings. First, the Objectivists posit a false dilemma in ethics - either one is an egoist or an altruist. Because Kant clearly was not an egoist, and explicitly refutes egoism, he is branded an altruist. But this is unfair. Kant explicitly refutes altruism in the Critique of Practical Reason right after he's refuted egoism - he says that even a philosophy designed to cause the greatest amount of aggregate happiness in the human race would not have true moral worth on its own. Kant's deontological ethics is something beyond egoism and altruism, whereby, instead of basing action on pleasure, individual happiness, or collective happiness, a person is supposed to do what is right solely because it is right. Objectivists either cannot or will not accept the distinction.

The second misunderstanding of Kant that has been shared by Objectivists since Rand is that he was a subjectivist, and, consequently, a nihilist. Kant does locate the validity of mathematics and physics in the human subject, but to leap from that "subjectivity" to relativism is unwarranted. Kant proposed transcendental idealism as a way of avoiding the contradictions, indeed, the nihilism into which realism falls. If all we know are things outside us, then we cannot be sure that those things will remain the same. If, however, what we know is conditioned on objects' conforming to our minds' very way of thinking, then objectivity is assured. This seems baffling at first glance to any student of Kant: because laws of nature are merely aspects of the human mind (and thus subjective in some sense), those laws hold universally, since the mind cannot think of anything except in those fixed terms (making knowledge entirely objective by definition).

It is apparent, then, that Kant's "subjectivism" is more complex than the Objectivists claim. His subjectivism, if one takes him to be speaking truly, actually secures objective knowledge in the only way possible. Again, Rand and her cronies have misleadingly applied a label to Kant based on some aspects of his philosophy, then proceeded to extract all the implications from that label, no longer even trying to connect accusations to Kantian citations. It's intellectually dishonest, and lazy, and ignorant.

My final major encounter with the Randian school was a book by Leonard Peikoff, one of her disciples. In it, Peikoff essentially argues that Kantian relativism infected German thought in the 19th century; that such relativism in morality and epistemology led to a fascistic political theory embodied in Hegel; that later philosophers, influenced by Kant and Hegel, developed nihilism (see especially Nietzsche); and that this intellectual tradition ultimately provided the ideological backbone for the Nazi regime. In an effort, apparently, to make his book even more surreal, Herr Peikoff asserted that the same intellectual trends were apparent in the United States and would lead to a fascist regime in due time.

Well, it's absurd because it includes, among other things, the "Kant is the root of all evil" premise, which is demonstrably false. Further, while Hegel's political philosophy is peculiar, perhaps even anti-Lockean, he was no Nazi. Nor do I think that Nietzsche's nihilism had as much of an effect on the Nazis as Peikoff thought, although perhaps here there is a hint of truth.

Well, that's been my experience with Ayn Rand and Objectivism. I have found the movement to be a political cult. The members appear unable or unwilling to think, to research, and to have their beliefs challenged. Ayn Rand herself seems to have been the queen of this sort of dogmatism, so I can't greet the coming anniversary with any great joy. Thank goodness she's marginalized in most universities' curricula.