CHARTERS OF LIBERTY GRANTED BY POWER
IN 1792, in a short essay called “Charters,” James Madison succinctly explained what he thought was the essential difference between the United States Constitution and the constitutions of every other nation in history. “In Europe,” he wrote, “charters of liberty have been granted by power. America has set the example ... of charters of power granted by liberty. This revolution in the practice of the world may, with an honest praise, be pronounced the most triumphant epoch of its history.”1
The “charters of liberty ... granted by power” that Madison had in mind were the celebrated documents of freedom that kings and parliaments had issued throughout the ages, many still honored today: Magna Carta of 1215, the English Petition of Right of 1628, the English Bill of Rights of 1689. Documents like these had made the British constitution – unwritten though it was – the freest in the world prior to the American Revolution. A British subject enjoyed more room to express his opinions, more liberty to do as he liked with his property, more security against government intrusion, and greater religious toleration than the subject of any other monarchy in the known world.
Yet for Madison and his contemporaries, that was not enough. He and his fellow patriots considered “charters of liberty ... granted by power” a poor substitute for actual freedom because however noble their words, such charters were still nothing more than pledges by those in power not to invade a subject’s freedom. And because those pledges were “granted by power,” they could also be revoked by the same power. If freedom was only a privilege the king gave subjects out of his own magnanimity, then freedom could also be taken away whenever the king saw fit.
Whether Parliament could repeal the charters of British freedom was a point of controversy among lawyers and political thinkers before, and even after, the muskets began firing at Lexington and Concord. The British judge William Blackstone, whose four-volume Commentaries on the Laws of England was published in the 1760s and became a landmark in legal history, was proud that Great Britain was foremost in the world in respecting the rights God gave all people. Yet at the same time, he believed that Parliament’s power was “supreme” and “absolute”2 and that, if it chose, it could change the rules of monarchical succession, alter the country’s religion, and “do everything that is not naturally impossible.”3 Parliament’s “omnipotence” was so vast that it had power over “[a]ll mischiefs and grievances, operations and remedies, that transcend the ordinary course of the laws.”4 Other thinkers, most notably John Locke, had argued that individual rights took precedence over government power, so that the people always retain the right to overthrow tyrannical rulers. But Blackstone rejected this idea because it “would jeopardise the authority of all positive laws before enacted.” As long as the British government exists, he wrote, “the power of parliament is absolute and without control.”5
The idea that Parliament’s “absolute” power included a right to revoke protections for individual rights repelled America’s founders. They believed that people are inherently free and that government answers to them, not the other way around. James Wilson, a signer of the Declaration of Independence who served alongside Madison at the Constitutional Convention, pointed out that if Blackstone was right in thinking that freedom is given to people by all-powerful rulers, then the “undeniable and unavoidable” consequence would be that “the right of individuals to their private property, to their personal liberty, to their health, to their reputation, and to their life, flow from a human establishment, and can be traced to no higher source.” That would mean that “man is not only made for, but made by the government: he is nothing but what society frames: he can claim nothing but what the society provides.”6 The fundamental problem with the monarchical idea of charters of liberty granted by power was that freedom could then only consist of those rights the king chose to grant and only for so long as he chose to grant them.
This was not just a theoretical problem. Monarchs often revoked “charters of liberty” after granting them. Even the glorified Magna Carta was repudiated not long after it was issued, and many kings refused to acknowledge its authority. Perhaps the most notorious example of the fragility of such charters came from France. In 1598, King Henry IV issued the Edict of Nantes, promising religious toleration to Protestants. For decades, Protestants and Catholics had murdered one another, most infamously in the St. Bartholomew’s Day Massacre of 1572, during which unknown thousands were slaughtered. Henry himself was spared only when he converted to Catholicism. (Three decades later, he was assassinated anyway, after more than a dozen attempts on his life.) Although the Edict proclaimed Catholicism the national religion, it also allowed Protestants to “live and abide in all the cities and places of this our kingdom ... without being annoyed, molested, or compelled to do anything in the matter of religion contrary to their consciences,” so long as they complied with the secular laws. This, Henry proclaimed, would “leave no occasion for troubles or differences between our subjects.”7
The Edict remained in place for nearly a century – until 1685, when King Louis XIV revoked it and proclaimed Protestantism illegal. Faced with new rounds of persecution, as many as 400,000 French Protestants fled to Britain, Sweden, and the North American colonies. Among them was Apollos Rivoire, whose son, taking the Anglicized name Paul Revere, became a leading Boston patriot. The revocation of the Edict terrified the Protestants of Great Britain, too; that country’s king was also a Catholic, and they feared he might imitate the French monarch.
British kings often betrayed their past promises. In the years after 1660, King Charles II and his successor, James II, sought to reorganize the North American colonies and bring them more directly under the Crown’s control. This, they hoped, would ensure that the colonists produced more profit for the mother country. Charles II decreed that what is now Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, and Connecticut would be reorganized as a new “Dominion of New England” governed by a single man who answered solely to the king. New York and New Jersey were soon added.
In 1686, the Crown’s agent, Edmund Andros, and Andros’s aide, Joseph Dudley, arrived to take control of the new Dominion. They dismissed the Massachusetts colonial assembly and instituted autocratic rule, jailing those who resisted and rejecting the colonists’ assertions of British liberties. “You have no more privileges left you than not to be sold as slaves,” Dudley told one prisoner who asserted his right to a fair trial under Magna Carta.8 Andros and Dudley’s autocracy ended only when James II was overthrown in the Glorious Revolution of 1688. New England colonists, learning of the rebellion, immediately arrested the pair and sent them back to England. Only three years after the Dominion had been proclaimed, it was dissolved and the old colonies restored.
Almost a century later, good Massachusetts men like John Adams still seethed at the memory. George III’s ministers, Adams wrote in 1775, were “but the servile copyers of the designs of Andross [and] Dudley [sic].”9 Adams had good cause for this allegation: in the Declaratory Act of 1766, Parliament had asserted that it had authority to legislate for the colonies “in all cases whatsoever.”10 Some colonists viewed that act as essentially repealing Magna Carta. When it came time to declare independence, Adams and the other revolutionaries listed among Parliament’s malefactions “taking away our Charters, abolishing our most valuable Laws, ... altering fundamentally the Forms of our Government, ... suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever.”11 Americans had learned that royal “charters of liberty” were pie crust promises, which crumbled all too easily.12
Even after the Revolution, the founders were so skeptical of paper pledges of rights that the Constitution’s authors initially demurred when Americans demanded that it be amended to include a Bill of Rights. In their view, such “parchment barriers” typically proved useless in times of crisis, because those in power could so easily revoke them, ignore them, or argue them away. Better to focus instead on designing a government that would include checks and balances and other structural protections to prevent the government from acting tyrannically. Even when they agreed to add a Bill of Rights, they remained convinced that freedom could never be secured solely through written promises.
To them, freedom was not a privilege the state provides but a birthright the state must protect. George Mason put this point succinctly in June 1776, when he wrote in the Virginia Declaration of Rights that “all men are by nature equally free and independent and have certain inherent rights,” which include “the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety.”13 Government does not give people these rights – people already have them, and the people “cannot, by any compact, deprive or divest their posterity” of these rights. Thomas Jefferson would make the point even more concisely a month later, when he wrote in the Declaration of Independence that “all men are created equal” and are “endowed” with “inalienable rights,” which include “life, liberty, and the pursuit of happiness.” Government exists “to secure these rights,” not to grant them, and if it turns instead to destroying those rights, “it is the right of the people to alter or to abolish” that government.
Freedom Is Not Permission
These phrases are not mere rhetoric. They express a profound and elegant political philosophy. To understand it, we must begin with a basic presumption or default position. Logicians, lawyers, and laymen use such presumptions as the foundation for any argument. Presumptions of the “when in doubt, do x” variety serve as starting points for any sort of discussion or agreement, so that we know where to go in the event that the agreement or argument fails. These sorts of “default rules” surround us every day. Many subscription services, for example, require people to opt out of annual renewal; unless they unsubscribe, the company presumes at the end of the first year that users want to pay for a second. These defaults are not set in stone, of course – the subscriber can decline the second year if he wants – but the default makes things easier by placing the burden on the subscriber to refuse a second year, rather than on the company to ask again after the first.
As this example suggests, choosing who bears the burden can have important consequences. If we start from a flawed initial position, we risk a dangerous and costly error. A subscriber who forgets to cancel might be surprised to see the second year’s charge appear on his bill.
Choosing an initial presumption is extremely important in the realm of politics or law, where the stakes are much greater. When we establish a presumption or a starting point for a political or legal argument, we are choosing what the normal rule will be, in the absence of good reason to deviate or in the event that we make a mistake. The obvious example is criminal law: courts presume a defendant to be not guilty and place the burden on prosecutors, which means that if the prosecutor fails to persuade the judge, or makes a mistake, the accused person is free to go.
In discussing politics, there are two possible candidates for an initial presumption. We might presume in favor of totalitarianism – everything is controlled by the government, and citizens must justify any desire to be free – or we can presume in favor of liberty and require anyone who proposes to restrict freedom to justify that restriction. Either everything is allowed that is not forbidden, or everything is forbidden that is not allowed. As Professor Richard Epstein observes, there is no third, middle-ground option, because there is no obvious midpoint between the two extremes: people will bicker endlessly about what qualifies as exactly halfway.14 So we must start by presuming either in favor of freedom or against it.
Yet these two candidates for starting points are not mirror images, and their differences are crucial. The differences are both procedural and substantive.
As a matter of procedure, starting with a presumption in favor of freedom is preferable because each step people take away from a state of liberty can be justified in theory by measuring whether they are better off. When two people sign a contract, they bind themselves, and in that sense are less free. But they consider themselves better off, and that is good enough, as long as they harm nobody else. It is not so easy to justify the reverse – a movement from a state of total unfreedom to one that is freer – because each step affects far more people. The totalitarian state is frozen solid, so that every action inflicts consequences on everyone else, and the slightest deviation from rigid order must therefore receive the approval of everyone affected. This means it is not always possible to determine whether people are better off at each step when they move in that direction. This, writes Epstein, “is why the restoration of even modest elements of a market system seem to pose such radical problems for Eastern European and Third World nations.”15
The point becomes clearer when we think about an individual: a free person can choose to become less free – he can sign contracts that limit his future choices, can voluntarily give up certain rights, or can surrender property he once could have used – but an unfree person cannot choose to become more free. Precisely because he starts out with no freedom, his capacity to choose alternatives for himself has vanished. He must ask his master for permission instead. This is why the road between freedom and unfreedom usually moves in only one direction. As Jefferson said, the “natural progress of things is for liberty to yield, and government to gain ground.”16
There is a deeper procedural reason why it is better to presume in favor of freedom than against it. The liberty presumption rests on a basic rule of logic: anyone who makes a claim must prove it – or, as a classic legal textbook puts it, “the issue must be proved by the party who states an affirmative, not by the party who states a negative.”17 In Latin, this is sometimes called the rule of onus probandi. Since proving a negative is technically impossible, the rule of onus probandi applies across the board to all claims: a person who asserts that the moon is made of green cheese, or that Herman Melville wrote Moby Dick, or one who claims that he may justly stop another from publishing his opinions or praying to the god of his choice, bears the burden of proving those claims.
The opposite rule – “throwing the burden of proof on the wrong side”18 – is a logical fallacy, and it too has a Latin name: probatio diabolica, or “the Devil’s proof.” Requiring someone to prove a negative is devilish because it perverts thought and leads to absurd results. It is also sometimes called “Russell’s teapot,” after an example given by the philosopher Bertrand Russell: nobody can prove that there is not a tiny teapot orbiting the sun, so small as to elude the world’s best telescopes.19 If you cannot see it, why, that just proves how tiny the teapot is! If you cannot detect it with the finest instruments, that only shows that you are not looking hard enough – and so forth. It is not possible to disprove even an absurd claim. Just as a person who says there is a teapot in space has the duty to prove it, so a person who claims the right to govern another person bears the burden of justifying that assertion. It is not possible to prove that one should not be ruled by another.
These procedural differences between the presumption of freedom and the presumption against freedom are a consequence of two substantive differences. First is the asymmetry between the past and the future. The past cannot be changed, but the future is what we make of it. That is why, offered the choice between a prize in cash or frozen assets of equal worth, a rational person chooses cash. Liquidity itself has value: cash can easily be converted into whatever the holder wants, while frozen assets may not serve one’s present needs, and selling or trading them takes time and effort. Freedom of choice is the liquidity of human action, just as property is human action in frozen form.
The second asymmetry is the asymmetry of personal consequences: someone deprived of freedom suffers a personal injury that is qualitatively different from the cost that a person suffers when he is stopped from taking away another person’s freedom. If Tom beats Joe or takes his property, Joe suffers a direct, personal injury different in kind from the harm Tom suffers when someone intervenes to stop him from robbing Joe. Our right to control our own lives and our rights (if any) to control the lives of others are simply not equivalent.
When thinking about government, therefore, it is better to presume in favor of freedom than to presume against it, not only because of the basic procedural rule of onus probandi, but also because of the substantive difference between freedom and its opposite. Simply put, the cost of having too much freedom is far smaller than the cost of having too little. At the very least, if we start with a presumption of freedom and later decide that less freedom would be preferable, we can move in that direction – whereas the reverse is not always true. And if we begin with the presumption of freedom and later conclude that that was an error, we are less likely to have hurt somebody as a result of that mistake than if we began the process by assuming away everybody’s liberty.
As mentioned earlier, one real-life example of these abstractions at work is the presumption of innocence in criminal law. If a judge put the burden of proof on the defendant to show that he did not commit the crime, the judge would be loading the dice against him. Even if the defendant proved he did not own the gun used to commit the murder, well, perhaps he borrowed it! To disprove that, he must now prove that he did not know the gun’s owner. But perhaps he paid that person to lie! – and so forth, infinitely. Every disproof only creates a new speculation, which must again be disproved. These speculations might seem silly, but they are not logically impossible, and requiring the defendant to prove his innocence – imposing the Devil’s proof on him – would require him to disprove even such bizarre conjectures. Every accused person would find himself in a hall of mirrors, forced to prove himself innocent of an endless series of baseless accusations, without regard for the rules of logic.
As a procedural matter, presuming innocence is preferable, because an erroneous conviction is harder to fix than an erroneous finding of innocence.20 And as a substantive matter, the presumption of innocence is better because a wrongfully convicted person suffers a different, more personal harm than the public experiences if a guilty person goes free.
Likewise, there are an indefinite number of speculative reasons that might defeat anyone trying to prove that he should not be deprived of freedom, just as there are an infinite number of “what ifs” that the “Devil” could use against a defendant who tries to prove he did not commit a crime, or a person who tries to disprove the existence of an invisible teapot: What if a person abuses his liberty? What if he doesn’t know how to use it wisely? What if he turns out to be a psychopath – or perhaps his children or grandchildren turn out to be psychopaths? What if there are top-secret reasons of state that warrant imprisoning him – reasons no judge may be allowed to see? Wary of the Devil’s proof, logicians place the burden on the person who asserts a claim, because that is the only logically coherent way to think. Likewise, the presumption of freedom requires those who would take away our liberty to justify doing so, because that is the only logically workable way to think about politics and law.
The Eleventh Circuit Court of Appeals made this point well in a 2013 case, when it required government officials to justify a policy of random drug testing that was challenged as an unconstitutional search. It would be “impossible,” the court said, to force the people who complained about the tests “to speculate as to all possible reasons justifying the policy they are challenging and then to prove a negative – that is, prove that the government had no special needs when it enacted its drug testing policy.”21 For the same reasons, we presume people are free and require those who would limit our freedom to justify doing so.
When the founders spoke of all people being created free and equal, they were not merely uttering slogans. They were making important statements about logic and human nature. Their starting point was equality: every person possesses himself or herself, and no person is singled out to rule another person by automatic right. There are exceptions to this rule – adults are the natural rulers of children, for example – but even this is only a temporary and limited condition; parents do not own their children. Normal, mature adults who communicate with one another and use reason have no fundamental entitlement to control one another. As Jefferson put it, nobody is born with a saddle on his back, and nobody is born wearing spurs.22 Or, as the Continental Congress declared in 1775, “If it [were] possible for men who exercise their reason, to believe that the divine Author of our existence intended a part of the human race to hold an absolute property in, and an unbounded power over others ... the inhabitants of these Colonies might at least require from the Parliament of Great Britain some evidence, that this dreadful authority over them has been granted to that body.”23 Anyone who purports to govern another must justify his right to do so. Merely making the assertion is not enough.
This was the idea James Madison considered the “most triumphant” achievement of the American Revolution. Under the British constitution, where “charters of liberty” were “granted by power,” the subject was not free unless he could persuade the government to allow him some freedom. Even then, his freedom might be taken away if the government saw fit to do so. But the American Revolution ushered in a new society, one which recognized that people are basically free, and the government exists at their mercy. The new Constitution was a charter of power granted by liberty. Freedom would be the general rule and government power the exception. This principle marked the revolutionary core of the Declaration of Independence.
Nonsense on Stilts?
When the Declaration was published, critics promptly saw it as a dangerous first step toward ending monarchy and proclaiming liberty to all mankind. Many European monarchs prohibited newspapers from printing translations of it.24 In England, conservatives such as Jeremy Bentham and former royal governor of Massachusetts Thomas Hutchinson published rebuttals of it. Hutchinson dismissed the proposition that all people are equally entitled to freedom as “absurd,” because “if these rights are so absolutely unalienable,” it would be impossible to “justify depriving more than an hundred thousand Africans of their rights to liberty.”25 That accusation of hypocrisy certainly struck home, but it hardly proved that the patriots were wrong to pronounce the right of all human beings to be free. On the contrary, slavery is unjust only because the Declaration’s principles are true. Freedom is not a privilege that white people could justly withhold from black people – it is a right. Freeing the slave is not doing him a courtesy but undoing a wrong.
Bentham was more specific. Ridiculing the idea of equality, he proclaimed that the Declaration’s self-evident truths about inalienable human rights were absurd: “nothing which can be called Government ever was, or ever could be, in any instance, exercised, but at the expense of one or other of those rights.”26
That reaction is unsurprising, given Bentham’s well-known view that natural rights are “nonsense upon stilts.”27 Rights, in his opinion, were only privileges created by the government – the “sweet fruits” of government,28 which are essentially “fictitious,”29 while law is “real.”30 Law is fundamentally a “command [which] supposes eventual punishment.”31 So when we speak of rights, we only mean that the government will punish anyone who interferes with whatever thing is labeled someone’s right. And because “[t]he law cannot confer a benefit, without at the same time, imposing a burthen somewhere,” the government can only give one person a right by taking away the rights of someone else.32
To this day, many follow Bentham in dismissing the idea of natural rights and arguing that freedom is really given to us by the government, when it restricts the freedoms of others – that, for example, our right to private property is really nothing more than the government barring others from taking away our things. This “positivist” theory found its most influential supporter in the twentieth century in Supreme Court Justice Oliver Wendell Holmes Jr. Holmes, who proudly “sneered at the natural rights of man,”33 followed Bentham in arguing that what we call rights are really only “preferences,” supported by “the fighting will of the subject to maintain them.” They are essentially “arbitrary,” just as “you cannot argue a man into liking a glass of beer.”34 Rights are only subjective, personal desires, which the government chooses to protect on pain of punishment. They are manufactured at the state’s pleasure and for the state’s own purposes.
Positivism’s adherents have always claimed that this is a more “realistic” way of looking at things and have lauded themselves for waving away the Declaration’s abstractions about natural rights, which Holmes likened to “churning the void in the hope of making cheese.”35 But this alleged realism is far weaker than positivists maintain.
For one thing, the idea that rights are created by government fiat depends on the presumption that laws are only commands issued by the ruler. But this is not the case. Laws are not commands, as the influential legal philosopher H. L. A. Hart (himself a positivist) explained in his classic book, The Concept of Law. Laws are general rules that remain in place indefinitely, whereas commands are directed to specific people for particular reasons and are usually temporary. Also, laws are not always backed up by punishments: there is no punishment if a person fails to sign a will, for instance, even though a will must be signed to be legally valid. Marriage laws require a person to get a license, but there is no punishment for those who fail to get one, and some laws even recognize unlicensed “common-law marriages.” The rules for entering into a marriage or for writing a will cannot plausibly be called “commands.” Hart called them instead “power-conferring” rules – laws that enable people to act, rather than limiting what they can do – and these are laws even though they are not commands and are not backed up by punishment.36
Another way in which laws are not commands involves what the legal philosopher Lon Fuller called “the force which ideas have without reference to their human sponsorship.”37 Most legal questions, or disputes about ownership, are resolved outside a courtroom, by people who extrapolate from the existing rules to determine what they can do and what they own. The government is rarely even involved in this process, and it usually issues no commands. Instead, people consult the law – which has an internal logic from which they can decide whether something is legal, even if the government has never spoken on the question. Judges themselves use this technique to decide what the law is. If asked to determine whether some past event was legal, a judge will not issue a command. Instead, he determines that the thing that was done – the contract signed or the will drafted – was legal at the time it was done. Even when the Supreme Court issues controversial constitutional rulings, it pronounces that the Constitution has always meant such-and-such, that its natural logic has always provided this answer, even if nobody realized it at the time. For instance, when the court ruled in 2003 that state laws criminalizing private sex between two men or two women violated the Constitution, it explained that a previous decision holding otherwise “was not correct when it was decided.”38 Law has a quality of permanence that commands lack. That is why we speak of a “legal system.” Commands do not hold together as a “system” in this way.
Most importantly, commands represent a form of organization that Fuller called “managerial direction”; they are intended primarily to ensure that people accomplish tasks that their superiors set for them. But law is meant to enable people to accomplish their own purposes. It is essentially reciprocal – more like a promise than a command. Whereas managerial direction is a matter of expediently and efficiently achieving the manager’s purposes, the law is concerned with providing a framework of principles for people to pursue their own goals.
If laws are not commands, then the rights secured by laws cannot be privileges manufactured by the government. Rights can be created between people on their own, in accordance with a legal system, without the ruler even being aware of it. This happens whenever people buy or trade things. This is not true of privileges. A person can give or sell a car or a house to another person without first getting approval from some superior, because he owns the house or car by right. But a soldier who is given a special privilege to leave the base for the weekend cannot sell that pass to another soldier without his officer’s permission. The soldier has only a privilege manufactured by a command – not a right that the law must respect.
Characterizing rights as privileges granted by the command of a ruler deprives rights of the moral weight that is essential to their character as rights.39 According to the Declaration, rights are rooted in profound principles of justice and human flourishing. They connect government policy to moral rules about how we treat other people. The most essential right – the right to one’s own self – is “inalienable” in the sense that no matter how much we try, we cannot give it up. We cannot abandon our own minds, our own responsibility, our own hopes and fears. Self-possession, or what philosopher Tom G. Palmer calls a person’s “ownness,” is an inescapable fact of nature, not a gift from the government, and it is not possible to abolish it (although people can certainly be killed or imprisoned). “Each person is an individual and the owner of his or her acts,” writes Palmer. “[O]ne’s personhood is achieved by the acts that one owns, and the responsibility for those acts is the foundation for one’s rights, for the reason that hindering another from fulfilling his or her obligations is precisely to hinder that person from doing what is right, and therefore to act contrary to right.”40 That is why it is wrong to violate someone’s rights.
Privileges, by contrast, are parceled out on the basis of policy considerations, not moral considerations, and they may be altered for whatever reason the person who grants them considers sufficient. It is not wrong to decline to give someone a privilege or to revoke a privilege once granted. If freedom were only a privilege – a space the government draws around the individual and gives to him as a favor – then the distinctive character of rights would be lost, and they would lie on the same moral plane as, say, permission to go on land owned by the government, which it can revoke when it pleases. In such a world, we would not own our lives but would only have the permission to use ourselves as long as the government allows us.
This may seem like an extreme conclusion, but Bentham openly embraced it. In his view, the obvious conclusion of “reason and plain sense” was that “there is no right which, when the abolition of it is advantageous to society, should not be abolished.”41
Even if it were possible to imagine that the government gives each of us our rights, the next question to ask would be, Where did the government get them? Just as the government cannot give away money that it did not either obtain through taxes or print by fiat, so, if rights are the gift of the state, either it must have acquired them from us to start with or it must have simply manufactured those rights itself. The first option is ruled out, because that would imply that we have rights to begin with – something Bentham and his followers rejected. But the latter option only makes sense if the government is qualitatively different from us common folk, in that it can create rights when we cannot. In this theory, government is somehow fundamentally superior, deriving its powers by mere say-so.
Bentham endorsed this conclusion. Having ridiculed the idea that all men are created equal, he wrote that a law is simply the “wish of a certain person, who, supposing his power independent of that of any other person, and to a certain extent sufficiently ample ... is a legislator.”42 In other words, law is whatever the person with the biggest gun declares it to be. The king may parcel out to the people whatever privileges he sees fit and may take from them whatever he considers it necessary to take. In this theory, the government essentially owns us and chooses when to allow any of us to get a job, to marry, to own a house, to publish a book – or even when to not be robbed, raped, or murdered – and it may choose to “abolish” these rights whenever it likes. This is just what James Wilson meant when he said that people like Bentham think “man is not only made for, but made by the government.”43
One reason for Bentham’s rejection of natural rights, shared by many thinkers today, is that these rights can be violated.44 How can rights be “natural,” it is often asked, if they cannot prevent violations of freedom? But the advocates of natural rights never claimed they were inviolable. Indeed, the point of the Declaration was that these rights often had been violated. The natural rights theory only holds that violating a right is an injustice and that this is inescapable. Unlike a privilege, which can be justly abrogated, a person cannot justly be deprived of a natural right, and although the injustice of violating a person’s rights may go unpunished, it still remains an injustice. As rights are not created by the ruler’s mere will, so an unjust act cannot become just simply because the government does it.
This inescapable quality of justice was given eloquent expression in W. H. Auden’s poem “The Hidden Law”: although the hidden law “answers nothing when we lie” and “will not try / To stop us if we want to die,” it is precisely when we try to “escape it” or “forget it,” that we are “punished by / The Hidden Law.”45 As Auden’s language suggests, the argument that rights are “nonsense” because they can be violated is akin to arguing that law itself is nonsense because laws can be violated. That actually is what Bentham and his followers believed, which is why they strove to substitute command for law. Because they could not imagine that law could have any meaning unless backed up by punishment, they confused laws with commands and thus confused rights with permissions.
This leads to the most profound objection to the idea that rights are privileges “granted by power.” The Declaration asserts a presumption of equal rights – that everyone has the right to use himself, his skills, and his belongings as he wishes, as long as he respects the equal right of others to do the same. The Declaration therefore regards each person as an individual possessing dignity that the state must respect. Bentham’s permission model, on the other hand, depends on a fundamental inequality. A permission is something granted by someone above to someone beneath. One must ask one’s superior for a privilege and when one receives it, say “thank you.” But we do not normally ask our equals to respect our rights or thank those who do. We take it for granted that they should.
In a democratic society, laws are more like promises between equal partners than like commands from superiors to subordinates. Laws contain an element of reciprocity, in which the citizen and the state in some sense agree to act in certain ways.46 But the Permission Society, in which rights are only privileges conferred by the government, regards people as subjects to be alternately commanded and rewarded. The citizens of the Permission Society must treat their superiors with subservient meekness, begging and praising their rulers in hopes of being given favors. A free society, by contrast, encourages and depends upon a proud sense of self-reliance in the people. Thomas Jefferson emphasized this in his 1774 pamphlet, A Summary View of the Rights of British America, when he refused to apologize for the candid words he used when addressing King George III. The “freedom of language and sentiment” in which he expressed himself, said Jefferson, “becomes a free people claiming their rights, as derived from the laws of nature, and not as the gift of their chief magistrate.” To “flatter” the king would “ill beseem those who are asserting the rights of human nature.... [K]ings are the servants, not the proprietors of the people.”47
Is There a Right to Liberty?
Bentham claimed, and his positivist admirers still believe, that the rejection of natural rights represents a modern, scientific attitude. Those who believe in the theory of natural law, said Oliver Wendell Holmes, “seem to me to be in [a] naïve state of mind,”48 and his contemporary, law professor John Chipman Gray, called the idea of natural law an “exploded superstition.”49 But in fact, it was they who represented a regression to the ancient idea of the divine right of kings.50 By embracing the fallacy of the Devil’s proof – assuming that people are not free unless the all-powerful government says they are – they and their modern followers actually embraced a form of ultraconservatism, harkening back to the ancient mystique of royal absolutism. To them, laws are arbitrary pronouncements by the powerful – essentially a form of magic that citizens must believe in, on pain of punishment – instead of rational principles based on human nature. They were saboteurs, not iconoclasts.
A more curious example of contemporary rejection of the presumption of liberty is the influential philosopher Ronald Dworkin. Although he was no admirer of Bentham, Dworkin advanced a more sophisticated argument against the proposition that people are naturally entitled to liberty, and the problems with that argument reveal some of the essential flaws in the Permission Society generally.
Dworkin set out in his 1977 book, Taking Rights Seriously, to defend the idea of individual rights against positivist criticisms. But although he believed that people have rights the government must respect, he nevertheless argued that there is no right to liberty – there are only specific rights to particular liberties that are parceled out by the government. People have a right to try to persuade the government to give them these freedoms but no general right to lead their own lives as they choose. Later, apparently recognizing that this was not much improvement over positivism’s scorn for rights, Dworkin bizarrely reversed course and embraced the proposition that people do have a basic right to freedom.
Dworkin began by arguing that the crucial political question is not how to protect individual autonomy but “what inequalities in goods, opportunities and liberties are to be permitted” in society.51 This starting point immediately biased his argument, because it implicitly assumes that inequality is something that is or is not “permitted” – that is, that inequality only exists because the government allows it and that government should instead find ways to eliminate inequality by redistributing “goods, opportunities, and liberties.”
But inequality is not unjust if it is not the consequence of any wrongful act. To borrow an old example, imagine a world in which everyone has equal wealth. If some people freely choose to pay for tickets to see a famous basketball player demonstrate his superior skills, the ball player will amass millions of dollars, and the fans who paid for tickets will each have less money than they had before. But the resulting inequality is not unjust, because nobody has been injured.52 For the government to seize the basketball player’s earnings and redistribute them because inequality is not “permitted” really would be unjust. It would mean confiscating his fairly acquired wealth – essentially taking away his unique basketball skills without payment. As James Madison put it, people have “different and unequal faculties,” which enable them to earn “different degrees and kinds of property,” but although this will result in inequalities, “[t]he protection of these faculties” is the “first object of government.”53
Dworkin rejected this. He believed that “differences in talent” are “morally irrelevant.”54 In his view, justice did not consist of protecting people’s rights to the things they earn by employing their different talents and skills, or the things they inherit, such as their bodies. Instead, it consists of finding a proper “distribution” of the “goods, opportunities, and liberties” that are found in society. Where did these goods, opportunities, and liberties come from? Dworkin ignored this question and focused solely on questions of distribution – on slicing up the cake equally, so to speak, while disregarding the rights of the baker.55
Dworkin was wrong to equate justice with distribution.56 Actual justice occurs when people are allowed to keep what belongs to them or are compensated for having their things wrongfully taken away. Courts do justice by “making people whole” – by remedying injuries people have suffered – not by shaping society through the redistribution of goods, opportunities, or liberties. That is why judges normally do not ask how things should be divided up but instead look for evidence about who stole what, or who broke what, or whether the accused had some excuse for doing what he did. Society is not distributing goods or liberties to the victim of a robbery when she has her stolen property returned, or to an injured worker who receives compensation for a job-related injury, or to a slave who is liberated. Rather, justice has been done in these cases because the people whose property or rights were wrongly taken away are now having them restored.
Of course, what Dworkin had in mind was not that sort of justice but a different kind of justice – “social justice” – by which the government allocates property and freedom according to some preconceived formula. Yet calling this “justice” perverts the concept and corrupts the idea of rights. Because it holds that people’s talents and inheritances are “morally irrelevant,” the only way this theory can justify the ownership of “goods, opportunities, and liberties” is to hold that society has distributed these things in accordance with some recipe. But this takes an enormous stride toward authoritarian government. As philosopher Wallace Matson observed, the essential difference between free and unfree societies “is that in the latter, a person’s rank, etc., are assigned by bureaucrats, whereas in the former nobody makes such assignments – the individual decides what sort of life he wants to lead, and then pursues it.” Instead of “showing that it is a good idea to have a system of assignments at all,” writes Matson, Dworkin simply assumed this and devoted his energy to figuring out the formula bureaucrats should use when making distributions. But the real choice is not between this or that method of distributing things; it is between a controlled society, in which people have privileges distributed to them, and a free society, in which people’s inherent freedom is respected – or, in Matson’s words, “between that condition in which the economic decisions of individuals have their natural effect, on the one hand, and an artificially structured economy on the other.”57 The assumption that society must be artificially structured quietly transforms the free society into the Permission Society.
Dworkin seemed not to have recognized this. He wrote in Taking Rights Seriously that the principle of equality in a democracy lets government “constrain liberty only on certain very limited types of justification.”58 But if rights are something the state distributes, there can be no basis for this assertion, and Dworkin’s other writings contradicted it. He believed the citizen’s “fundamental” right is not a right to be left alone, but a right to “equal concern and respect in the political decision” about how “goods and opportunities are to be distributed.”59 This is only a right to take part in a collective, political choice about how to distribute resources, including freedom – essentially, a right to vote, not a right of private enjoyment.
Because his theory was based on distributing things in society, Dworkin had trouble justifying simple, personal freedoms – the right to go on a picnic, the right to compose a poem, or the right to marry a person of one’s choice – that have little political valence. True, he referred to a right of “moral independence”60 – the right to pursue the good life as one sees fit – and argued that people should not be allowed to interfere with each other’s independence by imposing their notion of the good life on other people. But laws that interfere with “moral independence” do not interfere with a person’s right to participate in political decisions about distribution, which is the right he labeled “fundamental.” It’s unclear, therefore, how moral independence could fit into his argument.
Dworkin insisted his theory would prohibit deprivations of personal freedoms because the people affected by such deprivations “suffer ... because their conception of a proper or desirable form of life is despised by others,”61 which he considered unacceptable. But this only begged the question, because he gave no reason to nullify decisions motivated by such disapproval, as long as those decisions do not interfere with a person’s more “fundamental” right to participate in political choices about the distribution of property or freedom.
When, in one essay, he focused on the question of “external preferences” – efforts by some people to tell others how to live – Dworkin was at last forced to turn to the libertarian theory of rights that he had rejected when arguing that there is no general right to freedom. His argument proceeded this way: a basic principle of democracy is that everyone gets an equal vote. But that principle would be undermined by a rule that, for example, allowed some people to vote twice as often as others. This proves that some kinds of political desires are automatically ruled invalid by the deeper principles of democracy.62 So, too, with moral beliefs about how others should live. Because such beliefs also have distorting consequences, they should be considered an invalid basis for a person’s vote.
To say that a deeper value – equal freedom, moral independence, or a right to guide one’s own life – takes precedence over democratic decision-making is to say that our natural right to freedom trumps any effort by government to dictate how we should live. Dworkin was embedding rules that protect liberty into his view of democracy – even though he had started by denying that there was any such general right to liberty. Asked why he thought democracy should remain “neutral” about people’s moral choices, Dworkin answered that he “assumed” the goal of politics is to create a society in which people can “make the best and most informed choice about how to lead their lives.”63 But that is exactly the basic right to freedom he had set out to disprove.
The Right to Lead Our Own Lives – Always
This was not the only way Dworkin contradicted himself. The “assumption” that politics should enable us to lead our own lives is ultimately incompatible with the idea that justice is accomplished by “distributing” rights. To take a talented basketball player’s earnings away does not enable him to lead his own life, for one thing. For another, Dworkin’s “assumption” suggests that there are some rights that may never be justly redistributed by the state. But that means that distribution cannot be the foundation of justice, as he claimed. Instead, some rights must be too important to be “distributed.”
This objection came to the surface when Dworkin argued that government should not limit someone’s freedom “in virtue of an argument that the [person] could not accept without abandoning his sense of equal worth.”64 Presumably, that “sense of equal worth” is not itself one of the “goods, opportunities, and liberties” that the state may redistribute. But what about the self-worth of people who resent being forced by the welfare state to support idle people out of their paychecks? What about the basketball player’s sense of self-worth in a society that considers his talent to be “morally irrelevant”65 and seizes his justly acquired earnings to give to others?
We see a hint of an answer when Dworkin writes that laws persecuting atheists or other religious minorities would fail his “equal concern and respect” test because “[n]o self-respecting atheist can agree that a community in which religion is mandatory is for that reason finer.”66 But many self-respecting property and business owners are just as offended by laws that seize their belongings in order to “distribute” them to others. If “self-worth” or “equal dignity” is so important, then the same principles would also protect the rights of property owners. In casting about for a strong foundation for individual rights, therefore, Dworkin ended up finding a principle that really is fundamental – a substantive limit on what the state may “distribute.” But that principle was precisely the general right to liberty he had rejected: the right to pursue one’s own life and keep the fruits of one’s labor. Whether called “a sense of equal worth,” or “moral independence,” or an “assumption” that politics should enable people to “lead their lives,” this was just the old-fashioned presumption of freedom.
Why did Dworkin take this bizarre detour? Because as a political liberal, he hoped to fashion an answer to critics who accused him of hypocrisy for endorsing strong protections for “personal” and “political” rights such as sexual privacy and free speech while simultaneously denigrating “economic” rights such as private property and the freedom to make contracts. If, as Dworkin argued, “there is no such thing as any general right to liberty,”67 and the justification of “any specific liberty” can differ from the justification for any other, then it would be perfectly consistent for him to support some kinds of freedom while ignoring other freedoms: he could argue that atheists should not have their rights infringed, while also holding that the government could override “the right to liberty of contract sustained in the famous Lochner case.” “I cannot think of any argument,” said Dworkin, “that a political decision to limit such a right” as freedom of contract would “offend the right of those whose liberty is curtailed to equal consideration and respect.”68
The self-contradiction here is obvious. The Lochner case involved a New York law that banned bakers from working more than ten hours a day in bakeries. A product of lobbying by organized labor, Progressive activists, and owners of machine-run bakeries who saw restrictions on working hours as a way to restrict their competition,69 the New York Bakeshop Act was rooted in just the sort of “external preferences” that Dworkin otherwise considered inadmissible. It was not a decision expressed by bakers themselves but a law imposed on them and on bakery shop owners by the state legislature. The act was plainly designed to dictate the choices bakery workers might make about how to lead their lives – it gave force to government’s disapproval about certain types of economic decisions, and it violated the right of moral independence on the part of baker Aman Schmitter and his employer, Joseph Lochner. This was precisely why the Supreme Court ruled in their favor and struck the law down. Bakers, the court declared, were “equal in intelligence and capacity to men in other trades or manual occupations” and were “able to assert their rights and care for themselves” without needing the government’s “protecting arm.” Laws “interfering with their independence of judgment and of action” were “meddlesome”70 and demeaning. As mature adults, Lochner and Schmitter were “in no sense wards of the state.”71 The Bakeshop Act was unconstitutional because it offended their right to have their own economic choices accorded equal consideration and respect.
Dworkin’s effort to distance himself from Lochner was simply a failure. He was right about one thing: political decisions that dictate how others should live their lives ought not to be given the same respect as the choices people make about their own lives. But that argument only makes sense from the perspective of the libertarian position endorsed in the Lochner decision. Dworkin’s commitment to individual freedom clashed with his effort to justify modern liberalism’s hostility toward economic liberty, as symbolized by Lochner. To the degree that his argument supported personal freedom, it did so only by giving force – though often in different terminology – to the classical liberal principles of equality and liberty that he tried to refute.
The libertarianism of the Declaration of Independence locks the ideas of freedom and equality together for good reason. Under the monarchical system and its modern variants, government stands in a position of inherent superiority – above the law, dispensing the laws to inferior citizens below. Perhaps, if it thinks fit, it might also issue “charters of liberty,” but these are revocable, whenever the ruler thinks abolishing them would be “advantageous to society,” or whenever voters think a new “distribution” of our belongings is in order. But on the Declaration’s premise of equality, government does not stand in a position of superiority and does not distribute rights to citizens. Each of us is born free, with the right to act as we choose unless we interfere with the rights of others. It is the government that must ask permission of us, not the other way around.