What Public Employee Unions are Doing to Our Country

William McGurn
News Corporation

WILLIAM MCGURN is a vice president for News Corporation and writes the weekly “Main Street” column for the Wall Street Journal. From 2005 to 2008, he served as chief speechwriter for President George W. Bush. Prior to that he was the chief editorial writer for the Wall Street Journal and spent more than ten years in Europe and Asia for Dow Jones. He has written for a wide variety of publications, including Esquire, the Washington Post, the Spectator of London and the National Catholic Register. He holds a B.A. from the University of Notre Dame and a master’s degree in communications from Boston University, and currently serves on the board of Notre Dame’s Center for Ethics and Culture.

The following is adapted from a speech delivered on February 15, 2012, at a Hillsdale College National Leadership Seminar in Newport Beach, California.

MANY SCHOLARS ARE better versed on the history of public employee unions than I am, but there is one credential I can claim that they cannot: I am a taxpayer in the People’s Republic of New Jerseystan. That makes me an authority on how public sector unions—especially at the state and local level—are thwarting economic growth, strangling the middle class, and generally hijacking the democratic process to serve their own ends rather than the public.

Now in my experience, when one says the words “New Jersey,” people for some reason think it is a laugh line. Perhaps you know us from The Sopranos or Jersey Shore. You might think that such a state has nothing to teach you. If so, you would be very wrong. New Jersey offers something that can profit the entire nation: We are the perfect bad example.

As conservatives, of course, we believe in virtue. We like to point to policies and practices that work—low taxes and light regulation for the economy, a strong national defense to keep us safe from foreign attack, and social policies that favor community over government. These are all valuable. But the bad example has its honored place as well: It’s how we illustrate our warnings.

As parents, for example, selling virtue only takes us so far. To make our point when we see a character trait we don’t care for in our kids, we’re far more likely to say something like, “You don’t want to grow up to be like Uncle Bob, do you?”

This is the reason Governor Chris Christie’s reforms have had such resonance. Almost anywhere he points, he has before him an example of how New Jersey’s bloated public sector is hurting growth, limiting the efficiency of government services, and squeezing middle class families. How many state governors and legislators might be more inclined to do the right thing if before they acted they first said to themselves, “We don’t want to be like New Jersey, do we?”

These days, when conservatives get together to discuss the debilitating role played by government workers, we reassure ourselves with statements by FDR and labor leader Samuel Gompers about the fundamental difference between a union of private workers working for a private company and a union of government workers laboring for our city, state, or federal governments. We also trace the line of expansion to various events, including John F. Kennedy’s executive order that opened the path for collective bargaining for public employees at the federal level.

I don’t want to rehash that today. Today I want to talk about the situation as we find it, and suggest that the first step toward a cure is to diagnose the illness accurately. This means changing the way we think of public sector unions. And in what I have to say, I will concentrate on public sector unions at the state and local levels.

It’s not that I don’t consider the unionization of federal workers to be an issue. Plainly it is an issue when the teachers unions represent one of the largest blocs of delegates at Democratic conventions, when the largest single campaign contributor in the 2010 elections was the American Federation of State, County and Municipal Employees, when union money at the federal level goes at an overwhelming rate to Democratic candidates, and when the Congressional Budget Office tells us that federal employees earn more than their counterparts in the private sector. Nonetheless, I believe that the greater challenge today—to state and city finances, to democratic representation, to the middle class—is at the state and local level. This is partly because state and city unions have the power to negotiate wages and benefits that their counterparts at the federal level largely do not. More fundamentally, it is because we cannot reform at the federal level without correcting a problem that is bringing our cities and states to bankruptcy.

When I say we need to change our understanding, what I mean is that we have to recognize that public sector unions have successfully redefined key relationships in our economic and civic life. In making this argument, I will suggest that the elected politicians who represent us at the negotiating table are not in fact management, that our taxing and spending decisions at the city and state level are in practice decided by our public sector contracts, and that when you put this all together, what emerges is a completely different picture of the modern civil servant. In short, we work for him, not the other way around.

Who is Managing Whom?

Let me start with the relationship between government employee unions and our elected officials. On paper, it is true, mayors and governors sit across the table from city and state workers collectively bargaining for wages and benefits. On paper, this makes them management—representing us, the taxpayers. But in practice, these people often serve more as the employees of unions than as their managers. New Jersey’s experience is telling here. Look at our former governor, Jon Corzine.

You Hillsdale folks are a genteel sort. When you speak about the unions being in bed with the Democratic politicians, you mean it metaphorically. In New Jersey, we take it to Snooki levels: Mr. Corzine once shared a home with the New Jersey leader of the Communication Workers of America, Carla Katz. Back when he was running for governor, he was asked whether that relationship would compromise his ability to represent the taxpayers in negotiations with outfits such as CWA. “As the governor,” Mr. Corzine responded, “you represent eight-and-a-half million people. You don’t represent one union. You don’t represent one person. You represent the people who elected you.”

That’s the way it ought to be. In real life, it turned out that during heated negotiations over a contested CWA contract, Mr. Corzine and Ms. Katz exchanged a long chain of emails—subsequently published by the Newark Star-Ledger, despite the governor’s legal attempts to keep them private—in which she pressed him on union issues.

But it wasn’t just the CWA. Scarcely six months after he was elected, Governor Corzine appeared before a rally of state workers in Trenton in support of a one percent sales tax increase designed to bring in revenues to a state hemorrhaging money. Naturally, Mr. Corzine’s solution was the one the public sector unions wanted: not cutbacks, but a new tax.

The twist was that there was someone in the New Jersey government who understood the problem—who understood that a new sales tax wouldn’t do much to fix New Jersey’s problems, and that the only way to get a handle on them was to get state workers to start contributing more to their health care and pensions.

These were the pre-Chris Christie days, so the author of this bold proposal was the Senate president, Stephen Sweeney. Mr. Sweeney is interesting not only because he is a prominent and powerful Democrat, but also because, in addition to holding political office, he represents the state’s ironworkers. And what Mr. Sweeney proposed for the public sector unions was something private union members such as his ironworkers already paid for. It was also common sense: He knew that if New Jersey didn’t get a handle on the gold-plated pay and benefits of its government employees, it would squeeze out the private sector that hires people such as ironworkers.

If the leader of an ironworkers union could realize that, surely so could a governor who had earlier served as a high-powered executive for Goldman Sachs. But Mr. Corzine was having none of it. Instead, he told the crowd of state workers: “We’re gonna fight for a fair contract.”

The question is, whom was he planning on fighting? Wasn’t he management in these negotiations?

Six months later, Governor Corzine proved this was not simply a slip of the tongue. When workers at Rutgers University were planning to unionize, he turned up at their rally. This was too much even for the liberal Star-Ledger, which—in an article entitled “Jon Corzine, Union Rep?”—noted that Mr. Corzine’s appearance at the rally raised the question of whether he truly understood that “he represents the ‘management’ side in ongoing contract talks with state employees unions.”

Manifestly, the problem is not that Mr. Corzine and other elected leaders like him—mostly Democrats—do not understand. In fact, they understand all too well that they are the hired help. The public employees they are supposed to manage in effect manage them. The unions provide politicians with campaign funds and volunteers and votes, and the politicians pay for what the unions demand in return with public money.

In New Jersey as elsewhere, most leaders of public sector unions are not sleeping with the politicians who set their salary and benefits. They are, however, doing all they can to install and keep in office those they wish—while fighting hard against the ones they oppose. And until we recognize the real master in this relationship, we will never reform the system.

The Tail Wagging the Dog

My second point relates to my first. Not only have the public unions too often become the dominant partner in their relationship with elected officials, but the contracts and the spending that go with them are setting the rest of the policy agenda. In other words, even when we recognize that the packages favored by public employees are too generous, we think of them simply as spending items. We need to wake up and recognize that in fact these spending items are the tail wagging the dog—that they set tax and borrowing decisions rather than follow from them.

Take the case of Northvale, a small, affluent town of about 4,600 people at the northeast tip of New Jersey. Its median income is about $99,000, comfortably above both the New Jersey and national levels, and its budget is $21.8 million. Of this, $13.2 million—or nearly two-thirds—goes to the schools. The lion’s share of that, of course, goes to salaries and benefits.

Northvale’s school budget is voted on in the spring. That’s part of the scam, because turnout for these elections is much lower than it is in November for the regular elections. With lower turnout, it’s easier for teachers and other interested parties to dominate the elections. Thus the great bulk of Northvale’s budget is not determined in the regular elections, or by the mayor and city council. Effectively, it is determined by the education lobby and school officials—who in turn are chosen in elections involving only 20 percent of the electorate.

From the other one-third of the budget, Northvale has to run its police force and fire department, remove snow, arrange for garbage pickup, and so on. That means there is not much discretionary spending left. Even when voters rebel—last spring Northvale voters overwhelmingly repudiated the budget—they are frequently ignored, and the back-door system ensures there is little in the way of accountability.

But there are consequences: This dynamic helps explain why, in the decade before Chris Christie was elected governor, the property taxes of New Jersey residents went up 70 percent.

Mr. Christie is not in charge of local spending. But he understands that this is part of an exceptionally unvirtuous circle. So he’s made some changes. Last year, for instance, with the help of allies such as Mr. Sweeney, he pushed a reform through the legislature that required public workers to start contributing to their health care and to increase their pension contributions. It’s not nearly the percentage their counterparts in the private sector pay, but it’s a start.

Mr. Christie also put through a property tax cap that forces cities to go to the people for a vote if they increase property taxes by more than two percent. And just last month, he signed a bill that will allow towns to move their school budget votes to the November ballot—not only saving money, but also ensuring that more citizens vote, not simply those who have a vested interest.

At the same time, Mr. Christie has begun to campaign against abuses using language that people can understand. His most recent target is the practice of awarding six-figure checks to public employees who are allowed to accumulate—and cash out—unused sick pay. In New Jersey these payments are called “boat money,” because government workers often use them to buy pleasure boats when they retire. Across the state, cities have liabilities of $825 million because of these boat checks.

And what’s been the opposition’s response? Instead of agreeing to reasonable cuts, the Democrats keep thumping for a millionaire’s tax. New Jersey being New Jersey, the millionaire’s tax aims at people making far less than a million dollars. But even if it didn’t, it’s hard to see how driving millionaires out of the state will help it meet its huge and growing unfunded pension liabilities.

To summarize my second point: You and I make spending decisions the way all households do. We take our income, and we live within our means. In sharp contrast, public employee unions have introduced a whole new dynamic: They negotiate pay and benefits in contracts we can’t rewrite. When the revenues to meet these obligations fall short, they push to raise taxes to make up the difference.

The Corruption of Public Service

That leads me to my third and final point: If I am right that the public employee unions are in fact the managers in the relationship with politicians, and that public sector spending is driving tax and borrowing policy, the inescapable conclusion is that you and I are working for them.

That’s not how we usually understand and speak of public service. Traditionally, the idea of a public servant is someone who is working for the public, with the implication that he or she is sacrificing a better material life to do so. But can anyone really define today’s relationship this way? Especially when health care and pensions are included, government workers increasingly seem to live better than the people who pay their salaries. How many of you walk into some local, state or federal office these days and leave thinking, “The men and women here are working for me”?

In some ways the change has been driven by larger changes in union life. From a high point of one out of three workers in the 1950s, private sector union membership has fallen to fewer than one worker in 14 today, and the percentage continues to drop. Conversely, the unionization of government employees continues to grow, to the point where public sector union members now outnumber their private sector counterparts for the first time in American history.

In a recent interview with the Wall Street Journal, Fred Siegel notes that public sector unions have

become a vanguard movement within liberalism. And the reason for that is it’s the public sector that comes closest to the statist ideals of McGovern and post-McGovern liberals. And that is, there’s no connection between effort and reward. You’re guaranteed your job. You’re guaranteed your salary increase. There’s a kind of bureaucratic equality.

“This vanguard,” Siegel continues, “becomes in the eyes of many liberals the model for the middle class. Public-sector unions are what all workers should be like. Their benefits are the kind of benefits everyone should get.” So instead of the private sector defining the public, the public sector is thought to define the private.

As public employees unionize, their dues—often collected for the unions by the government—fund a permanent interest group that lobbies constantly for bigger government. To pay for this bigger and more expensive government, the unions advocate higher taxes on those in the private sector. Only when they are threatened with layoffs are they inclined to compromise, and sometimes not even then. That is what I mean when I say that we work for them.

Where to Go From Here

One of the few silver linings of our tough economy today is that it is forcing tough decisions. Big city mayors and governors are being forced to confront their public employee unions, because we’ve reached a point where we simply cannot afford business as usual. With a sluggish economy—and fewer taxpayers—the problems that have piled up are becoming too difficult to ignore.

Across the nation we have governors and mayors trying to solve their public employee problems with varying degrees of seriousness, from Chris Christie in New Jersey to Jerry Brown in California to the great experiments going on in the Rust Belt—in Indiana, which has done the best, and in Wisconsin, Ohio, and Michigan. Only Illinois, led by Democratic Governor Pat Quinn, has opted for business as usual: a mammoth tax increase that is now being followed, in what has become the typical way of Democratic governance, by tax breaks for large companies threatening to leave Chicago because of the tax burden.

In most of these places, there’s probably little we can do about the contracts that exist. What we can do is bring in new hires under more reasonable contracts and pro-rate contributions for existing employees. Even marginal changes can have a big impact, as Wisconsin found out when Governor Scott Walker’s collective bargaining reforms for public workers helped restore many of the state’s school districts to fiscal health.

My father was a federal employee, an FBI agent. I spent some time as a government worker in the White House. I also know many fine and devoted people on the public payroll who work hard, are good at what they do, and earn everything they get. But there are also those who work without results. I believe Americans are a generous people who can recognize the difference. We need to restore our public sector to a place where those in charge can make those distinctions and allocate rewards and resources accordingly.

In the meantime, I think the best thing we can do is speak honestly. That is what Mr. Christie is doing in New Jersey. His style isn’t for everyone. Yet his popularity suggests that Americans appreciate a politician willing to talk about the reality of public employee unions today—and the unreasonable costs they are imposing on our society.

We’ll never return to the ideal of public service until the rest of us start speaking honestly as well.


Reagan’s Moral Courage

Andrew Roberts
Historian

ANDREW ROBERTS received his Ph.D. at Gonville and Caius College, Cambridge, where he is also an honorary senior scholar. He has written or edited 12 books, including A History of the English-Speaking Peoples Since 1900, Masters and Commanders: How Four Titans Won the War in the West, 1941-1945, and The Storm of War: A New History of the Second World War.

The following are excerpts from a speech delivered at Hillsdale College on October 7, 2011, at the dedication of a statue of Ronald Reagan by Hillsdale College Associate Professor of Art Anthony Frudakis.

The defining feature of Ronald Reagan was his moral courage. It takes tremendous moral courage to resist the overwhelming tide of received opinion and so-called expert wisdom and to say and do exactly the opposite. It could not have been pleasant for Reagan to be denounced as an ignorant cowboy, an extremist, a warmonger, a fascist, or worse by people who thought themselves intellectually superior to him. Yet Reagan responded to those brickbats with the cheery resolve that characterized not only the man, but his entire career. What is more, he proceeded during his two terms as president to prove his critics completely wrong . . . .

During Reagan’s presidency, America enjoyed its longest period of sustained economic growth in the 20th century. Meanwhile, in the realm of foreign policy, the Reagan Doctrine led to the defeat of the worst totalitarian scourge to blight the globe since the defeat of the Nazis in World War II. By the time he left office, the faith of Americans in the greatness of their country had been restored. In retrospect, Reagan’s was a great American success story. Born in rented rooms above a bank in Tampico, Illinois, he ended his days as the single most important American conservative figure of the last century. Not bad for an ignorant cowboy.

From his own reading and observation of life, Reagan understood that the doctrines of Marxism and Leninism were fundamentally opposed to the deepest and best impulses of human nature. Enforcing such doctrines would require vicious oppression, including propaganda, secret police such as the KGB, a debased and corrupt judicial system, huge standing armies stationed across Eastern Europe, children spying on their parents, the Berlin Wall, a gagged media, a shackled populace, a privileged nomenklatura, prisons posing as psychiatric hospitals, puppet trade unions, a subservient academy, and above all, what Aleksandr Solzhenitsyn dubbed a “gulag archipelago” of concentration camps. In sum, the entire apparatus that Reagan characterized so truthfully in a March 1983 speech as an “evil empire.” Yet he was immediately accused—not just in Russia, but also here in the West—of being mad, bad, and dangerous. He was written off as stupid, provocative, and oafish by huge swaths of the Western commentariat. Today, thanks to his published correspondence, we know that he was anything but. Indeed, he was a widely read and thoughtful man, but it suited his purposes to be underestimated by his opponents. The cultural condescension of those experts and intellectuals who denounced his evil empire speech as unacceptably simplistic—even simple-minded—might have been despicable, but it worked to Reagan’s advantage. Although history was to prove him right in every particular about the true nature of the U.S.S.R., none of his critics has ever admitted as much, at least publicly, let alone apologized.

What helped to make Reagan great was that he couldn’t care less what his critics thought of him. He knew the image of the swaggering cowboy was very far removed from reality, but if his opponents chose to be mesmerized by it, all the better for him. It was he, not they, who in 1987 would stand at the Brandenburg Gate in Berlin and demand: “Mr. Gorbachev, tear down this wall!” The Left’s strategy of détente had been tried for 40 years, and it had led to ever wider Communist incursions, especially during the 1970s, into territories across Africa, Asia, and Latin America. The Reagan Doctrine, by contrast, marked a turn away from the doctrine of containment, adhered to by every president since Harry Truman. Reagan bravely declared that communism’s global march would not merely be checked but reversed.

For decades the Politburo in the Kremlin had been testing the West’s defenses, looking for weakness. Where it encountered strength and willpower, as during the Berlin airlift and the Cuban missile crisis, it pulled back. Where, as was all too often the case, it instead found vacillation and appeasement, it thrust forward until whole countries fell under its control. Under the Reagan Doctrine, non-Communist governments would be supported actively, and Communist governments, wherever they were not firmly established, would be undermined and if possible overthrown. Reagan did not act in the name of American imperialism, as his opponents predictably alleged, but rather in the name of human dignity. As he fought the Communists, he gradually won more and more support from the American people. He supported anti-Communist movements in Poland, El Salvador, and Guatemala, as well as open insurgencies in Afghanistan, Cambodia, Ethiopia, Laos, and Nicaragua. The Kremlin soon recognized that in Reagan it had a powerful and committed ideological foe on its hands, one who took seriously JFK’s words in his Inaugural Address, that the United States “shall pay any price, bear any burden, meet any hardship, support any friend, and oppose any foe, in order to assure the survival and success of liberty.” Believing in American exceptionalism, Reagan deployed an extensive political, economic, military, and psychological arsenal to confront the Soviet Union. And he did so mostly through proxies: Except for the Caribbean island of Grenada, where American citizens were in danger, he did not commit American troops to the battle . . . .

* * *

In the 1980s, Americans felt confident enough in their country’s future to spend, produce, and consume in a way they hadn’t under Jimmy Carter and don’t today. Reagan genuinely believed, as the 1984 campaign slogan put it, that it was “Morning in America.” His confidence in the country and its abilities spread to the American people and to the markets. After all, strong, confident leadership is infectious. There can be a virtuous cycle in economics, just as there can be a vicious one. Reagan’s Economic Recovery Tax Act and his Tax Reform Act were the twin pillars of America’s renaissance in the 1980s. He reduced the highest marginal tax rate to 28 percent and simplified the tax code. He deregulated industry, tightened the money supply, and reduced the growth of public expenditure. By 1983, America had completely recovered economically, and by 1988, inflation, which had been at 12.5 percent under Carter, was down to 4.4 percent. Furthermore, unemployment came down to 5.5 percent as 18 million new jobs were created.

In one area, however, Reagan knew that he had to increase public spending dramatically, if the global threats to America were to be neutered. The overly cautious, nerve-wracked, and humiliated America of 1979 and 1980—when 52 American diplomats were taken hostage in Tehran for 444 days and were paraded, hooded and blindfolded, in the streets—was about to give way to a virile and self-confident America. It was no accident that, on the very day of Reagan’s inauguration, the Iranian regime released the hostages rather than face the fury of the incoming President. It was the last smart thing that regime ever did.

When Reagan entered office, defense spending had fallen to less than five percent of GDP from over 13 percent in the 1950s. His belief that the Soviet system would eventually crack under steady Western pressure encouraged him to increase defense spending from $119 billion under Carter to $273 billion in 1986, a level that the U.S.S.R. simply could not begin to match. The Left criticized what they believed to be wasteful spending, but this expenditure led to a massive savings once the U.S.S.R. no longer posed the global existential threat it once had.

America had achieved a huge technological advantage by the 1980s, which allowed Reagan to embark on the controversial Strategic Defense Initiative, nicknamed “Star Wars” by its opponents. The system was based on the idea that incoming ballistic missiles could be destroyed over the Atlantic or even earlier. Though the technology was still very much in its infancy, judicious leaking of suitably exaggerated test results further rattled the Soviet leadership. As Vladimir Lukin, the Soviet foreign policy expert and later ambassador to the U.S., admitted to the Carnegie Endowment for International Peace in 1992: “It is clear that SDI accelerated our catastrophe by at least five years.” Besides SDI, Reagan pursued rapid deployment forces, the neutron bomb, the MX Peacekeeper missile, Trident nuclear submarines, radar-evading stealth bombers, and new ways of looking at battlefield strategies and tactics . . . . In response to the deployment of these weapons, the Left issued strident denunciations and organized massive anti-American demonstrations all across Europe. These were faced down with characteristic moral courage by Ronald Reagan, ably supported by Margaret Thatcher. “Reagan’s great virtue,” said his former Secretary of State George Shultz, “was that he did not accept that extensive political opposition doomed an attractive idea. He would fight resolutely for an idea, believing that if it was valid, he could persuade the American people to support it.”

. . . In the words of Margaret Thatcher, Reagan helped the world break free of a monstrous creed. He understood that, in addition to being morally bankrupt—as it had been since the Bolshevik Revolution—the Soviet system was also financially bankrupt. Numerous so-called five-year plans had not delivered, because human beings simply will not work hard for an all-powerful state that will not pay them fairly for their labor. By contrast, Reagan believed that low taxes, a minimal state, a reduction in bureaucratic regulation, and a commitment to free market economics would lead to a dramatic expansion of the American economy. This would enable America to pay for a defense build-up so large that the Soviets would have to declare a surrender in the Cold War. That surrender began on September 12, 1989, when a non-Communist government took office in Poland. Within two months, on the night of November 9, the people of East and West Berlin tore down the wall that had separated them for over a quarter of a century. This was the greatest of Reagan’s many fine legacies.

The extension of freedom to Eastern Europe was not merely a political or military or economic phenomenon for Reagan; it was a spiritual one, too. Reagan believed that America had lost its sense of providential mission, and he meant to re-establish it. Beneath his folksy charm and anecdotes was a steely will and a determination to re-establish the moral superiority of democracy over totalitarianism, of the individual over the state, of freedom of speech over censorship, of faith over government-mandated atheism, and of free enterprise over the command economy. As the leader of the free world, he saw it as his responsibility to defend, extend, and above all proselytize for democracy and human dignity.

Reagan understood leadership in a way that I fear is sadly lacking in the West today. “To grasp and hold a vision,” he said in 1994, “that is the very essence of successful leadership. Not only on the movie set where I learned it, but everywhere.” Indeed, in some ways the world is an even more perilous place than it was in Reagan’s day. For all its undoubted evil, at least the Soviet Union was predictable, and it was fearful of the consequences of mutually assured destruction. By contrast, President Ahmadinejad of Iran is building a nuclear bomb while publicly calling for Israel to be wiped off the map. We know from the experience of 9/11 that Al Qaeda and its affiliates would not hesitate to explode a nuclear device in America if they got the chance. As the IRA pronounced when it narrowly missed murdering Margaret Thatcher in 1984: “You have to be lucky every time, we only have to be lucky once.” Yet, when looking at the dangers facing civilization today, there is this one vital difference from 30 years ago: I can see no leaders of the stamp of Ronald Reagan or Margaret Thatcher presently on hand to infuse us with that iron purpose and that sense of optimism that we had in the 1980s. Indeed, some of our present-day leaders only seem to make matters worse. This is why it is all the more important to erect splendid statues like this one. “The longer you can look back,” said Winston Churchill, “the further you can look forward.”

The point of raising a statue to Ronald Reagan is not just to honor him, although it rightly does do that. A statue inspires and encourages the rest of us to try to emulate his deeds, to live up to his ideals, to finish his work, and to “grasp and hold” his vision. Reagan wrote in his farewell message to the American people in November 1994, announcing his retirement from public life: “When the Lord calls me home, I will leave with the greatest love for this country of ours and eternal optimism for its future. I now begin the journey that will lead me into the sunset of my life. I know that for America, there will always be a bright dawn ahead.” Though characteristically upbeat, his words will remain true only so long as America continues to produce leaders with the moral courage and the leadership abilities of Ronald Reagan, one of America’s greatest presidents.

Reprinted by permission from Imprimis, a publication of Hillsdale College.


Reaganomics and the American Character

Phil Gramm
Former U.S. Senator

November 2011

CURRENTLY vice chairman of the investment bank division of UBS, Phil Gramm served as a member of the U.S. House of Representatives from Texas’s sixth congressional district from 1979 to 1985, and as a U.S. Senator from Texas from 1985 to 2002. Prior to his career in public service, he taught economics at Texas A&M University from 1967 to 1978. Sen. Gramm earned both his B.A. and his doctorate in economics from the University of Georgia.

The following is adapted from a speech delivered at Hillsdale College on October 3, 2011, during a four-day conference on “Reagan: A Centenary Retrospective,” sponsored by the College’s Center for Constructive Alternatives.

What was the American economy like in the decade prior to the Reagan presidency? The 1970s, for a myriad of reasons, were not a happy time. They featured a combination of stagnation and inflation, which came to be called “stagflation.” The inflation rate peaked at just over 13 percent, and prime interest rates rose as high as 21-and-a-half percent. Although President Jimmy Carter did not use the exact words, a malaise had certainly set in among Americans. Many wondered whether our nation’s time had passed. A Time magazine headline read, “Is the Joyride Over?” Did we really need, as Jimmy Carter told us, to learn to live on less?

Ronald Reagan did not believe America was in decline, but he did believe it had been suffering under wrongheaded economic policies. In response, he offered his own plan, a program for creating economic freedom that came to be known as Reaganomics. Of course, most of Reaganomics was nothing new. Mostly it was the revival of an older understanding that unlimited government will eventually destroy freedom and that decisions regarding the allocation of scarce resources are best left to the private sector. Reagan explained these old ideas well, and in terms people could understand.

But there was also a new element to Reaganomics, and looking back, it was a powerful element and new to the economic debate. It was the idea that tax rates affect a person’s incentive to work, save and invest. To put it simply: lower tax rates create more economic energy, which generates more economic activity, which produces a greater flow of revenue to the government. This idea—which came to be known as the Laffer Curve—was met with media and public skepticism. But in the end, it passed the critical test for any public policy. It worked.

To be sure, there were a couple of major impediments to the economic success of Reagan’s program. First, the Federal Reserve clamped down on the money supply in 1981 and 1982 in an effort to break the back of inflation, and subsequently the economy slipped into the steepest recession of the post-World War II period. Second, Soviet communism was on the march, the U.S. was in retreat around the world, and President Reagan was determined to rebuild our national defense as part of a program of peace through strength. All of these factors worked strongly against Reagan in the battle to revive the American economy. Nor was it a foregone conclusion that his program would get through Congress. We shouldn’t forget that it was a tough program. For example, it eliminated three Social Security benefits in one day: the adult student benefit, the minimum benefit, and the death benefit. Reagan’s program represented a dramatic change in public policy.

With his great skill in communicating ideas, Reagan got his program through Congress. And despite Fed policies and large expenditures for national defense, his program succeeded. I don’t want to bore you with statistics, but I will have to present some to make my case. Most importantly, I hope I will succeed in demonstrating what a difference good policies make to the average citizen.

The evidence is, I think, overwhelming: the Reagan program, when fully implemented in 1983, ushered in a 25-year economic golden age. America experienced very rapid economic growth and only two minor recessions in those 25 years, whereas there were four recessions in the previous 12 years, two of them big ones.

What exactly did Reagan do? For starters, he cut the top tax rate from 70 percent to 28 percent. And yes, high income earners benefitted from these cuts. But as I used to say in Congress, no one poorer than I am ever hired me in my life. And despite lower rates, the rich ended up paying a greater share: In 1979, the top one percent of income earners in America paid 18.3 percent of the total tax bill. By 2006, the last year for which we have reliable numbers, they were paying 39.1 percent of the total tax bill. The top ten percent of earners in 1979 were paying 48.1 percent of all taxes. By 2006, they were paying 72.8 percent. The top 40 percent of all earners in 1979 were paying 85.1 percent of all taxes. By 2006, they were paying 98.7 percent. The bottom 40 percent of earners in 1979 paid 4.1 percent of all taxes. By 2006, they were receiving 3.3 percent in direct payments from the U.S. Treasury.

In the 12 years prior to the Reagan program, economic growth averaged 2.5 percent a year. For the following 25 years, it averaged 3.3 percent. What about per capita income? In the 12 years prior to the Reagan program, per capita GDP, in real terms, grew by 1.5 percent a year. For the 25 years after the Reagan program was implemented, real per capita income grew by 2.2 percent a year. By 2006, the average American was making $7,400 more than he would have made if growth rates had remained at the same level as they were during the 12 years prior to the Reagan program. A family of four was making $29,602 more. During the 12 years prior to Reagan, America created 1.3 million jobs per year. That number is pretty impressive compared to today’s stagnant economy. But during the Reagan years, America added two million jobs per year. That means as of 2007 there were 17.5 million more Americans at work than would have been working had the growth rates of the pre-Reagan era continued.

Inflation, which had averaged 7.6 percent over the previous 12 years, fell to 3.1 percent. Interest rates plummeted. As a result of the success of the Reagan program, the average American homeowner’s monthly mortgage payment was $1,000 lower than it would otherwise have been. Poverty, which had grown throughout the 1970s despite massive increases in anti-poverty programs, plummeted despite cuts to these programs. The poverty level fell from 15 percent to 11.3 percent. These results are tangible evidence that government policy matters.

This is not to say that no mistakes were made. In order to secure lower tax rates, it became good politics to raise the number and amount of income tax deductions, thereby removing about 50 percent of Americans from the tax rolls. In my opinion, that was a mistake, and I think we are suffering for it today. I believe everyone should pay some income taxes. Nevertheless, the net result of the Reagan program was good for all Americans.

So how does the Reagan recovery compare to the recovery going on today? In sum, this is the most disappointing recovery of the post-World War II period by a large margin. I don’t think people understand what an outlier this recovery period is. If the economy had recovered from this recession at the rate it recovered from the 1982 recession, which was roughly the same size in terms of unemployment, there would be 16.3 million more Americans at work today—in other words, all those who say they are unemployed plus almost 60 percent of “discouraged workers” who have dropped out of the labor force. If real per capita income had grown in this recovery at the same rate it grew during the Reagan recovery, real per capita income would be $5,139 higher today. Both the Reagan program and the Obama program instituted dramatic changes. One program worked. The other is failing.

In the end, government policy matters. The truth is, Americans are pretty ordinary people. What is unique about America is an understanding of freedom and limited government that lets ordinary people achieve extraordinary things. We have been getting away from that view recently, but if we can get back to that understanding, which was Reagan’s, our nation will be fine.

Let me conclude by saying that the argument I am making is not just about money or GDP. It’s an argument about character.

If you want to see the effect of bad government policy on character, simply turn on the news and see how Greek civil servants have been behaving recently. They are victimizers behaving like victims. Greek government policies have made them what they are. But what made Americans who we are is a historically unprecedented level of freedom and responsibility. The real danger today is not merely a loss of prosperity, but a loss of the kind of character on which prosperity is based.

I occasionally hire a man to do bulldozer work on my ranch. He doesn’t know a lot about foreign policy, but he knows a lot about the economics of the bulldozing business. In his freedom to pursue that business and to be the best he can be at it, he’s the equal of any man. He’s proud, he’s independent, and he knows his trade as well as anybody else in America knows theirs. That’s what America is about. For me, today’s battle, as it was in 1980, is not just about prosperity or goods and services. It’s about freedom, and it’s about the kind of character that only freedom creates.

Reprinted by permission from Imprimis, a publication of Hillsdale College.


The Right to Work: A Fundamental Freedom

Mark Mix
President, National Right to Work Legal Defense Foundation

May/June 2011

MARK MIX is president of the National Right to Work Legal Defense Foundation, as well as of the National Right to Work Committee, a 2.2-million-member public policy organization. He holds a B.A. in finance from James Madison University and an associate’s degree in marketing from the State University of New York. His writings have appeared in such newspapers and magazines as the Wall Street Journal, the Washington Times, the Detroit Free Press, the San Antonio Express-News, the Orange County Register and National Review.

The following is adapted from a lecture delivered at Hillsdale College on January 31, 2011, during a conference co-sponsored by the Center for Constructive Alternatives and the Ludwig von Mises Lecture Series.

BOEING IS A GREAT AMERICAN COMPANY. Recently it has built a second production line—its other is in Washington State—in South Carolina for its 787 Dreamliner airplane, creating 1,000 jobs there so far. Who knows what factors led to its decision to do this? As with all such business decisions, there were many. But the National Labor Relations Board (NLRB)—a five-member agency created in 1935 by the Wagner Act (about which I will speak momentarily)—has taken exception to this decision, ultimately based on the fact that South Carolina is a right-to-work state. That is, South Carolina, like 21 other states today, protects a worker’s right not only to join a union, but also to make the choice not to join or financially support a union. Washington State does not. The general counsel of the NLRB, on behalf of the International Association of Machinists union, has issued a complaint against Boeing, which, if successful, would require it to move its South Carolina operation back to Washington State. This would represent an unprecedented act of intervention by the federal government that appears, on its face, un-American. But it is an act long in the making, and boils down to a fundamental misunderstanding of freedom.

Where does this story begin?

The Wagner Act and Taft-Hartley

In 1935, Congress passed and President Franklin Roosevelt signed into law the National Labor Relations Act (NLRA), commonly referred to as the Wagner Act after its Senate sponsor, New York Democrat Robert Wagner. Section 7 of the Wagner Act states:

Employees shall have the right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their own choosing, and to engage in other concerted activities for the purpose of collective bargaining or other mutual aid or protection.

Union officials such as William Green, president of the American Federation of Labor (AFL), and John L. Lewis, principal founder of the Congress of Industrial Organizations (CIO), hailed this legislation at the time as the “Magna Carta of Labor.” But in fact it was far from a charter of liberty for working Americans.

Section 8(3) of the Wagner Act allowed for “agreements” between employers and officers of a union requiring union membership “as a condition of employment” if the union was certified or recognized as the employees’ “exclusive” bargaining agent on matters of pay, benefits, and work rules. On its face, this violates the clear principle that the freedom to associate necessarily includes the freedom not to associate. In other words, the Wagner Act didn’t protect the freedom of workers because it didn’t allow them to decide against union membership. To be sure, the Wagner Act left states the prerogative to protect employees from compulsory union membership. But federal law was decidedly one-sided: Firing or refusing to hire a worker because he or she had joined a union was a federal crime, whereas firing or refusing to hire a worker for not joining a union with “exclusive” bargaining privileges was federally protected. The National Labor Relations Board was created by the Wagner Act to enforce these policies.

During World War II, FDR’s War Labor Board aggressively promoted compulsory union membership. By the end of the war, the vast majority of unionized workers in America were covered by contracts requiring them to belong to a union in order to keep their jobs. But Americans were coming to see compulsory union membership—euphemistically referred to as “union security”—as a violation of the freedom of association. Furthermore, the nonchalance with which union bosses like John L. Lewis paralyzed the economy by calling employees out on strike in 1946 hardened public support for the right to work as opposed to compulsory unionism. As Gilbert J. Gall, a staunch proponent of the latter, acknowledged in a monograph chronicling legislative battles over this issue from the 1940s on, “the huge post-war strike wave and other problems of reconversion gave an added impetus to right-to-work proposals.”

When dozens of senators and congressmen who backed compulsory unionism were ousted in the 1946 election, the new Republican leaders of Congress had a clear opportunity to curb the legal power of union bosses to force workers to join unions. Instead, they opted for a compromise that they thought would have enough congressional support to override a veto by President Truman. Thus Section 7 of the revised National Labor Relations Act of 1947—commonly referred to as the Taft-Hartley Act—only appears at first to represent an improvement over Section 7 of the Wagner Act. It begins:

Employees shall have the right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their own choosing, and to engage in other concerted activities for the purpose of collective bargaining or other mutual aid or protection, and shall also have the right to refrain from any and all such activities. . . .

Had this sentence ended there, forced union membership would have been prohibited, and at the same time voluntary union membership would have remained protected. Unfortunately, the sentence continued:

…except to the extent that such right may be affected by an agreement requiring membership in a labor organization as a condition of employment as authorized in section 158(a)(3) of this title.

This qualification, placing federal policy firmly on the side of compulsory union membership, left workers little better off than they were under the Wagner Act. Elsewhere, Taft-Hartley did, for the most part, prohibit “closed shop” arrangements that forced workers to join a union before being hired. But they could still be forced to join, on threat of being fired, within a few weeks after starting on the job.

Boeing’s Interest, and Ours

It cannot be overemphasized that compulsory unionism violates the first principle of the original labor union movement in America. Samuel Gompers, founder and first president of the AFL, wrote that the labor movement was “based upon the recognition of the sovereignty of the worker.” Officers of the AFL, he explained in the American Federationist, can “suggest” or “recommend,” but they “cannot command one man in America to do anything.” He continued: “Under no circumstances can they say, ‘you must do so and so,’ or, ‘you must desist from doing so and so.’” In a series of Federationist editorials published during World War I, Gompers opposed various compulsory measures being considered in the capitals of industrial states like Massachusetts and New York that would have mandated certain provisions for manual laborers and other select groups of workers:

The workers of America adhere to voluntary institutions in preference to compulsory systems which are held to be not only impractical but a menace to their rights, welfare and their liberty.

This argument applies as much to compulsory unionism—or “union security”—as to the opposite idea that unions should be prohibited. And in a December 1918 address before the Council on Foreign Relations, Gompers made this point explicitly:

There may be here and there a worker who for certain reasons unexplainable to us does not join a union of labor. This is his right no matter how morally wrong he may be. It is his legal right and no one can dare question his exercise of that legal right.

Compare Gompers’s traditional American view of freedom to the contemptuous view toward workers of labor leaders today. Here is United Food and Commercial Workers union strategist Joe Crump advising union organizers in a 1991 trade journal article: “Employees are complex and unpredictable. Employers are simple and predictable. Organize employers, not employees.” And in 2005, Mike Fishman, head of the Service Employees International Union, was even more blunt. When it comes to union organizing campaigns, he told the Wall Street Journal, “We don’t do elections.”

Under a decades-old political compromise, federal labor policies promoting compulsory unionism persist side by side with the ability of states to curb such compulsion with right-to-work laws. So far, as I said, 22 states have done so. And when we compare and contrast the economic performance in these 22 states against the others, we find interesting things. For example, from 1999 to 2009 (the last such year for which data are available), the aggregate real all-industry GDP of the 22 right-to-work states grew by 24.2 percent, nearly 40 percent more than the gain registered by the other 28 states as a group.

Even more dramatic is the contrast if we look at personal income growth. From 2000 to 2010, real personal incomes grew by an average of 24.3 percent in the 22 right-to-work states, more than double the rate for the other 28 as a group. But the strongest indicator is the migration of young adults. In 2009, there were 20 percent more 25- to 34-year-olds in right-to-work states than in 1999. In the compulsory union states, the increase was only 3.3 percent—barely one-sixth as much.

In this context, the decision by Boeing to open a plant in South Carolina may be not only in its own best interest, but in ours as well. So in whose interest is the National Labor Relations Board acting? And more importantly, with a view to what understanding of freedom?

Public Sector Unionism

As more and more workers and businesses have obtained refuge from compulsory unionism in right-to-work states in recent decades, the rationality of the free market has been showing itself. But the public sector is another and a grimmer story.

The National Labor Relations Act affects only private-sector workers. Since the 1960s, however, 21 states have enacted laws authorizing the collection of forced union dues from at least some state and local public employees. More than a dozen additional states have granted union officials the monopoly power to speak for all government workers whether they consent to this or not. Thus today, government workers are more than five times as likely to be unionized as private sector workers. This represents a great danger for taxpayers and consumers of government services. For as Victor Gotbaum, head of the Manhattan-based District 37 of the American Federation of State, County and Municipal Employees union, said 36 years ago: “We have the ability, in a sense, to elect our own boss.”

How this works is simple, and explains the inordinate power of union officials in so many states that have not adopted right-to-work laws. Union officials funnel a huge portion of the compulsory dues and fees they collect into efforts to influence the outcomes of elections. In return, elected officials are afraid to anger them even in the face of financial crisis. This explains why states with the heaviest tax burdens and the greatest long-term fiscal imbalances (in many cases due to bloated public employee pension funds) are those with the most unionized government workforces. California, Illinois, Massachusetts, Michigan, Nevada, New Jersey, New York, Ohio and Wisconsin represent the worst default risks among the 50 states. In 2010, an average of 59.2 percent of the public employees in these nine worst default-risk states were unionized, 19.2 percentage points higher than the national average of 40 percent. All of these states except Nevada authorize compulsory union dues and fees in the public sector.

* * *

Fortunately, there are signs that taxpayers are recognizing the negative consequences of compulsory unionism in the public sector. Just this March, legislatures in Wisconsin and Ohio revoked compulsory powers of government union bosses, and similar efforts are underway in several other states. Furthermore, the NLRB’s blatantly political and unconstitutional power play with regard to Boeing’s South Carolina production line is sure to strike fair-minded Americans as beyond the pale. Now more than ever, it is time to push home the point that all American workers in all 50 states should be granted the full freedom of association—which includes the freedom not to associate—in the area of union membership.

Reprinted by permission from Imprimis, a publication of Hillsdale College.


It’s Never Just the Economy, Stupid

Brian T. Kennedy

BRIAN T. KENNEDY is president of the Claremont Institute and publisher of the Claremont Review of Books. He has written on national security affairs and California public policy issues in National Review, the Wall Street Journal, Investor’s Business Daily, and other national newspapers. He sits on the Independent Working Group on Missile Defense and is a co-author of the recent book Shariah: The Threat to America.

The following is adapted from a speech delivered on January 7, 2011, in the “First Principles on First Fridays” lecture series sponsored by Hillsdale’s Kirby Center for Constitutional Studies and Citizenship in Washington, D.C.

WE ARE OFTEN TOLD that we possess the most powerful military in the world and that we will face no serious threat for some time to come. We are comforted with three reassurances aimed at deflecting any serious discussion of national security: (1) that Islam is a religion of peace; (2) that we will never go to war with China because our economic interests are intertwined; and (3) that America won the Cold War and Russia is no longer our enemy. But these reassurances are myths, propagated on the right and left alike. We believe them at our peril, because serious threats are already upon us.

Let me begin with Islam. We were assured that it was a religion of peace immediately following September 11. President Bush, a good man, believed or was persuaded that true Islam was not that different from Judaism or Christianity. He said in a speech in October 2001, just a month after the attacks on the Twin Towers and the Pentagon: “Islam is a vibrant faith. . . . We honor its traditions. Our enemy does not. Our enemy doesn’t follow the great traditions of Islam. They’ve hijacked a great religion.” But unfortunately, Mr. Bush was trying to understand Islam as we would like it to be rather than how countless devout Muslims understand it.

Islam is built around a belief in God or Allah, but it is equally a political ideology organized around the Koran and the teachings of its founder, Muhammad. Whereas Christianity teaches that we should render unto Caesar what is Caesar's and unto God what is God's—allowing for a non-theocratic political tradition to develop in the West, culminating in the principles of civil and religious liberty in the American founding—Islam teaches that to disagree with or even reinterpret the Koran's 6,000-odd verses, organized into 114 chapters or Suras and dealing as fully with law and politics as with matters of faith, is punishable by death.

Islamic authorities of all the major branches of Islam hold that the Koran must be read so that the parts written last override the others. This so-called theory of abrogation means that the ruling parts of the Koran are those written after Muhammad went to Medina in 622 A.D. Specifically, they are Suras 9 and 5, which are not the Suras containing the verses often cited as proof of Islam’s peacefulness.

Sura 9, verse 5, reads: “Fight and slay the unbelievers wherever ye find them, and lie in wait for them in every stratagem of war. But if they repent, and establish regular prayers and practice regular charity, then open the way for them . . . .”

Sura 9, verse 29, reads: “Fight those who believe not in Allah nor the Last Day, nor hold that forbidden which hath been forbidden by Allah and His Apostle, nor acknowledge the religion of truth, even if they are of the People of the Book, until they pay the jizya with willing submission, and feel themselves subdued.”

Sura 5, verse 51, reads: “Oh ye who believe! Take not the Jews and the Christians for your friends and protectors; they are but friends and protectors to each other. And he amongst you that turns to them for friendship is of them. Verily Allah guideth not the unjust.”

And Sura 3, verse 28, introduces the doctrine of taqiyya, which holds that Muslims should not be friends with the infidel except as deception, always with the end goal of converting, subduing, or destroying him.

It is often said that to point out these verses is to cherry pick unfairly the most violent parts of the Koran. In response, I assert that we must try to understand Muslims as they understand themselves. And I hasten to add that the average American Muslim does not understand the Koran with any level of detail. So I am not painting a picture here of the average Muslim. I am trying to understand those Muslims, both here in the U.S. and abroad, who actively seek the destruction of America.

Here at home, the threat is posed by the Muslim Brotherhood and its organizational arms, such as the Council on American Islamic Relations, the Islamic Society of North America, and the various Muslim student associations. These groups seek to persuade Americans that Islam is a religion of peace. But let me quote to you from a document obtained during the 2007 Holy Land Trial investigating terrorist funding. It is a Muslim Brotherhood Strategic Memorandum on North American Affairs that was approved by the Shura Council and the Organizational Conference in 1987. It speaks of “Enablement of Islam in North America, meaning: establishing an effective and a stable Islamic Movement led by the Muslim Brotherhood which adopts Muslims’ causes domestically and globally, and which works to expand the observant Muslim base, aims at unifying and directing Muslims’ efforts, presents Islam as a civilization alternative, and supports the global Islamic State wherever it is.”

Elsewhere this document says:

The process of settlement is a “Civilization-Jihadist Process” with all the means. The Ikhwan [the Muslim Brotherhood] must understand that their work in America is a kind of grand Jihad in eliminating and destroying the Western civilization from within and “sabotaging” its miserable house by their hands and the hands of the believers so that it is eliminated and Allah’s religion is made victorious over all other religions. Without this level of understanding, we are not up to this challenge and have not prepared ourselves for Jihad yet. It is a Muslim’s destiny to perform Jihad and work wherever he is and wherever he lands until the final hour comes . . . .

Now during the Bush Administration, the number of Muslims in the U.S. was typically estimated to be around three million. The Pew Research Center in 2007 estimated it to be 2.35 million. In 2000, the Council on American Islamic Relations put the number at five million. And President Obama in his Cairo speech two years ago put it at seven million.

In that light, consider a 2007 survey of American Muslim opinion conducted by the Pew Research Center. Eight percent of American Muslims who took part in this survey said they believed that suicide bombing can sometimes be justified in defense of Islam. Even accepting a low estimate of three million Muslims in the U.S., this would mean that 240,000 among us hold that suicide bombing in the name of Islam can be justified. Among American Muslims 18-29 years old, 15 percent agreed with that and 60 percent said they thought of themselves as Muslims first and Americans second. Among all participants in the survey, five percent—and five percent of the low estimate of three million Muslims in America is 150,000—said they had a favorable view of al Qaeda.
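To make the extrapolation explicit, here is the arithmetic as a minimal Python sketch (the three-million population figure is the speech's low-end assumption, and the percentages are from the Pew survey as quoted above):

    # Extrapolating the 2007 Pew survey percentages to the low-end
    # estimate of three million American Muslims.
    population = 3_000_000
    print(int(0.08 * population))  # 240,000 say suicide bombing can sometimes be justified
    print(int(0.05 * population))  # 150,000 report a favorable view of al Qaeda

Small survey percentages, applied to a population in the millions, yield large absolute numbers; that is the whole force of the argument.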

Given these numbers, it is not unreasonable to suggest that the political aims and ideology of the Muslim Brotherhood represent a domestic threat to national security. It is one thing to have hundreds of terrorist sympathizers within our borders, but quite another if that number is in the hundreds of thousands. Consider the massacre at Fort Hood: Major Nidal Malik Hasan believed that he was acting as a devout Muslim—indeed, he believed he was obeying a religious mandate to wage war against his fellow soldiers. Yet even to raise the question of whether Islam presents a domestic threat today is to invite charges of bigotry or worse.

And as dangerous as it potentially is, this domestic threat pales in comparison to the foreign threat from the Islamic Republic of Iran and its allies—a threat that is existential in nature. The government in Tehran, of course, is enriching uranium for use in a nuclear warhead. Iran has advanced ballistic missiles such as the Shahab-3, which can be launched from land or sea and is capable of destroying an American city. Even worse, if the Iranians were able to deliver the warhead as an electromagnetic pulse weapon from a ship offshore—a method they have been practicing, by the way—they could destroy the electronic infrastructure of the U.S. and cause the deaths of tens of millions or more. And let me be perfectly clear: We do not today have a missile defense system capable of defending against either a ship-launched missile attack by Iran or a ballistic missile attack from China or Russia, even though we are capable of building one.

Since I have mentioned China and Russia, let me turn to them briefly in that order. The U.S. trades with China and the Chinese buy our debt. Currently they hold roughly $2 trillion in foreign exchange reserves, about half of which is in U.S. Treasuries. Their economy and ours are intimately intertwined. For this reason it is thought that the Chinese will not go to war with us. Why, after all, would they want to destroy their main export market?

On the other hand, China is building an advanced army, navy, air force, and space-based capability that is clearly designed to limit the U.S. and its ability to project power in Asia. It has over two million men under arms and possesses an untold number of ICBMs—most of them aimed at the U.S.—and hundreds of short- and medium-range nuclear missiles. China’s military thinking is openly centered on opposing American supremacy, and its military journals openly discuss unrestricted warfare, combining traditional military means with cyber warfare, economic warfare, atomic warfare, and terrorism. China is also working to develop a space-based military capability and investing in various launch vehicles, including manned spaceflight, a space station, and extensive anti-satellite weaponry aimed at negating U.S. global satellite coverage.

Absent a missile defense capable of intercepting China’s ballistic missiles, the U.S. would be hard pressed to maintain even its current security commitments in Asia. The U.S. Seventh Fleet, however capable, cannot withstand the kinds of nuclear missiles and nuclear-tipped cruise missiles that China could employ against it. The Chinese have studied American capabilities, and have built weapons meant to negate our advantages. The destructive capability of the recently unveiled Chinese DF-21D missile against our aircraft carriers significantly raises the stakes of a conflict in the South China Sea. And the SS-N-22 cruise missile—designed by the Russians and deployed by the Chinese and Iranians—presents a daunting challenge due to its enormous size and Mach 3 speed.

China has for some time carried out a policy that has been termed “peaceful rise.” But in recent years we have seen the coming to power of what scholars like Tang Ben call the “Red Guard generation”—generals who grew up during the Cultural Revolution, who are no longer interested in China remaining a secondary power, and who appear eager to take back Taiwan, avenge past wrongs by Japan, and replace the U.S. as the preeminent military power in the region and ultimately in the world.

However far-fetched this idea may seem to American policymakers, it is widely held in China that America is on the decline, with economic problems that will limit its ability to modernize its military and maintain its alliances. And indeed, as things stand, the U.S. would have to resort to full-scale nuclear war to defend its Asian allies from an attack by China. This is the prospect that caused Mao Tse-tung to call the U.S. a “Paper Tiger.” Retired Chinese General Xiong Guangkai expressed much the same idea in 1995, when he said that the U.S. would not trade Los Angeles for Taipei—that is, that we would have to stand by if China attacks Taiwan, since China has the ability to annihilate Los Angeles with a nuclear missile. In any case, current Chinese aggression against Japan in the Senkaku Islands, China's open assistance of the Iranian nuclear program, and its sale of arms to the Taliban in Afghanistan all suggest that China is openly playing the role that the Soviet Union once played as chief sponsor of global conflict with the West.

Which brings us to Russia and to the degradation of American strategic thinking during and after the Cold War. This thinking used to be guided by the idea that we must above all prevent a direct attack upon the U.S. homeland. But over the past 50 years we have been taught something different: that we must accept a balance of power between nations, especially those possessing nuclear ballistic missiles; and that we cannot seek military superiority—including defensive superiority, as with missile defense—lest we create strategic instability. This is now the common liberal view taught at universities, think tanks, and schools of foreign service. Meanwhile, for their part, conservatives have been basking in the glow of “winning the Cold War.” But in what sense was it won, it might be asked, given that we neither disarmed Russia of its nuclear arsenal nor put a stop to its active measures to undermine us? The transformation of some of the former captive nations into liberal democracies is certainly worth celebrating, but given the Russian government's brutally repressive domestic policies and strengthened alliances with America's enemies abroad over the past 20 years, conservatives have overdone it.

Perhaps it is not surprising, then, that our policy toward Russia has been exceedingly foolish. For the past two decades we have paid the Russians to dismantle nuclear warheads they would have dismantled anyway, while they have used those resources to modernize their ballistic missiles. On our part, we have not even tested a nuclear warhead since 1992—which is to say that we aren’t certain they work anymore. Nor have we maintained any tactical nuclear weapons. Nor, to repeat, have we built the missile defense system first proposed by President Reagan.

Just last month, with bipartisan backing from members of the foreign policy establishment, the Senate ratified the New START Treaty, which will further reduce our nuclear arsenal and will almost certainly cause further delays in building missile defenses—and this with a nation that engages in massive deception against us, supports our enemies, and builds ever more advanced nuclear weapons.

At the heart of America's strategic defense policy today is the idea of launching a retaliatory nuclear strike against whatever nuclear power attacks us. But absent reliable confidence in the lethality of those forces, such a deterrent is meaningless. In this light, deliberating about the need for a robust modernization program, rather than arms reductions through New START, would have been a better way for Congress to spend the days leading up to Christmas—which is to say, it would have been supportive of our strategic defense policy, rather than undercutting it.

But what about that strategic policy? Some of New START's supporters argued that reducing rather than modernizing our nuclear arsenal places us on the moral high ground in our dealings with other nations. But can any government claim to occupy the moral high ground when it willingly, knowingly, and purposely keeps its people nakedly vulnerable to nuclear missiles? The Russians understand well the intellectual and moral bankruptcy of the American defense establishment, and have carefully orchestrated things for two decades so that we remain preoccupied with threats of North Korean and now Iranian ballistic missiles. We spend our resources developing modest defense systems to deal, albeit inadequately, with these so-called rogue states, and meanwhile forgo addressing the more serious threat from Russia and China, both of which are modernizing their forces. Who is to say that there will never come a time when the destruction or nuclear blackmail of the U.S. will be in the interest of the Russians or the Chinese? Do we imagine that respect for human life or human rights will stop these brutal tyrannies from acting on such a determination?

If I sound pessimistic, I don't mean to. Whatever kind of self-deception has gripped the architects of our current defense policies, the American people have proved capable of forcing a change in direction when they learn the facts. Americans do not wish to be subjected to Sharia law, owe large sums of money to the Chinese, or be kept vulnerable to nuclear missiles. Now that we have responded resoundingly to the economic and constitutional crisis represented by Obamacare, it is time for us to remind our representatives of the constitutional requirement to provide for the common defense—in the true sense of the word.

Reprinted by permission from Imprimis, a publication of Hillsdale College.


The Tea Parties and the Future of Liberty

July/August 2010

Stephen F. Hayes
Senior Writer, The Weekly Standard

Stephen F. Hayes is a senior writer at The Weekly Standard and a FOX News Contributor. His work has been featured in the Wall Street Journal, the Los Angeles Times, Reason, National Review and many other publications. He is the author of two New York Times bestsellers: The Connection: How al Qaeda's Collaboration with Saddam Hussein Has Endangered America and Cheney: The Untold Story of America's Most Powerful and Controversial Vice President. His great-great-uncle was a president of Hillsdale College and many of his relatives have attended Hillsdale, including two grandparents.

The following is adapted from a speech delivered on June 6, 2010, during a Hillsdale College cruise from Rome to Dover aboard the Crystal Symphony.

Barack Obama was inaugurated on January 20, 2009. Within a month he signed a $787 billion “stimulus package” with virtually no Republican support. It was necessary, we were told, to keep unemployment under eight percent. Overnight, the federal government had, as one of its highest priorities, weatherizing government buildings and housing projects. Streets and highways in no need of repair would be broken up and repaved. The Department of Transportation and other government agencies would spend millions on signs advertising the supposed benefits of the American Recovery and Reinvestment Act. I saw one of them on Roosevelt Island in Washington, D.C. It boasted that the federal park would be receiving a generous grant to facilitate the involvement of local youth in the removal of “non-indigenous plants.” In other words, kids would be weeding. We need a sign to announce that? And this was going to save the economy?

Then there was American Recovery and Reinvestment Act project number 1R01AA01658001A, a study entitled: “Malt Liquor and Marijuana: Factors in their Concurrent Versus Separate Use.” I’m not making this up. This is a $400,000 project being directed by a professor at the State University of New York at Buffalo. The following is from the official abstract: “We appreciate the opportunity to refocus this application to achieve a single important aim related to our understanding of young adults’ use of male [sic] liquor (ML), other alcoholic beverages, and marijuana (MJ), all of which confer high risks for experiencing negative consequences, including addiction. As we have noted, reviews of this grant application have noted numerous strength [sic], which are summarized below.”

So what were those strengths? “This research team has previous [sic] been successful in recruiting a large (>600) sample of regular ML drinkers.” Also, “the application is well-written.” Well-written? With three spelling mistakes? But who am I to judge? As for the other strength, there is no question that the team’s recruitment had been strong. But is that really a qualification for federal money? After all, they were paying people to drink beer!

These same scholars were behind a groundbreaking 2007 study that used regression analysis to discover that subjects who got drunk and high were more intoxicated than those who only abused alcohol. The new study pays these pot-smoking malt-liquor drinkers at least $45 to participate. They can buy four beers per day for the three-week project—all of it funded, at least indirectly, by the American taxpayer.

Perhaps not surprisingly, when President Obama visited Buffalo in May, he chose to highlight other stimulus grants. On the other hand, he could have pointed out that the beer money goes right back into the economy. Think of all those saved or created jobs! In any case, the findings of this new study are expected to echo those of the first study, which found: “Those who concurrently use both alcohol and marijuana are more likely to report negative consequences of substance use compared with those who use alcohol only.” Reading results like this, I tend to think that those who concurrently get drunk and high are also far more likely to believe the stimulus is working.

And have I mentioned that the estimated cost of the stimulus was later increased from $787 billion to $862 billion? That’s a cost underestimate of nearly ten percent. Anyone in private business who suddenly had to come up with ten percent more in outgoing funds than previously anticipated would likely go out of business.
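The size of that underestimate is easy to verify; a one-line sketch in Python, using the speech's own figures:

    # Stimulus estimate revised from $787 billion to $862 billion.
    print(f"{(862 - 787) / 787:.1%}")  # 9.5%, i.e., "nearly ten percent"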

All of this set the stage for a revolt. The accidental founding of the Tea Party movement took place in February 2009, when CNBC commentator Rick Santelli let loose a rant against the stimulus package, and in particular the proposal to subsidize what he called “the losers’ mortgages.” He proposed a ceremonial dump of derivative securities into Lake Michigan, and a few hours later a website popped up calling for a Chicago Tea Party. The video clip raced around the Internet, and it was soon clear that many average Americans were furious about the massive new spending bill and the plan to subsidize bad mortgages.

The stimulus was bad, but by itself it was probably not enough to sustain an entire movement. This is why the larger context matters: Under President Obama, federal spending has been growing at an unprecedented pace. We are adding $4.8 billion to the national debt every day. The long-term viability of Medicare and Social Security isn’t merely uncertain—as so many analysts would have us believe. In fact, their failure is a sure thing without structural changes. By adding a massive new entitlement with the health care bill, we are simply going to go broke faster. Americans understood much of this even before Mr. Obama was elected.

Consider this story from the recent presidential campaign: In July 2008, Republican nominee John McCain stopped in Belleville, Michigan, to participate in a town hall. After several friendly questions, he took one from Rich Keenan. Wearing a shirt with an American flag embroidered over his left breast, Keenan told McCain that he would not be voting for Obama. But then he said: “What I'm trying to do is get to a situation where I'm excited about voting for you.”

The audience laughed, and many in the crowd nodded their heads. Keenan explained that he was “concerned” about some of McCain’s views, such as his opposition to the Bush tax cuts and his views on the environment. Keenan allowed that he was grateful that McCain had begun taking more conservative positions. But he concluded: “I guess the question I have, and that people like me in this country have, is what can you say to us to make us believe that you actually came to the right positions? We want to take you to the dance, we’re just concerned about who you’re going to go home with.” The audience laughed again. McCain laughed, too, but then he grew serious: “I have to say, and I don’t mean to disappoint you, but I haven’t changed positions.” He defended his vote against the Bush tax cuts and, at some length, reiterated his concerns about global warming. Later, he went out of his way to emphasize his respect for Hillary Clinton and boast about his work with Joe Lieberman, Russ Feingold and Ted Kennedy.

I talked with Rich Keenan after the town hall. He described himself as a conservative independent. He said he often votes Republican but does not consider himself one. He added, “I do think that there are millions of Americans out there like me who are fairly conservative, probably more conservative than John McCain, and I think a lot of them are concerned about what’s going to happen if he does get elected.” Keenan was right. There were millions of people out there like him—conservatives, independents, disaffected Republicans, and many of them stayed home on election day. These people form the heart of the Tea Party movement.

In recent years, the Republican Party has seen its approval levels sink to new lows. In 2005, 33 percent of registered voters told Gallup they considered themselves Republican. By 2009, that number was 27 percent. The number of voters who identified themselves as independent showed a corresponding rise. But what’s interesting is that over that same time-frame, the number of voters self-identified as conservative stayed relatively constant: 39 percent in 2005 and 40 percent in 2009. (Self-identified liberals constituted 20 percent of respondents in both 2005 and 2009.) So even as the number of self-identified Republicans declined and the number of self-identified independents grew, the number of self-identified conservatives was constant. Of course, it’s too simple to postulate a one-for-one swap, but the trend seems clear. The Tea Party movement arose in an environment in which a growing number of Americans believed neither party was voicing its concerns.

All of this has liberals in the mainstream media and elsewhere flummoxed. At first they were dismissive. Think of the footage of Susan Roesgen of CNN going after Tea Party enthusiasts at a Chicago rally, suggesting they were irrational and stupid. And consider a few of the many other examples:

Eugene Robinson of the Washington Post wrote: “The danger of political violence in this country comes overwhelmingly from one direction—the right, not the left. The vitriolic, anti-government hate speech that is spewed on talk radio every day—and, quite regularly, at Tea Party rallies—is calibrated not to inform but to incite.”

MSNBC’s Ed Schultz said: “I believe that the Tea Partiers are misguided. I think they are racist, for the most part. I think that they are afraid. I think that they are clinging to their guns and their religion. And I think in many respects, they are what’s wrong with America.”

Actress Janeane Garofalo: “This is about hating a black man in the White House. This is racism straight up. These are nothing but a bunch of tea-bagging rednecks.”

Comedian Bill Maher: “The teabaggers, they’re not a movement, they’re a cult.”

Perhaps the most stunning comment came from prominent Democratic strategist Steve McMahon: “The reason people walk into schools and open fire is because of rhetoric like this and because of attitudes like this. The reason people walk into military bases and open fire is because of rhetoric like this and attitudes like this. Really, what they’re doing is not that much different than what Osama bin Laden is doing in recruiting people and encouraging them to hate America.”

We’ve seen this before. On November 7, 1994, the Washington Post ran an article about the loud, hateful fringe on the right: “Hate seems to be drifting through the air like smoke from autumn bonfires. It isn’t something that can be quantified. No one can measure whether it has grown since last year, the 1980s, or the 1880s. But a number of people who make their living taking the public’s temperature are convinced it’s swelling beyond the perennial level of bad manners and random insanity. It’s fueled, they say, by such forces as increasingly harsh political rhetoric, talk radio transmissions, and an increasing sense of not-so-quiet desperation.” The next day, Republicans took Congress.

Are today’s Tea Party supporters on the radical fringe? In a National Review/McLaughlin Associates poll conducted in February, six percent of 1,000 likely voters said that they had participated in a Tea Party rally. An additional 47 percent said they generally agree with the reasons for those protests. Nor is the Tea Party movement “monochromatic” and “all white,” as Chris Matthews claimed. Quite the contrary: the National Review poll found that it was five percent black and 11 percent Hispanic.

Perhaps that poll could be dismissed as the work of a right-leaning polling firm and a conservative magazine. You can't say that about the New York Times and CBS, whose polling has a long history of oversampling Democrats. Yet their poll found that Tea Partiers are wealthier and better educated than average voters. It also found that 20 percent of Americans—one in five—support the Tea Parties. That's an awfully big fringe.

Other polls confirmed these findings: a Washington Post/ABC poll found that 14 percent of voters say the Tea Party is “most in synch” with their values; 20 percent say Tea Parties are “most in tune with economic problems Americans are now facing.” The most interesting poll, in my view, came from TargetPoint Consulting, which interviewed nearly 500 attendees at the April 15, 2010, Tax Day rally in Washington, D.C. Here are some results:

Tea Partiers are united on the issues of debt, the growth of government, and health care reform.

They are socially conservative on the one hand and libertarian on the other, split roughly down the middle.

They are older, more educated, and more conservative than average voters, and they are “distinctly not Democrat.”

This new information complicated the mainstream media’s narrative about the Tea Party movement. This was not a fringe. Nancy Pelosi, who had earlier dismissed Tea Parties as “Astroturf”—meaning fake grassroots activism—revised that assessment, telling reporters that, in fact, she was just like the Tea Partiers.

This brings us to the present day. The president’s approval ratings are low, and Congressional Democrats’ are even worse. Members of the president’s party are not only running away from him in swing districts, but even in some relatively safe ones. Many analysts are suggesting that control of the House of Representatives is in play, and perhaps even that of the Senate.

This dissatisfaction flows directly from the president’s policies and those of his party. It is not simply “anti-incumbent,” as many of my press colleagues would have it. This voter outrage—and it is outrage, not hate—is specific and focused: Americans are fed up with big government and deeply concerned about the long-term economic health of their country. The stimulus was unpopular, and most Americans do not believe it’s working. Obama’s health care plan was unpopular when it passed. The American people understood the rather obvious point that it wouldn’t be possible to cover 30 million additional people, improve the care of those with insurance, and save taxpayers money, all at the same time.

Does all of this add up to big Republican gains in November? Not according to the mainstream media. The Boston Globe’s Susan Milligan recently wrote: “The Tea Party movement is energizing elements of the Republican Party and fanning an anti-Washington fervor, but the biggest beneficiaries in the mid-term elections, pollsters and political analysts say, could be the main target of their anger: Democrats.” CBS News reported the same thing just a few days later. What nonsense! I think there is little question that the Tea Parties—and the enthusiasm and energy they bring—will contribute to major Republican gains in November.

One final point: For many Tea Partiers, the massive and unconstitutional growth of government is the fundamental issue. But I think there’s something deeper, too. After her husband had won several primaries in a row in the spring of 2008, Michelle Obama proclaimed that for the first time in her life she was proud of her country. It was a stunning statement. It also foreshadowed what was to come: Since Barack Obama took office in January 2009, he has devoted much of his time to criticizing his own country. He apologizes for the policy decisions of his predecessors. He worries aloud that the U.S. has become too powerful. He has explicitly rejected the doctrine of American exceptionalism.

And this is not mere rhetoric. For the first time ever, the U.S. is participating in the Universal Periodic Review—a United Nations initiative in which member countries investigate their own nation’s human rights abuses. The State Department has held ten “listening sessions” around the U.S. during which an alphabet soup of left-wing groups aired their numerous grievances. These complaints are to be included in a report that the U.S. will submit to the United Nations Human Rights Council. It will be evaluated by such paragons of human rights as Burkina Faso, Saudi Arabia, Pakistan, China, and Cuba.

When President Obama spoke before the United Nations General Assembly in September 2009, he declared that a world order that elevates one country or group of countries over others is bound to fail. So he’s changing that order. If his domestic policy priority is the redistribution of wealth, his foreign policy priority seems to be the redistribution of power.

Most Americans don’t agree with the president’s priorities. And many of these Americans are now active in the Tea Party movement, a movement that has succeeded in starting a serious national conversation about a return to limited government.


Copyright © 2010 Hillsdale College. The opinions expressed in Imprimis are not necessarily the views of Hillsdale College. Permission to reprint in whole or in part is hereby granted, provided the following credit line is used: “Reprinted by permission from Imprimis, a publication of Hillsdale College.” SUBSCRIPTION FREE UPON REQUEST. ISSN 0277-8432. Imprimis trademark registered in U.S. Patent and Trade Office #1563325.


How Detroit's Automakers Went from Kings of the Road to Roadkill

February 2009

Joseph B. White
Senior Editor, The Wall Street Journal


JOSEPH B. WHITE is a senior editor in the Washington, D.C., bureau of The Wall Street Journal. A graduate of Harvard University, he has worked for the Journal since 1987, and for most of that time he covered the auto industry, serving as Detroit bureau chief from 1998-2007. He writes a weekly column on the car business and the regulatory and social issues that surround it for the Journal’s online and print editions, and contributes new-car reviews to SmartMoney magazine. Mr. White is co-author (with Paul Ingrassia) of Comeback: The Fall and Rise of the American Automobile Industry, and won the Pulitzer Prize for reporting in 1993.

The following is adapted from a speech delivered at Hillsdale College on January 26, 2009, at a seminar on the topic, “Cars and Trucks, Markets and Governments,” co-sponsored by the Center for Constructive Alternatives and the Ludwig von Mises Lecture Series.

I’D LIKE to start by congratulating all of you. You are all now in the auto business, the Sport of Kings—or in our case, presidents and members of Congress. Without your support—and I assume that most of you are fortunate enough to pay taxes—General Motors and Chrysler would very likely be getting measured by the undertakers of the bankruptcy courts. But make no mistake. What has happened to GM is essentially bankruptcy by other means, and that is an extraordinary event in the political and economic history of our country.

GM is an institution that survived in its early years the kind of management turbulence we’ve come to associate with particularly chaotic Internet startups. But with Alfred P. Sloan in charge, GM settled down to become the very model of the modern corporation. It navigated through the Great Depression, and negotiated the transition from producing tanks and other military materiel during World War II to peacetime production of cars and trucks. It was global before global was cool, as its current chairman used to say. By the mid-1950s the company was the symbol of American industrial power—the largest industrial corporation in the world. It owned more than half the U.S. market. It set the trends in styling and technology, and even when it did not it was such a fast and effective follower that it could fairly easily hold its competitors in their places. And it held the distinction as the world’s largest automaker until just a year or so ago.

How does a juggernaut like this become the basket case that we see before us today? I will oversimplify matters and touch on five factors that contributed to the current crisis—a crisis that has been more than 30 years in the making.

First, Detroit underestimated the competition—in more ways than one.

Second, GM mismanaged its relationship with the United Auto Workers, and the UAW in its turn did nothing to encourage GM (or Ford or Chrysler) to defuse the demographic time bomb that has now blown up their collective future.

Third, GM, Ford, and Chrysler handled failure better than success. When they made money, they tended to squander it on ill-conceived diversification schemes. It was when they were in trouble that they often did their most innovative work—the first minivans at Chrysler, the first Ford Taurus, and more recently the Chevy Volt were ideas born out of crisis.

Fourth, GM (and Ford and Chrysler) relied too heavily on a few gas-hungry truck and SUV lines for all their profits—plus the money they needed to cover losses on many of their car lines. They did this for a good reason: When gas was cheap, big gas-guzzling trucks were exactly what their customers wanted—until they were not.

Fifth, GM refused to accept that to survive it could not remain what it was in the 1950s and 1960s—with multiple brands and a dominant market share. Instead, it used short-term strategies such as zero percent financing to avoid reckoning with the consequences of globalization and its own mistakes.

Competition from Overseas

In hindsight, it’s apparent that the gas shocks of the 1970s hit Detroit at a time when they were particularly vulnerable. They were a decadent empire—Rome in the reign of Nero. The pinnacles of the Detroit art were crudely engineered muscle cars. The mainstream products were large, V8-powered, rear-wheel-drive sedans and station wagons. The Detroit marketing and engineering machinery didn’t comprehend the appeal of cars like the Volkswagen Beetle or the Datsun 240Z.

But it took the spike in gas prices—and the economic disruptions it caused—to really open the door for the Japanese automakers.

Remember, Toyota and Honda were relative pipsqueaks in those days. They did not have much more going for them in the American market prior to the first Arab oil embargo than Chinese automakers have today, or Korean automakers did 15 years ago. The oil shocks, however, convinced a huge and influential cohort of American consumers to give fuel-efficient Japanese cars a try. Equally important, the oil shocks persuaded some of the most aggressive of America's car dealers to try them.

The Detroit automakers believed the Japanese could be stopped by import quotas. They initially dismissed reports about the high quality of Japanese cars. They later assumed the Japanese could never replicate their low-cost manufacturing systems in America. Plus they believed initially that the low production cost of Japanese cars was the result of automation and unfair trading practices. (Undoubtedly, the cheap yen was a big help.) In any case, they figured that the Japanese would be stuck in a niche of small, economy cars and that the damage could be contained as customers grew out of their small car phase of life.

They were wrong on all counts.

There were Cassandras—plenty of them. At GM, an executive named Alex Mair gave detailed presentations on why Japanese cars were superior to GM’s—lighter, more fuel efficient, and less costly to build. He set up a war room at GM’s technical center with displays showing how Honda devised low-cost, high-quality engine parts, and how Japanese automakers designed factories that were roughly half the size of a GM plant but produced the same number of vehicles.

Mair would hold up a connecting rod—the piece of metal in an engine that connects the piston to the crankshaft. The one made by GM was bulky and crudely shaped with big tabs on the ends. Workers assembling the engines would grind down those tabs so that the weight of the piston and rod assembly would be balanced. By contrast, the connecting rod made by Honda was smaller, thinner, and almost like a piece of sculpture. It didn't have ugly tabs on the end, because it was designed to be properly balanced right out of the forge. Mair's point was simple: If you pay careful attention to designing an elegant, lightweight connecting rod, then the engine will be lighter and quieter, the car around the engine can be more efficient, the brakes will have less mass to stop, and the engine will feel more responsive because it has less weight to move.

Another person who warned GM early on about the nature of the Japanese challenge was Jim Harbour. In the early 1980s, he took it into his head to try to tell GM’s executives just how much more efficient Japanese factories really were, measured by hours of labor per car produced. The productivity gap was startling—the Japanese plants were about twice as efficient. GM’s president at the time responded by barring Jim Harbour from company property.

By the late 1980s, GM’s chairman, Roger Smith, had figured out that his company had something to learn from the Japanese. He just didn’t know what it was. He poured billions into new, heavily automated U.S. factories—including an effort to build an experimental “lights out” factory that had almost no hourly workers. He entered a joint venture with Toyota to reopen an old GM factory in California, called New United Motor Manufacturing, Inc., or NUMMI. The idea was that GM managers could go to NUMMI to see up close what the “secret” of Toyota’s assembly system was. Smith also launched what he promoted as an entirely new car company, Saturn, which was meant to pioneer both a more cooperative relationship with UAW workers and a new way of selling cars.

None of these was a bad idea. But GM took too long to learn the lessons from these experiments—good or bad. The automation strategy fell on its face because the robots didn’t work properly, and the cars they built struck many consumers as blandly styled and of poor quality. NUMMI did give GM managers valuable information about Toyota’s manufacturing and management system, which a team of MIT researchers would later call “lean production.” But too many of the GM managers who gained knowledge from NUMMI were unable to make an impact on GM’s core North American business.

Why? I believe it was because the UAW and GM middle managers quite understandably focused on the fact that Toyota’s production system required only about half the workers GM had at a typical factory at the time. That was an equation the union wouldn’t accept. The UAW demanded that GM keep paying workers displaced by new technology or other shifts in production strategy, which led to the creation of what became known as the Jobs Bank. That program discouraged GM from closing factories and encouraged efforts to sustain high levels of production even when demand fell.

GM and the UAW

This brings me to the relationship between Detroit management and the UAW.

It is likely that if no Japanese or European manufacturers had built plants in the U.S.—in other words if imports were still really imports—the Detroit carmakers would not be in their current straits, although we as consumers would probably be paying more for cars and have fewer choices than we do. The fact is that the Detroit Three’s post-World War II business strategies were doomed from the day in 1982 when the first Honda Accord rolled off a non-union assembly line in Ohio. After that it soon became clear that the Japanese automakers—and others—could build cars in the U.S. with relatively young, non-union labor forces that quickly learned how to thrive in the efficient production systems those companies operated.

Being new has enormous advantages in a capital-intensive, technology-intensive business like automaking. Honda, Toyota, Nissan, and later BMW, Mercedes, and Hyundai, had new factories, often subsidized by the host state, that were designed to use the latest manufacturing processes and technology. And they had new work forces. This was an advantage not because they paid them less per hour—generally non-union autoworkers receive about what UAW men and women earn in GM assembly plants—but because the new, non-union companies didn’t have to bear additional costs for health care and pensions for hundreds of thousands of retirees.

Moreover, the new American manufacturers didn’t have to compensate workers for the change from the old mass production methods to the new lean production approach. GM did—which is why GM created the Jobs Bank. The idea was that if UAW workers believed they wouldn’t be fired if GM got more efficient, then they might embrace the new methods. Of course, we know how that turned out. The Jobs Bank became little more than a welfare system for people who had nothing more to contribute because GM’s dropping market share had made their jobs superfluous.

Health care is a similar story. GM’s leaders—and the UAW’s—knew by the early 1990s that the combination of rising health care costs and the longevity of GM’s retired workers threatened the company. But GM management backed away from a confrontation with the UAW over health care in 1993, and in every national contract cycle afterwards until 2005—when the company’s nearness to collapse finally became clear to everyone.

In testimony before Congress this December, GM’s CEO Rick Wagoner said that GM has spent $103 billion during the past 15 years funding its pension and retiree health-care obligations. That is nearly $7 billion a year—more than GM’s capital spending budget for new models this year. Why wasn’t Rick Wagoner making this point in 1998, or 1999, or even 2003? Even now, GM doesn’t seem willing to treat the situation like the emergency it is. Under the current contract, the UAW will pay for retiree health-care costs using a fund negotiated in last year’s contract—but that won’t start until 2010. GM is on the hook to contribute $20 billion to that fund over the next several years—unless it can renegotiate that deal under federal supervision.
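Wagoner's per-year figure follows directly from his own numbers; a quick check, assuming the $103 billion was spread evenly over the 15 years:

    # $103 billion in pension and retiree health-care funding over 15 years.
    print(103 / 15)  # 6.866..., i.e., "nearly $7 billion a year"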

Quality is Job One

Rick Wagoner told Congress: “Obviously, if we had the $103 billion and could use it for other things, it would enable us to be even farther ahead on technology or newer equipment in our plants, or whatever.” Whatever, indeed.

This is a good place to talk about the Detroit mistake that matters most to most people: quality. By quality, I mean both the absence of defects and the appeal of the materials, design, and workmanship built into a car. I believe most people who buy a car also think of how durable and reliable a car is over time when they think of quality.

The failure of the Detroit automakers to keep pace with the new standards of reliability and defect-free assembly set by Toyota and Honda during the 1980s is well known, and still haunts them today. The really bad Detroit cars of the late 1970s and early to mid-1980s launched a cycle that has proven disastrous for all three companies. Poor design and bad reliability records led to customer dissatisfaction, which led to weaker demand for new Detroit cars as well as used ones. Customers were willing to buy Detroit cars—but only if they received a discount in advance for the mechanical problems they assumed they would have.

During the 1990s and the 2000s, a number of the surveys that industry executives accept as reliable guides to new vehicle quality began to show that the best of GM’s and Ford’s new models were almost as good—and in some cases better—in terms of being free of defects than comparable Toyotas, Hondas, or Nissans. But the Detroit brands still had a problem: They started $2,000 or more behind the best Japanese brands in terms of per-car costs, mainly because of labor and legacy costs, with a big helping of inefficient management thrown in. To overcome that deficit, GM and Ford (and Chrysler) resorted to aggressive cost-cutting and low-bid purchasing strategies with their materials suppliers.

Unfortunately, customers could see the low-bid approach in the design and materials used for Detroit cars. So even though objective measures of defects and things gone wrong showed new Detroit cars getting better and better, customers still demanded deep discounts for both new and used Detroit models. This drove down the resale value of used Detroit cars, which in turn made it harder for the Detroit brands to charge enough for the new vehicles to overcome their cost gap.

GM, Ford, and Chrysler compounded this problem by trying to generate the cash to cover their health care and pension bills by building more cars than the market demanded, and then “selling” them to rental car fleets. When those fleet cars bounced back to used car lots, where they competed with new vehicles that were essentially indistinguishable except for the higher price tag, they helped drive down resale values even more.

So the billions spent on legacy costs are matched by billions more in revenue that the Detroit automakers never saw because of the way they mismanaged supply and demand. This is why the Detroit brands appear to be lagging behind not just in hybrids—and it remains to be seen how durable that market is—but also in terms of the refinement and technology offered in their conventional cars.

What to Build?

The recent spectacle of the Diminished Three CEOs and the UAW president groveling before Congress has us focused now on how Detroit has mishandled adversity. A more important question is why they did so badly when times were good.

Consider GM. In 2000 Rick Wagoner, his senior executive team, and a flock of auto journalists jetted off to a villa in Italy for a seminar on how the GM of the 21st century was going to look. Wagoner and his team talked a lot about how GM was going to gain sales and profit from a “network” of alliances with automakers such as Subaru, Suzuki, Isuzu, and Fiat—automakers in which GM had invested capital. They talked about how they were going to use the Internet to turbocharge the company's performance. And so on. But five years later, all of this was in tatters. Much of the capital GM invested in its alliance partners was lost when the company was forced to sell out at distressed prices. Fiat was the worst of all. GM had to pay Fiat $2 billion to get out of the deal—never mind getting back the $2 billion it had invested up front to buy 20 percent of Fiat Auto. GM said it saved $1 billion a year thanks to the Fiat partnership. Obviously, whatever those gains were, they didn't help GM become profitable.

At least GM didn’t use the cash it rolled up during the 1990s boom to buy junkyards, as Ford did. But GM did see an opportunity in the money to be made from selling mortgages, and plunged its GMAC financing operation aggressively into that market. Of course, GM didn’t see the crash in subprime mortgages coming, either, and now GMAC is effectively bankrupt.

GM’s many critics argue that what they should have done with the money they spent on UAW legacy costs and bad diversification schemes was to develop electric cars and hybrids, instead of continuing to base their U.S. business on the same large, V8-powered, rear-wheel-drive formula they used in the 60s—except that now these vehicles were sold as SUVs instead of muscle cars. And indeed, Detroit did depend too heavily on pickup trucks and SUVs for profits. But they did so for understandable reasons. These were the vehicles that consumers wanted to buy from them. Also, these were the vehicles that government policy encouraged them to build.

When gas was cheap, big gas-guzzling trucks were exactly what GM customers wanted. Consumers didn’t want Detroit’s imitation Toyota Camrys. Toyota was building more than enough real Camrys down in Kentucky. GM made profits of as much as $8,000 per truck—and lost money on many of its cars. Federal fuel economy rules introduced in 1975 forced GM to shrink its cars so that they could average 27.5 miles per gallon. GM did this poorly. (Remember the Chevy Citation or the Cadillac Cimarron?) But federal laws allowed “light trucks” to meet a lower mileage standard. This kink in federal law allowed GM, Ford, and Chrysler to design innovative products that Americans clamored to buy when gas was cheap: SUVs. When Ford launched the Explorer, and GM later launched the Tahoe and the upgraded Suburban, it was the Japanese companies that were envious. In fact, one reason why Toyota is on its way to a loss for 2008—its first annual loss in 70 years—is that it built too many factories in the U.S. in order to build more SUVs and pickups.

One irony of the current situation is that the only vehicles likely to generate the cash GM and the others need right now to rebuild are the same gas-guzzlers that Washington no longer wants them to build. Even New York Times columnist Thomas Friedman has now come to realize that you can’t ask Detroit to sell tiny, expensive hybrids when gasoline is under $2 a gallon. We have two contradictory energy policies: The first demands cheap gas at all costs. The second demands that Detroit should substantially increase the average mileage of its cars to 35 or even 40 miles per gallon across the board. How the Obama administration will square this circle, I don’t know.

Thinking Anew

So now, where are we? GM has become Government Motors. With the U.S. Treasury standing in for the DuPonts of old, GM is going to try to reinvent itself. One challenge among many for GM in this process will be coming to terms with the reality that the U.S. market is too fractured, and has too many volume manufacturers, for any one of them to expect to control the kind of market share and pricing power GM had in its heyday. Today, according to Wardsauto.com, there are ten foreign-owned automakers with U.S. factories that assembled 3.9 million cars, pickups, and SUVs in 2007, before auto demand began to collapse. That’s more than Ford’s and Chrysler’s U.S. production combined.

GM’s efforts to cling to its 1950s self—with the old Sloanian ladder brands of Chevy, Pontiac, Buick, and Cadillac, plus Saturn, Saab, Hummer, and GMC—have led its management into one dark wood of error after another. Since 2001, GM’s marketing strategy has come down to a single idea: zero percent financing. This was the automotive version of the addictive, easy credit that ultimately destroyed the housing market. Cut-rate loans, offered to decreasingly credit-worthy buyers, propped up sales and delayed the day of reckoning. But it didn’t delay it long enough. The house of cards began tumbling in 2005, and I would say it has now collapsed fully.

Between 1995 and 2007, GM managed to earn a cumulative total of $13.5 billion. That's three-tenths of one percent of the total revenues during that period of more than $4 trillion—and those are nominal dollars, not adjusted for inflation. Between 1990 and 2007, GM lost a combined total of about $33 billion. The six unprofitable years wiped out the gains from 12 profitable years, and then some. But old habits die hard. Within hours of clinching a $6 billion government bailout last month, GMAC and GM were back to promoting zero-interest loans.
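Those totals are consistent with each other; a minimal check using the speech's nominal-dollar figures:

    # Cumulative 1995-2007 earnings of $13.5 billion against more than
    # $4 trillion (4,000 billion) in revenue over the same period.
    print(f"{13.5 / 4_000:.2%}")  # 0.34%, roughly three-tenths of one percent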
During the 1980s and 1990s, GM’s leaders refused—and I believe some still refuse—to accept the reality of the presence of so many new automakers in the U.S. market, more than at any time since the 1920s. This hard truth means the company’s U.S. market share going forward isn’t going to return to the 40 percent levels of the mid-1980s, or the 30 percent levels of the 1990s, or even the mid-20 percent levels we have seen more recently. One thing to watch as GM tries to restructure now will be what assumptions the company makes about its share of the U.S. market going forward. If they call for anything higher than 15 percent, I would be suspicious.

Since all of you are now part owners of this enterprise, I urge you to pay close attention: what's about to unfold has no clear precedent in our nation's economic history. The closest parallels I can see are Renault in France, Volkswagen in Germany, and the various state-controlled Chinese automakers. But none of these companies is as large as GM, and none of these companies is exactly a model for what GM should want to become.

As I have tried to suggest, it’s hard enough for professional managers and technicians—who have a clear profit motive—to run an enterprise as complex as a global car company. What will be the fate of a quasi-nationalized enterprise whose “board of directors” will now include 535 members of Congress, plus various agencies of the Executive Branch? As a property owner in suburban Detroit, I can only hope for the best.

Copyright © 2010 Hillsdale College. The opinions expressed in Imprimis are not necessarily the views of Hillsdale College. Permission to reprint in whole or in part is hereby granted, provided the following credit line is used: “Reprinted by permission from Imprimis, a publication of Hillsdale College.” SUBSCRIPTION FREE UPON REQUEST. ISSN 0277-8432. Imprimis trademark registered in U.S. Patent and Trade Office #1563325.
