Industrial espionage has been going on for centuries, but experts agree China’s espionage campaign is on a different scale from anything we’ve seen in history.
It has been going on at least since the 1990s and there is no sign it is letting up. Targets include an incredibly broad range of US companies, embracing civilian as well as military technology, with a special focus on the telecom and Internet sector. In 2012, Gen. Keith Alexander, director of the National Security Agency, called Chinese IP theft “the greatest transfer of wealth in history.” He put the value of cyber-theft of US trade secrets and intellectual property (IP) at a stunning $250 billion a year and called it “our future disappearing in front of us.”
Rather than describe general trends (such as Chinese military officers whose job consists of hacking American companies from 9 to 5 daily, 5 days a week), we thought it would be more evocative to describe ten of the most flagrant cases of IP theft. They range across many industries. What they have in common is that these cases involve gains for Chinese companies—even when the industrial spies are caught and imprisoned. For the US victim companies, they involve loss of markets, loss of jobs, and even loss of life.
1. The Wind Turbine Case
A decade ago, American Superconductor Corporation (AMSC) was a high-tech, high-growth software success story. Spun out of MIT and headquartered in Ayer, Massachusetts, AMSC developed world-class software for controlling wind turbines.
In 2005, the Chinese government made wind power and large-scale wind farms a key strategic objective for China. AMSC partnered with a Chinese maker of the wind turbine hardware, Sinovel, to sell into the Chinese market. AMSC sales rose rapidly into the hundreds of millions of dollars. In 2011, AMSC discovered that Sinovel had an illegal copy of the entire AMSC software code on one of their windmills.
In 2011, AMSC filed the largest-ever IP theft case in a Chinese court, seeking $1.2 billion compensation for its losses. Sinovel cancelled all its business with AMSC and refused to pay the $800 million it owed AMSC. As a result, AMSC had to lay off 600 employees at its Massachusetts headquarters (over 60% of its workforce), its stock market capitalization fell by half, and the company went into survival mode. The China case has gone nowhere. However, the US government filed a criminal case in 2013 charging Sinovel, two Chinese Sinovel executives, and Dejan Karabasevic, a former AMSC engineer who had copied the source code for Sinovel, with charges including conspiracy to commit trade secret theft. In 2018, the company and three individuals were convicted. But all are out of the reach of US justice.
Today, Sinovel still uses stolen AMSC software to power its wind turbines. It is the world’s number two, and China’s number one, provider of wind turbines. AMSC estimates that 20% of the wind turbines deployed in China today use illegal AMSC software.
2. The Oreo White Case
In 2014, federal prosecutors launched an industrial espionage case by showing the jury an Oreo, the famous Nabisco cookie. The white Oreo cream filling uses the chemical titanium dioxide (TiO2) to achieve that brilliant white color. Automotive paint and hundreds of other industrial products use TiO2, making it a highly valuable chemical. American manufacturer Dupont has long had a world-leading proprietary multi-stage process for producing titanium dioxide.
According to a detailed report in Bloomberg Businessweek, Walter Liew was born in Malaysia of Chinese parentage, moved to the US in the early 1980s to study at the University of Oklahoma, and became a US citizen in the 1990s. In 1991, Liew met senior Chinese Communist Party official Luo Gan, who thanked him for being a “patriotic overseas Chinese” and began to send him “directives” describing China’s industrial aims. According to federal prosecutors, “with Mr. Luo’s directives to Mr. Liew, so began a 20-year course of conduct of lying, cheating, and stealing.”
In 1997, Liew found two retired disgruntled American Dupont engineers, Tim Spitler and Robert Maegerle. He won their confidence through charm and gifts. They provided him with information, sketches, and blueprints of Dupont’s titanium dioxide facilities and processes. In 2004, Liew used this information to win a series of contracts totaling $28 million from China’s Pangang Group to build a TiO2 production facility.
In 2012, Dupont found out about Liew’s activities and called the FBI. The conspirators were arrested. In his interrogation, Tim Spitler admitted to receiving a $15,000 payment from Liew. Shortly after, he committed suicide. In 2014, Liew was convicted and sentenced to 15 years for economic espionage, possession of trade secrets, and tax fraud. Maegerle got two and a half years for conspiring to sell trade secrets. Liew’s wife Christina got probation for evidence tampering.
3. The Motorola Case
Hanjuan Jin was born in China, came to the US, where she earned a master’s degree in physics from Notre Dame, and obtained US citizenship. She became a successful engineer working on Motorola’s cellular technology at a time when Motorola was one of the world’s top wireless companies (and a substantial supplier to the Pentagon). Investigations revealed that after eight years with Motorola, Jin had in 2006 taken medical leave, gone to China, and, in violation of the terms of her Motorola employment, pursued a job with Sun Kaisens, a Chinese telecom company that does work for the Chinese military. In 2007, she returned to Chicago and briefly resumed work at Motorola, during which time she was seen leaving the office in the evenings with shopping bags full of documents. She was stopped at O’Hare Airport carrying over a thousand confidential Motorola documents as she attempted to board a one-way flight to China.
In 2012, she was sentenced to four years in prison and a fine of $20,000. At the trial, the judge said: “The most important thing this country can do is protect its trade secrets.”
4. The Iowa Seed Corn Case
In 2014, six Chinese nationals were arrested for attempting to steal genetically modified corn seeds from Dupont and Monsanto experimental farms in Iowa. The conspirators were employed by Chinese conglomerate DBN and its corn seed subsidiary, Kings Nower Seed. The Chinese government puts a high priority on agricultural development to feed its large and growing population.
One of the six conspirators, Mo Yun, was the wife of the founder of DBN, and a second, Mo Hailong aka Robert Mo, was her brother. The US prosecutors charged the conspirators with stealing samples of the “parent” seeds that produced the genetically modified seeds and attempting to smuggle them to China, including one attempt that involved hiding the seeds in a bag of microwave popcorn. Monsanto said it has spent billions of dollars developing advanced corn seed.
In 2016, Mo Hailong was sentenced to 36 months in federal prison. The government also confiscated two farms, one in Iowa and the other in Illinois, purchased by the conspirators. It is unclear what happened to the other five conspirators.
5. The Tappy the Robot Case
When a telecom company allows engineers from its suppliers into its carefully guarded testing labs, those engineers are expected to obey all the rules laid down by their customer. Two engineers from Chinese supplier Huawei used a 2014 visit to T-Mobile’s labs in Seattle to steal information and even a piece of confidential T-Mobile equipment, Tappy the Robot. T-Mobile used Tappy’s fast-moving fingers to test the performance of the smartphones it sold. Not only did the Huawei engineers take photos of Tappy; they stole one of his fingertips.
Huawei apologized and said it fired the two engineers. However, T-Mobile pursued its case, and in 2017 a Seattle jury found that Huawei had misappropriated T-Mobile trade secrets and awarded the wireless operator $4.8 million in damages.
Huawei has a long track record in intellectual property theft. In 2004 Cisco Systems, the market leader in routers, took Huawei to court for stealing its core router software code and using it in Huawei routers. The case was settled confidentially. More recently, when Huawei public statements claimed that the 2004 case did not involve stolen Cisco code, Cisco in 2012 replied by describing the essence of their original complaint this way: “this litigation involved allegations by Cisco of direct, verbatim copying of our source code, to say nothing of our command line interface, our help screens, our copyrighted manuals and other elements of our products.” Routers are the core hardware technology at the heart of the Internet. Huawei routers, widely used in China and Europe, have played a key role in Huawei’s growth into a $95 billion global telecom equipment giant.
6. The CLIFBAW Case
In 2015, the federal government charged six Chinese citizens with stealing wireless communications technology from two Silicon Valley microchip makers, Avago and Skyworks, and launching their own company to sell that technology in China. (Avago is now known as Broadcom.)
According to the indictment, after leaving their employers and taking the stolen technology, one of the six co-conspirators was so cocky that he suggested in an email to his partners that they should name their new company CLIFBAW for “China Lifts Bulk Acoustic Wave.” One of the other co-conspirators wrote in an email: “My work is to make every possible effort to find out about the process’s every possible detail and copy directly to China.”
The six alleged IP thieves were former employees of the two American chipmakers. Three of them had met studying electrical engineering at USC in Los Angeles. The technology they stole, thin-film bulk acoustic wave resonator technology, is used to clean up wireless signals, allowing cellular phone service to work better at greater distances from cell towers. It’s a critical piece of successful wireless systems.
One of the accused, Zhang Hao, was arrested at LA Airport in 2015 and spent eight weeks in Santa Clara jail. As of a 2016 report, he was fighting the case in a federal court in San Jose.
According to the federal government, Avago spent 20 years and $50 million developing this technology. Avago only learned about the theft in 2011, at least four years after the six thieves started seeking backing from Chinese universities for their new business.
7. The Allen Ho TVA/Nuclear Power Case
8. The File Storage and China National Health case
According to a May 2017 Department of Justice press release, Chinese spy Xu Jiaqiang stole clustered file storage technology from a US storage technology company for the benefit of China’s health system, the National Health and Family Planning Commission. He then communicated with two undercover FBI agents and offered to sell them the technology stolen from the unnamed victim company. He explained to the undercover agents how to set up a network of servers and uploaded the proprietary storage software onto the servers. He offered to show them how to edit the software to eliminate any trace of the victim company’s name from the screen prompts. At a meeting in a hotel room on Dec. 7, 2015, Xu showed the undercover agents the proprietary software on his laptop and boasted of multiple other “customers” to whom he had provided the stolen software. He was arrested.
In 2017, Xu pleaded guilty to three counts of economic espionage. In January 2018, Xu was sentenced to five years in prison. “Xu, a Chinese national, is being held accountable for engaging in economic espionage against an American company,” said Acting Assistant Attorney General Dana Boente. “Xu not only stole high tech trade secrets from his U.S. employer – a federal crime – he did so both for his own profit and intending to benefit the Chinese government.”
9. The Unit 61398 Case
In May 2014, federal prosecutors charged five members of the People’s Liberation Army (PLA) of China with cyberhacking their way into the confidential computer files of four US companies and one labor union. The five military men were allegedly members of Unit 61398, a unit of the PLA dedicated to cyberhacking. The companies that were hacked included aluminum producer Alcoa, nuclear power plant producer Westinghouse, solar cell manufacturer SolarWorld, Allegheny Technologies Inc., and labor union United Steel Workers. SolarWorld said that a key proprietary technology for making solar cells more efficient was stolen in this hack and turned over to a Chinese competitor.
Attorney General Eric Holder said the case was “the first ever charges against a state actor for this type of hacking…The range of trade secrets and other sensitive business information stolen in this case is significant and demands an aggressive response."
Since none of the PLA hackers were in the US, the indictment was not followed by trial. One cyber-sleuth has said that after the exposure of Unit 61398, China moved its dedicated cyber-hacking unit out of the PLA and into a division of Chinese intelligence.
10. The Great Firewall Case
The Great Wall of China was built to keep out invaders. The so-called Great Firewall is a network of software tools China uses to control what Internet information and websites Chinese citizens have access to. You would think that if a totalitarian Communist nation wanted to control what its citizens could see and read, it would carefully construct its own software.
But even here, China enjoys stealing American IP. In 2009, Solid Oak Software of California announced that parts of China’s Green Dam-Youth Escort software, which China required be loaded onto every PC sold in the country to control access to pornography and other sites deemed unsuitable by the government, were stolen directly from Solid Oak source code.
According to Solid Oak CEO Brian Milburn, days after he announced his intent to sue Green Dam’s makers for stealing source code, unknown hackers began attacking the Solid Oak computer network with denial-of-service attacks, forcing the Solid Oak team to abandon their own network and use Dropbox to exchange files. “It felt like they had a plan...if they could just put the company out of business, the lawsuit goes away. They didn’t need guys with guns or someone to break my kneecaps,” Milburn told Bloomberg News. After three years of litigation, the case was settled out of court and the cyberattacks stopped as mysteriously as they had begun.

Jeff Ferry is Chief Economist at the Coalition for a Prosperous America, responsible for building CPA's resources of original research and analysis regarding trade and industrial policies to re-establish America's prosperity and world-leading economic growth.
Today in history, on July 2, 1776, the Second Continental Congress passed the Richard Henry Lee resolution, which called for ties between Great Britain and the American states to be “totally dissolved.”
The monumental vote was unanimous among the twelve colonies that voted, with New York abstaining. Each state had one vote, regardless of size.
Initially reluctant to support this measure, South Carolina pledged to join with the majority provided that there would be no dissent from a vote of independence. New York had not yet received instructions from its legislature to vote in favor, and British Admiral Richard Howe amassed a huge fleet of ships around Long Island. Still, New York’s state government gave its delegates permission to support this cause a week later. However, several states had already declared independence prior to this point.
By July 2, some still clung to hope for a peaceful reconciliation with Great Britain, and delegates such as John Dickinson, one of the most famous Americans of the era, refused to sign the document. Others felt that if they authorized a vote of independence, their state would be treated as Massachusetts had.
Following the agitation in Boston, Britain dissolved the autonomy of the Massachusetts government, closed Boston’s port, imposed a quartering law, and besieged the colony.
Upon the decision to secede from the British crown, a somber mood swept through Independence Hall as the delegates realized their actions would be considered treasonous. Despite the prospect of death by hanging, the decision had been made. Historian Mercy Otis Warren wrote this account of the event:
“Their transactions might have been legally styled treasonable, but loyalty had lost its influence, and power its terrors. Firm and disinterested, intrepid and united, they stood ready to submit to the chances of war, and to sacrifice their devoted lives to preserve inviolate, and to transmit to posterity, the inherent rights of men, conferred on all by the God of nature, and the privileges of Englishmen, claimed by Americans from the sacred sanctions of compact.”
In response to the resolution, John Adams believed that July 2nd would be celebrated as the most extraordinary holiday in America, and wrote to his wife Abigail:
“The second day of July 1776, will be the most memorable epoch in the history of America. I am apt to believe that it will be celebrated by succeeding generations as the great anniversary festival. It ought to be commemorated as the day of deliverance, by solemn acts of devotion to God Almighty. It ought to be solemnized with pomp and parade, with shows, games, sports, guns, bells, bonfires, and illuminations, from one end of this continent to the other, from this time forward forever more.”
While Adams did not realize the date of the Declaration of Independence’s acceptance two days later would be the more recognizable and celebrated occasion, he fully grasped the significance of the day and its eternal importance.
Ironically, Adams would later attempt to diminish the significance of the moment, while Jefferson, as he became the leading opponent of the Adams administration, received much adulation for the Declaration’s philosophical influence.
Text of Lee’s resolution
Resolved, That these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.
That it is expedient forthwith to take the most effectual measures for forming foreign Alliances.
That a plan of confederation be prepared and transmitted to the respective Colonies for their consideration and approbation.

Dr. Dave Benner speaks and writes on topics related to the United States Constitution, founding principles, and the early republic. Dave is also the author of Compact of the Republic: The League of States and the Constitution and The 14th Amendment and the Incorporation Doctrine. He writes articles for The Tenth Amendment Center and the Abbeville Institute. He also produces historical content at DaveBenner.com and on YouTube.
Most Americans abhor the violence that spilled over from the weeks-long protests sparked by the death of George Floyd at the hands of Minneapolis policemen late last month.
While sympathetic to the legitimate desire to prosecute those responsible, they prefer the orderly processes of law rather than a rush to judgment. In that telling difference of opinion, we gain insight into what has inspired groups like Black Lives Matter and Antifa, both of which either advocate or carry out violence against both the police and innocent bystanders.
Last week BLM spokespersons sought to justify the nationwide looting and burning of persons and property by referencing the Boston Tea Party of Dec. 16, 1773. That claim deserves to be taken seriously insofar as Americans have for generations looked for guidance to those who won our freedom and independence. But it does not hold up.
As every schoolchild knows, or should know, on that night Americans dressed as Native Americans threw 342 tea chests from three British ships into Boston Harbor, which led to two momentous consequences: the British closed down the harbor, and Americans rallied to the revolutionary cause. No one was hurt; the protesters were simply refusing to pay a tax levied on tea without their consent.
How anyone can see a parallel between that highly focused act of defiance 247 years ago and the interminable rioting that has destroyed the property of thousands and shattered the lives of 150 is beyond me.
And consider this: while we have been subjected for three months to soothing public service advertisements on TV urging us to stay indoors, wear masks and practice social distancing in public, we might have benefited more from thoughtful admonitions to be careful and protect what is dear to us during the seemingly endless rioting.
It is also worth noting that years of fighting the British army did not result in a reign of terror against those dissenting from the patriot cause. In those days, persons loyal to the crown actually moved to Canada!
But the French Revolution of 1789, which began as a movement to establish the rule of law while retaining the monarchy, ultimately descended into mass murder as the royal family, and hordes of aristocrats and clergy, were sent to the guillotine. The so-called Committee of Public Safety sought to “cleanse” French society of all reactionary elements but wound up killing many of its own; the revolutionary government was eventually overthrown by Napoleon Bonaparte.
That sobering contrast of two revolutions cautions us to appreciate how our ancestors managed to combine revolution with the rule of law. Americans had a decisive advantage over the French. They had been governing themselves already for nearly a century and a half, with a rising middle class that abhorred tyranny but also respected human life, liberty and property. France was a rigidly stratified society with no history of self-government and no middle class.
Today the American middle class, marked by its industrious way of life and its adherence to constitutional government, remains the glue that holds our society together. That makes the ongoing attacks on small businesses, owned by persons of all colors who are neither racists nor rioters, all the more painful and unjust.
Another difference between the American and French revolutions was the former’s commitment to freedom of religion and the latter’s frontal assault on religion, culminating in the official establishment of the Religion of Reason. The hard left in America today regard religion as the main source of bigotry, notwithstanding many Christians’ battle against slavery and racial segregation, most notably the Rev. Dr. Martin Luther King Jr. of the Southern Christian Leadership Conference.
In a mockery of religion, BLM recently called upon white citizens in Bethesda, Maryland and elsewhere to get down on their knees and apologize for their alleged “privilege,” while others were told to wash black persons’ feet in a ritual of repentance for their “original sin” of racism. Last week leading congressional Democrats took a knee for Floyd while wearing African garb, which was admired by some but denounced by others as “cultural appropriation.”
In a little-noticed provision of BLM’s Mission Statement, the “nuclear family” of two parents and children is attacked as incompatible with the communal society it seeks, now exemplified, however absurdly, by the Capitol Hill Autonomous Zone (CHAZ), later renamed the Capitol Hill Organized Protest (CHOP), in Seattle, Washington, a six-block area casually dismissed as “the summer of love” by the city’s mayor.
There is both less and more to BLM and Antifa than meets the eye with their fraudulent claims to the American Revolution and a substitute religion that seeks to de-legitimize a whole race of people.

Richard Reeb is a retired political science, philosophy and journalism instructor at Barstow Community College, and the author of the book Taking Journalism Seriously: ‘Objectivity’ as a Partisan Cause.
The most famous work of twentieth-century political philosophy is John Rawls’ A Theory of Justice (1971). The most controversial part of this book is the “difference principle”: “Social and economic inequalities are to satisfy two conditions: (a) they are to be attached to positions and offices open to all under conditions of fair equality of opportunity; and (b) they are to be to the greatest benefit of the least advantaged members of society.” More exactly, it is the second part of the principle that has generated controversy. It says that inequalities are allowed only if they help the least well-off.
Rawls defends the difference principle in this way. People do not deserve to benefit from their superior talents or social opportunities more than those less talented or with fewer opportunities. Why should LeBron James’s natural talent for basketball enable him to earn much more money than players who practice as much as he does but aren’t as talented? James is lucky to be so talented, and, Rawls maintains, luck should not determine the distribution of goods in society. Instead, inequalities of income and wealth should be allowed only to the extent that they benefit the least well-off class in a given society.
Rawls is talking here about modern nation states, considered as separate societies. Doesn’t this raise a problem for his theory? Why has Rawls restricted the difference principle to the least well-off class in a given society? Why not extend the scope of the principle to cover the least well-off class in the entire world? Even the worst-off in a prosperous society like the United States are much better off than those in most other countries. Isn’t birth in the United States and not in a poor country also a matter of luck? If so, don’t anti-luck arguments require us to extend the difference principle worldwide?
Rawls rejects this extension. In The Law of Peoples, he says that the scope of justice is confined to individual societies. He thinks that countries have only limited humanitarian obligations to less well-off nations; those lucky enough to be born in prosperous national circumstances need not share their gains with others. Why not? Is his restriction arbitrary?
The philosopher Thomas Nagel has written an influential article, “The Problem of Global Justice,” that defends Rawls’ restriction of the difference principle to national societies. The article has a valuable but unintentional consequence: it makes clear that the entire basis of Rawls’ political philosophy rests on an unfounded premise.
Nagel gives the rationale for extending the difference principle worldwide, against which he will be arguing, in this way: "The accident of being born in a poor rather than a rich country is as arbitrary a determinant of one’s fate as the accident of being born in a poor rather than a rich family in the same country".
Nagel counters this rationale by asking a fundamental question: why should inequality mandate any redistribution at all, once people have risen above the level of absolute deprivation? Even if you do not deserve to benefit from your superior talents, that fact in itself conveys no claim on these talents to others. Libertarians will agree; but Nagel now makes his Rawlsian move. Members of certain groups can sometimes have stricter obligations to each other than they owe to strangers. You owe more to your parents than you do to a next-door neighbor who isn’t related to you.
In a similar way, Nagel says, citizens of a nation are bound together. They have an obligation to obey their country’s laws; and, if they live in a democracy, they share responsibility for enacting these laws. In Rousseau’s language, they form the "general will." Too much inequality interferes with these common bonds; thus we have egalitarian obligations to our fellow citizens. We don’t owe these obligations to citizens of other countries, because we aren’t bound to them in the same way. Justice, in this view, is not a "cosmopolitan" virtue, owed to anyone in the world; it is a "political" virtue that applies only to those subject to a common sovereignty. "The important point for our purposes is that Rawls believes that this moral principle against arbitrary inequalities is not a principle of universal application. . . . Rather, in his theory the objection to arbitrary inequalities gets a foothold only because of the societal context. What is objectionable is that we should be fellow participants in a collective enterprise of coercively imposed legal and political institutions that generate such arbitrary inequalities. . . . One might even say that we are all participants in the general will. A sovereign state is not just a cooperative enterprise for mutual advantage."
If people are bound to each other in the way that Rawls and Nagel suggest, then they might very well have stronger obligations toward their fellow members than to others. But a gap remains in the argument. Nagel has not shown that people have reason to establish the type of coercive enterprise that he describes. Indeed, from a libertarian standpoint, we can go further. People who wish to establish themselves as a tightly knit social body have no right to compel the unwilling to enlist in their enterprise; nor are they at liberty to prevent people in their society from seceding.
If this restriction is kept in mind, Rawls’s difference principle becomes harmless. All that it amounts to is that if people wish to establish a certain kind of egalitarian institution, they are free to do so. So what?
Nagel fails to see this obvious point because he regards the sovereign state as the only way, under modern conditions, for people to escape from a Hobbesian war of all against all, and he also regards the modern democratic state as the best type of sovereign state. But why should we accept these contentions? Couldn’t people establish a strictly limited state or rely on private protection agencies to ensure social peace? If not, why not?

David Gordon is a senior fellow at the Ludwig von Mises Institute. He was educated at UCLA, where he earned his PhD in intellectual history. He is the author of Resurrecting Marx: The Analytical Marxists on Exploitation, Freedom, and Justice, The Philosophical Origins of Austrian Economics, An Introduction to Economic Reasoning, and Critics of Marx. He is also editor of Secession, State, and Liberty and co-editor of H.B. Acton's Morals of Markets and Other Essays.
Fake recounts in Florida, fires leaving many homeless or dead in California, the House of Representatives invaded by entitled whelps born after 1980 — it's too much when considered with all the corruption and greed in government.
Depressing sewage at every level of government.
Along comes futurist George Gilder to show us that change is coming. Not the filth that Obama bequeathed us, but real, positive technological change. Ripe for change is the old Internet. Within the next ten years we can throw away the old message-based I'net for a new one based on the security and privacy of block-chain architecture. The 'net is the fountain of many of our woes. The spread of misinformation, fake news and hypnosis by smartphone are a few of the abuses of the I'net. Good news, everyone: Google, Facebook and Twitter will shrink in a more secure environment.
So what's going to happen to Google? Google became the 800-pound gorilla (of your dreams) by giving everything away. Free searching of a massive global database, free email, free text, pictures, Artificial Intelligence — all free. So how do they make their profits? You. They know YOU — your interests, your friends, what you eat and when you eat. Google knows these things because they gave you some of their stuff. They sell YOU (your searches, etc.) to anyone willing to pay them. Think of it: billions upon billions of bytes of goop on any of the billions of Internet users.
Google doesn't need to hack your laptop or your CitiBank account. You've given them enough personal goop already. It's all in a huge hopper (database) with other tidbits and goop from other Google users across the planet. Your passwords on various web sites don't really protect you forever from hackers bent on stealing whatever they can from you: bank deposits, credit card numbers, birth dates, mother's maiden name, etc. Maybe you'll find out that Experian or Facebook or Wendy's has been hacked. Maybe you won't find out for another six months.
Passwords, knowledge of family names, and so on are needed for authentication of who you are on the mail-address-driven Internet of today. For a computer network to “know” that it's really YOU punching that keyboard somewhere in the cloud, you must “prove” it. Here Gilder invokes the work of one man: in a proof published in 1931, Kurt Gödel showed that testing the truth of any mathematical system requires a view from outside that system.
Noting that even the much-vaunted Artificial Intelligence processes cannot reach beyond what the human mind programs into them — no matter how ingenious the programmer nor how great the computer's power, Gilder realized that only the human mind can create what he called a “surprise.” With Gilders' help (and 87 years down the road) we begin to see the implications of Gödel's insight: fear of AI is like fear of 'climate change'; the Internet is a souped-up messaging tool whose days are numbered — by weak security. Spam-ridden and leaky, it's dominated by a few giants like Google. Now we come to the uplifting part: When we (like George Gilder) apply Gödel's proof to known economies, we find that only Capitalism offers a path outside its box. That path is called “invention,” “innovation,” “creativity,” or “free enterprise.”
Along comes futurist George Gilder to show us that change is coming. Not the filth that Obama bequeathed us, but real, positive technological change. Ripe for change is the old Internet. Within the next ten years we can throw away the old message-based I'net for a new one based on the security and privacy of block-chain architecture.
The 'net is the fountain of many of our woes. The spread of misinformation, fake news and hypnosis by smartphone are a few of the abuses of the I'net. Good news, everyone: Google, Facebook and Twitter will shrink in a more secure environment.
So what's going to happen to Google?  Google became the 800-pound gorilla (of your dreams) by giving everything away. Free searching of a massive global database, free email, free text, pictures, Artificial Intelligence — all free. So how do they make their profits?  You. They know YOU — your interests, your friends, what you eat and when you eat. Google knows these things because they gave you some of their stuff. They sell YOU (your searches, etc.) to anyone willing to pay them. Think of it: billions upon billions of bytes of goop on any of the billions of Internet users.
Google doesn't need to hack your laptop or your CitiBank account. You've given them enough personal goop already. It's all in a huge hopper (database) with other tidbits and goop from other Google users across the planet.
Your passwords on various web sites don't really protect you forever from hackers bent on stealing whatever they can from you: bank deposits, credit card numbers, birth dates, mother's maiden name, etc. Maybe you'll find out that Experian or Facebook or Wendy's has been hacked. Maybe you won't find out for another 6 months.
Passwords, knowledge of family names, and so on, are needed for authentication of who you are on the mail-address-driven Internet of today. For a computer network to “know” that it's really YOU punching that keyboard somewhere in the cloud, you must “prove” it.
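The “prove it” step described above is, at bottom, a shared-secret check: the server keeps a derived copy of your secret and asks you to reproduce it. Here is a minimal Python sketch of salted password verification — an illustration of the general mechanism, not any particular site's implementation (the function names and parameters are invented for the example):

```python
import hashlib
import hmac
import os

def enroll(password):
    """Store a random salt and a salted hash -- never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password, salt, digest):
    """The server 'knows' it's you only if you can reproduce the secret."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("correct horse battery staple")
assert authenticate("correct horse battery staple", salt, digest)
assert not authenticate("password123", salt, digest)
```

Note that everything still hinges on a secret that can be stolen or guessed — which is exactly the weakness of the address-and-password Internet the passage is describing.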
Gilder quotes the work of one man. In work made public in 1931, Kurt Gödel proved that testing the truth of any mathematical system requires a view from outside that system.
Noting that even the much-vaunted Artificial Intelligence processes cannot reach beyond what the human mind programs into them — no matter how ingenious the programmer nor how great the computer's power — Gilder realized that only the human mind can create what he called a “surprise.” With Gilder's help (and 87 years down the road) we begin to see the implications of Gödel's insight: fear of AI is like fear of 'climate change'; the Internet is a souped-up messaging tool whose days are numbered — by weak security. Spam-ridden and leaky, it's dominated by a few giants like Google.
Now we come to the uplifting part: When we (like George Gilder) apply Gödel's proof to known economies, we find that only Capitalism offers a path outside its box. That path is called “invention,” “innovation,” “creativity,” or “free enterprise.”
“The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” Thomas Jefferson considered these words, the Tenth Amendment, to be “the foundation of the Constitution.” In reality, the Tenth Amendment is much more than that. It is nothing less than the foundation of the entire American experiment, based on the principles of the American Revolution.
When Jefferson penned the Declaration of Independence in 1776, he wrote that the 13 American colonies were “absolved from all allegiance to the British crown and that all political connection between them and the state of Great Britain is, and ought to be, totally dissolved.” Contrary to popular belief, the erstwhile colonies were not declaring their independence because of new, radical notions, but because their historic right to self-government had been continually violated by the British Parliament with support from the king.
When the colonies seceded from Great Britain, they did not announce that they were forming a new country. They merely asserted that the colonies were now “free and independent states.” In today’s usage, a state typically refers to a subordinate political unit within a larger government, but at the time of the Declaration, the word state literally meant “nation.” Additionally, the term “congress” at that time signified a meeting of ambassadors from independent nations. So, when the colonies met “in Congress,” they were meeting as representatives of sovereign states.
The states continued to jealously guard their independence while writing and adopting the Articles of Confederation, officially uniting them politically. Article II of that document, coming second only to the article that officially named the union, asserted that “Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated to the United States.”
After their victory over the British, some believed that the government under the Articles was not sufficiently strong to handle the common business of the states. In order to address these perceived weaknesses, a convention of delegates from the states convened in May, 1787. As the task of revising the Articles quickly evolved into writing an entirely new governing document, the delegates to the Constitutional Convention were challenged with giving the new government enough power to carry out a few, specific functions while maintaining the federal, decentralized nature of the union.
The enduring importance of state sovereignty was clearly communicated at the beginning of the Convention, when the delegates rejected the Virginia Plan, which would have created a powerful national government and reduced the power of the states. In rejecting this nationalist plan, the new states were attempting to ensure that they had not just fought off one oppressive central government only to institute another.
As the Convention progressed, the principle of self-government provided the framework for understanding the new governing document. When the Constitution was approved for recommendation to the states, its proponents returned home to assure their states that their hard-won independence was not in danger. In one of the most famous addresses in favor of the Constitution, Pennsylvania’s James Wilson explained that every power “which is not given (to the federal government) is reserved (to the states).”
In Federalist 45 James Madison confirmed the continuing independence of the states, saying “The powers delegated by the proposed Constitution to the federal government are few and defined. Those which are to remain in the State governments are numerous and indefinite.” The Constitution’s very nature, its proponents argued, was one of limited central power and the expansive autonomy of the state governments.
Despite these assurances, many of the Constitution’s opponents were unconvinced. James Lincoln, a delegate to South Carolina’s ratification convention, charged that the Constitution “changes, totally changes, the form of your present government.” Lincoln continued, “What have you been contending for these ten years past? Liberty! What is liberty? The power of governing yourselves.” Lincoln and his fellow opponents to the Constitution concluded that, in its unamended state, the document left too much room for the federal government to interfere with this historical right of the states.
The solution to this disagreement came from the ratification conventions of several states, which suggested the inclusion of amendments that would clearly delineate the restrictions on the federal government’s power. Massachusetts was the first state to recommend amendments and, as the right to self-government was the keystone of the American political tradition, an amendment protecting that right was placed first in its list of recommendations. The convention advised, “First, that it be explicitly declared that all Powers not expressly delegated by the aforesaid Constitution are reserved to the several States to be by them exercised.”
The conventions of New Hampshire, New York, North Carolina, Rhode Island, South Carolina and Virginia included similar language in their ratification documents. These were clearly not idle suggestions and it is highly doubtful that the Constitution would have been ratified in several of these states, including the influential state of Virginia, had this clarification of the states’ powers not been agreed upon.
The Constitution was ratified and went into effect in 1789 and the Bill of Rights, the first ten amendments to the Constitution, was ratified in 1791. Within the Bill of Rights lay the Tenth Amendment, which officially codified into the new federal government the most essential historical feature of American politics, the right of unimpeded self-rule through state and local governments.
Through all of this history, the American people had been particularly distrustful of centralized power. Indeed, of all the problematic features of government, Americans viewed intrusions upon their right to self-government as the height of tyranny and the fiercest foe of liberty. As governments in the United States and around the world have become more centralized, the fears of our forefathers have been realized in increasingly terrible ways.
Today, our impulse is to address the problems created by centralized power by using the central power itself, an impulse that is a little like attempting to cure a snake bite by letting the snake bite you again. To recapture the liberties we once possessed we must understand, as our founders did, that the solutions to our current problems will not be found through nationalized power. The only task that centralized power effectively completes is the oppression of its people.
To move forward towards liberty we must look backward to the Tenth Amendment. In its institutional safeguards against centralized power is found the only true protection of our liberties.
An article in the September issue of The New Yorker makes the case that education is a fundamental right guaranteed by the Constitution.
Public schools in Detroit are failing to educate students, just as public schools are failing in many large cities throughout the country. A case in the federal court system, Gary B. v. Snyder, filed by Public Counsel and Sidley Austin LLP on behalf of a class of Detroit students, argues that students in Detroit public schools who failed to learn how to read were denied their due process and equal protection rights under the Constitution’s Fourteenth Amendment. The case was dismissed by a federal district court in Michigan in June, but has been appealed to the Sixth Circuit Court of Appeals in Cincinnati.
The plaintiffs, as well as the writer of the piece in The New Yorker (Jill Lepore), cite the Supreme Court case of Plyler v. Doe (1982). In his book The Schoolhouse Gate: Public Education, the Supreme Court, and the Battle for the American Mind (Pantheon, 2018), Justin Driver, a law professor at the University of Chicago, maintains that this case “rests among the most egalitarian, momentous, and efficacious constitutional opinions that the Supreme Court has issued throughout its entire history.”
The case came about after Texas education laws were changed in 1975 to allow the state to withhold funding from local school districts to educate the children of illegal aliens. The Court, by a 5-4 vote, ruled that the revised law violated the equal protection clause of the Fourteenth Amendment. The law “severely disadvantaged the children of illegal aliens” by “denying them the right to an education.”
But of course, the law didn’t deny the children of illegals the right to an education; it denied them the right to an education at taxpayers’ expense. Their parents could have educated them at home, hired a tutor, or sent them to a private school. The fact that the parents didn’t have the ability to educate their children at home and couldn’t afford to hire a tutor or send their children to a private school is immaterial.
But regardless of what the Supreme Court said, education is not a constitutional right. [Emphasis mine — ed.]
The Constitution doesn’t grant rights; the Constitution guarantees rights. The Constitution specifically guarantees certain natural rights, imposes limits on the government’s power, and explicitly declares that all powers not delegated to the federal government by the Constitution are reserved to the states or the people.
The United States was set up as a federal system of government where the states, through the Constitution, granted a limited number of powers to a central government. As James Madison succinctly explained in Federalist No. 45:
The powers delegated by the proposed Constitution to the Federal Government, are few and defined. Those which are to remain in the State Governments are numerous and indefinite. The former will be exercised principally on external objects, as war, peace, negotiation, and foreign commerce; with which last the power of taxation will for the most part be connected. The powers reserved to the several States will extend to all the objects, which, in the ordinary course of affairs, concern the lives, liberties and properties of the people; and the internal order, improvement, and prosperity of the State.
There are about thirty enumerated congressional powers listed throughout the Constitution. Most of those powers are found in the eighteen paragraphs of Article I, Section 8. One concerns commerce. One concerns naturalization and bankruptcies. One concerns post offices and post roads. One concerns copyrights and patents. One concerns federal courts. One concerns maritime crimes. One concerns the governance of the District of Columbia. Four of them concern taxes and money. Six concern the militia and the military. The last one—the “elastic” clause—gives Congress the power “to make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers.”
Elsewhere in the Constitution we read that Congress may also admit new states into the Union, propose amendments to the Constitution, regulate national elections, establish courts inferior to the Supreme Court, direct the location of the place for the trial of a crime not committed within a state, declare the punishment for treason, provide the manner in which the public acts and records in each state are accepted by the others, dispose of and regulate the territory or other property of the United States, give the states consent to lay imposts or duties on imports or exports, and provide for the case of the removal, death, resignation, or inability of the president or vice president.
Everything else is reserved to the states—even without the addition of the Bill of Rights and its Tenth Amendment.
But what about the Fourteenth Amendment?
The Fourteenth Amendment, ratified in 1868, says that “no State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.”
I would ask the same question: What about the Fourteenth Amendment? What does the Fourteenth Amendment have to do with the education of the children of illegal aliens? Absolutely nothing, of course. They are not citizens, they are not being deprived of life, liberty, or property, and they are not being denied the equal protection of the laws.
Although all states have provisions in their constitutions for public education, they do not have to have such provisions. Education is not a natural right. But whether they do or don’t have such provisions, education is strictly and entirely a state matter.
The Constitution doesn’t mention education, public schools, teachers, teachers’ unions, private schools, tutors, students, student loans, FAFSA forms, Pell Grants, Title IX, classrooms, desks, special education, curriculum, Head Start, Common Core, math and science initiatives, the Higher Education Act, the Elementary and Secondary Education Act, school breakfast or lunch programs, teacher education, teacher certification, research grants to colleges and universities, special-education mandates, school buses, bilingual-education mandates, school accreditation, charter schools, educational vouchers, mandatory attendance laws, graduation rates, the No Child Left Behind Act, busing to achieve racial desegregation, Race to the Top funds, or a Department of Education.
And neither does the Constitution authorize the federal government to spend one penny on education.
If there is no constitutional right to receive basic necessities like housing, clothing, and food, then there is certainly no constitutional right to receive a government-provided or government-funded education.

Laurence M. Vance is a policy advisor for the Future of Freedom Foundation. He is the author of over a dozen books, including The Revolution That Wasn’t. Visit his website: www.vancepublications.com.
Perhaps the most astonishing thing about modern medicine is just how very modern it is. More than 90 percent of the medicine being practiced today did not exist in 1950. Two centuries ago medicine was still an art, not a science at all. As recently as the 1920s, long after the birth of modern medicine, there was usually little the medical profession could do, once disease set in, other than alleviate some of the symptoms and let nature take its course. It was the patient’s immune system that cured him—or that didn’t.
It was only around 1930 that the power of the doctor to cure and ameliorate disease began to increase substantially, and that power has continued to grow nearly exponentially ever since. This new power to extend life, interacting with the deepest instinctual impulse of all living things—to stay alive—has had consequences that our society is only beginning to comprehend and address. Since ancient times, for example, doctors have fought death with all the power at their disposal and for as long as life remained. Today, the power to heal has become so mighty that we increasingly have the technical means to extend indefinitely the shadow, while often not the substance, of life. When doctors should cease their efforts and allow death to have its inevitable victory is an issue that will not soon be settled, but it cannot be much longer evaded.
Then there is the question of how to pay for modern medicine, the costs of which are rising faster than any other major national expenditure. In 1930, Americans spent $2.8 billion on health care—$23 per person and 3.5 percent of the Gross Domestic Product. In 2015 we spent about $3 trillion—$9,536 per person and 15 percent of GDP. Adjusted for inflation, this means that per capita medical costs in the United States have risen by a factor of 30 in 85 years.
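The factor-of-30 claim can be checked with quick back-of-the-envelope arithmetic (the CPI figures below are approximate assumptions, roughly 16.7 for 1930 and 237 for 2015, not numbers from the article):

```python
# Rough check of the per-capita growth claim, 1930 vs. 2015.
per_capita_1930 = 23      # dollars per person, 1930 (from the article)
per_capita_2015 = 9_536   # dollars per person, 2015 (from the article)

# Assumed CPI levels: ~16.7 in 1930, ~237 in 2015.
cpi_ratio = 237.0 / 16.7             # general inflation factor, ~14x

real_1930 = per_capita_1930 * cpi_ratio   # 1930 spending in 2015 dollars
growth = per_capita_2015 / real_1930      # real per-capita growth factor
```

Under these assumptions the real growth factor works out to roughly 29, consistent with the article's "factor of 30."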
Consider the 1980s, when medical expenses in the U.S. increased 117 percent. Forty-three percent of the rise was due to general inflation. Ten percent can be attributed to the American population growing both larger and older (as it still is). Twenty-three percent went to pay for technology, treatments, and pharmaceuticals that had not been available when the decade began—a measure of how fast medicine has been advancing. But that still leaves 24 percent of the increase unaccounted for, and that 24 percent is due solely to an inflation peculiar to the American medical system itself.
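As a sanity check, the four shares listed above can be confirmed to account for the entire increase, and converted into percentage points of the 117-point rise:

```python
# Shares of the 1980s increase in U.S. medical spending, as percentages
# of the increase (not of total spending), per the figures in the text.
shares = {
    "general inflation": 43,
    "population growth and aging": 10,
    "new technology and treatments": 23,
    "medical-sector-specific inflation": 24,
}
assert sum(shares.values()) == 100   # the four factors cover the whole rise

# Percentage points of the 117-point increase attributable to each factor:
points = {k: 117 * v / 100 for k, v in shares.items()}
```

So roughly 28 of the 117 points — the medical-sector-specific share — is the "peculiar" inflation the next paragraph sets out to explain.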
Whenever one segment of an economy exhibits, year after year, inflation above the general rate, and when there is no constraint on supply, then either a cartel is in operation or there is a lack of price transparency—or both, as is the case with American medical care.
So it is clear that there is something terribly wrong with how health care is financed in our country. And a consensus on how to fix the problem—how to provide Americans the best medicine money can buy for the least amount of money that will buy it—has proved elusive. But the history of American medical care, considered in the light of some simple but ineluctable economic laws, can help point the way. For it turns out that the engines of medical inflation were deeply, and innocently, inserted into the health care system just as the medical revolution began.
It was the Greeks—the inventors of the systematic use of reason that 2,000 years later gave rise to modern science—who first recognized that disease is caused by natural, not supernatural, forces. They reduced medicine to a set of principles, usually ascribed to Hippocrates but actually a collective work. In the second century, the Greek physician Galen, a follower of the Hippocratic School, wrote extensively on anatomy and medical treatment. Many of these texts survived and became almost canonical in their influence during the Middle Ages. So it is fair to say that after classical times, the art of medicine largely stagnated. Except for a few drugs—such as quinine and digitalis—and an improved knowledge of gross anatomy, the physicians practicing in the U.S. at the turn of the nineteenth century had hardly more at their disposal than the Greeks had in ancient times.
In 1850 the U.S. had 40,755 people calling themselves physicians, more per capita than the country would have in 1970. Few of this legion had formal medical education, and many were unabashed charlatans. This is not to say that medical progress was standing still. The stethoscope was invented in 1816. The world’s first dental school opened in Baltimore in 1839. The discovery of anesthesia in the 1840s was immensely important—although while it made extended operations possible, overwhelming postoperative infections killed many patients, so most surgery remained a last-ditch effort. Another major advance was the spread of clean water supplies in urban areas, greatly reducing epidemics of waterborne diseases, such as typhoid and cholera, which had ravaged cities for centuries.
Then finally, beginning in the 1850s and 1860s, it was discovered that many diseases were caused by specific microorganisms, as was the infection of wounds, surgical and other. The germ theory of disease, the most powerful idea in the history of medicine, was born, and medicine as a science was born with it. Still, while there was a solid scientific theory underpinning medicine, most of its advances in the late nineteenth and early twentieth centuries were preventive rather than curative. Louis Pasteur and others, using their new knowledge of microorganisms, could begin developing vaccines. Rabies fell in 1885, and several diseases that were once the scourge of childhood, such as whooping cough and diphtheria, followed around the turn of the century. Vitamin deficiency diseases, such as pellagra, began to decline a decade later. When the pasteurization of milk began to be widely mandated around that time, the death rate among young children plunged. In 1891, the death rate for American children in the first year of life was 125.1 per 1,000. By 1925 it had been reduced to 15.8 per 1,000, and the life expectancy of Americans as a whole began a dramatic rise.
One of the most fundamental changes caused by the germ theory of disease, one not foreseen at all, was the spread of hospitals for treating the sick. Hospitals have an ancient history, but for most of that history they were intended for the very poor, especially those who were mentally ill or blind or who suffered from contagious diseases such as leprosy. Anyone who could afford better was treated at home or in nursing facilities operated by a private physician. Worse, until rigorous antiseptic and later aseptic procedures were adopted, hospitals were a prime factor in spreading, not curing, disease. Thus, until the late nineteenth century, hospitals were little more than a place for the poor and the desperate to die. In 1873, there were only 149 hospitals in the entire U.S. A century later there were over 7,000, and they had become the cutting edge of both clinical medicine and medical research.
But hospitals had a financial problem from the very beginning of scientific medicine. By their nature they are extremely labor intensive and expensive to operate. Moreover, their costs are relatively fixed and not dependent on the number of patients being served. To help solve this problem, someone in the late 1920s had a bright idea: hospital insurance. The first hospital plan was introduced in Dallas, Texas, in 1929. The subscribers, some 1,500 schoolteachers, paid six dollars a year in premiums, and Baylor University Hospital agreed to provide up to 21 days of hospital care to any subscriber who needed it.
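The cash-flow logic of the Baylor plan is simple arithmetic (the daily hospital cost used below is a hypothetical figure for illustration, not a number from the source):

```python
# The 1929 Baylor plan: guaranteed premium income regardless of utilization.
subscribers = 1_500
annual_premium = 6                      # dollars per subscriber per year
pool = subscribers * annual_premium     # $9,000 of guaranteed annual cash flow

# At an assumed (hypothetical) daily cost per patient, the pool funds:
daily_cost = 5.0
covered_days = pool / daily_cost        # hospital-days the premiums pay for
```

Whatever the true daily cost, the hospital's key gain was the numerator: $9,000 arrived up front every year whether or not any teacher was ever admitted.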
While this protected schoolteachers from unexpected hospital costs in exchange for a modest fee, the driving purpose behind the idea was to improve the cash flow of the hospital. Thus the scheme had an immediate appeal to other medical institutions, and it quickly spread. Before long, groups of hospitals were banding together to offer plans that were honored at all participating institutions, giving subscribers a choice of which hospital to use. This became the model for Blue Cross, which first operated in Sacramento, California, in 1932.
Although called insurance, these hospital plans were unlike any other insurance policies. Previously, insurance had always been used to protect only against large, unforeseeable losses, and came with a deductible. But the first hospital plans didn’t work that way. Instead of protecting against catastrophe, they paid all costs up to a certain limit. The reason, of course, is that they were instituted not by insurance companies, but by hospitals, and were primarily designed to generate steady demand for hospital services and guarantee a regular cash flow.
In the early days of hospital insurance, this fundamental defect was hardly noticeable. Twenty-one days was a very long hospital stay, even in 1929, and with the relatively primitive medical technology then available, the daily cost of hospital care per patient was roughly the same whether the patient had a baby, a bad back, or a brain tumor. Today, on the other hand, this “front-end” type of hospital insurance simply would not cover what most of us need insurance against: the serious, long-term, expensive-to-cure illness. In the 1950s, major medical insurance, which does protect against catastrophe rather than misfortune, began to provide that sort of coverage. Unfortunately it did not replace the old plans in most cases, but instead supplemented them.
The original hospital insurance also contained the seeds of two other major economic dislocations, unnoticed in the beginning, that have come to loom large. The first dislocation is that while people purchased hospital plans to be protected against unpredictable medical expenses, the plans only paid off if the medical expenses were incurred in a hospital. As a result, cases that could be treated on an outpatient basis instead became much more likely to be treated in the hospital—the most expensive form of medical care.
The second dislocation was that hospital insurance did not provide indemnity coverage, which is when the insurance company pays for a loss and the customer decides how best to deal with it. Rather than indemnification, the insurance company provided service benefits. In other words, it paid the bill for services covered by the policy, whatever the bill was. As a result, there was little incentive for the consumer of medical services to shop around. With someone else paying, patients quickly became relatively indifferent to the cost of medical care.
These dislocations perfectly suited the hospitals, which wanted to maximize the amount of services they provided and thereby maximize their cash flow. If patients are indifferent to the costs of medical services they buy, they are much more likely to buy more of them and the cost of each service is likely to go up. There is no price competition to keep prices in check.
Predictably, the medical profession began to lobby in favor of retaining this system. In the mid-1930s, as Blue Cross plans spread rapidly around the country, state insurance departments moved to regulate them and force them to adhere to the same standards as regular insurance plans. Had hospital insurance come to be regulated like other insurance, those offering it would have begun acting more like insurance companies, and the economic history of modern American medicine might have taken a very different turn. But that didn’t happen, largely because doctors and hospitals, by and for whom the plans had been devised in the first place, moved to prevent it from happening. The American Hospital Association and the American Medical Association worked hard to exempt Blue Cross from most insurance regulation, offering in exchange to enroll anyone who applied and to operate on a nonprofit basis.
The Internal Revenue Service, meanwhile, ruled that these companies were charitable organizations and thus exempt from federal taxes. Freed from taxes and from the regulatory requirement to maintain large reserve funds, Blue Cross and Blue Shield (a plan that paid physicians’ fees on the same basis as Blue Cross paid hospital costs) came to dominate the market in health care insurance, holding about half of the policies outstanding by 1940. In order to compete, private insurance companies were forced to model their policies along Blue Cross and Blue Shield lines. Thus hospitals came to be paid almost always on a cost-plus basis, receiving the cost of the services provided plus a percentage to cover the costs of invested capital. Any incentive for hospitals to be efficient and reduce costs vanished.
In recent years, hospital use has been falling steadily as the population has gotten ever more healthy and surgical procedures have become far less traumatic. The result is a steady increase in empty beds. There were over 7,000 hospitals in the U.S. in 1975, compared to about 5,500 today. But that reduction has not been nearly enough. Because of the cost-plus way hospitals are paid, they don’t compete for patients by means of price, which would force them to retrench and specialize. Instead they compete for doctor referrals, and doctors want lots of empty beds to ensure immediate admission and lots of fancy equipment, even if the hospital just down the block has exactly the same equipment. The inevitable result, of course, is that hospital costs on a per-patient per-day basis have skyrocketed.
Doctors, meanwhile, were paid for their services according to “reasonable and customary” charges. In other words, doctors could bill whatever they wanted to as long as others were charging roughly the same. The incentive to tack a few dollars on to the fee became strong. The incentive to take a few dollars off, in order to increase market share, ceased to exist. As more and more Americans came to be covered by health insurance, doctors were no longer even able to compete with one another.
During World War II, another feature of the American health care system with large financial implications for the future developed: employer-paid health insurance. With twelve million working-age men in the armed forces and the economy in overdrive, the American labor market was tight in the extreme. But wartime wage and price controls prevented companies from competing for available talent by means of increased wages and salaries. They had to compete with fringe benefits instead, and free health insurance was tailor-made for this purpose.
The IRS ruled that the cost of employee health care insurance was a tax-deductible business expense, and in 1948 the National Labor Relations Board ruled that health benefits were subject to collective bargaining. Companies had no choice but to negotiate with unions about them, and unions fought hard to get them.
The problem was that company-paid health insurance further increased the distance between the consumer of medical care and the purchaser of medical care. When individuals have to pay for their own health insurance, they at least have an incentive to buy the most cost-effective plan available, given their particular circumstances. But beginning in the 1940s, a rapidly increasing number of Americans had no rational choice but to take whatever health care plan their employers chose to provide.
There is another aspect of employer-paid health insurance, unimagined when the system first began, that has had pernicious economic consequences in recent years. Insurers base the rates they charge, naturally enough, on the total claims they expect to incur. Auto insurers determine this by looking at what percentage of a community’s population had auto accidents in recent years and how much repairs cost in that community. This is known as community rating. They also look at the individual driver’s record, the so-called experience rating. Most insurance policies are based on a combination of community and experience ratings. And for most forms of insurance, the size of the community that is rated is quite large, eliminating the statistical anomalies that skew small samples. For example, a person isn’t penalized because he happens to live on a block with a lot of lousy drivers. But employer-paid health insurance is an exception. It can be based on the data for each company’s employees, allowing insurance companies to cherry-pick businesses with healthy employees, driving up the cost of insurance for everyone else. The effects of this practice are clear: 65 percent of workers without health insurance work for companies with 25 or fewer employees.
By 1960, as the medical revolution was quickly gaining speed, the economically flawed private health care financing system was fully in place. Then two other events added to the gathering debacle.
In 1965, government entered the medical market with Medicare for the elderly and Medicaid for the poor. Both doctors and hospitals had fought tooth and nail to prevent what they called “socialized medicine” from gaining a foothold in the U.S. As a result of their strident opposition, when the two programs were finally enacted, they were structured much like Blue Cross and Blue Shield, only with government picking up much of the tab. And when Medicare and Medicaid proved a bonanza for health care providers, their vehement opposition quickly faded away. The two new systems greatly increased the number of people who could afford advanced medical care, and the incomes of medical professionals soared, roughly doubling in the 1960s.
But perhaps the most important consequence of these new programs was the power over hospitals they gave to state governments. State governments became the largest single source of funds for virtually every major hospital in the country, giving them the power to influence—or even dictate—the policy decisions made by these hospitals. As a result, these decisions were increasingly made for political, rather than medical or economic, reasons. To take one example, closing surplus hospitals or converting them to specialized treatment centers became much more difficult. Those adversely affected—the local neighborhood and hospital workers unions—would naturally mobilize to prevent it. Society as a whole, which stood to gain, would not.
Finally, there was the litigation explosion of the last 50 years. For every medical malpractice suit filed in the U.S. in 1969, 300 were filed in 1990. While reforms at the state level (notably in Texas) have reduced the number, lawsuits have sharply driven up the cost of malpractice insurance—a cost passed directly on to patients and their insurance companies. Neurosurgeons, even with excellent records, can pay as much as $300,000 a year for coverage. Doctors in less lawsuit-prone specialties are also paying much higher premiums and are forced to order unnecessary tests and perform unnecessary procedures to avoid being second-guessed in court.
Given this short history, it followed as the night follows day that medical costs began to rise over and above inflation, population growth, and the cost of medical advances. The results for the country as a whole are plain to see. In 1930 we spent 3.5 percent of American GDP on health care; in 1950, 4.5 percent; in 1970, 7.3 percent; in 1990, 12.2 percent. Today we spend 15 percent. American medical care over this period has saved the lives of millions who could not have been saved before—life expectancy today is 78.6 years. It has relieved the pain and suffering of tens of millions more. But it has also become a monster that is devouring the American economy.
Is there a way out?
One possible answer, certainly, is a national health care service, such as that pioneered in Great Britain after World War II. But our federal government already runs three single-payer systems—Medicare, the Veterans Health Administration, and the Indian Health Service—each of which is in a shambles, noted for fraud, waste, and corruption. Why would we want to turn over all of American medicine to those who have proved themselves incompetent to run large parts of it?
A far better and cheaper alternative would be to reform the economics of the present system.
The most important thing to do, by far, is to require medical service providers to make public their inclusive prices for all procedures. Most hospitals keep their prices hidden in order to charge more when they can, such as with the uninsured. But some facilities do post their prices. The Surgery Center of Oklahoma, for instance, does so on its website. A knee replacement there will cost you $15,499, a mastectomy $6,505, a rotator cuff repair $8,260.
Once prices are known and can be compared, competition—capitalism’s secret weapon—will immediately drive prices towards the low end, draining hundreds of billions of dollars in excess charges out of the system. Posting prices will also force hospitals to become more efficient and innovative, in order to stay competitive.
Any politician who pontificates about reforming health care without talking about making prices public is carrying water for one or more of the powerful lobbyists that have stymied real reform, such as the American Hospital Association, the American Medical Association, and the health workers unions.
Second, we should reform how malpractice is handled. We should get rid of the so-called American rule, where both sides pay their own legal expenses regardless of outcome, and adopt the English rule—employed in the rest of the common-law world—where the loser pays the expenses of both sides.
Third, we need to ensure that the consumers of medical care—you and me—care about the cost of medical care. Getting patients to shop for lower-cost services is vital.
A generous health insurance policy more or less covers everything from a sniffle to a heart transplant. It shouldn’t. An insurance policy that covers routine care isn’t even an insurance policy, properly speaking—it is a very expensive pre-payment plan that jacks up premiums. Just as oil changes are not covered by automobile insurance, annual flu shots and scraped knees should not be covered by medical insurance. One way to achieve this would be for employers to provide major medical insurance plus a health savings account to take care of routine health care. If the money in the account is not spent on health care, it would be rolled over into the employee’s 401(k) account at the end of the year, giving him an incentive to shop wisely for routine medical care.
Finally, we need to get the practitioners of modern medicine to recognize an age-old reality: there is no cure for old age itself. Maybe someday we’ll be able to 3-D print a new body and have the data in our brain downloaded to it. But for the time being, when the body begins to break down systemically, we should let nature take its course.
There are enormous forces arrayed against these economically sensible reforms. Defenders of the status quo are the most potent lobbyists in Washington and the state capitals. This is not to mention the leftist proponents of single payer, who favor whatever will increase the power and scope of government. So it won't be an easy fight. But at least we have one thing on our side—Stein's law, named after the famous economist Herbert Stein: "If something cannot go on forever, it will stop."

John Steele Gordon was educated at Millbrook School and Vanderbilt University. His articles have appeared in numerous publications, including Forbes, National Review, Commentary, the New York Times, and the Wall Street Journal. He is a contributing editor at American Heritage, where he wrote the "Business of America" column for many years, and currently writes "The Long View" column for Barron's. He is the author of several books, including Hamilton's Blessing: The Extraordinary Life and Times of Our National Debt, The Great Game: The Emergence of Wall Street as a World Power, and An Empire of Wealth: The Epic History of American Economic Power.
Europe is today in the midst of a debate on the future of the European Union. It is not the first one: even before the Maastricht Treaty was passed in 1992, political leaders were debating where the EU, or the European Community as it was then called, was heading. Should it pursue an "ever closer union," or return to its founding principles? Questions like these divided Europe then as they do now.
This was the situation in which the British prime minister Margaret Thatcher found herself on September 20, 1988, when she stepped in front of a crowd at the College of Europe in Bruges. “I decided that the time had come to strike out against what I saw as the erosion of democracy by centralization and bureaucracy, and to set out an alternative view of Europe’s future,” she would later write in her memoirs The Downing Street Years.
The result was today's infamous yet magnificent 'Bruges speech,' which was far from anti-EU: it was a stark warning against Brussels, and an attempt to save the EU in the face of federalists demanding ever more integration. This week, we are celebrating the thirtieth anniversary of this speech. And, as it turns out, it has stood the test of time remarkably well. Indeed, many of the warnings that Thatcher put forth are even truer today (as you can read in our new study).

The European Heritage

In Thatcher's vision, the European heritage is of crucial importance. She teaches us that Europe can be proud of its history. While wars played too large a role in its past, Europe is still the continent in which the ideal of individual liberty prevailed before anywhere else. It is the continent which brought forth many of the greatest innovations, artistic pieces, literary works, and intellectuals the world has ever seen.
Great Britain has played an instrumental part in the European story, Thatcher makes clear: "Our links to the rest of Europe, the continent of Europe, have been the dominant factor in our history." Britain has contributed mightily to European history and its values with the Magna Carta, the Glorious Revolution, and many other major steps on the path to freedom. But Britain has also benefitted from its links to mainland Europe, for instance having "borrowed that concept of the rule of law which marks out a civilized society from barbarism."
This special relationship, says Thatcher, must be retained. Today this is even truer: on the eve of Brexit, it is of the utmost importance to keep this mutual understanding between the two sides intact, regardless of whether Britain is in- or outside of the EU.
Despite the millennia-long European history of (much) success, we need to remember that it is the long history of Europe that is important, not the EU (the latter being only sixty years old): “Europe is not the creation of the Treaty of Rome. Nor is the European idea the property of any group or institution.” Not everyone who criticizes the EU is automatically anti-European — an important point in today’s world in which Europe and the EU are most of the time used synonymously.
Rather, the European Union is a tool which can be used to promote the values Europeans defended so often in the twentieth century: the EU "is not an end in itself," but rather "a practical means by which Europe can ensure the future prosperity and security of its people."

A Europe of Free Enterprise and Free Trade

What is the way to future prosperity? For Thatcher, it is "to deregulate and remove the constraints on trade." It means "action to free markets, action to widen choice, action to reduce government intervention." Instead of increasing centralization and regulatory efforts, Europe should remain a champion of free enterprise. History, and the Soviet Union in particular, should be proof enough that centralized decision-making does not work.
The EU should not be pro-trade only internally, however. It should be globally oriented: "Europe never would have prospered and never will prosper as a narrow-minded, inward-looking club," she warned. Free trade with the outside world, something the EU lacks to this day (even as it forces all member states to comply with its common trade policy), is one of the most important competences of Brussels: "we must ensure that our approach to world trade is consistent with the liberalisation we preach at home."
For this, a strong relationship with America is needed. For Margaret Thatcher, the US was indeed to a certain extent part of Europe, "in the sense that she shares a common heritage of civilised values and a love of liberty." The two sides of the Atlantic are a natural fit, since they share their core values with one another. In the face of today's trade wars and aggressions on both sides, it would be all the worse if this relationship were squandered in just a few months' time.

Against Eurotopia

If there is any argument with which the prime minister hit home continuously, it was her stark opposition to a centralized federal state, ruled by the Brussels bureaucracy. The idea of a United States of Europe is a utopia that "never comes, because we know we should not like it if it did." Instead of politicians trying to create a single European identity, the mantra should be unity in diversity: "Europe will be stronger precisely because it has France as France, Spain as Spain, Britain as Britain, each with its own customs, traditions and identity. It would be folly to try to fit them into some sort of identikit European personality."
In Margaret Thatcher's opinion, the EU should remain a supranational organization based on voluntary cooperation between sovereign states, rather than become one federal state. She may have felt alone with this opinion when she presented it thirty years ago. But today, amid another debate on the future of the European Union, and even farther down the road of the "ever closer union," we should keep in mind what Lady Thatcher said, and "raise the flag of national sovereignty, free trade and free enterprise — and fight." Indeed, as the prime minister wrote in her memoirs, "if there was ever an idea whose time had come and gone it was surely that of the artificial mega-state."

Kai Weiss is an International Relations student and works for the Austrian Economics Center and the Hayek Institute. He is a Mises University alumnus.
How, when, and why has the United States now arrived at the brink of a veritable civil war?
Almost every cultural and social institution — universities, the public schools, the NFL, the Oscars, the Tonys, the Grammys, late-night television, public restaurants, coffee shops, movies, TV, stand-up comedy — has been not just politicized but also weaponized.
Donald Trump’s election was not so much a catalyst for the divide as a manifestation and amplification of the existing schism.
We are now nearing a point comparable to 1860, and perhaps past 1968. Left–Right factionalism is increasingly fueled by geography — always history’s force multiplier of civil strife. Red and blue states ensure that locale magnifies differences that were mostly manageable during the administrations of Ford, Carter, Reagan, the Bushes, and Clinton.
What has caused the United States to split apart so rapidly?
Globalization had the unfortunate effect of undermining national unity. It created new iconic billionaires in high tech and finance, and their subsidiaries of coastal elites, while hollowing out the muscular jobs largely in the American interior.
Ideologies and apologies accumulated to justify the new divide. In a reversal of cause and effect, losers, crazies, clingers, American “East Germans,” and deplorables themselves were blamed for driving industries out of their neighborhoods (as if the characters out of Duck Dynasty or Ax Men turned off potential employers). Or, more charitably to the elites, the muscular classes were too racist, xenophobic, or dense to get with the globalist agenda, and deserved the ostracism and isolation they suffered from the new “world is flat” community. London and New York shared far more cultural affinities than did New York and Salt Lake City.
Meanwhile, the naturally progressive, more enlightened, and certainly cooler and hipper transcended their parents’ parochialism and therefore plugged in properly to the global project. And they felt that they were rightly compensated for both their talent and their ideological commitment to building a better post-American, globalized world.
One cultural artifact was that as our techies and financiers became rich, as did those who engaged in electric paper across time and space (lawyers, academics, insurers, investors, bankers, bureaucratic managers), the value of muscularity and the trades was deprecated. That was a strange development. After all, prestige cars, kitchen upgrades, gentrified home remodels, and niche food were never more in demand by the new elite. But who exactly laid the tile, put the engine inside the cars, grew the arugula, or put slate on the new hip roof?
In this same era, a series of global financial shocks, from the dot-com bust to the more radical 2008 near–financial meltdown, reflected a radical ongoing restructuring in American middle-class life, characterized by stagnant net income, family disintegration, and eroding consumer confidence.
No longer were youth so ready to marry in their early twenties, buy a home, and raise a family of four or five. Compensatory ideology made the necessary adjustments to explain the economic doldrums and began to characterize what was impossible first as undesirable and later as near toxic. Pajama Boy sipping hot chocolate in his jammies, and the government-subsidized Life of Julia profile, became our new American Gothic.
High Tech
But such electronic narcotics did not hide the fact that in terms of economics the lifestyles of their ancestors were eroding. The new normal was two parents at work, none at home; renting as often as buying; an eight-year rather than three-year car loan; fewer grandparents around the corner for babysitting or to assist when ill; and consumer service defined as hearing taped messages of an hour before reaching a helper in India or Vietnam.
Higher education surely helped split the country in two. In the 1980s, the universities embraced two antithetical agendas, both costly and reliant on borrowed money. On the one hand, campuses competed for scarcer students by styling themselves as Club Med–type resorts with costly upscale dorms, tony student-union centers, lavish gyms, and an array of in loco parentis social services. The net effect was to make colleges responsible not so much for education, but more for shielding now-fragile youth from the supposed reactionary forces that would buffet them after graduation.
History became a melodramatic game of finding sinners and saints, rather than shared tragedy. Standards fell to accommodate poorly prepared incoming students.
But if campus materialism was at odds with classroom socialism, few seemed to notice. Instead, the idea grew up that one had no need to follow concretely the consequences of his abstract ideology. Or even worse, one’s hard-left politics — the louder and more strident the better — became a psychological means of squaring the circle of denouncing the West while being affluent and enjoying the material comforts of the good life.
Universities grew not just increasingly left-wing but far more intolerant than they were during the radicalism of the Sixties — but again in an infantile way. Speakers were shouted down to prove social-justice fides. “Studies” courses squeezed out philosophy and Latin. …old norms were arbitrary and discriminatory constructs anyway.
The curriculum now was recalibrated as therapeutic; it no longer aimed to challenge students by demanding wide reading, composition skills, and mastery of the inductive method. The net result was the worst of all possible worlds: An entire generation of students left college with record debt, mostly ignorant of the skills necessary to read, write, and argue effectively, lacking a general body of shared knowledge — and angry. They were often arrogant in their determination to actualize the ideologies of their professors in the real world. A generation ignorant, arrogant, and poor is a prescription for social volatility.
Frustration and failure were inevitable, more so when marriage and home-owning in a stagnant economy were now encumbered by $1 trillion in student loans. New conventional wisdom recalibrated the nuclear family and suburban life as the font of collective unhappiness. The result was the rise of the stereotypical single 28-year-old — furious at an unfair world that did not appreciate his unique sociology or environmental-studies major, stuck in his parents’ basement or garage, working enough at low-paying jobs to pay for entertainments, if his room, board, and car were subsidized by his aging and retired parents.
Immigration was recalibrated hand-in-glove by progressives who wanted a new demographic to vote for leftist politicians and by Chamber of Commerce conservatives who wished an unlimited pool of cheap unskilled labor. The result was waves of illegal, non-diverse immigrants who arrived at precisely the moment when the old melting pot was under cultural assault.
The old black–white dichotomy in the United States was being recalibrated as “diversity,” or in racialist terms as a coalition now loosely and often grossly inexactly framed as non-white versus the (supposedly shrinking) white majority. Compensatory politics redefined illegal immigration once it was clear that not just a few million but perhaps one day 20 million potential new voters would remake the Electoral College.
Difference was now no longer a transitory prelude to assimilation but a desirable permanent and separatist tribalism, even as it became harder to define exactly what ethnic and racial difference really was in an increasingly intermarried society. We soon went from the buffoonery of a wannabe Native American Ward Churchill to the psychodrama of an Islamist, anti-Semitic Linda Sarsour.
The Obama Project
We forget especially the role of Barack Obama. He ran as a Biden Democrat renouncing gay marriage, saying, “I believe marriage is between a man and a woman. I am not in favor of gay marriage.” Then he “evolved” on the question and created a climate in which to agree with this position could get one fired.
He promised to close the border and reduce illegal immigration: “We will try to do more to speed the deportation of illegal aliens who are arrested for crimes, to better identify illegal aliens in the workplace. We are a nation of immigrants. But we are also a nation of laws.”
Then he institutionalized the idea that to agree with that now-abandoned agenda was a career-ender.
Obama vowed to "work across the aisle" and was elected on the impression that he was a "bridge builder" who would heal racial animosity, restore U.S. prestige abroad, and reignite the economy after the September 2008 meltdown.
Instead, he weaponized the IRS, the FBI, the NSC, the CIA, and the State Department and redefined the deep state as if it were the Congress, but with the ability to make and enforce laws all at once. “Hope and Change” became “You didn’t build that!”
President Obama, especially in his second term, soon renounced much of what he had run on. He raised taxes, stagnated what would have been a natural recovery, weighed in on hot-button racialized criminal cases, advanced a radical social agenda, and polarized the country along lines of difference.
Again, Obama most unfortunately redefined race as a white-versus-nonwhite binary, in an attempt to build a new coalition of progressives, on the unspoken assumption that the clingers were destined to slow irrelevance and with them their retrograde and obstructionist ideas.
In other words, the Left could win most presidential elections of the future, as Obama did, by writing off the interior and hyping identity politics on the two coasts.
The Obama administration hinged on leveraging these sociocultural, political, and economic schisms even further. The split pitted constitutionalism and American exceptionalism and tradition on the one side versus globalist ecumenicalism and citizenry of the world on the other.
Of course, older divides — big government, high taxes, redistributionist social-welfare schemes, and mandated equality of result versus limited government, low taxes, free-market individualism, and equality of opportunity — were replayed, but sharpened in these new racial, cultural, and economic landscapes.
What Might Bring the United States Together Again?
A steady 3 to 4 percent growth in annual GDP would trim a lot of cultural rhetoric. Four percent unemployment would make more Americans valuable and give them leverage with employers. Measured, meritocratic, diverse, and legal immigration would help to restore the melting pot.
Reforming the university would help too, mostly by abolishing tenure, requiring an exit competence exam for the BA degree (a sort of reverse, back-end SAT or ACT exam), and ending government-subsidized student loans that promote campus fiscal irresponsibility and a curriculum that ensures future unemployment for too many students.
We need to develop a new racial sense that we are so intermarried and assimilated that cardboard racial cutouts are irrelevant.
Religious and spiritual reawakening is crucial. The masters of the universe of Silicon Valley did not, as promised, bring us new-age tranquility, but rather only greater speed and intensity to do what we always do. Trolling, doxing, and phishing were just new versions of what Jesus warned about in the Sermon on the Mount.
Spiritual transcendence is the timeless water of life; technology is simply the delivery pump. We confused the two. That water can be delivered ever more rapidly does not mean it ever changes its essence. High tech has become the great delusion.
Finally, …our new racialism must be seen as a reactionary and dangerous return to the 19th-century norm of judging our appearance on the outside as more valuable than who we are on the inside.
As we pass through life, we learn by experience. We look back on our behaviour when we were young and think how foolish we were. In the same way our family, our community and our town endeavour to avoid the mistakes made by our predecessors.
The experiences of the human race have been recorded, in more or less detail, for some four thousand years. If we attempt to study such a period of time in as many countries as possible, we seem to discover the same patterns constantly repeated under widely differing conditions of climate, culture and religion. Surely, we ask ourselves, if we studied calmly and impartially the history of human institutions and development over these four thousand years, should we not reach conclusions which would assist to solve our problems today? For everything that is occurring around us has happened again and again before.
No such conception ever appears to have entered into the minds of our historians. In general, historical teaching in schools is limited to this small island. We endlessly mull over the Tudors and the Stewarts, the Battle of Crecy, and Guy Fawkes. Perhaps this narrowness is due to our examination system, which necessitates the careful definition of a syllabus which all children must observe.
I remember once visiting a school for mentally-handicapped children. "Our children do not have to take examinations," the headmaster told me, "and so we are able to teach them things which will be really useful to them in life."
However this may be, the thesis which I wish to propound is that priceless lessons could be learned if the history of the past four thousand years could be thoroughly and impartially studied. In these two articles, which first appeared in Blackwood's Magazine, I have attempted briefly to sketch some of the kinds of lessons which I believe we could learn. My plea is that history should be the history of the human race, not of one small country or period.
The Fate of Empires
"The only thing we learn from history," it has been said, "is that men never learn from history", a sweeping generalisation perhaps, but one which the chaos in the world today goes far to confirm. What then can be the reason why, in a society which claims to probe every problem, the bases of history are still so completely unknown? Several reasons for the futility of our historical studies may be suggested. First, our historical work is limited to short periods — the history of our own country, or that of some past age which, for some reason, we hold in respect.
Second, even within these short periods, the slant we give to our narrative is governed by our own vanity rather than by objectivity. If we are considering the history of our own country, we write at length of the periods when our ancestors were prosperous and victorious, but we pass quickly over their shortcomings or their defeats. Our people are represented as patriotic heroes, their enemies as grasping imperialists, or subversive rebels. In other words, our national histories are propaganda, not well-balanced investigations.
Third, in the sphere of world history, we study certain short, usually unconnected, periods, which fashion at certain epochs has made popular. Greece 500 years before Christ, and the Roman Republic and early Roman Empire are cases in point. The intervals between the 'great periods' are neglected. Recently Greece and Rome have become largely discredited, and history tends to become increasingly the parochial history of our own countries.
To derive any useful instruction from history, it seems to me essential first of all to grasp the principle that history, to be meaningful, must be the history of the human race. For history is a continuous process, gradually developing, changing and turning back, but in general moving forward in a single mighty stream. Any useful lessons to be derived must be learned by the study of the whole flow of human development, not by the selection of short periods here and there in one country or another. Every age and culture is derived from its predecessors, adds some contribution of its own, and passes it on to its successors. If we boycott various periods of history, the origins of the new cultures which succeeded them cannot be explained.
Physical science has expanded its knowledge by building on the work of its predecessors, and by making millions of careful experiments, the results of which are meticulously recorded. Such methods have not yet been employed in the study of world history. Our piecemeal historical work is still mainly dominated by emotion and prejudice.
If we desire to ascertain the laws which govern the rise and fall of empires, the obvious course is to investigate the imperial experiments recorded in history, and to endeavour to deduce from them any lessons which seem to be applicable to them all. The word ‘empire’, by association with the British Empire, is visualised by some people as an organisation consisting of a home-country in Europe and ‘colonies’ in other continents. In this essay, the term ‘empire’ is used to signify a great power, often called today a superpower. Most of the empires in history have been large land blocks, almost without overseas possessions. We possess a considerable amount of information on many empires recorded in history, and of their vicissitudes and the lengths of their lives, for example:
The nation                             Duration, years   Dates of rise and fall
Assyria                                247               859-612 B.C.
Persia (Cyrus & his descendants)       208               538-330 B.C.
Greece (Alexander & his successors)    231               331-100 B.C.
Roman Republic                         233               260-27 B.C.
Roman Empire                           207               27 B.C.-180 A.D.
Arab Empire                            246               634-880
Mameluke Empire                        267               1250-1517
Ottoman Empire                         250               1320-1570
Spain                                  250               1500-1750
Romanov Russia                         234               1682-1916
Britain                                250               1700-1950
This list calls for certain comments.
(1) The present writer is exploring the facts, not trying to prove anything. The dates given are largely arbitrary. Empires do not usually begin or end on a certain date. There is normally a gradual period of expansion and then a period of decline. The resemblance in the duration of these great powers may be queried. Human affairs are subject to many chances, and it is not to be expected that they could be calculated with mathematical accuracy.
(2) Nevertheless, it is suggested that there is sufficient resemblance between the life periods of these different empires to justify further study.
(3) The division of Rome into two periods may be thought unwarranted. The first, or republican, period dates from the time when Rome became the mistress of Italy, and ends with the accession of Augustus. The imperial period extends from the accession of Augustus to the death of Marcus Aurelius. It is true that the empire survived nominally for more than a century after this date, but it did so in constant confusion, rebellions, civil wars and barbarian invasions.
(4) Not all empires endured for their full life-span. The Babylonian Empire of Nebuchadnezzar, for example, was overthrown by Cyrus, after a life duration of only some seventy-four years.
(5) An interesting deduction from the figures seems to be that the duration of empires does not depend on the speed of travel or the nature of weapons. The Assyrians marched on foot and fought with spears and bow and arrows. The British used artillery, railways and ocean-going ships. Yet the two empires lasted for approximately the same periods. There is a tendency nowadays to say that this is the jet age, and consequently there is nothing for us to learn from past empires. Such an attitude seems to be erroneous.
(6) It is tempting to compare the lives of empires with those of human beings. We may choose a figure and say that the average life of a human being is seventy years. Not all human beings live exactly seventy years. Some die in infancy, others are killed in accidents in middle life, some survive to the age of eighty or ninety. Nevertheless, in spite of such exceptions, we are justified in saying that seventy years is a fair estimate of the average person's expectation of life.
(7) We may perhaps at this stage be allowed to draw certain conclusions:
(a) In spite of the accidents of fortune, and the apparent circumstances of the human race at different epochs, the periods of duration of different empires at varied epochs show a remarkable similarity.
(b) Immense changes in the technology of transport or in methods of warfare do not seem to affect the life-expectation of an empire.
(c) The changes in the technology of transport and of war have, however, affected the shape of empires. The Assyrians, marching on foot, could only conquer their neighbours, who were accessible by land — the Medes, the Babylonians, the Persians and the Egyptians.
The British, making use of ocean-going ships, conquered many countries and sub-continents, which were accessible to them by water — North America, India, South Africa, Australia and New Zealand — but they never succeeded in conquering their neighbours, France, Germany and Spain.
But, although the shapes of the Assyrian and the British Empires were entirely different, both lasted about the same length of time.
III. The Human Yardstick
What then, we may ask, can have been the factor which caused such an extraordinary similarity in the duration of empires, under such diverse conditions, and such utterly different technological achievements?
One of the very few units of measurement which have not seriously changed since the Assyrians is the human ‘generation’, a period of about twenty-five years. Thus a period of 250 years would represent about ten generations of people. A closer examination of the characteristics of the rise and fall of great nations may emphasise the possible significance of the sequence of generations.
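The arithmetic behind this yardstick is easy to check. The following sketch (my own illustration, not part of the essay) takes the eleven durations from the table above and computes their average, then expresses it in the essay's assumed 25-year generations:

```python
# Durations (in years) of the empires listed in the table above,
# in the same order: Assyria ... Britain.
durations = [247, 208, 231, 233, 207, 246, 267, 250, 250, 234, 250]

GENERATION_YEARS = 25  # the essay's assumed length of one human generation

average = sum(durations) / len(durations)
print(f"average duration: {average:.0f} years")          # ~238 years
print(f"in generations:   {average / GENERATION_YEARS:.1f}")  # ~9.5 generations
```

The average works out to roughly 238 years, or about nine and a half generations — close to, though slightly under, the round figure of 250 years (ten generations) used in the text.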
Let us then attempt to examine the stages in the lives of such powerful nations.
Again and again in history we find a small nation, treated as insignificant by its contemporaries, suddenly emerging from its homeland and overrunning large areas of the world. Prior to Philip (359-336 B.C.), Macedon had been an insignificant state to the north of Greece. Persia was the great power of the time, completely dominating the area from Eastern Europe to India. Yet by 323 B.C., thirty-six years after the accession of Philip, the Persian Empire had ceased to exist, and the Macedonian Empire extended from the Danube to India, including Egypt.
This amazing expansion may perhaps be attributed to the genius of Alexander the Great, but this cannot have been the sole reason; for although after his death everything went wrong — the Macedonian generals fought one another and established rival empires — Macedonian pre-eminence survived for 231 years.
In the year A.D. 600, the world was divided between two superpower groups as it has been for the past fifty years between Soviet Russia and the West. The two powers were the eastern Roman Empire and the Persian Empire. The Arabs were then the despised and backward inhabitants of the Arabian Peninsula. They consisted chiefly of wandering tribes, and had no government, no constitution and no army. Syria, Palestine, Egypt and North Africa were Roman provinces, Iraq was part of Persia.
The Prophet Mohammed preached in Arabia from A.D. 613 to 632, when he died. In 633, the Arabs burst out of their desert peninsula, and simultaneously attacked the two super-powers. Within twenty years, the Persian Empire had ceased to exist. Seventy years after the death of the Prophet, the Arabs had established an empire extending from the Atlantic to the plains of Northern India and the frontiers of China.
At the beginning of the thirteenth century, the Mongols were a group of savage tribes in the steppes of Mongolia. In 1211, Genghis Khan invaded China. By 1253, the Mongols had established an empire extending from Asia Minor to the China Sea, one of the largest empires the world has ever known.
The Arabs ruled the greater part of Spain for 780 years, from 712 A.D. to 1492. (780 years back in British history would take us to 1196 and King Richard Coeur de Lion.) During these eight centuries, there had been no Spanish nation, the petty kings of Aragon and Castile alone holding on in the mountains.
The agreement between Ferdinand and Isabella and Christopher Columbus was signed immediately after the fall of Granada, the last Arab kingdom in Spain, in 1492. Within fifty years, Cortez had conquered Mexico, and Spain was the world's greatest empire.
Examples of the sudden outbursts by which empires are born could be multiplied indefinitely. These random illustrations must suffice.
These sudden outbursts are usually characterised by an extraordinary display of energy and courage. The new conquerors are normally poor, hardy and enterprising and above all aggressive. The decaying empires which they overthrow are wealthy but defensive-minded. In the time of Roman greatness, the legions used to dig a ditch round their camps at night to avoid surprise. But the ditches were mere earthworks, and between them wide spaces were left through which the Romans could counter-attack. But as Rome grew older, the earthworks became high walls, through which access was given only by narrow gates. Counterattacks were no longer possible. The legions were now passive defenders.
But the new nation is not only distinguished by victory in battle, but by unresting enterprise in every field. Men hack their way through jungles, climb mountains, or brave the Atlantic and the Pacific oceans in tiny cockle-shells. The Arabs crossed the Straits of Gibraltar in A.D. 711 with 12,000 men, defeated a Gothic army of more than twice their strength, marched straight over 250 miles of unknown enemy territory and seized the Gothic capital of Toledo. At the same stage in British history, Captain Cook discovered Australia. Fearless initiative characterises such periods.
Other peculiarities of the period of the conquering pioneers are their readiness to improvise and experiment. Untrammelled by traditions, they will turn anything available to their purpose. If one method fails, they try something else. Uninhibited by textbooks or book learning, action is their solution to every problem. Poor, hardy, often half-starved and ill-clad, they abound in courage, energy and initiative, overcome every obstacle and always seem to be in control of the situation.
The modern instinct is to seek a reason for everything, and to doubt the veracity of a statement for which a reason cannot be found. So many examples can be given of the sudden eruption of an obscure race into a nation of conquerors that the truth of the phenomenon cannot be held to be doubtful. To assign a cause is more difficult. Perhaps the easiest explanation is to assume that the poor and obscure race is tempted by the wealth of the ancient civilisation, and there would undoubtedly appear to be an element of greed for loot in barbarian invasions.
Such a motivation may be divided into two classes. The first is mere loot, plunder and rape, as, for example, in the case of Attila and the Huns, who ravaged a great part of Europe from A.D. 450 to 453. However, when Attila died in the latter year, his empire fell apart and his tribes returned to Eastern Europe.
Many of the barbarians who founded dynasties in Western Europe on the ruins of the Roman Empire, however, did so out of admiration for Roman civilisation, and themselves aspired to become Romans.
VII. A Providential Turnover?
Whatever causes may be given for the overthrow of great civilisations by barbarians, we can sense certain resulting benefits. Every race on earth has distinctive characteristics. Some have been distinguished in philosophy, some in administration, some in romance, poetry or religion, some in their legal system. During the pre-eminence of each culture, its distinctive characteristics are carried by it far and wide across the world.
If the same nation were to retain its domination indefinitely, its peculiar qualities would permanently characterise the whole human race. Under the system of empires each lasting for 250 years, the sovereign race has time to spread its particular virtues far and wide. Then, however, another people, with entirely different peculiarities, takes its place, and its virtues and accomplishments are likewise disseminated. By this system, each of the innumerable races of the world enjoys a period of greatness, during which its peculiar qualities are placed at the service of mankind.
To those who believe in the existence of God, as the Ruler and Director of human affairs, such a system may appear as a manifestation of divine wisdom, tending towards the slow and ultimate perfection of humanity.
VIII. The Course of Empire
The first stage of the life of a great nation, therefore, after its outburst, is a period of amazing initiative, and almost incredible enterprise, courage and hardihood. These qualities, often in a very short time, produce a new and formidable nation. These early victories, however, are won chiefly by reckless bravery and daring initiative.
The ancient civilisation thus attacked will have defended itself by its sophisticated weapons, and by its military organisation and discipline. The barbarians quickly appreciate the advantages of these military methods and adopt them. As a result, the second stage of expansion of the new empire consists of more organised, disciplined and professional campaigns.
In other fields, the daring initiative of the original conquerors is maintained — in geographical exploration, for example: pioneering new countries, penetrating new forests, climbing unexplored mountains, and sailing uncharted seas. The new nation is confident, optimistic and perhaps contemptuous of the “decadent” races which it has subjugated.
The methods employed tend to be practical and experimental, both in government and in warfare, for they are not tied by centuries of tradition, as happens in ancient empires. Moreover, the leaders are free to use their own improvisations, not having studied politics or tactics in schools or in textbooks.
In the case of the United States of America, the pioneering period did not consist of a barbarian conquest of an effete civilisation, but of the conquest of barbarian peoples. Thus, viewed from the outside, every example seems to be different. But viewed from the standpoint of the great nation, every example seems to be similar.
The United States arose suddenly as a new nation, and its period of pioneering was spent in the conquest of a vast continent, not an ancient empire. Yet the subsequent life history of the United States has followed the standard pattern which we shall attempt to trace — the periods of the pioneers, of commerce, of affluence, of intellectualism and of decadence.
The conquest of vast areas of land and their subjection to one government automatically acts as a stimulant to commerce. Both merchants and goods can be exchanged over considerable distances. Moreover, if the empire be an extensive one, it will include a great variety of climates, producing extremely varied products, which the different areas will wish to exchange with one another.
The speed of modern methods of transportation tends to create in us the impression that far-flung commerce is a modern development, but this is not the case. Objects made in Ireland, Scandinavia and China have been found in the graves or the ruins of the Middle East, dating from 1,000 years before Christ. The means of transport were slower, but, when a great empire was in control, commerce was freed from the innumerable shackles imposed upon it today by passports, import permits, customs, boycotts and political interference.
The Roman Empire extended from Britain to Syria and Egypt, a distance, in a direct line, of perhaps 2,700 miles. A Roman official, transferred from Britain to Syria, might spend six months on the journey. Yet, throughout the whole distance, he would be travelling in the same country, with the same official language, the same laws, the same currency and the same administrative system. Today, some twenty independent countries separate Britain from Syria, each with its own government, its own laws, politics, customs fees, passports and currencies, making commercial co-operation almost impossible. And this process of disintegration is still continuing. Even within the small areas of the modern European nations, provincial movements demanding secession or devolution tend further to splinter the continent.
The present fashion for ‘independence’ has produced great numbers of tiny states in the world, some of them consisting of only one city or of a small island. This system is an insuperable obstacle to trade and co-operation. The present European Economic Community is an attempt to secure commercial cooperation among small independent states over a large area, but the plan meets with many difficulties, due to the mutual jealousies of so many nations.
Even savage and militaristic empires promoted commerce, whether or not they intended to do so. The Mongols were some of the most brutal military conquerors in history, massacring the entire populations of cities. Yet, in the thirteenth century, when their empire extended from Peking to Hungary, the caravan trade between China and Europe achieved a remarkable degree of prosperity — the whole journey was in the territory of one government.
In the eighth and ninth centuries, the caliphs of Baghdad achieved fabulous wealth owing to the immense extent of their territories, which constituted a single trade bloc. The empire of the caliphs is now divided into some twenty-five separate ‘nations’.
In discussing the life-story of the typical empire, we have digressed into a discussion of whether empires are useful or injurious to mankind. We seem to have discovered that empires have certain advantages, particularly in the field of commerce, and in the establishment of peace and security in vast areas of the globe. Perhaps we should also include the spread of varied cultures to many races. The present infatuation for independence for ever smaller and smaller units will eventually doubtless be succeeded by new international empires.
The present attempts to create a European community may be regarded as a practical endeavour to constitute a new super-power, in spite of the fragmentation resulting from the craze for independence. If it succeeds, some of the local independencies will have to be sacrificed. If it fails, the same result may be attained by military conquest, or by the partition of Europe between rival super-powers. The inescapable conclusion seems, however, to be that larger territorial units are a benefit to commerce and to public stability, whether the broader territory be achieved by voluntary association or by military action.
XII. Sea Power
One of the more benevolent ways in which a super-power can promote both peace and commerce is by its command of the sea.
From Waterloo to 1914, the British Navy commanded the seas of the world. Britain grew rich, but she also made the seas safe for the commerce of all nations, and prevented major wars for 100 years.
Curiously enough, the question of sea power was never clearly distinguished, in British politics during the last fifty years, from the question of imperial rule over other countries. In fact, the two subjects are entirely distinct. Sea power does not offend small countries, as does military occupation. If Britain had maintained her navy, with a few naval bases overseas in isolated islands, and had given independence to colonies which asked for it, the world might well be a more stable place today. In fact, however, the navy was swept away in the popular outcry against imperialism.
XIII. The Age of Commerce
Let us now, however, return to the life-story of our typical empire. We have already considered the age of outburst, when a little-regarded people suddenly bursts on to the world stage with a wild courage and energy. Let us call it the Age of the Pioneers.
Then we saw that these new conquerors acquired the sophisticated weapons of the old empires, and adopted their regular systems of military organisation and training. A great period of military expansion ensued, which we may call the Age of Conquests. The conquests resulted in the acquisition of vast territories under one government, thereby automatically giving rise to commercial prosperity. We may call this the Age of Commerce.
The Age of Conquests, of course, overlaps the Age of Commerce. The proud military traditions still hold sway and the great armies guard the frontiers, but gradually the desire to make money seems to gain hold of the public. During the military period, glory and honour were the principal objects of ambition. To the merchant, such ideas are but empty words, which add nothing to the bank balance.
XIV. Art and Luxury
The wealth which seems, almost without effort, to pour into the country enables the commercial classes to grow immensely rich. How to spend all this money becomes a problem to the wealthy business community. Art, architecture and luxury find rich patrons. Splendid municipal buildings and wide streets lend dignity and beauty to the wealthy areas of great cities. The rich merchants build themselves palaces, and money is invested in communications, highways, bridges, railways or hotels, according to the varied patterns of the ages.
The first half of the Age of Commerce appears to be peculiarly splendid. The ancient virtues of courage, patriotism and devotion to duty are still in evidence. The nation is proud, united and full of self-confidence. Boys are still required, first of all, to be manly — to ride, to shoot straight and to tell the truth. (It is remarkable what emphasis is placed, at this stage, on the manly virtue of truthfulness, for lying is cowardice — the fear of facing up to the situation.)
Boys' schools are intentionally rough. Frugal eating, hard living, breaking the ice to have a bath and similar customs are aimed at producing a strong, hardy and fearless breed of men. Duty is the word constantly drummed into the heads of young people.
The Age of Commerce is also marked by great enterprise in the exploration for new forms of wealth. Daring initiative is shown in the search for profitable enterprises in far corners of the earth, perpetuating to some degree the adventurous courage of the Age of Conquests.
There does not appear to be any doubt that money is the agent which causes the decline of this strong, brave and self-confident people. The decline in courage, enterprise and a sense of duty is, however, gradual.
The first direction in which wealth injures the nation is a moral one. Money replaces honour and adventure as the objective of the best young men. Moreover, men do not normally seek to make money for their country or their community, but for themselves. Gradually, and almost imperceptibly, the Age of Affluence silences the voice of duty. The object of the young and the ambitious is no longer fame, honour or service, but cash.
Education undergoes the same gradual transformation. No longer do schools aim at producing brave patriots ready to serve their country. Parents and students alike seek the educational qualifications which will command the highest salaries. The Arab moralist, Ghazali (1058-1111), complains in these very same words of the lowering of objectives in the declining Arab world of his time. Students, he says, no longer attend college to acquire learning and virtue, but to obtain those qualifications which will enable them to grow rich. The same situation is everywhere evident among us in the West today.
XVI. High Noon
That which we may call the High Noon of the nation covers the period of transition from the Age of Conquests to the Age of Affluence: the age of Augustus in Rome, that of Harun al-Rashid in Baghdad, of Sulaiman the Magnificent in the Ottoman Empire, or of Queen Victoria in Britain. Perhaps we might add the age of Woodrow Wilson in the United States.
All these periods reveal the same characteristics. The immense wealth accumulated in the nation dazzles the onlookers. Enough of the ancient virtues of courage, energy and patriotism survive to enable the state successfully to defend its frontiers. But, beneath the surface, greed for money is gradually replacing duty and public service. Indeed the change might be summarised as being from service to selfishness.
Another outward change which invariably marks the transition from the Age of Conquests to the Age of Affluence is the spread of defensiveness. The nation, immensely rich, is no longer interested in glory or duty, but is only anxious to retain its wealth and its luxury. It is a period of defensiveness, from the Great Wall of China, to Hadrian's Wall on the Scottish Border, to the Maginot Line in France in 1939.
Money being in better supply than courage, subsidies instead of weapons are employed to buy off enemies. To justify this departure from ancient tradition, the human mind easily devises its own justification. Military readiness, or aggressiveness, is denounced as primitive and immoral. Civilised peoples are too proud to fight. The conquest of one nation by another is declared to be immoral. Empires are wicked. This intellectual device enables us to suppress our feeling of inferiority, when we read of the heroism of our ancestors, and then ruefully contemplate our position today. “It is not that we are afraid to fight,” we say, “but we should consider it immoral.” This even enables us to assume an attitude of moral superiority.
The weakness of pacifism is that there are still many peoples in the world who are aggressive. Nations who proclaim themselves unwilling to fight are liable to be conquered by peoples in the stage of militarism — perhaps even to see themselves incorporated into some new empire, with the status of mere provinces or colonies.
When to be prepared to use force and when to give way is a perpetual human problem, which can only be solved, as best we can, in each successive situation as it arises. In fact, however, history seems to indicate that great nations do not normally disarm from motives of conscience, but owing to the weakening of a sense of duty in the citizens, and the increase in selfishness and the desire for wealth and ease.
XVIII. The Age of Intellect
We have now, perhaps arbitrarily, divided the life-story of our great nation into four ages: the Age of the Pioneers (or the Outburst), the Age of Conquests, the Age of Commerce, and the Age of Affluence. The great wealth of the nation is no longer needed to supply the mere necessities, or even the luxuries of life. Ample funds are available also for the pursuit of knowledge.
The merchant princes of the Age of Commerce seek fame and praise, not only by endowing works of art or patronising music and literature. They also found and endow colleges and universities. It is remarkable with what regularity this phase follows on that of wealth, in empire after empire, divided by many centuries.
In the eleventh century, the former Arab Empire, then in complete political decline, was ruled by the Seljuk sultan, Malik Shah. The Arabs, no longer soldiers, were still the intellectual leaders of the world. During the reign of Malik Shah, the building of universities and colleges became a passion. Whereas a small number of universities in the great cities had sufficed the years of Arab glory, now a university sprang up in every town.
In our own lifetime, we have witnessed the same phenomenon in the U.S.A. and Britain. When these nations were at the height of their glory, Harvard, Yale, Oxford and Cambridge seemed to meet their needs. Now almost every city has its university.
The ambition of the young, once engaged in the pursuit of adventure and military glory, and then in the desire for the accumulation of wealth, now turns to the acquisition of academic honours.
It is useful here to take note that almost all the pursuits followed with such passion throughout the ages were in themselves good. The manly cult of hardihood, frankness and truthfulness, which characterised the Age of Conquests, produced many really splendid heroes.
The opening up of natural resources, and the peaceful accumulation of wealth, which marked the age of commercialism, appeared to introduce new triumphs in civilisation, in culture and in the arts. In the same way, the vast expansion of the field of knowledge achieved by the Age of Intellect seemed to mark a new high-water mark of human progress. We cannot say that any of these changes were ‘good’ or ‘bad’.
The striking features in the pageant of empire are:
(a) the extraordinary exactitude with which these stages have followed one another, in empire after empire, over centuries or even millennia; and
(b) the fact that the successive changes seem to represent mere changes in popular fashion — new fads and fancies which sweep away public opinion without logical reason. At first, popular enthusiasm is devoted to military glory, then to the accumulation of wealth and later to the acquisition of academic fame.
Why could not all these legitimate, and indeed beneficent, activities be carried on simultaneously, each of them in due moderation? Yet this never seemed to happen.
XIX. The Effects of Intellectualism
There are so many things in human life which are not dreamt of in our popular philosophy. The spread of knowledge seems to be the most beneficial of human activities, and yet every period of decline is characterised by this expansion of intellectual activity. ‘All the Athenians and strangers which were there spent their time in nothing else, but either to tell or to hear some new thing’ is the description given in the Acts of the Apostles of the decline of Greek intellectualism.
The Age of Intellect is accompanied by surprising advances in natural science. In the ninth century, for example, in the age of Mamun, the Arabs measured the circumference of the earth with remarkable accuracy. Seven centuries were to pass before Western Europe discovered that the world was not flat. Less than fifty years after the amazing scientific discoveries under Mamun, the Arab Empire collapsed. Wonderful and beneficent as was the progress of science, it did not save the empire from chaos.
The full flowering of Arab and Persian intellectualism did not occur until after their imperial and political collapse. Thereafter the intellectuals attained fresh triumphs in the academic field, but politically they became the abject servants of the often illiterate rulers. When the Mongols conquered Persia in the thirteenth century, they were themselves entirely uneducated and were obliged to depend wholly on native Persian officials to administer the country and to collect the revenue. They retained as wazeer, or Prime Minister, one Rashid al-Din, a historian of international repute. Yet the Prime Minister, when speaking to the Mongol Il-Khan, was obliged to remain throughout the interview on his knees. At state banquets, the Prime Minister stood behind the Khan's seat to wait upon him. If the Khan were in a good mood, he occasionally passed his wazeer a piece of food over his shoulder.
As in the case of the Athenians, intellectualism leads to discussion, debate and argument, such as is typical of the Western nations today. Debates in elected assemblies or local committees, in articles in the Press or in interviews on television — endless and incessant talking.
Men are interminably different, and intellectual arguments rarely lead to agreement. Thus public affairs drift from bad to worse, amid an unceasing cacophony of argument. But this constant dedication to discussion seems to destroy the power of action. Amid a Babel of talk, the ship drifts on to the rocks.
Perhaps the most dangerous by-product of the Age of Intellect is the unconscious growth of the idea that the human brain can solve the problems of the world. Even on the low level of practical affairs this is patently untrue. Any small human activity, the local bowls club or the ladies' luncheon club, requires for its survival a measure of self-sacrifice and service on the part of the members. In a wider national sphere, the survival of the nation depends basically on the loyalty and self-sacrifice of the citizens. The impression that the situation can be saved by mental cleverness, without unselfishness or human self-dedication, can only lead to collapse.
Thus we see that the cultivation of the human intellect seems to be a magnificent ideal, but only on condition that it does not weaken unselfishness and human dedication to service. Yet this, judging by historical precedent, seems to be exactly what it does do. Perhaps it is not the intellectualism which destroys the spirit of self-sacrifice — the least we can say is that the two, intellectualism and the loss of a sense of duty, appear simultaneously in the life-story of the nation.
Indeed it often appears in individuals, that the head and the heart are natural rivals. The brilliant but cynical intellectual appears at the opposite end of the spectrum from the emotional self-sacrifice of the hero or the martyr. Yet there are times when the perhaps unsophisticated self-dedication of the hero is more essential than the sarcasms of the clever.
XXI. Civil Dissensions
Another remarkable and unexpected symptom of national decline is the intensification of internal political hatreds. One would have expected that, when the survival of the nation became precarious, political factions would drop their rivalry and stand shoulder-to-shoulder to save their country.
In the fourteenth century, the weakening empire of Byzantium was threatened, and indeed dominated, by the Ottoman Turks. The situation was so serious that one would have expected every subject of Byzantium to abandon his personal interests and to stand with his compatriots in a last desperate attempt to save the country. The reverse occurred. The Byzantines spent the last fifty years of their history in fighting one another in repeated civil wars, until the Ottomans moved in and administered the coup de grace.
Britain has been governed by an elected parliament for many centuries. In former years, however, the rival parties observed many unwritten laws. Neither party wished to eliminate the other. All the members referred to one another as honourable gentlemen. But such courtesies have now lapsed. Booing, shouting and loud noises have undermined the dignity of the House, and angry exchanges are more frequent. We are fortunate if these rivalries are fought out in Parliament, but sometimes such hatreds are carried into the streets, or into industry in the form of strikes, demonstrations, boycotts and similar activities. True to the normal course followed by nations in decline, internal differences are not reconciled in an attempt to save the nation. On the contrary, internal rivalries become more acute, as the nation becomes weaker.
XXII. The Influx of Foreigners
One of the oft-repeated phenomena of great empires is the influx of foreigners to the capital city. Roman historians often complain of the number of Asians and Africans in Rome. Baghdad, in its prime in the ninth century, was international in its population — Persians, Turks, Arabs, Armenians, Egyptians, Africans and Greeks mingled in its streets.
In London today, Cypriots, Greeks, Italians, Russians, Africans, Germans and Indians jostle one another on the buses and in the underground, so that it sometimes seems difficult to find any British. The same applies to New York, perhaps even more so. This problem does not consist in any inferiority of one race as compared with another, but simply in the differences between them.
In the age of the first outburst and the subsequent Age of Conquests, the race is normally ethnically more or less homogeneous. This state of affairs facilitates a feeling of solidarity and comradeship. But in the Ages of Commerce and Affluence, every type of foreigner floods into the great city, the streets of which are reputed to be paved with gold. As, in most cases, this great city is also the capital of the empire, the cosmopolitan crowd at the seat of empire exercises a political influence greatly in excess of its relative numbers.
Second- or third-generation foreign immigrants may appear outwardly to be entirely assimilated, but they often constitute a weakness in several directions. First, their basic human nature often differs from that of the original imperial stock. If the earlier imperial race was stubborn and slow-moving, the immigrants might come from more emotional races, thereby introducing cracks and schisms into the national policies, even if all were equally loyal.
Second, while the nation is still affluent, all the diverse races may appear equally loyal. But in an acute emergency, the immigrants will often be less willing to sacrifice their lives and their property than will be the original descendants of the founder race.
Third, the immigrants are liable to form communities of their own, protecting primarily their own interests, and only in the second degree that of the nation as a whole.
Fourth, many of the foreign immigrants will probably belong to races originally conquered by and absorbed into the empire. While the empire is enjoying its High Noon of prosperity, all these people are proud and glad to be imperial citizens. But when decline sets in, it is extraordinary how the memory of ancient wars, perhaps centuries before, is suddenly revived, and local or provincial movements appear demanding secession or independence. Someday this phenomenon will doubtless appear in the now apparently monolithic and authoritarian Soviet empire. It is amazing for how long such provincial sentiments can survive.
Historical examples of this phenomenon are scarcely needed. The idle and captious Roman mob, with its endless appetite for free distributions of food — bread and games — is notorious, and utterly different from that stern Roman spirit which we associate with the wars of the early republic.
In Baghdad, in the golden days of Harun al-Rashid, Arabs were a minority in the imperial capital. Istanbul, in the great days of Ottoman rule, was peopled by inhabitants remarkably few of whom were descendants of Turkish conquerors. In New York, descendants of the Pilgrim Fathers are few and far between.
This interesting phenomenon is largely limited to great cities. The original conquering race is often to be found in relative purity in rural districts and on far frontiers. It is the wealth of the great cities which draws the immigrants. As, with the growth of industry, cities nowadays achieve an ever greater preponderance over the countryside, so will the influence of foreigners increasingly dominate old empires.
Once more it may be emphasised that I do not wish to convey the impression that immigrants are inferior to older stocks. They are just different, and they thus tend to introduce cracks and divisions.
XXIII. Frivolity
As the nation declines in power and wealth, a universal pessimism gradually pervades the people, and itself hastens the decline. Nothing succeeds like success, and, in the Ages of Conquest and Commerce, the nation was carried triumphantly onwards on the wave of its own self-confidence. Republican Rome was repeatedly on the verge of extinction — in 390 B.C. when the Gauls sacked the city and in 216 B.C. after the Battle of Cannae. But no disasters could shake the resolution of the early Romans. Yet, in the later stages of Roman decline, the whole empire was deeply pessimistic, thereby sapping its own resolution.
Frivolity is the frequent companion of pessimism. Let us eat, drink and be merry, for tomorrow we die. The resemblance between various declining nations in this respect is truly surprising. The Roman mob, we have seen, demanded free meals and public games. Gladiatorial shows, chariot races and athletic events were their passion. In the Byzantine Empire the rivalries of the Greens and the Blues in the hippodrome attained the importance of a major crisis.
Judging by the time and space allotted to them in the Press and television, football and baseball are the activities which today chiefly interest the public in Britain and the United States respectively.
The heroes of declining nations are always the same — the athlete, the singer or the actor. The word ‘celebrity’ today is used to designate a comedian or a football player, not a statesman, a general, or a literary genius.
XXIV. The Arab Decline
In the first half of the ninth century, Baghdad enjoyed its High Noon as the greatest and the richest city in the world. In 861, however, the reigning Khalif (caliph), Mutawakkil, was murdered by his Turkish mercenaries, who set up a military dictatorship, which lasted for some thirty years. During this period the empire fell apart, the various dominions and provinces each assuming virtual independence and seeking its own interests. Baghdad, lately the capital of a vast empire, found its authority limited to Iraq alone.
The works of the contemporary historians of Baghdad in the early tenth century are still available. They deeply deplored the degeneracy of the times in which they lived, emphasising particularly the indifference to religion, the increasing materialism and the laxity of sexual morals. They lamented also the corruption of the officials of the government and the fact that politicians always seemed to amass large fortunes while they were in office.
The historians commented bitterly on the extraordinary influence acquired by popular singers over young people, resulting in a decline in sexual morality. The ‘pop’ singers of Baghdad accompanied their erotic songs on the lute, an instrument resembling the modern guitar. In the second half of the tenth century, as a result, much obscene sexual language came increasingly into use, such as would not have been tolerated in an earlier age. Several khalifs issued orders banning ‘pop’ singers from the capital, but within a few years they always returned.
An increase in the influence of women in public life has often been associated with national decline. The later Romans complained that, although Rome ruled the world, women ruled Rome. In the tenth century, a similar tendency was observable in the Arab Empire, the women demanding admission to the professions hitherto monopolised by men. “What,” wrote the contemporary historian, Ibn Bessam, “have the professions of clerk, tax-collector or preacher to do with women? These occupations have always been limited to men alone.” Many women practised law, while others obtained posts as university professors. There was an agitation for the appointment of female judges, which, however, does not appear to have succeeded.
Soon after this period, government and public order collapsed, and foreign invaders overran the country. The resulting increase in confusion and violence made it unsafe for women to move unescorted in the streets, with the result that this feminist movement collapsed.
The disorders following the military takeover in 861, and the loss of the empire, had played havoc with the economy. At such a moment, it might have been expected that everyone would redouble their efforts to save the country from bankruptcy, but nothing of the kind occurred. Instead, at this moment of declining trade and financial stringency, the people of Baghdad introduced a five-day week.
When I first read these contemporary descriptions of tenth-century Baghdad, I could scarcely believe my eyes. I told myself that this must be a joke! The descriptions might have been taken out of The Times today. The resemblance of all the details was especially breathtaking — the break-up of the empire, the abandonment of sexual morality, the "pop" singers with their guitars, the entry of women into the professions, the five-day week. I would not venture to attempt an explanation! There are so many mysteries about human life which are far beyond our comprehension.
XXV. Political Ideology
Today we attach immense importance to the ideology of our internal politics. The Press and public media in the U.S.A. and Britain pour incessant scorn on any country the political institutions of which differ in any manner from our own idea of democracy. It is, therefore, interesting to note that the life-expectation of a great nation does not appear to be in any way affected by the nature of its institutions.
Past empires show almost every possible variation of political system, but all go through the same procedure from the Age of Pioneers through Conquest, Commerce, Affluence to decline and collapse.
XXVI. The Mameluke Empire
The empire of the Mamelukes of Egypt provides a case in point, for it was one of the most exotic ever to be recorded in history. It is also exceptional in that it began on one fixed day and ended on another, leaving no doubt of its precise duration, which was 267 years.
In the first part of the thirteenth century, Egypt and Syria were ruled by the Ayoubid sultans, the descendants of the family of Saladin. Their army consisted of Mamelukes, slaves imported as boys from the Steppes and trained as professional soldiers. On 1st May 1250, the Mamelukes mutinied, murdered Turan Shah, the Ayoubid sultan, and became the rulers of his empire.
The first fifty years of the Mameluke Empire were marked by desperate fighting with the hitherto invincible Mongols, the descendants of Genghis Khan, who invaded Syria. By defeating the Mongols and driving them out of Syria, the Mamelukes saved the Mediterranean from the terrible fate which had overtaken Persia. In 1291, the Mamelukes captured Acre, and put an end to the Crusades.
From 1309 to 1341, the Mameluke Empire was everywhere victorious and possessed the finest army in the world. For the ensuing hundred years the wealth of the Mameluke Empire was fabulous, slowly leading to luxury, the relaxation of discipline and to decline, with ever more bitter internal political rivalries. Finally the empire collapsed in 1517, as the result of military defeat by the Ottomans.
The Mameluke government appears to us utterly illogical and fantastic. The ruling class was entirely recruited from young boys, born in what is now Southern Russia. Every one of them was enlisted as a private soldier. Even the sultans had begun life as private soldiers and had risen from the ranks. Yet this extraordinary political system resulted in an empire which passed through all the normal stages of conquest, commercialism, affluence and decline and which lasted approximately the usual period of time.
XXVII. The Master Race
The people of the great nations of the past seem normally to have imagined that their pre-eminence would last forever. Rome appeared to its citizens to be destined to be for all time the mistress of the world. The Abbasid Khalifs of Baghdad declared that God had appointed them to rule mankind until the day of judgement. Seventy years ago, many people in Britain believed that the empire would endure forever. Although Hitler failed to achieve his objective, he declared that Germany would rule the world for a thousand years. That sentiments like these could be publicly expressed without evoking derision shows that, in all ages, the regular rise and fall of great nations has passed unperceived. The simplest statistics prove the steady rotation of one nation after another at regular intervals.
The belief that their nation would rule the world forever, naturally encouraged the citizens of the leading nation of any period to attribute their pre-eminence to hereditary virtues. They carried in their blood, they believed, qualities which constituted them a race of supermen, an illusion which inclined them to the employment of cheap foreign labour (or slaves) to perform menial tasks and to engage foreign mercenaries to fight their battles or to sail their ships.
These poorer peoples were only too happy to migrate to the wealthy cities of the empire, and thereby, as we have seen, to adulterate the close-knit, homogeneous character of the conquering race. The latter unconsciously assumed that they would always be the leaders of mankind, relaxed their energies, and spent an increasing part of their time in leisure, amusement or sport.
In recent years, the idea has spread widely in the West that ‘progress’ will be automatic without effort, that everyone will continue to grow richer and richer and that every year will show a ‘rise in the standard of living’. We have not drawn from history the obvious conclusion that material success is the result of courage, endurance and hard work — a conclusion nevertheless obvious from the history of the meteoric rise of our own ancestors. This self-assurance of its own superiority seems to go hand-in-hand with the luxury resulting from wealth, in undermining the character of the dominant race.
XXVIII. The Welfare State
When the welfare state was first introduced in Britain, it was hailed as a new high-water mark in the history of human development.
History, however, seems to suggest that the age of decline of a great nation is often a period which shows a tendency to philanthropy and to sympathy for other races. This phase may not be contradictory to the feeling described in the previous paragraph, that the dominant race has the right to rule the world. For the citizens of the great nation enjoy the role of Lady Bountiful. As long as it retains its status of leadership, the imperial people are glad to be generous, even if slightly condescending. The rights of citizenship are generously bestowed on every race, even those formerly subject, and the equality of mankind is proclaimed. The Roman Empire passed through this phase, when equal citizenship was thrown open to all peoples, such provincials even becoming senators and emperors.
The Arab Empire of Baghdad was equally, perhaps even more, generous. During the Age of Conquests, pure-bred Arabs had constituted a ruling class, but in the ninth century the empire was completely cosmopolitan.
State assistance to the young and the poor was equally generous. University students received government grants to cover their expenses while they were receiving higher education. The State likewise offered free medical treatment to the poor. The first free public hospital was opened in Baghdad in the reign of Harun al-Rashid (786-809), and under his son, Mamun, free public hospitals sprang up all over the Arab world from Spain to what is now Pakistan.
The impression that it will always be automatically rich causes the declining empire to spend lavishly on its own benevolence, until such time as the economy collapses, the universities are closed and the hospitals fall into ruin.
It may perhaps be incorrect to picture the welfare state as the high-water mark of human attainment. It may merely prove to be one more regular milestone in the life-story of an ageing and decrepit empire.
XXIX. Religion
Historians of periods of decadence often refer to a decline in religion, but, if we extend our investigation over a period covering the Assyrians (859-612 B.C.) to our own times, we have to interpret religion in a very broad sense. Some such definition as the following may serve: 'the human feeling that there is something, some invisible Power, apart from material objects, which controls human life and the natural world'.
We are probably too narrow and contemptuous in our interpretation of idol worship. The people of ancient civilisations were as sensible as we are, and would scarcely have been so foolish as to worship sticks and stones fashioned by their own hands. The idol was for them merely a symbol, and represented an unknown, spiritual reality, which controlled the lives of men and demanded human obedience to its moral precepts.
We all know only too well that minor differences in the human visualisation of this Spirit frequently became the ostensible reason for human wars, in which both sides claimed to be fighting for the true God, but the absurd narrowness of human conceptions should not blind us to the fact that, very often, both sides believed their campaigns to have a moral background. Genghis Khan, one of the most brutal of all conquerors, claimed that God had delegated him the duty to exterminate the decadent races of the civilised world. Thus the Age of Conquests often had some kind of religious atmosphere, which implied heroic self-sacrifice for the cause.
But this spirit of dedication was slowly eroded in the Age of Commerce by the action of money. People make money for themselves, not for their country. Thus periods of affluence gradually dissolved the spirit of service, which had caused the rise of the imperial races.
In due course, selfishness permeated the community, the coherence of which was weakened until disintegration was threatened. Then, as we have seen, came the period of pessimism with the accompanying spirit of frivolity and sensual indulgence, by-products of despair. It was inevitable at such times that men should look back yearningly to the days of ‘religion’, when the spirit of self-sacrifice was still strong enough to make men ready to give and to serve, rather than to snatch.
But while despair might permeate the greater part of the nation, others achieved a new realisation of the fact that only readiness for self-sacrifice could enable a community to survive. Some of the greatest saints in history lived in times of national decadence, raising the banner of duty and service against the flood of depravity and despair.
In this manner, at the height of vice and frivolity the seeds of religious revival are quietly sown. After, perhaps, several generations (or even centuries) of suffering, the impoverished nation has been purged of its selfishness and its love of money, religion regains its sway and a new era sets in. “It is good for me that I have been afflicted,” said the psalmist, “that I might learn Thy Statutes.”
XXX. New Combinations
We have traced the rise of an obscure race to fame, through the stages of conquest, commercialism, affluence, and intellectualism, to disintegration, decadence and despair. We suggested that the dominant race at any given time imparts its leading characteristics to the world around, being in due course succeeded by another empire. By this means, we speculated, many races have followed one another as super-powers, and in turn bequeathed their peculiar qualities to mankind at large.
But the objection may here be raised that someday the time will come when all the races of the world will in turn have enjoyed their period of domination and have collapsed again in decadence. When the whole human race has reached the stage of decadence, where will new energetic conquering races be found?
The answer is at first partially obscured by our modern habit of dividing the human race into nations, which we seem to regard as water-tight compartments, an error responsible for innumerable misunderstandings.
In earlier times, warlike nomadic nations invaded the territories of decadent peoples and settled there. In due course, they intermarried with the local population and a new race resulted, though it sometimes retained an old name. The barbarian invasions of the Roman Empire probably provide the example best known today in the West. Others were the Arab conquests of Spain, North Africa and Persia, the Turkish conquests of the Ottoman Empire, or even the Norman Conquest of England.
In all such cases, the conquered countries were originally fully inhabited and the invaders were armies, which ultimately settled down and married, and produced new races.
In our times, there are few nomadic conquerors left in the world, who could invade more settled countries bringing their tents and flocks with them. But ease of travel has resulted in an equal, or probably an even greater, intermixture of populations. The extreme bitterness of modern internal political struggles produces a constant flow of migrants from their native countries to others, where the social institutions suit them better.
The vicissitudes of trade and business similarly result in many persons moving to other countries, at first intending to return, but ultimately settling down in their new countries.
The population of Britain has been constantly changing, particularly in the last sixty years, owing to the influx of immigrants from Europe, Asia and Africa, and the exit of British citizens to the Dominions and the United States. The latter is, of course, the most obvious example of the constant rise of new nations, and of the transformation of the ethnic content of old nations through this modern nomadism.
XXXI. Decadence of a System
It is of interest to note that decadence is the disintegration of a system, not of its individual members. The habits of the members of the community have been corrupted by the enjoyment of too much money and too much power for too long a period. The result has been, in the framework of their national life, to make them selfish and idle. A community of selfish and idle people declines, internal quarrels develop in the division of its dwindling wealth, and pessimism follows, which some of them endeavour to drown in sensuality or frivolity. In their own surroundings, they are unable to redirect their thoughts and their energies into new channels.
But when individual members of such a society emigrate into entirely new surroundings, they do not remain conspicuously decadent, pessimistic or immoral among the inhabitants of their new homeland. Once enabled to break away from their old channels of thought, and after a short period of readjustment, they become normal citizens of their adopted countries. Some of them, in the second and third generations, may attain pre-eminence and leadership in their new communities.
This seems to prove that the decline of any nation does not undermine the energies or the basic character of its members. Nor does the decadence of a number of such nations permanently impoverish the human race. Decadence is both mental and moral deterioration, produced by the slow decline of the community from which its members cannot escape, as long as they remain in their old surroundings. But, transported elsewhere, they soon discard their decadent ways of thought, and prove themselves equal to the other citizens of their adopted country.
XXXII. Decadence is Not Physical
Neither is decadence physical. The citizens of nations in decline are sometimes described as too physically emasculated to be able to bear hardship or make great efforts. This does not seem to be a true picture. Citizens of great nations in decadence are normally physically larger and stronger than those of their barbarian invaders.
Moreover, as was proved in Britain in the first World War, young men brought up in luxury and wealth found little difficulty in accustoming themselves to life in the front-line trenches. The history of exploration proves the same point. Men accustomed to comfortable living in homes in Europe or America were able to show as much endurance as the natives in riding camels across the desert or in hacking their way through tropical forests.
Decadence is a moral and spiritual disease, resulting from too long a period of wealth and power, producing cynicism, decline of religion, pessimism and frivolity. The citizens of such a nation will no longer make an effort to save themselves, because they are not convinced that anything in life is worth saving.
XXXIII. Human Diversity
Generalisations are always dangerous. Human beings are all different. The variety in human life is endless. If this be the case with individuals, it is much more so with nations and cultures. No two societies, no two peoples, no two cultures are exactly the same. In these circumstances, it will be easy for critics to find many objections to what has been said, and to point out exceptions to the generalisations.
There is some value in comparing the lives of nations to those of individuals. No two persons in the world are identical. Moreover their lives are often affected by accidents or by illness, making the divergences even more obvious. Yet, in fact, we can generalise about human life from many different aspects. The characteristics of childhood, adolescence, youth, middle and old age are well known. Some adolescents, it is true, are prematurely wise and serious. Some persons in middle age still seem to be young. But such exceptions do not invalidate the general character of human life from the cradle to the grave.
I venture to submit that the lives of nations follow a similar pattern. Superficially, all seem to be completely different. Some years ago, a suggestion was submitted to a certain television corporation that a series of talks on Arab history would form an interesting sequence. The proposal was immediately vetoed by the director of programmes with the remark, “What earthly interest could the history of medieval Arabs have for the general public today?”
Yet, in fact, the history of the Arab imperial age — from conquest through commercialism, to affluence, intellectualism, science and decadence — is an exact precursor of British imperial history and lasted almost exactly the same time.
If British historians, a century ago, had devoted serious study to the Arab Empire, they could have foreseen almost everything that has happened in Britain down to 1976.
XXXIV. A Variety of Falls
It has been shown that, normally, the rise and fall of great nations are due to internal reasons alone. Ten generations of human beings suffice to transform the hardy and enterprising pioneer into the captious citizen of the welfare state. But whereas the life histories of great nations show an unexpected uniformity, the nature of their falls depends largely on outside circumstances and thus shows a high degree of diversity.
The Roman Republic, as we have seen, was followed by the empire, which became a super-state, in which all the natives of the Mediterranean basin, regardless of race, possessed equal rights. The name of Rome, originally a city-state, passed from it to an equalitarian international empire.
This empire broke in half, the western half being overrun by northern barbarians, the eastern half forming the East Roman or Byzantine Empire.
The vast Arab Empire broke up in the ninth century into many fragments, of which one former colony, Moslem Spain, ran its own 250-year course as an independent empire. The homelands of Syria and Iraq, however, were conquered by successive waves of Turks to whom they remained subject for 1,000 years.
The Mameluke Empire of Egypt and Syria, on the other hand, was conquered in one campaign by the Ottomans, the native population merely suffering a change of masters.
The Spanish Empire (1500-1750) endured for the conventional 250 years, terminated only by the loss of its colonies. The homeland of Spain fell, indeed, from its high estate of a super-power, but remained as an independent nation until today.
Romanov Russia (1682-1916) ran the normal course, but was succeeded by the Soviet Union.
It is unnecessary to labour the point, which we may attempt to summarise briefly. Any regime which attains great wealth and power seems with remarkable regularity to decay and fall apart in some ten generations. The ultimate fate of its component parts, however, does not depend on its internal nature, but on the other organisations which appear at the time of its collapse and succeed in devouring its heritage. Thus the lives of great powers are surprisingly uniform, but the results of their falls are completely diverse.
XXXV. Inadequacy of Our Historical Studies
In fact, the modern nations of the West have derived only limited value from their historical studies, because they have never made them big enough. For history to have meaning, as we have already stated, it must be the history of the human race.
Far from achieving such an ideal, our historical studies are largely limited to the history of our own country during the lifetime of the present nation. Thus the time factor is too short to allow the longer rhythms of the rise and fall of nations even to be noticed. As the television director indicated, it never even crosses our minds that longer periods could be of any interest.
When we read the history of our own nation, we find the actions of our ancestors described as glorious, while those of other peoples are depicted as mean, tyrannical or cowardly. Thus our history is (intentionally) not based on facts. We are emotionally unwilling to accept that our forbears might have been mean or cowardly.
Alternatively, there are ‘political’ schools of history, slanted to discredit the actions of our past leaders, in order to support modern political movements. In all these cases, history is not an attempt to ascertain the truth, but a system of propaganda, devoted to the furtherance of modern projects, or the gratification of national vanity.
Men can scarcely be blamed for not learning from the history they are taught. There is nothing to learn from it, because it is not true.
XXXVI. Small Nations
The word ‘empires’ has been used in this essay to signify nations which achieve the status of great powers, or super-powers, in the jargon of today — nations which have dominated the international scene for two or three centuries. At any given time, however, there are also smaller states which are more or less self-contained. Do these live the same ‘lives’ as the great nations, and pass through the same phases?
It seems impossible to generalise on this issue. In general, decadence is the outcome of too long a period of wealth and power. If the small country has not shared in the wealth and power, it will not share in the decadence.
XXXVII. The Emerging Pattern
In spite of the endless variety and the infinite complications of human life, a general pattern does seem to emerge from these considerations. It reveals many successive empires covering some 3,000 years, as having followed similar stages of development and decline, and as having, to a surprising degree, ‘lived’ lives of very similar length.
The life-expectation of a great nation, it appears, commences with a violent, and usually unforeseen, outburst of energy, and ends in a lowering of moral standards, cynicism, pessimism and frivolity.
If the present writer were a millionaire, he would try to establish in some university or other a department dedicated solely to the study of the rhythm of the rise and fall of powerful nations throughout the world. History goes back only some 3,000 years, because before that period writing was not sufficiently widespread to allow of the survival of detailed records. But within that period, the number of empires available for study is very great.
At the commencement of this essay, the names of eleven such empires were listed, but these included only the Middle East and the modern nations of the West. India, China and Southern America were not included, because the writer knows nothing about them. A school founded to study the rise and fall of empires would probably find at least twenty-four great powers available for dissection and analysis.
The task would not be an easy one, if indeed the net were cast so wide as to cover virtually all the world's great nations in 3,000 years. The knowledge of language alone, to enable detailed investigations to be pursued, would present a formidable obstacle.
XXXVIII. Would It Help?
It is pleasing to imagine that, from such studies, a regular life-pattern of nations would emerge, including an analysis of the various changes which ultimately lead to decline, decadence and collapse.
It is tempting to assume that measures could be adopted to forestall the disastrous effects of excessive wealth and power, and thence of subsequent decadence. Perhaps some means could be devised to prevent the activist Age of Conquests and Commerce deteriorating into the Age of Intellect, producing endless talking but no action. It is tempting to think so. Perhaps if the pattern of the rise and fall of nations were regularly taught in schools, the general public would come to realise the truth, and would support policies to maintain the spirit of duty and self-sacrifice, and to forestall the accumulation of excessive wealth by one nation, leading to the demoralisation of that nation.
Could not the sense of duty and the initiative needed to give rise to action be retained parallel with intellectual development and the discoveries of natural science?
The answer is doubtful, though we could but try. The weaknesses of human nature, however, are so obvious, that we cannot be too confident of success. Men bursting with courage, energy and self-confidence cannot easily be restrained from subduing their neighbours, and men who see the prospect of wealth open to them will not readily be prevented from pursuing it.
Perhaps it is not in the real interest of humanity that they should be so prevented, for it is in periods of wealth that art, architecture, music, science and literature make the greatest progress.
Moreover, as we have seen where great empires are concerned, their establishment may give rise to wars and tragedies, but their periods of power often bring peace, security and prosperity to vast areas of territory. Our knowledge and our experience (perhaps our basic human intellects) are inadequate to pronounce whether or not the rise and fall of great nations is the best system for the best of all possible worlds.
These doubts, however, need not prevent us from attempting to acquire more knowledge on the rise and fall of great powers, or from endeavouring, in the light of such knowledge, to improve the moral quality of human life.
Perhaps, in fact, we may reach the conclusion that the successive rise and fall of great nations is inevitable and, indeed, a system divinely ordained. But even this would be an immense gain. For we should know where we stand in relation to our human brothers and sisters. In our present state of mental chaos on the subject, we divide ourselves into nations, parties or communities and fight, hate and vilify one another over developments which may perhaps be divinely ordained and which seem to us, if we take a broader view, completely uncontrollable and inevitable. If we could accept these great movements as beyond our control, there would be no excuse for our hating one another because of them.
However varied, confusing and contradictory the religious history of the world may appear, the noblest and most spiritual of the devotees of all religions seem to reach the conclusion that love is the key to human life. Any expansion of our knowledge which may lead to a reduction in our unjustified hates is therefore surely well worthwhile.
As numerous points of interest have arisen in the course of this essay, I close with a brief summary, to refresh the reader's mind.
(a) We do not learn from history because our studies are brief and prejudiced.
(b) In a surprising manner, 250 years emerges as the average length of national greatness.
(c) This average has not varied for 3,000 years. Does it represent ten generations?
(d) The stages of the rise and fall of great nations seem to be:
The Age of Pioneers (outburst)
The Age of Conquests
The Age of Commerce
The Age of Affluence
The Age of Intellect
The Age of Decadence.
(e) Decadence is marked by:
An influx of foreigners
The Welfare State
A weakening of religion.
(f) Decadence is due to:
Too long a period of wealth and power
Love of money
The loss of a sense of duty.
(g) The life histories of great states are amazingly similar, and are due to internal factors.
(h) Their falls are diverse, because they are largely the result of external causes.
(i) History should be taught as the history of the human race, though of course with emphasis on the history of the student's own country.

John Bagot Glubb was born in 1897, his father being a regular officer in the Royal Engineers.