Category Archives: Global

Why rising home prices won’t devour the economy

Have you ever tried to rent an apartment in Brooklyn and come to the conclusion that rising home prices are ruining the world? Well, it turns out that may actually be true – at least according to Quartz’s Tim Fernholz.

In a recent article, Fernholz argued that the rising cost of real estate will eat up an ever larger share of the world economy, with potentially devastating consequences. Fernholz essentially lays out MIT economist Matt Rognlie’s critique of Thomas Piketty’s recent blockbuster book “Capital in the Twenty-First Century”.

In his book, Piketty claimed to show that global capital has been accumulating in the hands of a small number of rich people since the 1950s and will continue to do so until inequality reaches dangerous proportions. This trend is inherent to capitalism, Piketty argued, and can only be stopped through government intervention.

Rognlie doesn’t dispute Piketty’s data, but claims the Frenchman misunderstood it. He points out that 80 percent of the wealth accumulation in Piketty’s data is due to real estate. If you take housing out of the equation, the growth in private wealth is quite small, putting Piketty’s argument in doubt (see graph).

[Chart: private wealth, with and without housing, in Germany, the U.S. and the U.K.]

Whether we should believe Rognlie or Piketty essentially boils down to two questions: is capital easily substitutable for labor, and are different forms of capital easily substitutable for each other? Piketty answers both with yes.

Private wealth (or capital) can only eat up a growing share of the economy and heighten inequality if income from labor declines in relative importance. In (very) simplified terms: Apple can only dish out huge dividends to its stockholders if it doesn’t have to spend all its earnings to hire people. So far, technological advances have generally made labor less important and more replaceable. Rognlie believes this trend can only go so far, as capital depreciates over time and labor will always be needed to replace it. Piketty believes the trend can go a lot further. For now, the jury is still out.
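To see what is at stake in that first question, it helps to sketch the textbook accounting behind the debate. This is a simplified sketch of the standard relationships, not something taken from Fernholz’s or Rognlie’s articles:

\[
\alpha = r \times \beta, \qquad \beta = \frac{K}{Y} \longrightarrow \frac{s}{g} \ \text{in the long run,}
\]

where \( \alpha \) is capital’s share of national income, \( r \) the net return on capital, \( \beta = K/Y \) the ratio of private wealth to income, \( s \) the savings rate and \( g \) the growth rate. As growth \( g \) slows, \( \beta \) rises. Whether capital then claims a growing share \( \alpha \) depends on how far \( r \) falls in response – that is, on how easily capital substitutes for labor. With an elasticity of substitution above one, \( r \) falls more slowly than \( \beta \) rises and \( \alpha \) grows (Piketty’s scenario); with an elasticity below one, which is roughly what Rognlie argues for capital measured net of depreciation, \( r \) falls faster and \( \alpha \) stays flat or shrinks.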

The second question is more important in this context. Piketty doesn’t really care that much what the wealthy invest their money in. It could be gold, stocks, oil, real estate – the point is simply that their wealth is growing and that different forms of capital are in theory replaceable. Fernholz (i.e. Rognlie), on the other hand, claims that the accumulation of wealth is due almost entirely to the rising value of real estate and that other forms of capital have not recorded similar growth.

Real estate prices have been growing because population growth and urbanization make land in cities scarcer. The growing inequality noticed by Piketty is thus really a growing gap between income from real estate and income from everything else, Fernholz claims. According to this reading, the rich are getting richer not because of any trend inherent to capitalism, but simply because they happen to own real estate. And as population growth and urbanization are expected to continue, growing home prices will consume an ever larger share of the world economy. This trend will suck capital from other, more productive industries and dampen growth.

The argument is compelling and relatable. It is also highly questionable – mainly because British economist David Ricardo made a very similar claim more than 200 years ago and turned out to be wrong.

Ricardo argued that explosive population growth in the early 19th century would make agricultural land increasingly scarce and valuable. After all, it had to feed a growing population. This, Ricardo argued, meant landowners could command ever higher rents for land to the point that rents would eventually make up an overwhelming portion of national income and stifle growth in other industries.

In hindsight, Ricardo underestimated the effect of technology. As new techniques and pesticides made agriculture more productive and trade globalized, farming land actually lost value compared to other assets.

Private wealth still accumulated in the hands of very few during the 19th century, as Piketty showed, but its composition shifted from rural land to urban real estate and industrial capital. In other words: different forms of capital proved more easily substitutable for each other than Ricardo thought and the rural landowners of the 18th century became the industrial tycoons of the 20th. Either way, private wealth grew tremendously as a share of the overall economy.

It is possible that Rognlie is repeating Ricardo’s mistake of underestimating the effect technology can have on land prices. Population growth and urbanization have driven up urban and suburban real estate prices, but will that trend necessarily continue? New construction techniques and the trend to build higher and denser are already lowering housing costs, as is improved transportation.

Office workers in Manhattan may have to pay a fortune for scarce apartments on the Upper West Side or in Brooklyn today, but what if magnetic trains shorten the commute to the countryside to minutes within decades? If workers can move farther and farther away from their jobs, land in urban centers will lose value. Denser building has the same effect.

History suggests that growth in the value of a certain asset class is never as inevitable as it seems at the time. Ricardo was certain agricultural land would dominate global wealth, but the opposite occurred. Two hundred years later, Rognlie thinks the same of urban land, and he could end up just as wrong: real estate’s share of total wealth could start falling.

If Piketty is right, global wealth will then simply shift to a new, growing asset class – as it did in 19th century Britain. The result would be the same: the rich become richer.


Post-War Is Back

Burning Allied bomber in 1942. Credit: John Atherton

In geopolitics, a century can seem awfully short.

One hundred years ago this week, German troops crossed the border into Belgium, starting the First World War and setting the stage for the second. The paradox is that as more time passes, the great catastrophe of the early 20th century only seems to grow in importance. Most major wars and conflicts today are shaped by how their actors relate to the two world wars.

Take Ukraine’s civil war, where Russia and the rebels it supports use every opportunity to revive the memory of 1941. Moscow’s prolific propaganda depicts Ukrainian troops as fascists, showing Ukraine’s leaders alongside footage of Nazi war criminals on TV.

The rebels have clearly internalized the notion that they are protecting Eastern Ukraine’s Russian speakers from Nazi genocide. Rebel leader Igor Bezler recently said of pro-Kiev militants, “They are fascists! So why should we stand on ceremony with them? Questioning, an execution, that’s it.” In rebel-held Slovyansk, militia leader Igor Strelkov executed thieves and enemies based on martial law implemented by Stalin on June 22, 1941, as signed death sentences show.

Perhaps less obviously, the Western response to Russian aggression is also shaped by the experience of two world wars. British and American officials implicitly refer to Chamberlain’s failed appeasement policy in 1938 when arguing for a tough stance against Putin. Former U.S. Secretary of State Hillary Clinton made that connection explicit when she compared Putin’s invasion of Crimea to Hitler’s annexation of Czechoslovakia.

Germany’s foreign minister Frank-Walter Steinmeier, on the other hand, bases his more hesitant Russia policy on the experience of World War I. As The Economist’s Berlin office likes to point out, he argues that Putin must always be offered a way out through diplomatic channels to avoid the kind of irreversible escalation of tensions that led to the outbreak of World War I. Putin himself somewhat cynically used this argument in a speech commemorating World War I last week.

In East Asia’s simmering border disputes, World War II is just as present. Japan’s Prime Minister Shinzo Abe is trying to strengthen the country’s military capabilities by loosening its post-war pacifist constitution. These attempts are accompanied by a campaign to teach schoolchildren a less apologetic narrative of the country’s role in World War II. China and the Koreas are wary of a stronger Japanese army primarily because they were among the main victims of Japanese militarism in the 1930s and 1940s.

In Iraq and Syria, World War I serves as a reference point for the fighters of Islamic State. Their campaign to turn two separate countries into one Sunni caliphate is also an explicit attempt to redraw the borders imposed by Britain and France in the wake of the Ottoman Empire’s defeat in 1918.

World War II and the Holocaust played an important role in Israel’s decision to attack Gaza. Benjamin Netanyahu’s former national security advisor Yaakov Amidror recently explained that the Israeli Prime Minister is “a guy who has a historical view of events.”

“He understands that one of the most important differences between the past and the present is the ability of Jews to defend themselves,” Amidror said, as quoted in The New York Times. “If he feels that Israel might endanger its ability to defend itself because of the international community, he will decide to use the capabilities of Israel even against the international community.”

There are of course many conflicts today that have little to do with the events of the early 20th century. Still, the number of disputes with a direct link to the pre-1945 years is striking and marks a major departure from past decades.

The two world wars hung heavy over the Cold War, which dominated global politics between 1945 and 1991. But following the Soviet Union’s collapse, the international order appeared to have escaped from its post-war state.

Major armed conflicts in the 1990s and 2000s—the Balkan wars, the Rwandan genocide, the Congo wars, the U.S.-led invasions of Iraq and Afghanistan, the Russo-Georgian war—all had little or no direct connection to the two world wars. History had entered a new phase in which 9/11 replaced Auschwitz and Stalingrad as the defining catastrophe of the age, or so it seemed.

Now the world wars have staged their comeback. One reason is that many of the disputes that were fought over so violently between 1914 and 1945 are still unresolved today.

In Eastern Europe, both world wars were fought over one fundamental question: was the region part of Russia’s sphere of influence or should it integrate with Central Europe? This question is still at the heart of Ukraine’s civil war today – although the choice is no longer between Nazism and Stalinism, but between liberal democracy and Putin’s proto-fascism. This similarity makes it almost logical that Russian propaganda evokes the memory of World War II.

In East Asia, the unresolved Japanese question–how much regional power the country should wield–is at the heart of current border disputes. Almost eighty years ago it led Tokyo to war with its neighbors.

But while unresolved disputes play a role, a more important reason for the continued importance of the world wars is their usefulness as propaganda.

Russian state media use the memory of World War II to stir up popular support for the Kremlin’s involvement in Ukraine and mobilize fighters. China’s government is using anti-Japanese sentiment rooted in World War II to strengthen patriotism and divert attention from its own corruption. And in the U.S., foreign-policy hawks are using the disastrous appeasement of 1938 to discredit virtually all diplomatic efforts vis-à-vis Russia or Iran.

After 1945, world leaders hoped the bloodshed of two world wars would be a lesson to mankind and allow for a more peaceful future. As transnational institutions like the EU and the UN flourished, their hope seemed justified.

Today, leaders are more likely to use the memory of the two world wars to encourage violence, and the killings of the past become a justification for the killings of the present. As world leaders commemorate the outbreak of World War I this week, we can only hope some of them remember how much progress mankind has made in the past 70 years – and how easily it can be reversed.

 

This article also appeared on the World Policy Journal’s blog.


Does Government Spying Matter? The Case of Kim Philby

Wars in Ukraine, Syria and Gaza – not to mention the latest immigration “scandal” – have pushed Edward Snowden’s NSA revelations to the very back of our minds. But before we forget: there is still an important debate going on about the benefits and drawbacks of government spying. The latest, somewhat implicit contribution to that debate is well hidden in the final pages of this week’s New Yorker.

In a fascinating article, Malcolm Gladwell recounts the case of Kim Philby and the greatest spy scandal of the 20th century. Philby, the Cambridge-educated son of a diplomat, rose to the highest echelons of Britain’s secret service M.I.6 in the 1940s and 1950s, before he was exposed as a Soviet double agent and forced to flee to Odessa in 1963. Philby had been head of M.I.6’s anti-Soviet section and later became chief liaison between M.I.6 and the C.I.A. There was little of import the spy service did in those years that Philby didn’t report to the KGB.

“What it comes to is that when you look at the whole period from 1944 to 1951, the entire Western intelligence effort, which was pretty big, was what you might call minus advantage. We’d have been better off doing nothing,” Gladwell quotes the C.I.A. officer Miles Copeland, Jr. – himself a close friend of Philby’s – as saying.

News of his defection triggered the predictable paranoia. The M.I.5 executive Peter Wright began suspecting most Labour Party ministers of being Soviet spies, clearly fearing for his country’s safety. But the real surprise in Gladwell’s article is that none of Philby’s work for the KGB mattered much in the end:

In a review of “Spycatcher” published in the journal Intelligence and National Security, the historian Harry Gelber made a similar point about the many betrayals and lost secrets that fuelled Wright’s feverish mole-hunting. Wright’s problem was that he was unable to assess the consequences of the intelligence losses. The Soviets got details of the Concorde’s electronics systems. Did this make any difference to the Soviet civilian or military aviation performance? Who knows if the Soviets even believed what they were told? The revelations about Britain’s atomic program leaked to the Soviets by Klaus Fuchs are believed to have accelerated the Soviets’ own nuclear operation by two years. In the grand scheme of things, did that two-year leap amount to anything? Gelber searched for some account of how the world would have been different if Fuchs or Philby or the Rosenbergs had never lived, and couldn’t find it.

He concluded, “One cannot help being left with the uneasy suspicion that, just possibly, a good deal of what he tells may have mattered less than hard-working, intelligent but sometimes narrow-minded participants like Peter Wright spent their professional lives thinking it did.”

If there was any period in history when government spying mattered, you would think it was the height of the Cold War in the late 1940s and early 1950s. And yet the KGB’s complete insight into British intelligence through Philby didn’t give the Soviet Union any strategic advantage.

Spying today may be very different from the 1940s. It is directed at different targets and transnational terrorists pose a very different kind of threat than the Soviet state. Still, Gladwell’s article leaves the suspicion that secret services tend to exaggerate the importance of their own work.

 


Are Financial Crises Caused by Governments? Probably Not.

Ever since financial markets imploded in 2008, preventing the next crisis has preoccupied governments and academics across the globe. Most have argued for stricter government oversight of financial markets, claiming that unregulated markets are prone to irrational exuberance, which will inevitably lead to the next boom and bust.

In its recent cover story, The Economist takes a radically different approach. The paper argues that too much government involvement – rather than a lack of it – was to blame for the recent financial crisis. The editors base this argument on an analysis of five financial crises: 1792, 1825, 1857, 1907 and 1929.

Each crisis led legislators to bail out financial institutions deemed systemically important. This in turn encouraged banks to take on more risk, making each crisis worse than the last. Since banks (and other institutions) had reason to believe they would be bailed out anyway, they had little incentive to invest prudently.

Moral hazard created a spiral of worsening crises. This could have been prevented, the editors argue, if governments had let banks go bust and let the markets take care of themselves.

Blaming crises on governments is popular among neo-liberal thinkers, partially because it is such an easy claim to make. Since governments are always involved somewhere somehow, no one can disprove the claim that crises wouldn’t happen if markets were completely free.

But just because a theory can’t be disproven doesn’t mean it makes sense.

The Economist is certainly right to argue that bailouts encourage risk-taking and may do more harm than good in the long run. But its argument against government intervention in general is far flimsier.

If government intervention encourages risk-taking while unregulated markets are more prudent, as The Economist claims, we should be able to find historical evidence for this. But the paper fails to present any.

In fact, much of the irrational risk-taking that led to crises was done by individuals and institutions that were hardly regulated and had no prospect of ever being bailed out. The markets crashed in 1929 because individuals took bets on overvalued stocks. These speculators were hardly regulated by the government and knew they would never be bailed out, and yet they still took risks.

In the lead-up to the 2008 crisis, largely unregulated private-equity funds and mom-and-pop investors were just as eager to jump on sub-prime mortgages as were more heavily regulated banks.

Moreover, The Economist itself points out that the financial crisis of 1907 was caused by trust companies, which were far less regulated than banks.

It may well be that certain forms of government intervention, especially bail-outs, play a role in making financial crises successively worse. But history shows that unregulated financial actors are at least as prone to irrational exuberance as their more regulated counterparts. Rather than argue against government regulation in general, a more prudent argument can be made in favor of regulation that truly discourages risk-taking.

The Economist makes a convincing claim that deposit insurance and bail-outs encourage risk taking and should be done away with. But other regulations, such as stricter capital requirements for banks or rules forcing lenders to keep some of the mortgages they originate on their books, discourage risk-taking. Getting rid of them would almost certainly do more harm than good.

In its defense, The Economist does acknowledge in passing that not all forms of government intervention are bad. But at the same time the paper criticizes laws that discourage risk-taking, such as the Dodd-Frank Act or transaction taxes. This leaves the impression that The Economist’s argument is driven more by anti-government dogma than by a calm assessment of which regulations help and which don’t.

We may all crave simple solutions. But, sadly, getting the government out of the markets won’t solve all our problems.


Three stories from 2013 that will matter 20 years from now

In reviewing 2013, The Atlantic recently took a deep look into the future. Rather than list the most significant events of the past year by their immediate impact, editor Moisés Naím presented us with “the stories from 2013 that will reshape the world long after this year draws to a close”. The stories Naím chose were: the shale-gas boom, America’s battered reputation abroad, tensions in the Middle East, China’s new assertiveness, and growing discontent over social inequality.

There is no doubt these stories are big, and will matter for years to come. But Naím only picked stories from the headlines, while the developments that matter in the future are often those contemporaries overlook. Take the 1980s, for example. Everyone back then knew that the slow collapse of the Soviet bloc or China’s turn towards a market economy would be a big deal for years to come. But hardly anyone foresaw that the rise of Islamist militancy in Afghanistan or Reagan’s financial deregulation would shake the world two decades later. In this spirit, here are the three most overlooked stories from 2013 that will shape our world for years and decades to come:

1. Crowdfunding

Following the 2008 financial meltdown, the U.S. government embarked on a string of regulatory reforms culminating in the Dodd-Frank Act. In effect, the new regulations made lending more difficult and less profitable for banks, discouraging the type of risk-taking that led to the crisis. But while banks are curtailed, private equity and hedge funds are largely left to do whatever they want. In commercial real estate, which I cover for the Real Estate Weekly, this regulatory imbalance has already caused financing to shift from traditional bank loans, which have become rarer and more expensive, to private equity.

This is where crowdfunding enters the picture. In September, the Securities and Exchange Commission lifted its ban on general solicitation, making it legal to publicly raise funds from accredited investors in the U.S. via the internet. Since then, crowdfunding startups have become almost hyperactive. Much like private equity, crowdfunding is now subject to less stringent regulations and capital requirements than banking, which means these platforms can take on riskier projects and offer higher returns.

Such legal incentives matter. With traditional lenders constrained, the newly liberated crowdfunding firms will join private equity in shifting financing away from banks. Ten years from now, will you still deposit your $5,000 savings with a bank if you can get twice the return by investing it in a crowdfunded skyscraper or in private equity funds, which are increasingly targeting small-time savers?

For centuries, financing has been dominated by the banking sector. 2013 could be the year this begins to change.

2. East Africa’s Monetary Union

We can ignore Africa because the whole continent is a basket case full of corruption and civil wars, right? Wrong. One day, Africa’s economy will be huge. And 2013 might be the year its turnaround finally took off with a surprisingly little-noticed policy decision.

In November, Rwanda, Uganda, Tanzania and Kenya signed a deal to establish a single currency within ten years. This could be a huge step for several reasons. First of all, it is undisputed that enlarging a market spurs growth. Rwanda and Kenya, despite all their troubles, are already very innovative, for example in their use of mobile-phone banking. With wages rising in China, a unified, low-wage market in East Africa with improved communications and innovative financing could attract much more foreign and domestic investment.

Monetary union could also make the region more politically stable. Once countries are tied together in a single market, each member state has an interest in keeping the others peaceful and stable. If an insurgency broke out in one state, other governments would be far more likely to help and intervene quickly than they are today, in order to prevent the union from losing credibility. In essence, this would be the military version of the bailouts we recently saw in the Eurozone. Rebellions frequently break out in Africa because governments and their armies are weak. But rising up in rebellion against four governments simultaneously is daunting, even in East Africa.

There are still many factors that could derail the monetary union – from inter-ethnic tensions to corruption to spillover from Somalia’s civil war. But if it succeeds, the union could grow to include more and more countries, kick-starting a new age of African stability and prosperity.

3. Commercial Drones

In November, Amazon CEO Jeff Bezos announced that he intends to use drones to deliver parcels. This could be the first step towards the use of commercial drones across the world. Dystopia? Probably not.

According to Neil Jacobstein, commercial drones could be the next internet – developed by the military but soon spreading out to transform the world. In The World Policy Journal’s recent fall issue, Jacobstein makes the case to legalize their use, albeit with strict security regulations.

He has a point. Drones, if safe, can revolutionize our world much in the way the internet has. They could drastically reduce transportation costs, bringing the world closer together and boosting growth. In the long run, they could also lead to a revival of remote areas. Who needs to live close to shops if you can have everything you need (maybe even Thai food from your favorite restaurant?) delivered by drone into the Canadian wilderness?

Commercial drones can create a new world of physical (not just technological) connectivity. This is why, twenty years from now, Bezos’ 2013 announcement could be far bigger than any speech by Barack Obama.


The Economist says 2014 is like 1914. Is it?

Here’s an interesting thought: we may be months away from World War III. This, in essence, is the argument of an op-ed in The Economist’s holiday issue. Looking back at the new year of 1914, the (unnamed) author concludes that the world back then was living in globalized prosperity and no one saw the catastrophe coming. This complacency, the author writes, was in part to blame for the war’s outbreak. A hundred years later, the author finds a similar complacency with regard to the territorial dispute in the South China Sea:

Yet the parallels remain troubling. The United States is Britain, the superpower on the wane, unable to guarantee global security. Its main trading partner, China, plays the part of Germany, a new economic power bristling with nationalist indignation and building up its armed forces rapidly. Modern Japan is France, an ally of the retreating hegemon and a declining regional power. The parallels are not exact—China lacks the Kaiser’s territorial ambitions and America’s defence budget is far more impressive than imperial Britain’s—but they are close enough for the world to be on its guard.

Which, by and large, it is not. The most troubling similarity between 1914 and now is complacency. Businesspeople today are like businesspeople then: too busy making money to notice the serpents flickering at the bottom of their trading screens. Politicians are playing with nationalism just as they did 100 years ago.

The comparison between China and Germany – and between Europe in 1914 and Asia in 2014 in general – seems like a stretch. Germany went to war because it felt it had its back against the wall and was losing the peace. It was encircled by a hostile Triple Entente that had more economic and military strength, and was fearful of falling behind the booming U.S. and industrializing Russia, as historians like Niall Ferguson have argued. A quick victory through a surprise attack in the near future seemed like Germany’s best chance to become a true world power.

China today, on the other hand, knows that it is destined to become the world’s leading economy and a military superpower within the next few decades. Unlike Germany in 1914, time is working in its favor, and going to war over territorial disputes now would only derail its rise. Similarly, Japan can’t afford any war in the near future, crippled by government debt and a weak military. A war may still happen once China is no longer on the rise and feels its time has come, and once Japan has built up a stronger military. But that won’t be anytime soon, and certainly not in 2014.

While World War III isn’t imminent, the article makes the important point that we should never underestimate the potential for another great war. Much as in 1914, Westerners today have adopted the naive view that wars only happen in poor countries. The lesson of World War I suggests otherwise.


Runciman’s Theory of why Democracies Succeed

The question of why some democracies fail and others succeed has generally been the preoccupation of those studying third-world countries, such as India and Pakistan. But the government shutdown has shown that this question should also concern anyone interested in the U.S., as the country’s political system seems more and more dysfunctional compared to Britain’s or Germany’s. The Economist recently reviewed two books by historians that take the longer view and try to explain democracies’ successes and failures. David Runciman’s The Confidence Trap: A History of Democracy in Crisis from World War I to the Present has an interesting theory, according to The Economist:

“Mr Runciman illustrates his thoughts with seven critical episodes: unforeseen war (1918), unexpected slump (1933), threats to post-war Europe (1947), possible annihilation in the Cuban missile crisis (1962), stagflation (1974), short-lived triumphalism (1989) and financial meltdown (2008).

Add those up, and you get a fair list of the challenges facing present-day democracies. So why do they repeat mistakes? Oddly, perhaps, for a historian, Mr Runciman suggests that ignoring the past is a democratic strength. Old problems recur, but never quite in the same form. Unlike autocracies, which are “fatalistic” and inflexible, democracies expect the future to be different. Counting on ceaseless change, in other words, helps democracy adapt and muddle through.”

As a historian, I am obviously a strong supporter of not ignoring the past. Had the Republicans looked back closely at 1996, they might have realized that shutting down the government was a bad idea. Either way, The Confidence Trap seems like an interesting book.


Is Janet Yellen the Most Powerful Woman in World History?

The Atlantic’s Matthew O’Brien has a bold claim: Janet Yellen, the soon-to-be Fed Chair, will be the most powerful woman ever. While Queen Victoria, Margaret Thatcher and Angela Merkel were or are very powerful, O’Brien argues, their power was limited by their countries’ borders. Yellen, on the other hand, will determine the economic fate of a country dominating the world economy, which gives her “more control over the global economy than any other living person once she’s confirmed as Fed Chair”.

The argument sounds compelling, but I think O’Brien overestimates the Fed’s power. Sure, the Fed could do some serious damage –  as it did with some bad decisions in 1929. But Neo-Keynesian thinkers have made a pretty compelling case that monetary policy itself can only go so far, and loses much of its impact when interest rates start nearing zero. The post-2008 crisis is a case in point. The Fed’s quantitative easing may have prevented further damage, but it was unable to create significant economic growth by itself. As long as fiscal policy doesn’t play along, Janet Yellen will by no means decide the fate of the world economy.

She will be very powerful, sure. But she won’t rule half the world – as Queen Victoria did. She arguably won’t even have as much influence as Angela Merkel, who holds the keys to Europe’s economic future.

Here’s the link: http://www.theatlantic.com/business/archive/2013/10/why-janet-yellen-would-be-the-most-powerful-woman-in-world-history/280423/


Should we call Assad a terrorist?

Authoritarian governments keen on attacking the opposition have found a new favorite swear word: terrorists! Egypt’s military rulers recently used the term to describe largely peaceful protesters, following in the linguistic footsteps of Gaddafi, Putin and many others. This trend shows how much our understanding of “terrorism” has changed over the centuries.

There is no universal definition of terrorism, but the U.N. General Assembly has repeatedly used the following: “criminal acts intended or calculated to provoke a state of terror in the general public, a group of persons or particular persons for political purposes.” This definition leaves open the possibility that terrorism can be perpetrated by an army or government. Indeed, terrorism has historically had little to do with bearded outlaws.

Terror entered political language during the French Revolution as a state-led political program: Robespierre’s revolutionary dictatorship used excessive violence to strike fear in opponents and fight the counterrevolutionary movement. “The attribute of popular government in a revolution is at one and the same time virtue and terror”, Robespierre famously said in 1794. “Terror without virtue is fatal; virtue without terror is impotent.”

The Bolsheviks adopted this thinking during the so-called Red Terror of the Russian civil war, 1917-1921, which saw thousands of counterrevolutionaries murdered. When Austrian socialist Karl Kautsky criticized the “bloody terrorism carried out by Socialist governments”, Leon Trotsky, at the time head of the Red Army, responded: “terror can be very efficient against a reactionary class which does not want to leave the scene of operations. Intimidation is a powerful weapon of policy, both internationally and internally.” The concept of Red Terror was resurrected under Mengistu Haile Mariam in Ethiopia – whose regime murdered hundreds of thousands of people in the 1970s and 1980s.

To be sure, terrorism wasn’t always just a state affair. Left-wing radicals used bombs and assassinations to fight democratic governments and monarchies from the 19th century onwards. But the term terror always referred to a practice rather than a group of people.

Then came 9/11 and the Bush administration’s “war on terror”, a slogan that really referred to a certain group of Islamist radicals. Soon the term terror in common usage no longer described the practice of intimidating through violence, and instead became a name for an organized group of non-state actors attacking states by killing civilians.

This gradual change in meaning has given repressive regimes a publicity advantage. The Egyptian government’s massacre of protesters, clearly intended to sow panic among oppositionists and discourage further demonstrations, fits the classical definition of terrorism. The same goes for Assad’s use of poison gas against civilians, which has limited military value but creates terror among opponents. And yet hardly anyone brands them “bloody terrorists”, as Kautsky would have done in his time. On the contrary: they are the ones who can accuse the opposition of terrorism. Ever since “terrorists” became a term for non-state actors, repressive governments no longer have to worry about being branded as such.

In global politics, wording matters. The Egyptian government’s description of oppositionists as terrorists seems to work very well as a propaganda tool, winning over Egyptians fearful of chaos and violence. In the same vein, our unwillingness to call the military rulers terrorists arguably weakens the opposition’s case. Saying a regime uses violence or repression will never have the same impact as saying it employs systematic terror.

Winning over the hearts and minds of media consumers is crucial for the repressive regimes in Egypt and Syria. Our misuse of the term terror makes it easier for them.


Why Measuring Productivity Differently Will Change the Way We Live

Can you imagine a world without 9-to-5 jobs? Adam Davidson suggests it’s not too far away. In the latest New York Times Magazine, the journalist urges us to change the way we measure productivity. Accountants and lawyers bill by the hour, encouraging slow work. Paying people by what they produce, rather than by how long it takes them to produce it, would more accurately reflect productivity in the modern economy.

What’s really interesting about Davidson’s argument is his historical narrative. He claims that measuring productivity by hour of work, popularized among accountants in the 1950s, is a leftover of the industrial age. This way of measuring made sense for assembly-line manufacturing, where time units had a fixed correlation to output. But since the 1960s, industrial production has steadily lost ground to services and the creative economy. Measuring the productivity of the latter has little to do with hours worked.

Davidson writes: “Measuring productivity is central to economic policy — it’s especially crucial in the decisions made by the Federal Reserve — but we are increasingly flying blind. It’s relatively easy to figure out if steel companies can make a ton of steel more efficiently than in the past (they can, by a lot), but we have no idea how to measure the financial value of ideas and the people who come up with them. “Compared with the mid-1900s, goods production is not as important a part of our economy, but we continue to devote about 90 percent of our statistical resources to measuring it,” says Barry Bosworth, a Brookings Institution economist who is a leading thinker on productivity in the service sector.”

Davidson doesn’t address the consequences of measuring creative work by its value rather than by time worked, but they would certainly revolutionize our economy. On a microeconomic level, it might mean the end of the 8-hour work day. Measuring the value of ideas would let us work until we have achieved results, not until the clock hits five. How about working 2 hours on Tuesday and 13 on Wednesday? What’s already a reality in some creative professions could become the norm.

The possible macroeconomic effects are just as intriguing. Countries still tend to measure the productivity of their workforce by how many hours people work in a day. For example, business advocates have long lambasted unions for trying to shorten work days. During the Euro Crisis, some have urged Spain to scrap the siesta, a lengthy lunch break, and stretch out work days to increase productivity. But what if a long break and shorter work days increase productivity in our modern service economy? A one-hour nap might make a good idea more likely than 20 hours of hard work. Perhaps the supposedly lazy Greeks are way ahead of the hard-working Americans.

The entire capitalist system revolves around the notion of productivity. If it turns out we’ve been measuring it incorrectly, sweeping changes are bound to follow.

Here’s a link to the article:
