A quid pro quo setting undermines the purpose of media as the guardian of democracy.

The links between politics and media have always been an object of much scrutiny. No wonder, considering the media should serve as the fourth pillar of democracy. However, in recent years, the media’s role has been undermined by the polarization of politics and society alike. Traditional news outlets are losing ground to obscure websites spreading, at best, biased news. Yet in some countries, a more imminent danger to the traditional media comes not from the customers but from the advertisers.

Adam Szeidl and Ferenc Szucs have brought evidence from Hungary showing that the right-wing parties associated with Viktor Orban influence the media through advertising placed by state-controlled firms. As soon as the right-wing government was formed in 2010, state-controlled companies redirected their advertising to the media owned by Mr Simicska, a media mogul who happens to be Mr Orban’s former roommate. A reverse channel exists as well: when a news website changed its owner, its coverage of government scandals adjusted accordingly. With an owner linked to the government, the coverage dropped rapidly.
The authors use a peculiar event in Hungarian politics to prove their case. In February 2015, Mr Orban and Mr Simicska unexpectedly parted ways. Since then, the media outlets that had supported the government have become independent or even hostile to Mr Orban. Strikingly, advertising from state-controlled firms immediately plummeted to its level before Mr Orban took power. The figure below illustrates this development, with private advertising used as a benchmark. The horizontal axis describes the share of total advertising placed in right-wing-leaning media.

Using the same event, Szeidl and Szucs show that the media-coverage channel is active as well. Before the falling-out in February 2015, the media owned by Mr Simicska consistently underreported the government’s scandals compared to independent outlets, but were in line with other media tied to right-wing politics. Once the relationship between the two former roommates crumbled, however, Mr Simicska’s media quickly drifted toward the pattern of the independent media. The following figure summarizes the change; the horizontal axis gives a measure of scandal coverage.

What does all this mean? The connection between the government and the media is no laughing matter. On a more positive note, transfers such as those made through advertising are costly and cumbersome. That suggests more direct routes are not yet available, which gives a glimmer of hope to Hungarian civil society: there are still some checks and balances in place hindering behavior that would be more efficient from the government’s perspective. But it also explains the ferocity with which Mr Orban attacks Central European University. Mr Szeidl works at this very institution.

The universality principle scares the hell out of progressives, and it should.

It might have come as a major surprise to see libertarians fighting for a basic income guarantee (see our last post). Switching perspectives completely, SmallTalkEconomics now presents the rationale for why progressives oppose such a policy.
The first and foremost objection to a basic income guarantee (BIG) is its fiscal feasibility. Many economists have done more or less sophisticated calculations showing BIG to be either too expensive (Henderson 2015) or fiscally neutral (Garfinkel et al. 2003). Despite the lack of consensus, both estimates share the assumption of scrapping many existing policies that target specific groups such as children or the elderly. Yet these are the most popular policies of a welfare state and form the backbone of the social democratic vision of a just society. Eliminating them could hardly be perceived as leftist.
Daniel Sage and Patrick Diamond, in their 2017 paper, evaluate whether BIG could become a turning point in the declining support for European social democratic parties. Abolishing policies based on deservingness and replacing them with the universality principle seems to be a highly unpopular move, and not without reason. Especially in countries with limited welfare infrastructure, the logic of BIG falters: the funds necessary for BIG would simply do more good if they were focused on those in need. Rich and redistributive countries would also be less keen to increase targeted welfare spending if BIG proved insufficient. This represents a substantial risk to social democratic ideals.
Another reason for BIG’s popularity is the claim that it can outweigh the distortions caused by technology. But BIG would not affect inequality; it would merely hand out a paycheck for technological unemployment. Although the unemployed would have more time and space to gain new qualifications, they would not have the means to do so unless the government provided such training, thus increasing overall spending. Moreover, BIG does not address the issue of tax revenue. Would the extra costs be covered by a more progressive income tax, a wealth tax, or even a robot tax? Such questions remain unanswered.
Progressives are also not necessarily utopian. While acknowledging that BIG encourages extensive leisure, they are well aware that such leisure means unemployment, which is associated with poorer health and well-being. Hence BIG might lead to undesirable consequences, potentially drawing even more money from the public budget. Early experiments with unconditional income indeed showed that BIG decreased the recipients’ working hours. The consequences of unemployment for health and well-being might be attributed to social norms rather than to unemployment itself. However, social norms are difficult to change, and even enthusiastic progressives remain skeptical about such a high level of social engineering.

Unconditional hand-outs make more sense than you think, even from the libertarian perspective.

The notion of a basic income guarantee (BIG) for every citizen has been discussed for some time. Although the idea is gaining momentum and is even being tested in some countries (Finland, the Netherlands, and Canada), it is often viewed as a utopian dream of leftist economists. However, it is not just the egalitarian side of BIG that appeals to many in the economics profession and beyond; its political economy brings support from the right as well. Even Mike Munger of Duke University, a libertarian candidate for governor and member of the Cato Institute, recently expressed his support for BIG.
As one might guess, the praise for unconditional income coming from classical liberal scholars rests on two pillars: individual incentives and market distortions. The current welfare system sometimes disincentivizes people from providing for themselves. For many government programs, people are eligible only if they earn less than a certain level of income. Crossing this threshold can mean facing an effective marginal tax rate larger than 100 percent: the benefits lost exceed the extra income earned. There is no reason for people in such a situation to look for any incremental source of income.
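The benefit-cliff arithmetic is easy to sketch. The numbers below are purely hypothetical (a $6,000 means-tested benefit withdrawn entirely above $20,000 of earnings), but they show how earning one extra dollar can leave a worker strictly poorer:

```python
# Hypothetical illustration: a $6,000 means-tested benefit that is
# withdrawn entirely once earned income exceeds $20,000.
BENEFIT = 6_000
THRESHOLD = 20_000

def disposable_income(earned: float) -> float:
    """Earned income plus the benefit while still eligible."""
    return earned + (BENEFIT if earned <= THRESHOLD else 0)

def effective_marginal_tax_rate(earned: float, raise_: float = 1.0) -> float:
    """Share of an extra dollar of earnings lost to taxes/benefit withdrawal."""
    gain = disposable_income(earned + raise_) - disposable_income(earned)
    return 1.0 - gain / raise_

# Crossing the threshold makes the worker worse off:
print(disposable_income(20_000))            # 26000
print(disposable_income(20_001))            # 20001
print(effective_marginal_tax_rate(20_000))  # 6000.0, i.e. far above 100%
```

At the cliff itself the effective rate is absurdly high because a single extra dollar forfeits the entire benefit; any rate above 100 percent is enough to kill the incentive to work more.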
One example of such a policy is disability benefits. Although working conditions are gradually improving in all sectors of the economy, more and more people are eligible for disability benefits. This increase can be traced to regions with a former presence of heavy industry. The data show that disability benefits in fact serve as a transfer from prosperous regions to those negatively affected by advancing technology and trade. Without judging the necessity, form, or magnitude of these transfers, it is clear that policies which disincentivize work are among the worst ways to help lagging regions. BIG would solve this incentive problem and would thus be a viable substitute for disability benefits and related welfare programs trying to cope with such global dynamics.
The other perk of BIG is that it creates minimal distortions in the economy. Unconditional income is essentially a negative head tax, and although a head tax is not acceptable from a social point of view, it has been identified as the most efficient form of taxation: it hardly affects prices and hence keeps the flow of information in the economy intact, causing virtually no market distortions. Thanks to BIG, it would also be possible to scrap policies such as the minimum wage or unemployment benefits. Young workers would be able to gain their first job experience regardless of whether their productivity matches the level implied by the minimum wage.
Does BIG have any downsides, any obstacles preventing its implementation? It surely does, but those will be addressed in the next article. Until then, hurray for the BIG!

Towards a more informed view on what’s hidden under the veil of aggregated data

The current level of economic well-being owes much to increased specialization in the economy, enabled by international trade and the integration of regional and national markets. Although international trade is not a zero-sum game, even the most vocal proponents of openness to trade admit that some businesses find themselves on the losing side of the bargain. Yet we know surprisingly little about the adverse effects of exposure to international trade, especially at the micro level. David Autor, David Dorn, and Gordon Hanson made the first step into this uncharted territory and provide grounds for a more informed discussion of international trade.

Their great contribution stems from using regional instead of national data and from a judicious use of econometric methods. Commuting zones in the US serve as a smart proxy for local labor markets: labor mobility and spillovers from dynamic regions are often unable to compensate for losses caused by increased competition, and disaggregating the data to the local labor market level enables a closer look at this issue. To avoid endogeneity, the authors use data on other advanced countries’ imports from China. The original US data could be endogenous, since import intensity is driven not only by supply but by demand as well. By instead using the import data of similar countries (in fact, a composite of several), one can isolate the supply effect and thus avoid the endogeneity trap.
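The instrumenting idea can be illustrated on simulated data (a minimal sketch, not the authors' actual estimation; all numbers are made up). Here US import exposure is driven partly by a Chinese supply shock, which also shows up in other countries' imports, and partly by US demand, which directly affects employment. Ordinary least squares mixes the two; the instrumental-variable estimator recovers the supply-driven effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)  # China supply shock, proxied by other countries' imports
d = rng.normal(size=n)  # US demand shock (unobserved confounder)
x = z + d + rng.normal(size=n)               # US import exposure
y = -1.0 * x + 2.0 * d + rng.normal(size=n)  # employment; true effect of x is -1

# OLS is biased: demand raises both imports and employment.
beta_ols = np.cov(x, y)[0, 1] / np.var(x)
# IV (Wald) estimator: use only the variation in x driven by z.
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(round(beta_ols, 2))  # well above -1: the effect looks too mild
print(round(beta_iv, 2))   # close to the true -1
```

The simulation makes the endogeneity trap concrete: the naive estimate understates the damage from import competition precisely because booming demand pulls imports and employment up together.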

The results show that Chinese import competition negatively affects labor markets. Even a brief look at the data reveals a clear negative correlation between manufacturing employment and Chinese import penetration in the US (see figure below). More advanced analysis shows that import penetration indeed reduces the number of employees working in manufacturing, but it also shrinks the total labor force (as many people retire prematurely or apply for disability benefits), pushes down wages in non-manufacturing, and increases government transfer receipts. Trade exposure thus affects the whole labor force: manufacturing through employment and non-manufacturing through wages. There is also already a mechanism in place compensating for losses caused by trade: unemployment, retirement, disability, and other welfare benefits serve as insurance against trade shocks. Unfortunately, in regions formerly known for booming manufacturing, this mechanism does not make up for the contemporary economic dynamics.

Although trade with China most likely benefits the US economy in aggregate, there are regions losing out in the short term. This could partly explain the widespread disenchantment with new trade agreements across the developed world (be it TPP or TTIP). For those working in the service sector in major cities, however, it is tempting to focus only on the upside of international trade. The newly acquired insights should not serve as an argument against openness to trade; instead, they should make the discussion more informed and make us all think about how to kick-start the regions lagging behind.

Many economic crises start on Wall Street, and it is often not clear whether fundamental problems caused the financial markets to crumble or whether it was the bankers who brought down the whole economy. The latest crisis made the proponents of the latter view more vocal, but there are still moments in history suggesting that the mistakes of the financial markets can be easily contained; just think of the dot-com bubble. When it burst in 2001, stock markets plunged but the real economy only quivered. Learning from that, regulators want to prevent any distress in the financial markets, or at least limit the damage to the real economy.

Probably the most vulnerable link between the financial sector and the rest of the economy are banks. When banks suffer, the rest of the economy gets into trouble as well. Consequently, regulators want to protect banks from any sort of trouble, hoping to isolate the real economy from shocks caused by the financial markets. However, with all the capital regulation, risk-weighted assets, and so on, they focus predominantly on the asset side of banks’ balance sheets. What if there is another way?

Banks are vulnerable not because of their assets but because of their liability structure. Dependence on deposits makes them prone to runs even when they are structurally healthy and only in need of liquidity. John Cochrane suggests a solution: what if banks were financed only by equity? What if you did not deposit your salary in a bank but instead immediately bought the bank’s shares? The idea sounds outrageous at first, but Cochrane argues that not much would actually change. We would still be able to withdraw money at ATMs or pay for a sandwich with a credit card. The only difference is that instead of reducing the balance of a deposit account, we would sell a portion of our bank shares: just a technical change from the consumer’s perspective. And luckily, today’s technology would allow it.
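From the consumer's side, the mechanics really are just bookkeeping. The toy class below (our own illustration, not Cochrane's specification) treats an account balance as shares times the market price; a payment sells just enough shares, and bad news moves the price rather than triggering a scramble for cash:

```python
# Toy sketch of an "equity account": the balance is shares x price,
# and paying for something sells just enough shares. Numbers are made up.
class EquityAccount:
    def __init__(self, shares: float, share_price: float):
        self.shares = shares
        self.share_price = share_price  # set by the market, not fixed at 1

    @property
    def balance(self) -> float:
        return self.shares * self.share_price

    def pay(self, amount: float) -> None:
        """Spend by selling shares; feels like an ordinary card payment."""
        self.shares -= amount / self.share_price

account = EquityAccount(shares=100.0, share_price=10.0)
account.pay(5.0)                  # buy a sandwich
print(account.balance)            # 995.0
account.share_price = 9.8         # bad news about the bank's assets:
print(round(account.balance, 1))  # the balance dips, but nothing to run on
```

The design choice is the whole point: losses show up as a lower share price borne pro rata by everyone, instead of a fixed deposit claim that rewards whoever withdraws first.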

When depositors actually own shares, they have no reason to run. Even if they are concerned about the bank’s actions, there is nothing to sell out of indefinitely: losses are absorbed by the share price while the bank’s assets remain safe and stable. Banks would thus be safe from runs, and they, along with their shareholders, would only have to endure mild swings in the value of their assets. The discussion is still in its infancy and there are many potential pitfalls along the way, but thinking about the structural problem in this way can help us rethink the status quo and stop kicking the can down the road.

The debate about the effects of migration on the economy and society in Central, Eastern, and Southeastern Europe (CESEE) is uninformed and emotional, but it also misses a phenomenon that has influenced the CESEE region economically more than either the civil war in Syria or the dire socioeconomic conditions in the Maghreb. A recent IMF study shows that the post-communist countries have been affected by emigration more than previously thought, and that the associated brain drain has had significant negative effects on their economies.

The data clearly show that the emigrants are more educated than the average population, and thus may be taking the economic potential of their home countries with them. The researchers estimate that a 1 percentage point increase in the immigrants’ share of the native population is associated with 2 percent higher GDP per capita growth. On the other hand, in the countries of origin the adverse effect of skilled-labor emigration translates into lower labor productivity. That in turn leaves the convergence gap between Western Europe and the CESEE region 7 percentage points wider than it would have been without the massive emigration.

Standard economic theory suggests that such migration pushes up the wages of skilled labor in CESEE, motivating unskilled workers to move up the labor hierarchy, which should result in higher GDP per capita. Although wages were indeed pushed up by the migration, output per capita did not grow as a result. The authors therefore turn to endogenous growth models that put more emphasis on the agglomeration effects of skilled labor: its abundance only increases its returns. Clearly, as educated people leave the economic hubs of the post-communist countries, they hardly make those hubs more productive.

It would be foolish, though, to consider only the economic effects. The exodus of educated people surely also influences domestic institutions and the quality of public life in general. Although this effect is difficult to quantify, it should be taken into account both in the analysis and in any plan to counter the issue. The authors suggest improving the rule of law and other institutions in order to break the vicious circle. Moreover, they cite Ireland’s success in engaging with its diaspora all over the world and bringing back workers who obtained their qualifications elsewhere.

Such a paper is a refreshing read in a world where only the issues of unskilled immigration are considered. True, the effects seem negative for the CESEE region, which can hardly improve the case for the free movement of labor. However, since both the migrants themselves and Europe as a whole are better off thanks to the migration, the report ends on a cheerful note after all.

Following Benford’s law, which describes the frequency distribution of leading digits in datasets and is often used for detecting accounting fraud, data scientists have developed another method of uncovering fraud through data analysis. In a 2016 paper, Dmitry Kobak, Sergey Shpilkin, and Maxim S. Pshenichnikov show that the results of Russian elections contain some very disturbing artefacts indicating fraudulent behavior at polling stations.

The authors started with a simple assumption: people, when making up numbers, tend to go with round integers. So if polling stations report made-up numbers instead of the real results, there will be a disproportionate count of polling stations reporting round and neat percentages. This natural inclination toward round numbers can only be intensified by thresholds that the central authority considers a “success”.

To test whether some polling stations really do make up the reported numbers, the authors used a Monte Carlo simulation to estimate the likelihood of every possible percentage result. This way, they were able to construct 99.99% confidence intervals for the number of polling stations reporting round results. Their careful analysis shows that there are indeed improbable spikes in the empirical distributions which can hardly be explained in any way other than fraud.
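The logic is easy to reproduce on simulated data (a simplified sketch of the idea, not the paper's exact procedure; the real analysis uses actual polling-station returns). Honest stations report a binomial draw; Monte Carlo replications give a confidence band for how many of them should land on a round percentage by pure chance. A fabricated dataset, where some stations simply report a neat number, shoots far above that band:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, voters = 5_000, 1_000
true_support = rng.uniform(0.4, 0.8, size=n_stations)  # varies by station

ROUND = list(range(0, 101, 5))  # "neat" percentages: 50, 55, 60, ...

def round_count(rng) -> int:
    """Stations whose honest binomial result happens to be a round percentage."""
    votes = rng.binomial(voters, true_support)
    pct = np.round(100 * votes / voters).astype(int)
    return int(np.isin(pct, ROUND).sum())

# Monte Carlo: distribution of the round-number count under honest reporting.
sims = np.array([round_count(rng) for _ in range(500)])
lo, hi = np.quantile(sims, [0.0001, 0.9999])

# Fabricated data: 10% of stations simply report a neat 65% result.
votes = rng.binomial(voters, true_support)
votes[: n_stations // 10] = 650
pct = np.round(100 * votes / voters).astype(int)
observed = int(np.isin(pct, ROUND).sum())
print(observed > hi)  # True: far more round results than chance allows
```

This is exactly the shape of the test in the paper: an observed count of round results lying outside the simulated 99.99% band is the "improbable spike" that honest counting cannot explain.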

Interestingly, this phenomenon has been observable since the presidential elections of 2004, when Vladimir Putin was seeking his first reelection. The analysis works with data from Russian elections between 2000 and 2012, and it is only the early elections of 2000 and 2003 that do not suggest manipulation of the vote count. The paper does not attempt to answer what drove this turning point, but it is symptomatic that the regions showing persistent anomalies are largely located in the North Caucasian Federal District (e.g. Chechnya).

As a control, the same method is applied to data from German, Polish, and Spanish elections; none of these countries show suspicious spikes. Could a group of statisticians serve as a watchdog for national elections? It certainly seems so! Fraudulent governments can randomize their vote-count manipulation or simply resort to other dishonest methods of influencing democratic elections, but let us hope that honest data scientists will always be a step ahead in uncovering dirty practices.