A quid pro quo setting undermines the purpose of media as the guardian of democracy.

The links between politics and media have always been an object of much scrutiny. No wonder: the media should serve as the fourth pillar of democracy. In recent years, however, that role has been undermined by the polarization of politics and society alike, with traditional news outlets losing ground to obscure websites spreading, at best, biased news. Yet in some countries, a more imminent danger to traditional media comes not from the readers but from the advertisers.

Adam Szeidl and Ferenc Szucs bring evidence from Hungary showing that the right-wing parties associated with Viktor Orban influence the media via advertising placed by state-controlled firms. As soon as the right-wing government was formed in 2010, state-controlled companies redirected their advertising to outlets owned by Mr Simicska, a media mogul who happens to be Mr Orban's former roommate. A reverse channel exists as well: when a news website changed owners, its coverage of government scandals adjusted accordingly. Once the owner was linked to the government, coverage dropped rapidly.
The authors use a peculiar event in Hungarian politics to prove their case. In February 2015, Mr Orban and Mr Simicska unexpectedly parted ways. Since then, the outlets that had supported the government have become independent of, or even hostile to, Mr Orban, and advertising from state-controlled firms immediately plummeted to its level from before Mr Orban took power. The figure below illustrates this development, using private advertising as a benchmark; the horizontal axis shows the share of total advertising going to right-leaning media.

Using the same event, Szeidl and Szucs show that the media-coverage channel is active as well. Before the falling-out in February 2015, the outlets owned by Mr Simicska consistently underreported the government's scandals compared with independent media, though they were in line with other outlets tied to right-wing politics. Once the relationship between the two former roommates crumbled, however, Mr Simicska's outlets quickly converged to the pattern of the independent media. The following figure summarizes the change; the horizontal axis gives a measure of scandal coverage.

What does all this mean? The connection between the government and the media is no laughing matter. On a more positive note, transfers such as those channeled through advertising are costly and cumbersome. That suggests more direct routes are not yet available, which gives a glimmer of hope to Hungarian civil society: some checks and balances are still in place, hindering behavior that would be more efficient from the government's perspective. But it also explains the ferocity with which Mr Orban attacks Central European University. Mr Szeidl works at this very institution.

The universality principle scares the hell out of progressives, and it should.

It might have come as a major surprise to see libertarians fighting for a basic income guarantee (see our last post). Switching perspectives completely, SmallTalkEconomics now presents the rationale behind progressive opposition to such a policy.
The first and foremost objection to a basic income guarantee (BIG) is its fiscal feasibility. Many economists have produced more or less sophisticated calculations suggesting BIG to be either too expensive (Henderson 2015) or fiscally neutral (Garfinkel et al. 2003). Despite the lack of consensus, both estimates share the assumption of scrapping many existing policies that target specific groups such as children or the elderly. Yet these are the most popular policies of a welfare state and form the backbone of the social democratic vision of a just society. Eliminating them could hardly be perceived as leftist.
In their 2017 paper, Daniel Sage and Patrick Diamond evaluate whether BIG could become a turning point for European social democratic parties facing declining support. Abolishing policies based on deservingness and replacing them with the universality principle seems a highly unpopular move, and not without reason. Especially in countries with a limited welfare infrastructure, the logic of BIG falters: the funds it requires would simply do more good if focused on those in need. Rich, redistributive countries would also be less keen to increase targeted welfare spending if BIG proved insufficient. This represents a substantial risk for social democratic ideals.
Another reason for BIG's popularity is the claim that it can offset the distortions caused by technology. But BIG would not affect inequality; it would merely hand out a paycheck for technological unemployment. Although the unemployed would have more time and space to gain new qualifications, they would lack the means to do so unless the government provided the training, thus increasing overall spending. Moreover, BIG does not address the issue of tax revenue: would the extra costs be covered by a more progressive income tax, a wealth tax, or even a robot tax? Such questions remain unanswered.
Progressives are also not necessarily utopian. Acknowledging that BIG encourages extensive leisure, they are well aware that such leisure means unemployment, which is associated with poorer health and well-being. Hence BIG might lead to undesirable consequences, potentially drawing even more money from the public budget. Early experiments with unconditional income indeed showed that BIG decreased the recipients' working hours. The health and well-being consequences of unemployment might be attributed to social norms rather than to unemployment itself. However, social norms are difficult to change, and even enthusiastic progressives remain skeptical about such a high level of social engineering.

Having trouble finding a job? Blame the phantoms.

If you have ever spent time looking for a job online, you have most likely stumbled upon a listing for a position that had already been filled, and most people would agree that this is very annoying. In their recent paper, Bruno Decreuse and Arnaud Chéron call these job listings phantoms. It is not hard to see how phantoms directly create inefficiency in the labor market: job seekers who apply to phantom listings waste their time calling, and employers who leave their ads online even after filling the advertised position waste their time responding to job seekers and explaining that the position is no longer open. Yet many employers just leave their phantoms out there. The authors estimate that around 37% of job listings on Craigslist (a major job board in the US) advertise already filled vacancies.

There are several reasons why employers may leave phantom job listings online: keeping active ads is usually free on websites such as Craigslist, while deleting them takes time and is thus costly in itself. Another reason is that some firms and state-run agencies are legally obliged to publish job listings despite already knowing whom they will hire. Further, some employers may keep outdated listings as insurance until the new hire has actually started working. Identifying the presence of phantoms is not easy, but the authors found a way – the table below shows the average age (in days) of job listings in 23 major US cities. The maximum age of a posting is 30 days, after which Craigslist automatically deletes it. The authors argue that if outdated ads were destroyed by employers, the density of job listings by age would be decreasing and the quartiles of the distribution would be separated by an increasing number of days. As the table shows, however, this is not the case.
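To see why the quartile argument can reveal phantoms, here is a minimal simulation sketch (the numbers are made up for illustration; this is not the authors' estimation procedure):

```python
import random

random.seed(0)
MAX_AGE = 30  # Craigslist deletes listings after 30 days
N = 100_000

def quartiles(ages):
    s = sorted(ages)
    return s[len(s) // 4], s[len(s) // 2], s[3 * len(s) // 4]

# World A: employers delete ads as soon as the vacancy is filled (fill
# times roughly exponential), so the density of observed ad ages is
# decreasing and quartiles are separated by an increasing number of days.
deleted_world = [a for a in (random.expovariate(1 / 10) for _ in range(N))
                 if a < MAX_AGE]

# World B: every ad stays online the full 30 days whether filled or not,
# so observed ages are roughly uniform and the quartile gaps are equal.
phantom_world = [random.uniform(0, MAX_AGE) for _ in range(N)]

qa = quartiles(deleted_world)   # gaps widen: consistent with deletion
qb = quartiles(phantom_world)   # gaps equal: consistent with phantoms
```

The equally spaced quartiles the authors find in the Craigslist data look like World B, which is the fingerprint of phantom listings.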

The authors then develop an extended version of the canonical continuous-time equilibrium search model of unemployment by embedding a matching technology that permits the existence of phantom job listings, which allows them to analyze the consequences of this particular friction. They conclude that in the long run there exists a unique steady state, pinned down by two mechanisms working against each other: when firms expect the matching process to be very efficient, they have a strong incentive to post vacancies, which dilutes the proportion of phantom listings; but as those vacancies get filled, the phantom proportion grows again, matching efficiency falls, and the supply of vacancies reverses. The balance of these two forces determines the steady state.

Unconditional hand-outs make more sense than you think, even from the libertarian perspective.

The notion of a basic income guarantee (BIG) for every citizen has been discussed for some time. Although the idea is gaining momentum and is even being tested in some countries (Finland, the Netherlands, or Canada), it is often viewed as a utopian dream of leftist economists. However, it is not just the egalitarian side of BIG that appeals to many in the economics profession and beyond; its political economy also brings support from the right. Even Mike Munger of Duke University, a libertarian candidate for governor and a member of the Cato Institute, recently expressed his support for BIG.
As one might guess, the praise of unconditional income coming from classical liberal scholars rests on two pillars: individual incentives and market distortions. The current welfare system sometimes disincentivizes people from providing for themselves. For many government programs, people are eligible only if they earn less than a certain level of income. Crossing this threshold means losing the benefits entirely, so the effective marginal tax rate exceeds 100 percent, and there is no reason for people in such a situation to look for any incremental source of income.
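A back-of-envelope illustration of such a benefit cliff (all figures are hypothetical):

```python
# A benefit of $6,000 that is lost entirely once earned income
# crosses a $20,000 eligibility threshold.

def net_income(earnings, benefit=6_000, threshold=20_000):
    """Earnings plus the benefit, which vanishes above the threshold."""
    return earnings + (benefit if earnings <= threshold else 0)

below = net_income(20_000)   # still eligible: 20,000 + 6,000
above = net_income(20_500)   # benefit lost:   20,500 + 0
cliff = below - above        # earning $500 more leaves you $5,500 poorer
```

Earning an extra $500 costs this household $5,500 on net, an effective marginal tax rate of over 1,000 percent.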
One example of such a policy is disability benefits. Although working conditions are gradually improving in all sectors of the economy, more and more people are eligible for disability benefits, and the increase can be traced to regions with a former presence of heavy industry. The data show that disability benefits in fact serve as a transfer from prosperous regions to those negatively affected by advancing technology and trade. Without judging the necessity, form, or magnitude of these transfers, it is clear that policies which disincentivize work are among the worst ways to help lagging regions. BIG would solve this incentive problem and would thus be a viable substitute for disability benefits and related welfare programs trying to cope with such global dynamics.
The other perk of BIG is that it creates minimal distortions in the economy. Unconditional income is essentially a negative head tax, and although a head tax is not acceptable from a social point of view, it has been identified as the most efficient form of taxation: it hardly affects prices and hence keeps the information flow in the economy intact, causing virtually no market distortions. Thanks to BIG, it would also be possible to scrap policies such as the minimum wage or unemployment benefits. Young workers would be able to gain their first job experience regardless of whether their productivity falls short of the level associated with the minimum wage.
Does BIG have any downsides, any obstacles preventing its implementation? It surely does, but those will be addressed in the next article. Until then, hurray for the BIG!

Were you not born into a rich family? Never mind, your chances of becoming a billionaire have never been better.

Steven Kaplan and Joshua Rauh (both from the University of Chicago) conducted a rather atypical piece of academic research when they studied the characteristics of the richest Americans. Using the Forbes ranking, which lists the 400 wealthiest individuals in the US economy, the authors explored three simple questions: whether wealth is self-made or inherited; in what industries the firms of the rich operate and to what extent technology plays a role in their business; and whether the Forbes 400 members graduated from college.

To provide a bigger picture, they analysed four years with approximately ten-year gaps between them, starting with the 1982 issue and finishing with the 2011 one, and described the dynamics of the changes. The study reveals that nowadays the Forbes 400 are more likely to run their own business than one established by their ancestors: in 1982 only 40 percent of the Forbes 400 members had started their own business, compared to almost 70 percent in 2011. The effect of family wealth has also weakened within the group of self-made billionaires. While in 1982 roughly 60 percent of Forbes 400 members grew up in rich families, nowadays it is only around 20 percent. Equally interestingly, the share of Forbes 400 members from poor families has remained constant at around 20 percent. The figure below provides more details.

The data further indicate the increasing importance of education. In particular, the percentage of college graduates among the 400 richest businesspeople has grown by 10 percentage points, to 87 percent, and the share of dropouts has increased as well; naturally, the number of those who did not attend college at all has decreased. As for the industries of the businesses, several trends are worth noting. Over the last 20 years, retail, technology-based industries, and financial firms (hedge funds, private equity) have become more represented in the Forbes 400, whereas real estate, energy, and media have experienced a decline. Overall, the structure of the Forbes 400 has changed dramatically since 1982, and the ranks of the wealthiest seem more open to those with no business to inherit.

Towards a more informed view on what’s hidden under the veil of aggregated data

The current level of economic well-being owes much to increased specialization in the economy, enabled by international trade and the integration of regional and national markets. Although international trade is not a zero-sum game, even the most vocal proponents of openness admit that some businesses find themselves on the losing side of the bargain. Yet we know surprisingly little about the adverse effects of exposure to international trade, especially at the micro level. David Autor, David Dorn, and Gordon Hanson took the first step into this uncharted territory and provide grounds for a more informed discussion of international trade.

Their great contribution stems from using regional instead of national data and from a careful use of econometric methods. Commuting zones in the US serve as a smart proxy for local labor markets: labor mobility and spillovers from dynamic regions are often unable to compensate for losses caused by increased competition, and disaggregating the data to the local labor market level enables a closer look at this issue. To avoid endogeneity, they use data on other advanced countries' imports from China. The original US data could be endogenous, since import intensity is driven not only by supply but by demand as well. By using the import data of similar countries (actually a composite index of several), one can isolate the supply effect and thus avoid the endogeneity trap.
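The instrumenting logic can be sketched with synthetic data – a toy two-stage least squares, not the authors' actual specification or dataset (every number below is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000  # hypothetical "commuting zones"

supply = rng.normal(size=n)   # supply-driven growth of Chinese exports
demand = rng.normal(size=n)   # local US demand shocks

# US import penetration responds to both forces; other rich countries'
# imports from China respond only to the supply side.
imports_us = supply + demand + rng.normal(scale=0.5, size=n)
imports_other = supply + rng.normal(scale=0.5, size=n)

# True causal effect of import exposure on manufacturing employment is -1,
# but demand shocks raise imports and employment at once, biasing OLS.
employment = -1.0 * imports_us + 2.0 * demand + rng.normal(scale=0.5, size=n)

def slope(y, x):
    """OLS slope of y on x (with a constant)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta_ols = slope(employment, imports_us)   # badly biased toward zero

# Two-stage least squares: project the endogenous regressor on the
# instrument, then regress the outcome on the fitted values.
b1 = slope(imports_us, imports_other)
fitted = imports_us.mean() + b1 * (imports_other - imports_other.mean())
beta_iv = slope(employment, fitted)        # recovers roughly -1
```

The instrument shares only the supply component with US imports, so the second stage isolates the causal effect that OLS misses.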

The results show that Chinese import competition negatively affects labor markets. Even a brief look at the data reveals a clear negative correlation between manufacturing employment and Chinese import penetration in the US (see figure below). More advanced analysis shows that import penetration indeed reduces the number of employees working in manufacturing, but it also shrinks the total labor force (as many people prematurely retire or apply for disability benefits), pushes down wages in non-manufacturing, and increases government transfer receipts. Trade exposure thus affects the whole labor force: manufacturing through employment and non-manufacturing through wages. A mechanism compensating for trade-induced losses is already in place – unemployment, retirement, disability, and other welfare benefits serve as insurance against trade shocks. Unfortunately, judging by regions formerly known for booming manufacturing, this mechanism does not offset the contemporary economic dynamics.

Although trade with China most likely benefits the US economy in aggregate, some regions lose out in the short term. This could partly explain the widespread disenchantment with new trade agreements all around the developed world (be it TPP or TTIP). For those working in the service sector in major cities, however, it is tempting to focus only on the upside of international trade. The newly acquired insights should not serve as an argument against openness to trade; instead, they should make the discussion more informed and make us all think about how to kick-start the regions lagging behind.

Do you think that’s absurd? Keep reading.

Tax systems of most developed countries are astonishingly complex. Depending on your country, factors such as your income, consumption, wealth, number of children, charitable contributions, mortgage payments, health insurance expenditures, or whether you donate blood may all influence your tax liability, i.e. the total amount of tax due each year. The ultimate aim of designing a tax system is fairness, although the concept is rather subjective, and there are multiple plausible yet very different views on what is fair. One view (very rare, luckily) is that a head tax, under which everybody simply pays the same amount, would be the fairest.

In practice, two main principles are used to argue which taxes are considered fair – the ability-to-pay principle and the benefits principle. While income tax or wealth tax are prominent examples of taxes we deem fair based on the ability-to-pay principle (you pay as much as your financial situation allows), consumption tax or highway tolls are typical taxes based on the benefits principle (you pay as much as you use a certain good or public service). The framework for optimal taxation which remains a centerpiece of modern public finance was laid out by James A. Mirrlees (of the Mirrlees Review) and William Vickrey, for which they jointly received a Nobel Prize in 1996.

As argued by Nicholas Gregory Mankiw and Matthew Weinzierl (both from Harvard), under this theory a surprisingly suitable choice would be taxing people's height – more precisely, providing a tax credit to short people and imposing a surcharge on tall people. Before you discard the idea as nonsense and stop reading, think about the reasons not to tax height. In short, the optimal taxation theory as it stands today defines an ideal characteristic on which to base a tax as one that is:

(1) exogenous, meaning that the tax does not distort incentives, as there is no way to change this characteristic,

(2) easily observable, meaning that it is easy to see what the value of this characteristic is for every person,

(3) positively correlated with ability, so that the ability-to-pay principle is satisfied.

Income, currently the most important characteristic for tax purposes in most countries, does not do too well on (1), because of deadweight loss, or on (2), as documented by the relatively high shares of the shadow economy throughout the world. It does pretty well on (3); in fact, it is probably the best indicator of ability we have at this time.

But how about height? While the first two requirements are clearly met, the third one might raise questions. However, previous research (see, for example, here, here or here, and the figure below) convincingly shows that taller people earn more money – on average between 1 and 2 percent per additional inch of height – and, assuming a correlation between ability and income, a potential height tax would also be correlated with ability. From here, the authors rigorously show that current theory suggests we should indeed impose a height tax and, moreover, that the optimal level of such a tax is substantial. The fact that in practice we do not impose such a tax, and that most people find it absurd, suggests that current optimal taxation theory must in some way fail to capture our intuitive notions of distributive justice.
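To get a feel for what a height tax could look like, here is a deliberately simplistic sketch. Every number below is hypothetical, and the levy is a linear rule of our own invention; the authors derive their optimal schedule quite differently:

```python
PREMIUM_PER_INCH = 0.015   # midpoint of the 1-2% range cited above
MEAN_HEIGHT = 69           # inches, roughly the US adult male average
MEAN_INCOME = 50_000       # hypothetical average income

def expected_income(height):
    """Income predicted from height alone, relative to the average person."""
    return MEAN_INCOME * (1 + PREMIUM_PER_INCH * (height - MEAN_HEIGHT))

def height_tax(height, rate=0.5):
    """Hypothetical levy taxing away half of the height-predicted premium:
    a surcharge for the tall, a credit for the short."""
    return rate * (expected_income(height) - MEAN_INCOME)

tall = height_tax(73)    # a 6'1" person owes a surcharge of about $1,500
short = height_tax(65)   # a 5'5" person receives a credit of about $1,500
```

Because height is fixed, no one can work less (or shrink) to dodge the levy, which is precisely requirement (1) above.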

And preferably one that doesn’t leave you

Back in the early 1960s, David Gale and Lloyd Shapley constructed a matching algorithm which has become one of the crucial tools of modern market design. Using the examples of college admissions and the stability of marriage, they presented a method by which a market where money cannot serve as an allocation mechanism may reach a stable and Pareto-optimal equilibrium. Other examples of such markets include organ donation, allocating children to schools, or assigning dorm rooms to college students.

Consider the following example. There are women and men taking dancing lessons, and before each lesson begins they need to form dancing couples. Naturally, all participants will think about and compare the potential partners; they may even write down a list based on their preferences. As the authors prove mathematically, employing the Gale-Shapley (also called "deferred-acceptance") procedure ensures that the solution to the problem, i.e. the couples that have formed, will be stable. By stable we mean an allocation in which no two people of opposite sex would both rather have each other than their current partners, so no one will switch partners as long as preferences do not change. After explaining the algorithm and its characteristics, the authors examine the optimality question, which is a more complex task and depends on the particular set-up of the problem.

The figure below shows an example of a stable outcome of the Gale-Shapley procedure based on the preferences of all 8 participants. The numbers for each pair represent the ranking of the counterpart among the four potential mates. For example, the first cell – (1, 3) – tells us that A would be the first choice for alpha and alpha would be the third choice for A. Running the procedure yields the stable pairs, whose preferences for each other are circled in the table. An interesting aspect of this particular outcome is that no participant was matched with his or her first choice. Nevertheless, stability ensures that no participant could improve his or her situation without leaving at least one other participant worse off. The results of this paper have had an enormously important impact on the subsequent development of market design theory. Lloyd Shapley received the Nobel Prize in Economics in 2012 for this work and passed away in March this year; David Gale, who passed away in 2008, did not receive the prize, as it can only be awarded to living scientists.
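For the curious, the deferred-acceptance procedure itself fits in a few lines of Python. The preference lists below are hypothetical, not the ones in the figure:

```python
from collections import deque

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley: proposers propose in preference order; each receiver
    tentatively holds the best offer so far and rejects the rest."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                      # receiver -> proposer
    free = deque(proposer_prefs)
    while free:
        p = free.popleft()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p            # first offer: hold it tentatively
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])   # r trades up; the old partner is free
            engaged[r] = p
        else:
            free.append(p)            # rejected; p will try the next choice
    return {p: r for r, p in engaged.items()}

# Hypothetical preference lists for three couples.
men = {"A": ["alpha", "beta", "gamma"],
       "B": ["beta", "alpha", "gamma"],
       "C": ["alpha", "beta", "gamma"]}
women = {"alpha": ["B", "A", "C"],
         "beta": ["A", "B", "C"],
         "gamma": ["A", "B", "C"]}
matching = deferred_acceptance(men, women)
```

The "deferred" part is that engagements stay tentative until the last proposal is made, which is what guarantees stability.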

Reference: Gale, D., & Shapley, L. S. (1962). College admissions and the stability of marriage. The American Mathematical Monthly, 69(1), 9-15. Available here.

Recently, social media have become the primary source of information for many people and thus play an increasingly important role in shaping opinions, especially for the younger generation. The problem is that on social media, the opinions you get exposed to become less and less diverse, a result of both automatic filtering algorithms (whose goal is to generate as many shares as possible) and the fact that you are more likely to follow people with similar views (who are, in turn, likely to share information biased in favor of their opinion). An example is your Facebook feed, which is personalized based on your past clicks and likes, making you less likely to "see the bigger picture". This phenomenon is called a filter bubble, and it concerns both search engine results and social media. However, as some argue (here or here), social media are the more powerful vehicle for filter bubbles, and this is a problem. Others think that when it comes to the really important decisions, it might not be that bad just yet, and there are even guidelines on how to get out of your filter bubble.

In a recent working paper, Shane Greenstein, Yuan Gu, and Feng Zhu (all from Harvard Business School) analyzed whether Wikipedia, and in particular its articles about US politics, is also affected by filter bubbles and thus becomes more biased over time. They chose politics because it is a good example of so-called contested knowledge, which can be loosely defined as knowledge that answers questions with no single "right answer". Relying on metrics developed in Gentzkow and Shapiro's 2010 Econometrica article, the authors measure empirically whether selected Wikipedia articles become more or less segregated (i.e. slanted towards a certain view) over time.

Their results, somewhat optimistically, show that Wikipedia contributors increasingly offer content to those with different points of view, which reinforces unsegregated conversations on Wikipedia over time. Interestingly enough, the authors additionally estimate that this convergence in slant takes one year longer on average for Republicans than for Democrats. The authors stress the importance of being able to remove previously added material (as on wiki-style pages) or of aggregating contributions (as on Yelp or Rotten Tomatoes), in contrast to the social media style, where additional material just piles up on top of what is already there.

Collective intelligence, of which Wikipedia is perhaps the most astounding source, thus seems to cope fairly well with the filter bubble problem, at least for now. While we can, let’s all try to use that to our advantage and rely more on objective sources when forming opinions, challenge ideas shared by people we follow on social media, verify facts using multiple sources and most importantly, keep an open mind. And share this article with our filter bubbles.

Unfortunately, we have not yet arrived at a time when people of all colors have the same opportunities. Numerous studies have shown that in the US, being African American is strongly correlated with earning less money for the same work, having worse access to education, being more likely to be unemployed, and so on. Nonetheless, there are areas where one would not expect racism – such as responses to CV submissions that include no picture of the applicant. However, as Marianne Bertrand and Sendhil Mullainathan show in their field experiment, even your name can affect your chances.

The authors started by choosing typically white-sounding and typically African American-sounding names, such as Emily Walsh and Jamal Jones, and sent out resumes randomly in response to help-wanted ads in Chicago and Boston. In total, they responded to over 1,300 job ads, sending out more than 5,000 resumes. They randomly assigned previous work experience to these fake resumes and responded to a variety of jobs to obtain a sample as good as random.

The authors find large differences in callback rates. While applicants with a white-sounding name need to apply to, on average, 10 positions to receive one callback, applicants with an African American-sounding name need 15. Moreover, the researchers also recorded and analyzed the applicants' addresses and the perceived quality of the resumes (in terms of previous work experience relevant to the job, education, etc.). The results suggest that living in a wealthier neighborhood helps significantly, and that a high-quality resume helps more when you have a white-sounding name.
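The headline gap is far too large to be statistical noise, as a rough two-proportion test illustrates (the group sizes below are assumptions for illustration, not the paper's exact counts):

```python
from math import sqrt, erf

# Assume ~2,500 resumes per group with the 1-in-10 vs 1-in-15 callback rates.
n_white, n_black = 2_500, 2_500
calls_white = n_white // 10      # ~10% callback rate
calls_black = n_black // 15      # ~6.7% callback rate

p1, p2 = calls_white / n_white, calls_black / n_black
pooled = (calls_white + calls_black) / (n_white + n_black)
se = sqrt(pooled * (1 - pooled) * (1 / n_white + 1 / n_black))
z = (p1 - p2) / se               # standardized gap between the two rates

# One-sided p-value from the normal CDF.
p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
```

With these numbers the z-statistic exceeds 4, so a gap this size would essentially never arise by chance if both groups faced the same callback probability.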

Reference: Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. The American Economic Review, 94(4), 991-1013. Available here. A freely accessible working paper version is available here.