
Rise of the Machines: The Effects of Labor-Saving Innovations on Jobs and Wages

Andy Feng & Georg Graetz

August 17, 2013

Preliminary, comments welcome. Please do not cite without permission.

Abstract

We study the labor market effects of increased automation. We build a model in which firms optimally design machines, train workers, and assign these factors to tasks. Consistent with findings from computer science and robotics, the model features tasks which are difficult from an engineering perspective but easy for humans to carry out, owing to innate capacities for complex functions like vision, movement, and communication. In equilibrium, firms assign low-skill workers to such tasks. High-skill workers have a comparative advantage in tasks which require much training and are difficult to automate. Workers in the middle of the skill distribution perform tasks of intermediate difficulty on both dimensions. When the cost of designing machines falls, firms adopt machines predominantly in tasks that were previously performed by middle-skill workers. Occupations at both the bottom and the top of the wage distribution experience employment gains. Wage inequality increases at the top but decreases at the bottom. As design costs fall much further, only the most skilled workers enjoy rising skill premiums, and an increasing fraction of the labor force is employed in jobs that require little or no training. The model's implications are consistent with recent evidence of job polarization and a hollowing-out of the wage distribution. In addition, we provide novel evidence on trends in occupational training requirements that is in line with the model's predictions.

We thank Francesco Caselli, Guy Michaels, John Van Reenen, Alan Manning, Michael Boehm, Johannes Boehm, and Claudia Steinwender for valuable comments and suggestions. Both authors: London School of Economics and Centre for Economic Performance, Houghton Street, London WC2A 2AE, UK.

1 Introduction

How does labor-replacing technical change affect the allocation of workers to jobs, and what are its effects on the wage distribution? To answer these questions, we build a model guided by two insights. First, when technologies are available that can carry out a wide range of tasks autonomously, the allocation of workers and machines to tasks will be determined by comparative advantage (Simon 1960, pp.23-24). Second, there are tasks that seem easy to any worker, but building a machine capable of performing them may be costly if not impossible: occupations such as waiters, taxi drivers, or housekeepers do not require much skill beyond vision, movement, and communication in natural language, yet these are highly complex functions from an engineering point of view. The two insights combine to generate an equilibrium in which tasks that require little or no training are performed by workers at the bottom of the skill distribution, that is, workers with high learning costs. Middle-skill workers compete directly with machines, as their comparative advantage (CA) is in tasks that require a non-negligible amount of training and that are of intermediate complexity in engineering terms. Finally, high-skill workers' CA is in highly training-intensive, complex tasks, and thus they face significantly less competition from machines than the middle-skilled. We model labor-replacing technical change as an exogenous fall in the cost of making machines, resulting from innovations that facilitate the automation of a wide range of tasks. Examples include the electrification of manufacturing, 1 the information and communication technology (ICT) revolution, and recent advances in robotics and artificial intelligence. 2 Responding to the fall in machine design costs, firms adopt machines in tasks that were previously performed by middle-skill workers. Low-skill workers' jobs might also be subject to automation, but to a lesser degree than middle-skill workers'.
The reallocation of workers causes occupations (sets of tasks) at both the bottom and the top of the wage distribution to experience employment gains, a phenomenon known in the literature as job polarization. Wage inequality increases at the top but decreases at the bottom. As machine design costs drop further, the part of the wage distribution featuring rising inequality becomes smaller. An increasing fraction of the labor force is employed in jobs that require little or no training. We borrow from organizational economics in modeling the production process. This allows us to work with a precise notion of objective complexity that we call knowledge intensity. Following Garicano (2000), we assume that production requires knowledge which must be possessed by workers and embodied in machines. The knowledge intensity of a task is the amount of knowledge required to attain a given level of productivity. As machines are made of inanimate matter which is devoid of knowledge to start with, it is knowledge intensity alone that determines the cost of building a machine capable of performing a task. However, the amount of training a worker requires may differ even across two tasks of equal knowledge intensity: in some cases she can draw on innate capabilities, as when driving a car safely through traffic; in other cases knowledge must be painstakingly acquired, as when solving differential equations.

1 Electrification facilitated automation because electric motors could be arranged much more flexibly than steam engines (Boff 1967, p.513). 2 We provide a list of examples of recent progress in these areas in Section 2.

The distinction between

knowledge intensity and training intensity is critical for explaining why middle-skill workers are most affected by increasing automation. Although the model assumes that all factors are perfect substitutes at the task level, it is possible for complementarities between factors to arise because tasks are q-complements in the production of the final good. When it gets cheaper to make machines, firms respond in two ways. First, they upgrade existing machines. Second, they adopt machines in tasks previously performed by workers. The first effect on its own would lead to a rise in wages for all workers, because the increase in machines' task output raises the marginal product of all other tasks. The second effect, however, forces some workers to move to different tasks, creating excess supply which puts downward pressure on their wages. Since middle-skill workers are the most likely to be displaced by increased automation, their wages relative to low-skill and high-skill workers will decline. Thus, whether technology substitutes for or complements (in terms of wage effects) a worker of a given skill type will depend on that worker's exposure to automation. Our model features a continuum of worker types as well as a continuous task space, building on the framework developed by Costinot and Vogel (2010). Existing task-based models in the wage inequality literature either assume a small number of worker types and a continuum of tasks, or a continuum of types and a small number of tasks. The disadvantage of either approach is that, by construction, relative wages within large sub-groups of workers are unaffected by technical change. 3 Our assumptions allow us to characterize the effects of labor-saving innovations on the entire wage distribution, and we are able to derive predictions about changes in both between- and within-group wage inequality. 4

The model's implications are consistent with a growing empirical literature arguing that recent technical change has led to polarization of labor markets in the US and Europe. 5 Modern ICT appears to substitute for workers in middle-wage jobs, while complementing labor in high- and low-wage jobs, thus causing the observed reallocation of employment and the hollowing-out of the wage distribution. 6 Our model provides a precise mechanism explaining these findings. In particular, the model suggests that the ICT revolution has caused job polarization because it has facilitated a more wide-ranging automation of tasks. A corollary is that job polarization should not be a unique consequence of the recent ICT revolution. Indeed, Gray (2011) finds that electrification in the US during the first half of the 20th century led to a fall in the relative demand for middle-skill workers. Our theory delivers several novel predictions about trends in occupational training requirements. In the model we distinguish between general and specific skill. The former refers to the ease with which a worker acquires the latter, namely, task-specific knowledge. We gauge the

3 To see this for the case of a continuum of workers and a discrete set of tasks, consider two distinct workers who are both assigned to the same task and remain so after a change in technology. The two workers' relative wage will stay constant, as they both face the exact same change in the price of the task they perform. 4 In the wage inequality literature, between-group inequality refers to differences in mean wages across groups defined by observable characteristics such as education and experience. Within-group inequality refers to wage dispersion within such groups. 5 Job polarization was first documented for the US by Autor, Katz, and Kearney (2006), for the UK by Goos and Manning (2007), and for European economies by Goos, Manning, and Salomons (2009).
6 See Autor, Levy, and Murnane (2003), Michaels, Natraj, and Van Reenen (2010), and Goos, Manning, and Salomons (2011) for evidence favoring the technological explanation.

amount of task-specific knowledge required in an occupation using measures of training intensity from the Dictionary of Occupational Titles (DOT) and the O*NET database. This allows us to measure training requirements in the US at two points in time: in 1971, using the DOT, and more recently, using O*NET. We find empirical support for the model's prediction of a polarization in training requirements, i.e. an increase in the employment shares of jobs requiring minimal and very high levels of training. Furthermore, we show that occupations that initially had intermediate training intensities experienced a fall in training requirements. The model provides a ready explanation: new technologies induced firms to automate the subset of tasks in a given occupation which required intermediate training by workers. We also find that almost all occupations experienced an increase in mean years of schooling, irrespective of changes in training requirements. This is in line with the model's prediction about an increase in skill supply. Finally, we show that changes in occupational wage premia are positively correlated with changes in training requirements, again consistent with the model. The paper's main contributions can be summarized as follows. First, we present the first model of labor-saving technical change that allows for endogenous technology adoption as well as endogenous machine design and training choices. Second, to the best of our knowledge, our model is the first to generate job polarization endogenously. Existing models 7 usually assume that technology substitutes for middle-skill workers while complementing high- and low-skill ones; this is instead a result in our paper. Third, we provide comparative static results for the entire wage distribution; for instance, we derive predictions about the effects of automation on wage inequality at the top of the wage distribution. Finally, we derive and test novel predictions about trends in occupational training requirements.
The connection between technological change and training seems to have been neglected in the empirical literature (Handel 2000), 8 but our model suggests that the two topics are intimately linked. The plan of the paper is as follows. The next subsection reviews related literature. Section 2 motivates the conceptual framework which underlies our modeling of tasks, and relates our framework to the one used by Autor, Levy, and Murnane (2003). Section 3 presents and solves the model. Section 4 discusses comparative statics, in particular how job assignment and the wage distribution change in response to increased automation. We also present comparative statics for a change in skill supplies. Section 5 presents two extensions to the model: endogenous capital accumulation and a fixed cost of technology adoption. Section 6 confronts the model's predictions with existing empirical evidence and takes novel implications of the model to the data. Section 7 concludes. All proofs are contained in the appendix.

1.1 Related literature

We build on a rather small literature on labor-saving innovations. Zeira (1998) presents a model in which economic development is characterized by the adoption of technologies that reduce labor requirements relative to capital requirements. Over time, an increasing number of tasks can be

7 See e.g. Autor, Levy, and Murnane (2003), Autor, Katz, and Kearney (2006), Acemoglu and Autor (2011), Autor and Dorn (2013), and Cortes (2012). 8 Not so in the theoretical literature on wage inequality; see Section 1.1.

produced by new, more capital-intensive technologies. In an extreme example which is closely related to our paper, new technologies only use capital, while old ones only use labor. We extend this type of setting by explicitly modeling the characteristics of tasks, and thus the direction of technical change, as well as by allowing for heterogeneous workers. Holmes and Mitchell (2008) present a model of firm organization where the problem of matching workers and machines to tasks is solved at the firm level. Their model admits a discrete set of worker types, and they do not consider technical change. The paper is related to a wider theoretical literature that has used assignment models to investigate the effects of technical change on the role of workers in the production process and on the wage distribution. One strand of papers analyzes the matching of workers with technologies of different vintages. Wage inequality results, for instance, when workers must acquire vintage-specific skills (Chari and Hopenhayn 1991) or when machines are indivisible (Jovanovic 1998). Furthermore, skill bias or unskill bias of technical change can arise when new technologies require different learning investments than old ones, and when learning costs are a function of skill (Caselli 1999). We abstract from the issue of workers having to learn how to operate new technologies and focus instead on the problem of assigning workers and machines to tasks, following a recent literature that has emphasized a task-based approach to labor markets (Autor 2013). The interaction of workers and machines is nevertheless present in our model: since tasks are assumed to be q-complements, the efficiency of machines affects the marginal products of all workers in the economy. We adopt the model of task production developed by Garicano (2000) in his theory of firm organization and knowledge hierarchies.
Garicano and Rossi-Hansberg (2006) use this model to analyze how hierarchical organizations are affected by a decline in communication and knowledge acquisition costs, another consequence of the ICT revolution. Our focus is instead on labor-saving innovations, and we keep the model simple by not allowing hierarchies of multiple layers. Finally, on the methodological side our paper is in the tradition of Ricardian theories of international trade, combining aspects of Dornbusch, Fischer, and Samuelson (1977) and Costinot and Vogel (2010). While these papers characterize equilibrium allocations given factor endowments and productivity levels, our focus is on endogenizing productivity differences, using modeling techniques similar to those of Costinot (2009). We shed light on the sources of comparative advantage between differently-skilled workers and machines.

2 Motivating the Conceptual Framework

Researchers in artificial intelligence, robotics, and cognitive science have long been aware that some abilities that humans acquire quickly at an early age in fact rely on highly complex functions that are difficult if not impossible to reverse-engineer. Steven Pinker notes that "[the] mental abilities of a four-year-old that we take for granted (recognizing a face, lifting a pencil, walking across a room, answering a question) in fact solve some of the hardest engineering problems ever conceived" (Pinker 1994, p.192). In contrast, many abilities that humans must painstakingly acquire, such as mastery in arithmetic, are trivial from an engineering perspective. This insight

has become known as Moravec's paradox: "...it is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" (Moravec 1988, p.15). Moravec resolves the paradox by considering the objective or intrinsic difficulty of a task, for instance the amount of information processing required, or the degrees of freedom and dexterity necessary to carry out a certain physical action. While the average human will find it somewhat challenging to divide 105 by 14 in his head, he has no trouble crossing a crowded public square on foot without constantly bumping into people. However, in terms of intrinsic difficulty the latter task is much harder than the former. 9 The reason that we are usually not aware of this fact is that we rely on innate abilities 10 for functions like movement or perception, but have no such advantage when it comes to abstract tasks like arithmetic. 11 While in reality the intrinsic difficulty of a task would have to be assessed on multiple dimensions, we adopt a one-dimensional concept for simplicity. In our framework, a task's intrinsic difficulty is measured by its knowledge intensity. Formally, more-knowledge-intensive tasks require a larger amount of knowledge for a given level of productivity. Solving the division exercise mentioned above is a task with low knowledge intensity, because the required procedure can easily be codified. Crossing the crowded public square, in contrast, requires a vast amount of knowledge about movement and coordination, not to mention the ability to correctly anticipate the actions of the people around. Because machines are made of inanimate matter which is initially devoid of knowledge, 12 it is knowledge intensity alone that determines the difficulty of building a machine capable of performing a given task.
The preceding discussion makes clear, however, that the amount of training a human worker requires may differ even across two tasks of equal knowledge intensity. This is because she can draw on a vast endowment of knowledge providing her with certain innate capabilities, although for the most part this knowledge may be unconscious or tacit. The presence of such knowledge endowments (either innate or acquired early) applicable to a wide range of tasks suggests introducing a second dimension into our task framework, which we call

9 On the challenge of making walking robots, to say nothing of visual perception, Spear (2001, p.336) comments that "[in] practice this is very difficult to achieve as the leg position requires continuous sensing to ensure safe positioning and large amounts of real time computing to ensure that the robot moves without overbalancing, something the human brain achieves with ease (when sober anyway!)". 10 Innateness of a certain skill does not need to imply that humans are born with it; instead, the subsequent development of the skill could be genetically encoded. For a critical discussion of the concept of innateness, see Mameli and Bateson (2011). 11 Moravec (1988, pp.15-16) provides an evolutionary explanation for this: ...survival in the fierce competition over such limited resources as space, food, or mates has often been awarded to the animal that could most quickly produce a correct action from inconclusive perceptions. Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy.
Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it. 12 Of course, many materials have productive properties (take for instance copper, with its electrical conductivity); but the knowledge contained in materials is usually highly specific and limited, so it is probably safe for our purposes to ignore this exception to the rule.

training intensity: more-training-intensive tasks require more resources for equipping a human worker with a given level of knowledge specific to the task. In contrast to knowledge intensity, which refers to an objective understanding of knowledge requirements, the training intensity of a task is an attribute that only arises in the context of a worker performing a task.

Table 1: Two-Dimensional Task Framework, Examples

                                        Training intensity
                              low (-)                  high (+)
Knowledge     high (+)        driving a car            grading essays
intensity                     language                 research
                              waiting tables           strategic decision making
                              assembly                 arithmetic
              low (-)         driving a train          bookkeeping, grading MCQs

We offer two illustrative examples. First, compare the task of driving a train with that of driving a car. The former takes place in a well-controlled environment, unlike the latter, which therefore has higher knowledge intensity. 13 But to humans, the two tasks may not seem all that different in terms of difficulty: the uncertainties of navigating through road traffic do not pose an extraordinary challenge, since many of the key functions they require, such as vision, are innate. Second, contrast the task of grading an exam consisting of multiple choice questions (MCQs) with that of marking an essay-based test. MCQs allow only for a limited set of possible answers, and the recipe for grading them is trivial (but the task is still somewhat training-intensive, as it requires the ability to read and add up marks). In contrast, grading an essay may involve assessing a large variety of approaches to the questions posed. Clearly, the latter is more knowledge-intensive than the former. But in this case, it is also more training-intensive: most humans will find grading an essay the more difficult task, perhaps even impossible to complete in the absence of subject-specific training.
Driverless trains and machine-grading of MCQs have been around much longer than driverless cars and automatic grading of essays, the latter two appearing only recently (Markoff 2010, Shermis and Hamner 2012). We will show the model to be consistent with this fact. Table 1 provides an overview of our task framework and contains some further examples. We are not the first to employ a multi-dimensional task space to analyze the impact of technological change on jobs and wages. In particular, Autor, Levy, and Murnane (2003, henceforth ALM) categorize tasks as routine and non-routine on one dimension, and as analytic, interactive, and manual on another. They call a task routine "if it can be accomplished by machines following explicit programmed rules" (ibid., p.1283). In contrast, non-routine tasks are tasks for which rules are not sufficiently well understood to be specified in computer code and executed

13 We consider only the process of driving the train, not the engineering knowledge and familiarity with railway infrastructure that train drivers possess in practice.

by machines (ibid.). The terms analytic, interactive, and manual are used to characterize both routine and non-routine tasks in more detail. While ALM's framework echoes many of the issues that we have discussed here, we believe that our own framework offers several advantages. First, it is more general, as it avoids specific attributes such as interactive and manual. Second, it is not context-dependent. Machine capabilities constantly expand, so we prefer to avoid a task construct that depends on the current state of technology. 14 Thus, knowledge intensity is an objective, time-invariant measure of the information required to do a particular task, irrespective of whether a machine or a human does it. Third, the concept of training intensity is absent in ALM. Finally, ALM's framework implicitly leaves firms little choice over whether to automate a given task, as routine tasks are assumed to be automated, and non-routine tasks are not. Our framework instead allows us to endogenize this choice. Notwithstanding these differences, it is still possible to interpret ALM's empirical results in light of our framework. For instance, their measure of routine-ness might in practice be inversely related to knowledge intensity. We will return to this issue when discussing how our model matches up to empirical findings in Section 6. While we believe that our task framework is an improvement over the existing literature and that it generates useful and novel insights, there are some limitations. For instance, technological change often leads to the introduction of new tasks and activities (flying airplanes, writing software). While our framework in principle allows for an endogenous task space, it does not suggest in what way technology might affect the set of tasks in the economy. Furthermore, automation does not necessarily involve machines replicating exactly the steps that humans carry out in completing a given task.
Instead, a task can be made less knowledge-intensive by moving it to a more controlled environment. 15 Our framework does not explicitly allow for this possibility, but our conclusions should still be broadly correct if the cost of moving a process to a more controlled environment is increasing in its knowledge intensity. Finally, technological change tends to cause organizational change, but to keep the analysis tractable and to be able to focus on a single mechanism, we omit firm organization from the model. What we do not view as a limitation is the assumption that machines could in principle perform any task. There are three reasons. First, comparative advantage ensures that some tasks will always be performed by humans, so that the model will be consistent with the fact that some tasks are not performed by machines in reality. Second, we can parameterize the model such that machine productivity levels in some tasks are vanishingly small. Third, and most importantly, recent technological progress suggests that machine capabilities might be expanding quite rapidly. Brynjolfsson and McAfee (2011, p.14) argue that machines can potentially substitute for humans in a much larger range of tasks than was thought possible not long ago, citing recent advances in pattern recognition (driverless cars), complex communication (machine translation), and combinations of the two (IBM's successful Jeopardy contestant Watson). Markoff (2012) provides an account of the increased flexibility, dexterity, and sophistication of production robots. 16 For our model to be useful as a guide to medium-term future developments in the economy, we deem it prudent to make the most conservative assumption about what tasks are safe from automation.

14 To give an example, Levy and Murnane (2004) consider taking a left turn on a busy road a non-routine task unlikely to be automated in the foreseeable future. But less than a decade later, the driverless car has become a reality. 15 See ALM (p.1283) and Simon (1960, pp.33-35). A recent example is the new sorting machine employed by the New York Public Library (Taylor 2010).

3 The Model

3.1 Overview

The model has one period, which we interpret as a worker's lifetime. 17 There is a unique final good which is produced using a continuum of intermediate inputs, or tasks. These tasks are produced by workers of different skill levels and by machines. Crucially, all factors of production are perfect substitutes at the task level. Although this may seem a strong assumption, the loss of generality is not substantial provided all tasks are essential in producing the final good, a condition that we shall maintain throughout. In fact, when tasks are imperfect substitutes in producing the final good, factors of production will appear to be imperfect substitutes in the aggregate. Labor services as well as the economy's capital stock are supplied inelastically, and all firms are perfectly competitive. Intermediate firms hire workers or capital to produce task output that is then sold to final good firms. Factors' productivity is not a given: intermediate firms must train workers, and must transform generic capital into task-specific machines, in order for these factors to be capable of performing tasks. Technologies for worker training and machine design are public knowledge. Training levels and machine quality are choices faced by the intermediate firms which, unlike the decision of what factor to hire, are made independently of factor prices and task prices. This is because training and design costs are assumed to be in units of factor inputs and not in units of the final good.
Optimal training and design choices, and hence productivity, result instead from the properties of tasks and their interplay with attributes of the factors of production. Characterizing these choices is the subject of Section 3.5. The result is a productivity schedule that determines comparative advantage between factors and across tasks. This then allows us to apply standard results to solve for the equilibrium assignment of factors to tasks in Section 3.6. Thus, we proceed by a kind of "backward induction": first, we solve for factors' productivity conditional on firms hiring these factors; and second, we characterize hiring choices, using the results of the first step.

3.2 The Task Space

Tasks differ along two dimensions: knowledge intensity, denoted by σ ∈ Σ, and training intensity, denoted by τ ∈ T. The higher is a task's σ, the more knowledge is required for a worker or a machine to attain a given level of productivity. The higher is a task's τ, the more resources are required to equip a worker with a given level of knowledge. Recall that the concept of knowledge intensity refers to an objective understanding of knowledge requirements, for instance,

16 An overview of recent developments in robotics research can be found in Nourbakhsh (2013). 17 We discuss a dynamic (multi-period) version of the model in a later section.

the amount of information processing required to perform a given task. In contrast, the training intensity of a task is an attribute that only arises in the context of a worker performing a task. Completion of tasks results in intermediate outputs that are used to produce the final good. Let Y denote the output of the unique final good, and let task output be denoted by y(σ, τ). For tractability, we use a Cobb-Douglas production function,

log Y = ∫_{Σ×T} log y(σ, τ) dB(σ, τ).

The weighting function B(σ, τ) determines the relative importance of each task in final good production. To ensure constant returns to scale we assume ∫_{Σ×T} dB(σ, τ) = 1. Throughout most of our analysis we make the following simplifying assumption about the domains of the parameters τ and σ.

Assumption 1. τ ∈ T = {0, 1}, σ ∈ Σ = [σ̲, σ̄], σ̲ > 0.

Under this assumption, there is a set of tasks for which τ = 0, so that knowledge acquisition costs are zero, or equivalently, all workers have an innate ability to perform these tasks. We will call these tasks innate ability tasks. We will refer to the tasks with τ = 1 as training-intensive tasks. Within both these sets of tasks, knowledge intensity varies continuously. We will state explicitly when Assumption 1 is imposed.

3.3 Worker Training, Machine Design, and Technological Change

The technologies for training workers and designing machines are as follows. Intermediate firms must pay τ/s efficiency units of labor to equip a worker of skill s with a unit measure of knowledge. Higher-skilled workers have lower learning costs, and higher values of τ imply a larger learning cost, holding knowledge and skill constant. Similarly, to transform one unit of capital into a machine equipped with a unit measure of knowledge, intermediate firms must pay c_K ≡ 1/s_K units of capital. We will refer to c_K as the machine design cost; it is the main exogenous driving force in our model.
As a matter of notation, it will be more convenient to work with s_K, machine skill, instead of c_K. Notice that a task's τ does not affect design costs, by definition.

Workers' and machines' productivity depends on their task-specific knowledge as well as a task-neutral productivity term, which shifts a factor's productivity proportionately in all tasks. Let the task-neutral productivity of machines be denoted by A_K. Our model admits exogenous technological change in the form of a decrease in c_K or an increase in A_K, although we will mainly be concerned with the former. A fall in c_K represents any technological advance that lowers the cost of automating a wide range of tasks, typically a combination of improved software (programming languages, algorithms) and improved hardware (CPU speed, robotics). A rise in A_K represents improved efficiency of existing machinery. In reality, the forces affecting the two parameters may not always be mutually exclusive. This does not impair the model's ability to generate sharp predictions, however, since both parameters give rise to the same comparative statics.

3.4 A Simple Example

To illustrate how task characteristics and factor attributes affect productivity differences across factors and tasks, we present a simple example. We impose Assumption 1. Let us assume for the moment that worker training and machine design are exogenously determined by task characteristics. In particular, suppose that factors are either made capable of performing a task or not, so that there is no intensive margin for task-specific productivity. Let knowledge intensity σ be the amount of knowledge required for a factor to be able to perform a given task. A worker with learning cost 1/s will produce A(1 − σ/s) units of task output in training-intensive tasks (τ = 1), where A is the worker's task-neutral productivity. The same worker will produce A units in any innate ability task (τ = 0). A machine will produce A_K(1 − σ/s_K) units regardless of training intensity.

Now consider two workers with skill levels s, s′ such that s′ > s, and two tasks with equal training intensity τ = 1 but different knowledge intensities σ, σ′ such that σ′ > σ. (How the task-neutral productivities A and A′ compare is irrelevant for what follows.) Simple algebra establishes that the higher skilled worker is relatively more productive in task σ′, i.e. she has a comparative advantage in the more knowledge-intensive task. Machines' comparative advantage will depend on the level of design costs c_K ≡ 1/s_K. For instance, if s_K < s, then the machine has a comparative advantage over both workers in the less knowledge-intensive task.

Next, take an innate ability task and a training-intensive task, both with equal knowledge intensity σ. Machines are equally productive in both tasks, but workers are more productive in the innate ability task. Therefore, machines have a comparative advantage in the training-intensive task. This is why some training-intensive tasks will always be performed by machines, even if the machine design cost exceeds the training cost of the least-skilled worker.
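The comparative-advantage rankings in this example can be verified numerically. The sketch below uses the productivity formulas from the example with purely illustrative parameter values (nothing here is calibrated to the paper):

```python
# Numerical check of the comparative-advantage claims in the simple example.
# A worker of skill s produces A*(1 - sigma/s) in a training-intensive task
# and A in an innate ability task; a machine produces A_K*(1 - sigma/s_K)
# regardless of training intensity. Parameter values are illustrative.

def worker_output(s, sigma, A=1.0, tau=1):
    """Output of a worker with skill s in a task (sigma, tau)."""
    return A * (1 - sigma / s) if tau == 1 else A

def machine_output(s_K, sigma, A_K=1.0):
    """Output of a machine with skill s_K, independent of tau."""
    return A_K * (1 - sigma / s_K)

s_hi, s_lo = 4.0, 2.0        # two workers, s' > s
sig_hi, sig_lo = 1.5, 0.5    # two tasks, sigma' > sigma

# The higher-skilled worker has a CA in the more knowledge-intensive task:
ratio_hi = worker_output(s_hi, sig_hi) / worker_output(s_hi, sig_lo)
ratio_lo = worker_output(s_lo, sig_hi) / worker_output(s_lo, sig_lo)
assert ratio_hi > ratio_lo

# A machine with s_K < s has a CA in the less knowledge-intensive task:
s_K = 1.8
m_ratio = machine_output(s_K, sig_hi) / machine_output(s_K, sig_lo)
assert m_ratio < ratio_lo < ratio_hi
```

The output ratios across the two tasks are ordered by skill, which is exactly the log-supermodularity property exploited later in the full model.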
Finally, consider again two workers with skill levels s, s′ such that s′ > s, and take an innate ability task and a training-intensive task, both with equal knowledge intensity σ. Because the higher-skilled worker has a higher task-specific productivity in the training-intensive task, she has a comparative advantage in that task. This is why workers at the bottom of the skill distribution will generally perform innate ability tasks, and why middle skill workers will compete with machines in training-intensive tasks of intermediate knowledge intensity.

The simple example illustrates the main forces driving our results about the effects of increased automation on job assignment and the wage distribution. In fact, the simple model presented here generates an equilibrium assignment and comparative static results that are qualitatively the same as in the model with endogenous worker training and machine design. However, the simple model does not explicitly describe the production process, so it is not clear what precisely drives the results. Moreover, it does not allow us to assess whether the results are robust to allowing firms a productivity choice (via worker training and machine design). We address these limitations in the following section.

3.5 The Production Process for Tasks and Firms' Productivity Choices

We model the production process for tasks explicitly, following Garicano (2000). In order to produce, factors (workers, machines) must confront and solve problems. These problems are task-specific. There is a continuum of problems Z ∈ [0, ∞) in each task, and problems are ordered by frequency. Thus, there exists a non-increasing probability density function for problems in each task. Factors draw problems and produce if and only if they know the solution to the problem drawn. We assume that a mass A of problems is drawn, and A may vary across factors. Hence, the task-neutral productivity term introduced in Section 3.3 has a more precise interpretation in this context. Task output per factor unit is equal to A times the integral of the density function over the set of problems to which the factor knows the solution. For simplicity, we will assume that all workers draw a unit mass of problems in all tasks, or A = 1. Equilibrium assignment and comparative statics results are qualitatively the same if we instead assume that A ≡ A(s) with A′(s) ≥ 0.

The distribution of problems in a task with knowledge intensity σ is given by the cumulative distribution function F(Z; σ), which we assume to be continuously differentiable in both Z and the shift parameter σ. Let ∂F/∂σ < 0, so that σ indexes first-order stochastic dominance. In terms of the examples discussed in Section 3.2, driving a car and grading an essay are more knowledge-intensive (higher σ) than driving a train or grading an MCQ test, since the number of distinct problems typically encountered in the former set of tasks is higher than in the latter. The probability density function corresponding to F is f(Z; σ). Because F is continuously differentiable and Z indexes frequency, f is strictly decreasing in Z. Let ε_{F,σ}(Z, σ) denote the elasticity of F with respect to σ holding Z constant, and similarly for ε_{f,σ}(Z, σ).
We impose the following condition on the family of distributions F(Z; σ).

Assumption 2. ε_{F,σ}(Z, σ) < ε_{f,σ}(Z, σ) for all Z, σ > 0.

This assumption will give rise to a set of intuitive comparative advantage properties; for instance, high skill workers will have a comparative advantage in knowledge-intensive tasks. One of the distributions satisfying Assumption 2 is the exponential distribution with mean σ. Note that the distribution of problems depends only on σ and not on τ. As discussed above, training intensity is not an intrinsic property of a task, but arises from the fact that humans have evolved such that some tasks require less effort to master than others, even holding constant (objective) knowledge intensity.

We now characterize optimal training and design choices and thus derive the equilibrium productivity of workers and machines. First observe that firms will equip factors with a set of knowledge [0, z], since it can never be optimal not to know the solutions to the most frequent problems. Assume that each worker is endowed with one efficiency unit of labor. After incurring learning costs, 1 − τz/s efficiency units are left for production, solving a fraction F(z; σ) of problems drawn. Similarly, after the design cost, 1 − z/s_K units of capital are left, and the machine solves a fraction F(z; σ) of problems drawn. Let the productivity level of an optimally trained worker of skill s in task (σ, τ) be denoted by α_N(s, σ, τ), and similarly let α_K(s_K, σ) be the productivity
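The optimal training problem just described can be computed directly for the exponential family F(z; σ) = 1 − e^(−z/σ), the example given above as satisfying Assumption 2. A minimal sketch, with illustrative (hypothetical) parameter values:

```python
import math

def productivity(cost_scale, sigma, grid_n=50000, z_max=25.0):
    """Optimal task productivity of a factor whose knowledge costs
    cost_scale per unit (cost_scale = tau/s for a worker, 1/s_K for a
    machine), assuming F(z; sigma) = 1 - exp(-z/sigma).
    A simple grid search keeps the sketch transparent."""
    best = 0.0
    for i in range(grid_n + 1):
        z = z_max * i / grid_n
        if cost_scale * z >= 1:   # knowledge cost would exhaust the endowment
            break
        best = max(best, (1 - cost_scale * z) * (1 - math.exp(-z / sigma)))
    return best

s, s_K, sigma = 3.0, 2.5, 1.0     # illustrative values

worker_innate = productivity(0.0 / s, sigma)   # tau = 0: no learning cost
worker_trained = productivity(1.0 / s, sigma)  # tau = 1
machine = productivity(1.0 / s_K, sigma)       # design cost, independent of tau

assert abs(worker_innate - 1.0) < 1e-6  # workers produce one unit in innate tasks
assert worker_trained < 1.0             # training eats efficiency units
assert machine < worker_innate          # so machines' CA lies in training-intensive tasks
```

Because a machine's cost does not depend on τ, its productivity is identical in the two task types, while a worker loses nothing in innate ability tasks; that asymmetry is the comparative-advantage wedge discussed above.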

(0, 1) by (2). Furthermore, from applying the envelope theorem to (2) it follows that α is increasing in s̃ and decreasing in σ. Higher skilled factors are more productive since they face a lower learning/design cost, and productivity declines in knowledge intensity since a larger cost is incurred to achieve a given level of productivity.

To characterize comparative advantage, we rely on the following result.

Lemma 1. The productivity schedule α(s̃, σ) is strictly log-supermodular if Assumption 2 holds.

The log-supermodularity of the productivity schedule implies that in training-intensive tasks, factors with higher skill have a comparative advantage in more knowledge-intensive tasks, or

s̃′ > s̃, σ′ > σ ⟹ α(s̃′, σ′)/α(s̃, σ′) > α(s̃′, σ)/α(s̃, σ).

For instance, high skill workers have a comparative advantage over low skill workers in more knowledge-intensive tasks; all workers with s > s_K have a comparative advantage over machines in more knowledge-intensive tasks; and so on. As the proof of Lemma 1 establishes, these comparative advantage properties hold if and only if optimal knowledge z(s̃, σ) is increasing in σ. Thus, high skill factors have a comparative advantage in more knowledge-intensive tasks because these tasks induce a higher level of knowledge, and to high skill factors this comes at a lower cost. The effect of σ on the optimal knowledge level is in principle ambiguous. A higher σ implies a lower opportunity cost of learning an additional problem, since factors are less productive, ceteris paribus. However, the marginal benefit may increase or decrease depending on the problem distribution. Assumption 2 ensures that the fall in marginal cost outweighs any effect on the marginal benefit.

Comparative advantage properties regarding training intensity are straightforward.
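Both parts of this argument, that optimal knowledge rises with σ and that this delivers log-supermodularity, can be illustrated numerically for the exponential distribution. The values below are hypothetical, chosen only to make the inequalities visible:

```python
import math

def alpha_and_z(s, sigma, grid_n=50000, z_max=20.0):
    """Productivity alpha(s, sigma) and optimal knowledge z in a
    training-intensive task, assuming F(z; sigma) = 1 - exp(-z/sigma)."""
    best_val, best_z = 0.0, 0.0
    for i in range(grid_n + 1):
        z = z_max * i / grid_n
        if z >= s:                # training cost would exhaust the endowment
            break
        val = (1 - z / s) * (1 - math.exp(-z / sigma))
        if val > best_val:
            best_val, best_z = val, z
    return best_val, best_z

s_lo, s_hi = 2.0, 5.0
sig_lo, sig_hi = 0.5, 2.0

# Optimal knowledge z(s, sigma) is increasing in sigma ...
_, z1 = alpha_and_z(s_hi, sig_lo)
_, z2 = alpha_and_z(s_hi, sig_hi)
assert z2 > z1

# ... which is what delivers log-supermodularity (Lemma 1):
a_hh, _ = alpha_and_z(s_hi, sig_hi)
a_hl, _ = alpha_and_z(s_hi, sig_lo)
a_lh, _ = alpha_and_z(s_lo, sig_hi)
a_ll, _ = alpha_and_z(s_lo, sig_lo)
assert a_hh / a_lh > a_hl / a_ll   # higher skill has CA in higher sigma
```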
Since α is increasing in s̃, and because all workers have productivity one in all innate ability tasks, high skill workers have a comparative advantage over low skill workers in any training-intensive task. Furthermore, because machine productivity is the same in innate ability tasks as in training-intensive tasks if knowledge intensity is held constant, it follows that machines have a comparative advantage over all workers in any training-intensive task relative to the innate ability task with the same knowledge intensity. This seemingly trivial result has profound implications for the assignment of factors to tasks, and for the reallocation of factors in response to a fall in c_K (a rise in s_K). It is at the root of the job polarization phenomenon, as we will show in Section 4 below.

3.6 Competitive Equilibrium

To complete the setup of the model, let there be a mass K of machine capital, and normalize the labor force to have unit mass. We assume a skill distribution that is continuous and without mass points. Let V(s) denote the differentiable CDF, and v(s) the PDF, both with support S = [s̲, s̄]. Let the share of innate ability tasks (τ = 0) in final good production be β. The

production function can now be written as

log Y = (1/µ) ∫_σ̲^σ̄ [β log y_0(σ) + (1 − β) log y_1(σ)] dσ,    (3)

where µ ≡ σ̄ − σ̲ ensures constant returns to scale. The subscripts 0 and 1 indicate innate ability (τ = 0) and training-intensive (τ = 1) tasks, respectively.

We established in Section 3.5 that in innate ability tasks, machine productivity is given by α(s_K, σ), while worker productivity equals one. Hence, output of the innate ability task with knowledge intensity σ is given by

y_0(σ) = A_K α(s_K, σ) k_0(σ) + ∫_s̲^s̄ n_0(s, σ) ds,    (4)

where k_0(σ) and n_0(s, σ) are the masses of machine capital and of worker type s, respectively, allocated to innate ability task σ. In training-intensive tasks, as we have seen, both machine and worker productivity depend on the function α(s̃, σ). Hence we can write output of the training-intensive task σ as

y_1(σ) = A_K α(s_K, σ) k_1(σ) + ∫_s̲^s̄ α(s, σ) n_1(s, σ) ds.    (5)

There is a large number of perfectly competitive firms producing the final good, and buying task output from perfectly competitive intermediate producers. We normalize the price of the final good to one and denote the price of task σ in sector τ ∈ {0, 1} by p_τ(σ). Profits of final good firms are given by

Π = Y − Σ_τ ∫_σ̲^σ̄ p_τ(σ) y_τ(σ) dσ,

and profits of intermediate producers in sector τ and with knowledge intensity σ are

Π_τ(σ) = p_τ(σ) y_τ(σ) − r k_τ(σ) − ∫_s̲^s̄ w(s) n_τ(s, σ) ds,

where r is the rental rate of capital and w(s) is the wage paid to a worker with skill s. Recall that design and learning costs are already included in the α(s̃, σ) terms, which enter intermediate producers' profits through the task production functions (4) and (5). As in Costinot and Vogel (2010), a competitive equilibrium is defined as an assignment of factors to tasks such that all firms maximize profits and markets clear. Profit-maximizing task demand by final good producers is

y_0(σ) = (β/µ) Y/p_0(σ),    y_1(σ) = ((1 − β)/µ) Y/p_1(σ).    (6)
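The demand rule in (6), that Cobb-Douglas weights equal expenditure shares, is easy to verify numerically on a discretized task set. The sketch below uses illustrative weights and prices (not from the paper) and checks that the candidate demands both hit the output target at minimum cost and carry the right expenditure shares:

```python
import math

# Discretized check of the task demand rule in (6): under Cobb-Douglas
# aggregation log Y = sum_i w_i log y_i, cost-minimizing task demands
# satisfy p_i * y_i = w_i * (total expenditure). Illustrative numbers only.

weights = [0.2, 0.3, 0.5]        # task weights, summing to one (CRS)
prices = [1.0, 2.0, 0.5]

def output(y):
    return math.exp(sum(w * math.log(q) for w, q in zip(weights, y)))

Y_target = 1.0
# Candidate demand y_i proportional to w_i / p_i, scaled to hit Y_target:
raw = [w / p for w, p in zip(weights, prices)]
scale = Y_target / output(raw)
y = [scale * q for q in raw]
assert abs(output(y) - Y_target) < 1e-9

# Expenditure shares equal the Cobb-Douglas weights:
total_cost = sum(p * q for p, q in zip(prices, y))
shares = [p * q / total_cost for p, q in zip(prices, y)]
assert all(abs(sh - w) < 1e-9 for sh, w in zip(shares, weights))

# No output-preserving perturbation lowers cost (cost minimization):
for i, j in [(0, 1), (1, 2), (0, 2)]:
    for eps in (-1e-3, 1e-3):
        bumped = y[:]
        bumped[i] *= 1 + eps
        bumped[j] *= (1 + eps) ** (-weights[i] / weights[j])
        assert abs(output(bumped) - Y_target) < 1e-9
        assert sum(p * q for p, q in zip(prices, bumped)) >= total_cost - 1e-12
```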

Profit maximization by intermediate producers implies

p_0(σ) ≤ w(s) for all s ∈ [s̲, s̄], with p_0(σ) = w(s) if n_0(s, σ) > 0;
p_1(σ) α(s, σ) ≤ w(s) for all s ∈ [s̲, s̄], with p_1(σ) α(s, σ) = w(s) if n_1(s, σ) > 0;
p_τ(σ) α(s_K, σ) ≤ r/A_K for τ ∈ {0, 1}, with p_τ(σ) α(s_K, σ) = r/A_K if k_τ(σ) > 0.    (7)

Factor market clearing conditions are

v(s) = Σ_τ ∫_σ̲^σ̄ n_τ(s, σ) dσ for all s ∈ [s̲, s̄]    (8)

and

K = Σ_τ ∫_σ̲^σ̄ k_τ(σ) dσ.    (9)

A competitive equilibrium in this economy is a set of functions y : Σ × T → R_+ (task output); k : Σ × T → R_+ and n : S × Σ × T → R_+ (factor assignment); p : Σ × T → R_+ (task prices); w : S → R_+ (wages); and a real number r (the rental rate of capital) such that conditions (1), (2), and (4) to (9) hold.

The equilibrium assignment of factors to tasks is determined by comparative advantage, which is a consequence of the zero-profit condition (7).¹⁹ Because high skill workers have a comparative advantage in training-intensive tasks (holding knowledge intensity constant), in equilibrium the labor force is divided into a group of low skill workers performing innate ability tasks and a group of high skill workers carrying out training-intensive tasks: there exists a marginal worker with skill s*, the least-skilled worker employed in training-intensive tasks. This is formally stated in part (a) of Lemma 2 below. We focus on the empirically relevant case in which machines as well as workers perform both training-intensive and innate ability tasks.²⁰ In this case, machines are assigned to a subset of innate ability and training-intensive tasks that are relatively less knowledge-intensive, while low skill workers perform the remaining innate ability tasks: there is a threshold task σ₀*, the marginal innate ability task, dividing the set of innate ability tasks into those performed by

19 To see how comparative advantage determines patterns of specialization, consider two firms, one producing training-intensive task σ′, the other producing training-intensive task σ.
Suppose that in equilibrium, firm σ′ is matched with workers of type s′ and firm σ is matched with workers of type s. Then (7) implies

α(s′, σ′)/α(s, σ′) ≥ α(s′, σ)/α(s, σ),

which shows that type s′ (s) has a comparative advantage in task σ′ (σ), precisely the task to which she was assumed to be matched.

20 Sufficient conditions for the existence of such an equilibrium are derived in Appendix A.1. We assume throughout that these conditions are satisfied. We note, however, that in general, no innate ability tasks may be performed by machines, and/or no training-intensive tasks may be performed by workers.

machines (σ ≤ σ₀*) and those carried out by low skill workers (σ ≥ σ₀*). Similarly, there is a marginal training-intensive task σ₁* that divides the set of training-intensive tasks into those performed by machines (σ ≤ σ₁*) and those carried out by high skill workers (σ ≥ σ₁*). As in the case of the marginal worker, the existence of these marginal tasks is of course a consequence of the comparative advantage properties discussed at the end of Section 3.5. These properties also imply σ₀* < σ₁*: the marginal training-intensive task is always more knowledge-intensive than the marginal innate ability task (recall that machines are relatively more productive in training-intensive tasks than workers, holding knowledge intensity constant); and s* > s_K: it is always cheaper to train (though not to employ) the marginal worker than to design a machine, in any task. These results are formally stated in part (b) of Lemma 2. An illustration of the equilibrium assignment is given in Figure 1.

Lemma 2. (a) In a competitive equilibrium, there exists an s* ∈ (s̲, s̄] such that n_0(s, σ) > 0 for some σ if and only if s ≤ s*, and n_1(s, σ) > 0 for some σ if and only if s ≥ s*. (b) If k_0(σ) > 0 for some σ, then s* > s_K, and there exist σ₀*, σ₁* ∈ Σ with σ₀* < σ₁* such that k_0(σ) > 0 if and only if σ ≤ σ₀*; k_1(σ) > 0 if and only if σ ≤ σ₁*; n_0(s, σ) > 0 if and only if s ≤ s* and σ ≥ σ₀*; and n_1(s, σ) > 0 if and only if s ≥ s* and σ ≥ σ₁*.

It remains to determine the assignment of low skill workers (s ≤ s*) to innate ability tasks (τ = 0, σ ≥ σ₀*) and that of high skill workers (s ≥ s*) to training-intensive tasks (τ = 1, σ ≥ σ₁*). The solution to the matching problem in innate ability tasks is indeterminate, as all workers are equally productive in these tasks. However, knowledge of this assignment is not necessary to pin down task output and prices, as shown below.
High skill workers are assigned to training-intensive tasks according to comparative advantage, with higher skilled workers carrying out more knowledge-intensive tasks. Formally, we have:

Lemma 3. In a competitive equilibrium, if s* < s̄, there exists a continuous and strictly increasing matching function M : [s*, s̄] → [σ₁*, σ̄] such that n_1(s, σ) > 0 if and only if M(s) = σ. Furthermore, M(s*) = σ₁* and M(s̄) = σ̄.

This result is an application of Costinot and Vogel (2010), with the added complication that the domain and range of the matching function are determined by the endogenous variables s* and σ₁*. The matching function is characterized by a system of differential equations. Using arguments along the lines of the proof of Lemma 2 in Costinot and Vogel (2010), it can be shown that the matching function satisfies

M′(s) = (µ/(1 − β)) w(s) v(s)/Y,    (10)

and that the wage schedule is given by

d log w(s)/ds = ∂ log α(s, σ)/∂s evaluated at σ = M(s).    (11)

The last equation is due to the fact that in equilibrium, a firm producing training-intensive task σ chooses worker skill s to minimize marginal cost w(s)/α(s, σ). Once differentiability of the matching function has been established, (10) can easily be derived from the market clearing condition (8) given Lemma 2, using (6) and (7). In particular, Lemma 2 and (8) imply

∫_{s*}^s v(s′) ds′ = ∫_{σ₁*}^{M(s)} n_1(M⁻¹(σ′), σ′) dσ′.

Changing variables on the RHS of the last expression and differentiating with respect to s yields v(s) = n_1(s, M(s)) M′(s), and substituting (5) we obtain

M′(s) = α(s, M(s)) v(s)/y_1(M(s)).    (12)

After eliminating task output and price using (6) and (7), (10) follows. Figure 2 illustrates how the matching function assigns workers to training-intensive tasks.

In order to characterize the equilibrium more fully, and for comparative statics exercises, it is necessary to derive equations pinning down the endogenous variables σ₀*, σ₁*, and s*. These equations are due to a set of no-arbitrage conditions. In particular, firms producing the marginal tasks are indifferent between hiring labor or capital, and the marginal worker is indifferent between performing innate ability tasks or the marginal training-intensive task. Formally, the price and wage functions must be continuous, as otherwise the zero-profit condition (7) could not hold. This is a well-known result in the literature on comparative-advantage-based assignment models. Hence, the no-arbitrage conditions for the marginal tasks are

r/(A_K α(s_K, σ₀*)) = w(s) for all s ≤ s*    (13)

and

r/(A_K α(s_K, σ₁*)) = w(s*)/α(s*, σ₁*),    (14)

and the no-arbitrage condition for the marginal worker is

w(s) = w(s*) for all s ≤ s*.    (15)

The last result implies that there is a mass point at the lower end of the wage distribution. The mass point is a result of normalizing A, the mass of problems drawn, to one for all workers.
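The cost-minimization logic behind (11) can be made explicit. A firm producing training-intensive task σ chooses s to minimize w(s)/α(s, σ); a sketch of the first-order condition, evaluated at the equilibrium assignment σ = M(s):

```latex
% The firm producing task \sigma chooses s to minimize w(s)/\alpha(s,\sigma):
\min_{s}\; \frac{w(s)}{\alpha(s,\sigma)}
\quad\Longrightarrow\quad
\frac{w'(s)\,\alpha(s,\sigma) - w(s)\,\alpha_s(s,\sigma)}{\alpha(s,\sigma)^2} = 0 .
% Rearranging, and imposing that in equilibrium task \sigma hires
% skill s with \sigma = M(s):
\frac{w'(s)}{w(s)}
  = \left.\frac{\alpha_s(s,\sigma)}{\alpha(s,\sigma)}\right|_{\sigma=M(s)}
\quad\Longleftrightarrow\quad
\frac{d\log w(s)}{ds}
  = \left.\frac{\partial \log\alpha(s,\sigma)}{\partial s}\right|_{\sigma=M(s)} ,
```

which is exactly (11): along the equilibrium matching, wages grow with skill at the rate at which log productivity grows with skill.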
Relaxing this assumption would complicate the analysis, although the main results would go through.
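The matching system (10)-(11) can also be integrated numerically. The sketch below uses a simple Euler scheme under assumed primitives: the exponential-training productivity schedule, a uniform skill density, and illustrative values for µ, β, Y and the boundary conditions (none of these numbers come from the paper; they only make the mechanics visible):

```python
import math

def alpha(s, sigma, grid_n=2000, z_max=20.0):
    """Productivity from the optimal training problem,
    assuming F(z; sigma) = 1 - exp(-z/sigma)."""
    best = 0.0
    for i in range(grid_n + 1):
        z = z_max * i / grid_n
        if z >= s:
            break
        best = max(best, (1 - z / s) * (1 - math.exp(-z / sigma)))
    return best

def dlog_alpha_ds(s, sigma, h=1e-4):
    """Central-difference approximation of d log alpha / d s."""
    return (math.log(alpha(s + h, sigma)) - math.log(alpha(s - h, sigma))) / (2 * h)

mu, beta, Y = 1.0, 0.5, 1.0                   # assumed constants
def v(s): return 0.5                          # uniform skill density on [2, 4]
s_star, sigma1_star, w_star = 2.0, 0.5, 0.3   # assumed boundary values

n_steps, s_lo, s_hi = 200, 2.0, 4.0
ds = (s_hi - s_lo) / n_steps
M, w = sigma1_star, w_star
Ms, ws = [M], [w]
for k in range(n_steps):
    s = s_lo + k * ds
    M += (mu / (1 - beta)) * w * v(s) / Y * ds   # Euler step on (10)
    w *= math.exp(dlog_alpha_ds(s, M) * ds)      # Euler step on (11)
    Ms.append(M)
    ws.append(w)

# Matching is strictly increasing in skill, and wages rise with skill:
assert all(b > a for a, b in zip(Ms, Ms[1:]))
assert all(b > a for a, b in zip(ws, ws[1:]))
```

The positive slope of M reproduces the positive assortative matching of Lemma 3, and the rising wage path is the continuous part of the wage schedule above the mass point at w(s*).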
