Economics & Finance (School of) (http://hdl.handle.net/10023/22)
Feed updated: Sun, 18 Feb 2018 07:31:42 GMT
Earnout financing in the financial services industry (http://hdl.handle.net/10023/12622)
This paper explores the effects of earnout contracts used in US financial services M&A. We use propensity score matching (PSM) to address selection bias issues with regard to the endogeneity of the decision of financial institutions to use such contracts. We find that the use of earnout contracts leads to significantly higher acquirer abnormal (short- and long-run) returns compared to counterpart acquisitions (control deals) which do not use such contracts. The larger the size of the deferred (earnout) payment, as a fraction of the total transaction value, the higher the acquirers’ gains in the short- and long-run. Both acquirer short- and long-run gains increase when the management team of the target institution is retained in the post-acquisition period.
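To make the matching idea concrete, here is a minimal sketch (in Python, with invented numbers; not the authors' data or code) of the one-to-one nearest-neighbour step at the heart of a basic propensity score matching design: each earnout deal is paired with the control deal whose estimated propensity score is closest, and the treatment effect on the treated is the mean outcome gap across pairs.

```python
# Illustrative sketch of 1-NN propensity score matching without
# replacement. The deal data below are invented for the example.

def match_treated_to_controls(treated, controls):
    """Greedy one-to-one nearest-neighbour matching on propensity scores.

    treated / controls: lists of (propensity_score, outcome) tuples.
    Returns a list of (treated_outcome, matched_control_outcome) pairs.
    """
    available = list(controls)
    pairs = []
    for score, outcome in treated:
        if not available:
            break
        nearest = min(available, key=lambda c: abs(c[0] - score))
        available.remove(nearest)  # match without replacement
        pairs.append((outcome, nearest[1]))
    return pairs

def average_treatment_effect_on_treated(pairs):
    """Mean difference in outcomes across matched pairs."""
    return sum(t - c for t, c in pairs) / len(pairs)

# Hypothetical deals: (propensity score, acquirer abnormal return in %)
earnout_deals = [(0.72, 2.1), (0.55, 1.4), (0.80, 3.0)]
control_deals = [(0.70, 0.9), (0.50, 0.4), (0.78, 1.1), (0.30, -0.2)]

pairs = match_treated_to_controls(earnout_deals, control_deals)
att = average_treatment_effect_on_treated(pairs)
```

A full PSM analysis would first estimate the scores (for example, a logistic regression of earnout use on deal characteristics) and check covariate balance; both steps are omitted here.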
Authors: Barbopoulos, Leonidas G.; Molyneux, Phil; Wilson, John O.S. | Published: Sat, 01 Oct 2016

Entrepreneurship, agency frictions and redistributive capital taxation (http://hdl.handle.net/10023/12485)
Motivated by the observation that among OECD countries redistribution is negatively correlated with entrepreneurial activity, we examine the implications of entrepreneurial financial frictions for optimal linear capital taxation, in a setting where the government is concerned with redistribution. By including financial frictions, we emphasize the effect of a new channel affecting the equity-efficiency trade-off of redistribution: taxes affect the allocative efficiency of capital and, ultimately, total factor productivity. We find that high tax rates are optimal, provided that they are applied to wealth, rather than risky capital. Under plausible parameter values, we find that the optimal tax on risky capital is lower than that on wealth, and roughly in line with current U.S. levels. This suggests welfare gains from taxing only wealth at a higher rate.
Authors: Knowles, Matthew Paul; Boar, Corina | Published: Sun, 27 Aug 2017

Capital deaccumulation and the large persistent effects of financial crises (http://hdl.handle.net/10023/12484)
In a panel of OECD and emerging economies, I find that banking crises are characterized by larger initial drops in investment than other large recessions and are followed by particularly persistent drops in output. Furthermore, the larger the drop in investment during the crisis, the more persistent the decrease in output subsequently. I present a model to account for these patterns, in which a financial shock temporarily increases the costs of external finance for investing entrepreneurs, leading to a drop in investment and a persistent slump in output and employment. Critical to the model is the distinction between different types of capital with different depreciation rates. Intangible capital and equipment have high depreciation rates, leading these stocks to drop substantially when investment falls during a financial shock. This can cause output and employment to remain low for close to a decade, through the contribution of equipment and intangibles to production and labor demand. I show that the consequences of such a financial shock correspond to several features of the US Great Recession (2008-2014), especially the large drop in equipment and intangible capital. In the model, TFP and government spending shocks do not lead to such large declines in investment or persistent output decreases, so the model is also consistent with the more transitory output drops seen after non-financial recessions, where such shocks may have been more important.
Author: Knowles, Matthew Paul | Published: Sat, 17 Jan 2015

The clarity incentive for issue engagement in campaigns (http://hdl.handle.net/10023/12479)
Although parties focus disproportionately on favorable issues in their election campaigns, it is also the case that parties spend much of the ‘short campaign’ addressing the same issues – and especially salient issues. If able to influence the importance of issues for voters through their emphasis, it is puzzling that parties spend any time on unfavorable issue positions. We suggest that while parties prefer to emphasize popular issue positions, they also face an additional incentive to emphasize issues that are salient to voters: clarifying their positions on these issues for sympathetic voters. Leveraging the surprise general election victory of the British Conservative party in 2015—which brought about a hitherto unexpected referendum on EU membership—we show that, consistent with this hypothesis, voter uncertainty is especially costly for parties on salient issues. We formalize this argument using a model of party strategy with endogenous issue salience.
Authors: Basu, Chitralekha; Knowles, Matthew Paul | Published: Wed, 14 Jun 2017

Afriat's Theorem and Samuelson's 'Eternal Darkness' (http://hdl.handle.net/10023/12274)
Suppose that we have access to a finite set of expenditure data drawn from an individual consumer, i.e., how much of each good has been purchased and at what prices. Afriat (1967) was the first to establish necessary and sufficient conditions on such a data set for rationalizability by utility maximization. In this note, we provide a new and simple proof of Afriat’s Theorem, the explicit steps of which help to more deeply understand the driving force behind one of the more curious features of the result itself, namely that a concave rationalization is without loss of generality in a classical finite data setting. Our proof stresses the importance of the non-uniqueness of a utility representation along with the finiteness of the data set in ensuring the existence of a concave utility function that rationalizes the data.
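The rationalizability condition in Afriat's Theorem is equivalent to the Generalized Axiom of Revealed Preference (GARP), which is directly checkable on a finite data set. The following sketch (illustrative, not taken from the paper) builds the direct revealed preference relation, takes its transitive closure with Warshall's algorithm, and looks for a strict violation; the two tiny data sets at the end are invented.

```python
def garp_satisfied(prices, bundles):
    """Check GARP on finite expenditure data.

    prices[t], bundles[t]: price and quantity vectors at observation t.
    By Afriat's Theorem, the data are rationalizable by a concave,
    monotone utility function iff GARP holds.
    """
    n = len(prices)
    dot = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))
    # Direct revealed preference: x^t R0 x^s if x^s was affordable when
    # x^t was chosen, i.e. p^t.x^t >= p^t.x^s.
    R = [[dot(prices[t], bundles[t]) >= dot(prices[t], bundles[s])
          for s in range(n)] for t in range(n)]
    # Transitive closure (Warshall's algorithm).
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # Violation: x^t revealed preferred to x^s, yet x^t was strictly
    # cheaper than x^s at prices p^s (p^s.x^s > p^s.x^t).
    for t in range(n):
        for s in range(n):
            if R[t][s] and dot(prices[s], bundles[s]) > dot(prices[s], bundles[t]):
                return False
    return True

# Two observations consistent with utility maximization...
ok = garp_satisfied([(1, 2), (2, 1)], [(2, 1), (1, 2)])
# ...and a cyclical choice pattern that violates GARP.
bad = garp_satisfied([(1, 2), (2, 1)], [(1, 2), (2, 1)])
```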
Authors: Polisson, Matthew; Renou, Ludovic | Published: Mon, 01 Aug 2016

Examining monetary policy reaction in the People’s Republic of China – a Markov switching policy index approach (http://hdl.handle.net/10023/12076)
This paper estimates a monetary policy rule for the People’s Republic of China (PRC) using a standard OLS estimation and a Markov switching model. As the People’s Bank of China (PBOC) generally uses a battery of instruments in the conduct of its monetary policy, these models are estimated using a constructed monetary policy index (MPI) in place of the traditional interest rate. This allows for a better understanding of the role the PBOC has played in the PRC’s unprecedented economic growth and its relatively low inflation over the last twenty years. This paper will not only examine the unique characteristics of Chinese monetary policy but may also give a more general insight into the dynamics of monetary policy reactions in other emerging markets and economies in transition.
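The paper's MPI construction is specific to the PBOC's instrument set; purely as a generic illustration (weights, series, and transition probabilities invented), the sketch below builds a composite policy index as a weighted sum of standardized instrument series and simulates a two-state Markov regime path of the kind a Markov switching model estimates.

```python
import random

def z_scores(series):
    """Standardize a series to mean 0, standard deviation 1."""
    mean = sum(series) / len(series)
    sd = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5
    return [(x - mean) / sd for x in series]

def policy_index(instruments, weights):
    """Composite index: weighted sum of standardized instrument series."""
    standardized = [z_scores(s) for s in instruments]
    n = len(standardized[0])
    return [sum(w * s[t] for w, s in zip(weights, standardized))
            for t in range(n)]

def simulate_regimes(p_stay, n, rng):
    """Two-state Markov chain: leave state i with probability 1 - p_stay[i]."""
    state, path = 0, []
    for _ in range(n):
        path.append(state)
        if rng.random() > p_stay[state]:
            state = 1 - state
    return path

# Invented example: two mirror-image instrument series, equal weights.
index = policy_index([[1, 2, 3], [3, 2, 1]], [0.5, 0.5])
regimes = simulate_regimes([0.95, 0.8], 12, random.Random(7))
```

In an estimated Markov switching model the regime path is inferred from the data rather than simulated; this sketch only shows the moving parts.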
The authors are grateful for the financial support from the Irish Research Council (IRC) and The Paul Tansey Postgraduate Research Scholarship in Economics.
Authors: Egan, Paul Gerard; Leddin, Anthony J. | Published: Fri, 13 May 2016

Essays on issues in climate change policy (http://hdl.handle.net/10023/12023)
This thesis addresses three themes relating to climate change. The first is which types of fossil fuel to leave in the ground when they can differ in both their extraction cost and emissions rate. The analysis shows that without resource constraints there will always be use of at least one fossil fuel in the steady state. With exhaustion constraints, any fossil fuel that has a lower extraction cost than the marginal cost of the backstop will be extracted in finite time regardless of the emissions rate. The only environmental consideration is the timing of extraction rather than leaving fossil fuel stock in the ground forever.

The second theme is how altruistic concern of individuals for the well-being of others influences the socially optimal consumption levels and the optimal emissions tax in a global context. If individuals have altruistic concern but believe that their consumption is negligible, they will not change their behaviour. However, non-cooperative governments maximising domestic welfare will internalise some of the damage inflicted on other countries, depending on the level of altruistic concern individuals have. The cooperative optimum also changes, as altruism leads individuals to effectively experience damage in other countries as well as the direct damage to them. Still, for behaviour to change, individuals need to make their decisions in a different way.

The third chapter develops a new theory of moral behaviour whereby individuals balance the cost of not acting in their own self-interest against the hypothetical moral value of adopting a Kantian form of behaviour, asking what would happen if everyone else acted in the same way as they did. If individuals behave this way, then altruism matters and may induce individuals to cut back their consumption. Nevertheless, the optimal environmental tax is exactly the same as the standard Pigovian tax.
Author: Daube, Marc | Published: Fri, 23 Jun 2017

The earnout structure matters : takeover premia and acquirer gains in earnout financed M&As (http://hdl.handle.net/10023/11821)
In this article, based on both parametric and non-parametric methods, we provide a robust solution to the long-standing question of how earnouts in corporate takeovers are structured and how their structure influences the takeover premia and the abnormal returns earned by acquirers. First, we quantify the effect of the terms of the earnout contract (relative size and length) on the takeover premia. Second, we demonstrate how adverse selection considerations lead the merging firms to set the initial payment in an earnout financed deal at a level that is lower than, or equal to, the full deal payment in a comparable non-earnout financed deal. Lastly, we show that while acquirers in non-earnout financed deals experience negative abnormal returns from an increase in the takeover premia, this effect is neutralised in earnout financed deals.
Authors: Barbopoulos, Leonidas G.; Adra, Samer | Published: Sun, 01 May 2016

Preferential votes and minority representation in open list proportional representation systems (http://hdl.handle.net/10023/11799)
Under open list proportional representation, voters vote both for a party and for some candidates within its list (preferential vote). Seats are assigned to parties in proportion to their votes and, within parties, to the candidates obtaining the largest number of preferential votes. The paper examines how the number of candidates voters can vote for affects the representation of minorities in parliaments. I highlight a clear negative relationship between the two. Minorities are proportionally represented in parliament only if voters can cast a limited number of preferential votes. When the number of preferential votes increases, a multiplier effect arises, which disproportionately increases the power of the majority in determining the elected candidates.
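As a concrete illustration of the mechanics described above (a generic sketch, not the paper's model; parties, vote counts, and candidates are invented), the code below allocates seats to parties by the D'Hondt highest-averages method and then fills each party's seats with the candidates holding the most preferential votes.

```python
def dhondt(party_votes, total_seats):
    """Allocate seats by the D'Hondt highest-averages method."""
    seats = {p: 0 for p in party_votes}
    for _ in range(total_seats):
        # Next seat goes to the party with the largest quotient v / (s + 1).
        winner = max(party_votes, key=lambda p: party_votes[p] / (seats[p] + 1))
        seats[winner] += 1
    return seats

def elect_open_list(party_votes, preferential_votes, total_seats):
    """Within each party, its seats go to the candidates with the most
    preferential votes."""
    seats = dhondt(party_votes, total_seats)
    elected = {}
    for party, n in seats.items():
        ranked = sorted(preferential_votes[party],
                        key=preferential_votes[party].get, reverse=True)
        elected[party] = ranked[:n]
    return elected

# Invented example: 5 seats, two party lists.
seats = dhondt({"A": 100, "B": 60}, 5)  # seats == {"A": 3, "B": 2}
elected = elect_open_list({"A": 100, "B": 60},
                          {"A": {"a1": 40, "a2": 30, "a3": 20, "a4": 10},
                           "B": {"b1": 35, "b2": 25}}, 5)
```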
Author: Negri, Margherita | Published: Wed, 04 Oct 2017

Local labor markets and the persistence of population shocks (http://hdl.handle.net/10023/11798)
This paper studies the persistence of a large, unexpected, and regionally very unevenly distributed population shock, the inflow of eight million ethnic Germans from Eastern Europe to West Germany after World War II. Using detailed census data from 1939 to 1970, we show that the shock had a persistent effect on the distribution of population within local labor markets, but only a temporary effect on the distribution between labor markets. These results suggest that locational fundamentals determine population patterns across but not within local labor markets, and they can help to explain why previous studies on the persistence of population shocks reached such different conclusions.
The research in this paper was funded by Deutsche Forschungsgemeinschaft (grant no. BR 4979/1-1, “Die volkswirtschaftlichen Effekte der Vertriebenen und ihre Integration in Westdeutschland, 1945-70”).
Authors: Braun, Sebastian Till; Kramer, Anica; Kvasnicka, Michael | Published: Sat, 30 Sep 2017

R&D cyclicality and composition effects : a unifying approach (http://hdl.handle.net/10023/11790)
Existing empirical studies do not concur on whether R&D spending is procyclical or countercyclical: the former hypothesis is supported by studies of aggregate R&D spending, whereas the latter is vindicated by firm-level evidence. In this paper, we reconcile the two facts by advancing a general equilibrium framework, in which, while a single firm's R&D spending profile is countercyclical, aggregate R&D spending is procyclical owing to procyclical fluctuations in the number of R&D performers. Our findings suggest that economic crises might be beneficial for economic performance by fostering individual R&D effort. An advantage of our framework is that it brings together conflicting pieces of empirical evidence, while incorporating and building upon Schumpeter's hypothesis of countercyclical innovation.
Author: Chernyshev, Nikolay | Published: Wed, 27 Sep 2017

The relationship between R&D and competition : reconciling theory and evidence (http://hdl.handle.net/10023/11789)
The hypothesis of a hump-shaped relationship between innovation and competition, due to Aghion, Bloom, Blundell, Griffith, and Howitt (2005), has been tested for different data sets without garnering conclusive support. In this paper we argue that this lack of agreement is because of a difference in approaches to measuring innovation (either in terms of R&D outcomes or by R&D effort). We develop a unified tractable general-equilibrium framework, in which, while R&D outcomes are a hump-shaped function of competition, R&D effort can be observed to be either increasing, decreasing, or hump-shaped. This enables our paper, first, to reconcile the conclusions by Aghion et al. (2005) with more recent results and, second, to inform further attempts to identify the hump-shaped relationship in the data.
Generous financial support of the Royal Economic Society is gratefully recognised.
Author: Chernyshev, Nikolay | Published: Wed, 27 Sep 2017

Partial knowledge restrictions on the two-stage threshold model of choice (http://hdl.handle.net/10023/11768)
In the context of the two-stage threshold model of decision making, with the agent's choices determined by the interaction of three "structural variables," we study the restrictions on behavior that arise when one or more variables are exogenously known. Our results supply necessary and sufficient conditions for consistency with the model for all possible states of partial knowledge, and for both single- and multi- valued choice functions.
Authors: Manzini, Paola; Mariotti, Marco; Tyson, Christopher J. | Published: Sun, 01 May 2016

A time varying DSGE model with financial frictions (http://hdl.handle.net/10023/11730)
We build a time varying DSGE model with financial frictions in order to evaluate changes in the responses of the macroeconomy to financial friction shocks. Using U.S. data, we find that the transmission of the financial friction shock to economic variables, such as output growth, has not changed in the last 30 years. The volatility of the financial friction shock, however, has changed, so that output responses to a one-standard-deviation shock increase twofold in the 2007-2011 period in comparison with the 1985-2006 period. The time varying DSGE model with financial frictions improves the accuracy of forecasts of output growth and inflation during the tranquil period of 2000-2006, while delivering similar performance to the fixed coefficient DSGE model for the 2007-2012 period.
Galvão, Kapetanios and Petrova acknowledge financial support from the ESRC grant No. ES/K010611/1.
Authors: Galvão, Ana Beatriz; Giraitis, Liudas; Kapetanios, George; Petrova, Katerina | Published: Fri, 02 Sep 2016

Procedures for eliciting time preferences (http://hdl.handle.net/10023/11729)
We study three procedures to elicit attitudes towards delayed payments: the Becker-DeGroot-Marschak procedure; the second price auction; and the multiple price list. The payment mechanisms associated with these methods are widely considered as incentive compatible, thus if preferences satisfy Procedure Invariance, which is also widely (and often implicitly) assumed, they should yield identical time preference distributions. We find instead that the monetary discount rates elicited using the Becker-DeGroot-Marschak procedure are significantly lower than those elicited with a multiple price list. We show that the behavior we observe is consistent with an existing psychological explanation of preference reversals.
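For readers unfamiliar with the first of these procedures, the sketch below (illustrative only; the parameter values are invented) implements a single Becker-DeGroot-Marschak round for a delayed payment, together with the annual discount rate implied by a stated present value.

```python
import random

def bdm_round(stated_present_value, future_amount, rng):
    """One Becker-DeGroot-Marschak round for a delayed payment.

    A random offer is drawn; if it is at least the stated present value,
    the subject receives the offer today, otherwise the delayed payment
    stands. Stating one's true present value is the optimal strategy.
    """
    offer = rng.uniform(0.0, future_amount)
    if offer >= stated_present_value:
        return ("today", offer)
    return ("delayed", future_amount)

def implied_annual_discount_rate(stated_present_value, future_amount, years):
    """The r solving stated_present_value = future_amount / (1 + r)**years."""
    return (future_amount / stated_present_value) ** (1.0 / years) - 1.0

# Invented example: a subject values a 100-unit payment in one year at 90 today.
r = implied_annual_discount_rate(90.0, 100.0, 1.0)
outcome, amount = bdm_round(90.0, 100.0, random.Random(0))
```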
Authors: Freeman, David; Manzini, Paola; Mariotti, Marco; Mittone, Luigi | Published: Wed, 01 Jun 2016

Validity of willingness to pay measures under preference uncertainty (http://hdl.handle.net/10023/11697)
Recent studies in the marketing literature developed a new method for eliciting willingness to pay (WTP) with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach of eliciting WTP as a single value (Point-WTP), Range-WTP explicitly allows for preference uncertainty in responses. The aim of this paper is to apply Range-WTP to the domain of contingent valuation and to test for its theoretical validity and robustness in comparison to the Point-WTP. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of both methods in the field. In addition to the theoretical validity (i.e. the degree to which WTP values are consistent with theoretical expectations), we analyse the test-retest reliability and stability of our results over time. Our evidence suggests that the Range-WTP method clearly outperforms the Point-WTP method.
This paper is part of the project ACCEPT, which is funded by the German Federal Ministry for Education and Research (grant number 01LA1112A). The publication of this article was funded by the Open Access fund of the Leibniz Association. All data is available on the project homepage (https://www.ifw-kiel.de/forschung/umwelt/projekte/accept) and from Figshare (https://dx.doi.org/10.6084/m9.figshare.3113050.v1).
Wed, 20 Apr 2016 00:00:00 GMT
http://hdl.handle.net/10023/11697
Braun, Carola; Rehdanz, Katrin; Schmidt, Ulrich

Revealed preferences over risk and uncertainty
http://hdl.handle.net/10023/11663
We develop a nonparametric procedure, called the lattice method, for testing the consistency of contingent consumption data with a broad class of models of choice under risk and under uncertainty. Our method allows for risk loving and elation seeking behavior and can be used to calculate, via Afriat's efficiency index, the magnitude of violations from a particular model of choice. We evaluate the performance of different models (including expected utility, disappointment aversion, rank dependent utility, mean-variance utility, and stochastically monotone utility) in the data collected by Choi et al. (2007), in terms of pass rates, power, and predictive success.
Ludovic Renou would like to acknowledge financial support from the French National Research Agency (ANR), under the grant CIGNE (ANR-15-CE38-0007-01).
Wed, 13 Sep 2017 00:00:00 GMT
http://hdl.handle.net/10023/11663
Polisson, Matthew; Quah, John K.-H.; Renou, Ludovic

Optimal allocation with ex post verification and limited penalties
http://hdl.handle.net/10023/11657
Several agents with privately known social values compete for a prize. The prize is allocated based on the claims of the agents, and the winner is subject to a limited penalty if he makes a false claim. If the number of agents is large, the optimal mechanism places all agents above a threshold onto a shortlist along with a fraction of agents below the threshold, and then allocates the prize to a random agent on the shortlist. When the number of agents is small, the optimal mechanism allocates the prize to the agent who makes the highest claim, but restricts the range of claims above and below.
Fri, 01 Sep 2017 00:00:00 GMT
http://hdl.handle.net/10023/11657
Mylovanov, Tymofiy; Zapechelnyuk, Andriy

The shine of precious metals around the global financial crisis
http://hdl.handle.net/10023/11634
We analyze the price behavior of the main precious metals – gold, silver, platinum and palladium – before, during and in the aftermath of the 2007–08 financial crisis. Using the mildly explosive/multiple bubble technology developed by Phillips, Shi and Yu (2015, International Economic Review 56(4), 1043–1133), we find significant, short periods of mildly explosive behavior in the spot and futures prices of all four metals. Fewer periods are detected using exchange-rate adjusted prices, and almost none when deflated prices are used. We assess whether these findings are indicative of bubble behavior. Convenience yield is shown to have little efficacy in this regard, while other fundamental proxy variables and position data offer only very limited evidence against prices having been anything other than fundamentals-driven. Possible exceptions are in gold in the run-up to the highpoint of the financial crisis, and in silver and palladium around the launch of specific financial products. Some froth, however, is reported and discussed for each metal.
Figuerola-Ferretti thanks the Spanish Ministry of Education and Science for support under grants MICINN ECO2010-19357, ECO2012-36559 and ECO2013-46395; McCrorie thanks The Carnegie Trust for the Universities of Scotland for support under grant no. 31935.
Thu, 01 Sep 2016 00:00:00 GMT
http://hdl.handle.net/10023/11634
Figuerola-Ferretti, Isabel; McCrorie, J. Roderick

Persuasion of a privately informed receiver
http://hdl.handle.net/10023/11504
We study persuasion mechanisms in linear environments. A receiver has a private type and chooses between two actions. A sender designs a persuasion mechanism or an experiment to disclose information about a payoff-relevant state. A persuasion mechanism conditions information disclosure on the receiver's report about his type, whereas an experiment discloses information independent of the receiver's type. We establish the equivalence of implementation by persuasion mechanisms and by experiments, and characterize optimal persuasion mechanisms.
Kolotilin acknowledges financial support from the Australian Research Council. Zapechelnyuk acknowledges financial support from the Economic and Social Research Council (grant no. ES/N01829X/1).
Mon, 04 Dec 2017 00:00:00 GMT
http://hdl.handle.net/10023/11504
Kolotilin, Anton; Mylovanov, Tymofiy; Zapechelnyuk, Andriy; Li, Ming

Electoral systems, taxation and immigration policies : which system builds a wall first?
http://hdl.handle.net/10023/11407
When exposed to similar migration flows, countries with different institutional systems may respond with different levels of openness. We study in particular the different responses determined by different electoral systems. We find that Winner-Take-All countries tend to be more open than countries with proportional representation (PR) when all other policies are kept constant; crucially, however, once we account for the endogenous differences in redistribution levels across systems, the openness ranking may switch.
Morelli wishes to thank the European Research Council, grant 694583 on power relations, for financial support.
Mon, 07 Aug 2017 00:00:00 GMT
http://hdl.handle.net/10023/11407
Morelli, Massimo; Negri, Margherita

Inequalities in accessing LPG and electricity consumption in India : the role of caste, tribe, and religion
http://hdl.handle.net/10023/11401
Using data from the 68th round (2011–12) of the National Sample Survey Organisation, covering 88,939 households, this paper investigates inequalities in access to Liquefied Petroleum Gas (LPG) and electricity usage among households belonging to the three major disadvantaged groups in India, viz., the scheduled castes, the scheduled tribes, and the Muslims. The results of our analysis suggest that, after controlling for the other socio-economic factors which impinge on households' demand and supply characteristics, households belonging to these disadvantaged groups have poorer access to LPG and electricity usage compared to upper caste households. It is the scheduled caste and scheduled tribe households who would appear to face the most discrimination in the equality spaces of electricity usage and LPG distribution. Policy implications of the findings are considered.
Sun, 25 Jun 2017 00:00:00 GMT
http://hdl.handle.net/10023/11401
Saxena, Vibhor; Bhattacharya, P.C.

Penalizing cartels : the case for basing penalties on the price overcharge
http://hdl.handle.net/10023/11323
In this paper we set out the welfare-economics-based case for imposing cartel penalties on the cartel overcharge rather than on the more conventional bases of revenue or profits (illegal gains). To do this we undertake a systematic comparison of a penalty based on the cartel overcharge with three other penalty regimes: fixed penalties, penalties based on revenue, and penalties based on profits. Our analysis is the first to compare these regimes in terms of their impact on both (i) the prices charged by those cartels that do form; and (ii) the number of stable cartels that form (deterrence). We show that the class of penalties based on profits is identical to the class of fixed penalties in all welfare-relevant respects. For the other three types of penalty we show that, for those cartels that do form, penalties based on the overcharge produce lower prices than those based on profits, while penalties based on revenue produce the highest prices. Further, in conjunction with the above result, our analysis of cartel stability (and thus deterrence) shows that penalties based on the overcharge out-perform those based on profits, which in turn out-perform those based on revenue, in terms of their impact on each of the following welfare criteria: (a) average overcharge; (b) average consumer surplus; (c) average total welfare.
Yannis Katsoulacos acknowledges that this research has been co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: ARISTEIA – CoLEG.
Tue, 01 Sep 2015 00:00:00 GMT
http://hdl.handle.net/10023/11323
Katsoulacos, Yannis; Motchenkova, Evgenia; Ulph, David Tregear

Higher tax for top earners
http://hdl.handle.net/10023/11305
The literature can justify both increasing and decreasing marginal taxes (IMT & DMT) on top incomes under different welfare objectives and income distributions. Even when DMT are theoretically optimal, they are often politically infeasible; a flat tax then seems to be a constrained optimal solution. We show, however, that given any flat tax, we can increase the total utility of a poor majority by raising the top income tax rate under a simple condition, which can be checked with empirical data. We further generalize our main results to allow for different welfare weights, declining elasticity of labor supply, and more tax bands.
Sun, 01 Oct 2017 00:00:00 GMT
http://hdl.handle.net/10023/11305
FitzRoy, Felix R; Jin, Jim Yongtao

Optimal robust bilateral trade : risk neutrality
http://hdl.handle.net/10023/11119
A risk neutral seller and buyer with private information bargain over an indivisible item. We prove that optimal robust bilateral trade mechanisms are payoff equivalent to non-wasteful randomized posted prices.
Sun, 01 May 2016 00:00:00 GMT
http://hdl.handle.net/10023/11119
Čopič, Jernej; Ponsati Obiols, Clara

Good politicians' distorted incentives
http://hdl.handle.net/10023/11098
I construct a political agency model that provides a new explanation for sub-optimal policy making decisions by incumbents. I show that electoral incentives can induce politicians to address less relevant issues, disregarding more important ones. Issue importance is defined in terms of the utility voters would receive if the issue was solved. Contrary to existing literature, sub-optimal policy making occurs even when voters are perfectly informed about issues’ characteristics and politicians are policy oriented. I provide an explanation that relies on the negative correlation between issue importance and probability of solving it: for a given effort exerted by incumbents, less relevant issues guarantee higher probability of success. In equilibrium, voters cannot commit to re-elect the incumbent if and only if the most important issue was solved. This is because solving the easy issue also constitutes a positive signal about incumbents’ type. Whenever re-election is sufficiently valuable, then, politicians will choose to address less relevant and easier issues.
I acknowledge the financial support of the Fonds de la Recherche Scientifique - F.N.R.S.
Sat, 06 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/11098
Negri, Margherita

Preferential votes and minority representation
http://hdl.handle.net/10023/11097
Under open list proportional representation, voters vote both for a party and for some candidates within its list (preferential vote). Seats are assigned to parties in proportion to their votes and, within parties, to the candidates obtaining the largest number of preferential votes. The paper examines how the number of candidates voters can vote for affects the representation of minorities in parliaments. I highlight a clear negative relationship between the two. Minorities are proportionally represented in parliament only if voters can cast a limited number of preferential votes. When the number of preferential votes increases, a multiplier effect arises, which disproportionately increases the power of the majority in determining the elected candidates.
I am grateful to FNRS for their financial support.
Sat, 13 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/11097
Negri, Margherita

Socio-economic determinants of child and juvenile sex ratios in India : A longitudinal analysis
http://hdl.handle.net/10023/11078
The paper examines the determinants of the child and juvenile sex ratios in India in a multivariate framework, using district level data from the 1981, 1991, and 2001 Indian population censuses. The results strongly suggest that there are deep rooted cultural factors at play in the determination of the sex ratios at birth and at early ages, cultural factors that are not much responsive to the enhancement of women's agency or to economic development. However, the results also show that the behaviour of the juvenile sex ratio does respond to the enhancement of women's agency and to economic development. Policy implications of these findings are considered.
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/11078
Saxena, Vibhor; Bhattacharya, P.C.

The quest for growth in developing countries : an analysis of the effects of foreign aid on economic growth
http://hdl.handle.net/10023/11034
Large quantities of foreign development assistance continue to flow to many developing countries. At the same time, most of the aid-receiving countries have stagnated and become even more aid-dependent. This grim reality provokes vigorous debate on the effectiveness of aid. Despite the voluminous research on aid effectiveness, clear evidence to support the view that development aid stimulates economic growth remains scant.
This thesis intends to extend the existing literature on foreign aid and economic growth. First, we re-examine results from cross-country studies to provide new insights on the lack of robustness of results from this approach. We further explore and deepen the observation that cross-country results are fragile, particularly when the number of countries in the sample changes. Secondly, we study the impact of district-level aid disbursement on the growth of average night-time light density in Malawi. We use two plausibly exogenous determinants of within-country aid allocation to isolate the causal effects of aid. The results show a robust and quantitatively significant effect of aid flows in stimulating growth of light density. We find a hump-shaped growth response over three years. Finally, the thesis presents a theoretical model that explores how aid affects economic growth and welfare in an economy with subsistence constraints. The main results from this analysis are: (i) productive aid has higher long-run growth and welfare effects than pure aid; (ii) the rate of convergence depends crucially on how close the initial conditions are to the subsistence level; (iii) while growth effects are maximised when all the aid is allocated to productive aid, optimal welfare is reached when some proportion of aid is also allocated to pure transfers.
Fri, 23 Jun 2017 00:00:00 GMT
http://hdl.handle.net/10023/11034
Khomba, Daniel Chris

The whisky industry and the regional Scottish economy : an economic analysis of the impact of imminent innovations in public policy
http://hdl.handle.net/10023/10984
This dissertation analyses imminent innovations in public policy that will impact upon the whisky industry, and, through linkage adjustments, the regional Scottish economy. An analysis of the interconnectedness between the whisky industry and the wider Scottish economy reveals that such linkages are substantial.
A holistic conspectus of the whisky industry in the first part of the dissertation reveals that the predominant form of structural change in the past has been merger & acquisition. Such consolidation has permitted economies in marketing & distribution, but it is contended that in this arena at least there is scope for further performance improvement in the industry. Nevertheless, with taxation forming such a significant proportion of the final price of the product, realising a sustained increase in demand is deemed to be largely outwith the capability of the industry.
It is advanced, therefore, that two tax-related developments in public policy in the next few years will impact not merely upon the whisky industry, but materially upon the regional Scottish economy as well. The first of these imminent innovations examined is the proposed abolition of the intra-EU duty free concession in 1999. Whilst it is concluded that such a move is inevitable (and economically logical), it is nonetheless determined that this will have a meaningful detrimental impact upon the whisky industry and Scottish economy.
Secondly, the current proposals of the European Commission for the harmonisation of alcohol excises across the European Union are critically appraised, and are shown to be grounded on no logical economic principles, but instead, enshrine protection for European vinicultures. The rationale for alcohol taxation is considered de novo, concluding that within the United Kingdom & across the European Union, at a minimum all alcoholic beverages should be taxed on an equal basis according to alcoholic content, at a level sufficient to cover an estimate of the negative externalities associated with alcohol consumption.
Mindful of the importance of the whisky industry to the Scottish economy, it is revealed that in times past, the public authorities have been proactive in intervening to secure the continuing prosperity of the whisky industry, and it is contended that such a stance may be required of the present government. The dissertation concludes by advocating a set of reforms to the structure of alcohol excises in the United Kingdom.
An approximate halving of the excise applied to spirits, such that all alcoholic beverages are taxed equally according to alcoholic content, would ensure that the whisky industry & government could lobby with credibility for comparable structures to be adopted overseas, particularly in any revised proposals for European excise harmonisation. In addition, it is suggested that the fillip such a reform would give to domestic sales of whisky would mitigate the negative effects upon the whisky industry & regional Scottish economy of losing the intra-EU duty free concession in 1999.
Thu, 01 Jan 1998 00:00:00 GMThttp://hdl.handle.net/10023/109841998-01-01T00:00:00ZHaines, PaulThis dissertation analyses imminent innovations in public policy that will impact upon the whisky industry, and, through linkage adjustments, the regional Scottish economy. An analysis of the interconnectedness between the whisky industry and the wider Scottish economy reveals that such linkages are substantial.
A holistic conspectus of the whisky industry in the first part of the dissertation reveals that the predominant form of structural change in the past has been merger & acquisition. Such consolidation has permitted economies in marketing & distribution, but it is contended that in this arena at least there is scope for further performance improvement in the industry. Nevertheless, with taxation forming such a significant proportion of the final price of the product, realising a sustained increase in demand is deemed to be largely outwith the capability of the industry.
It is advanced, therefore, that two tax-related developments in public policy in the next few years will impact not merely upon the whisky industry, but materially upon the regional Scottish economy as well. The first of these imminent innovations examined is the
proposed abolition of the intra-EU duty free concession in 1999. Whilst it is concluded that such a move is inevitable (and economically logical), it is nonetheless determined that this will have a meaningful detrimental impact upon the whisky industry and Scottish economy.
Secondly, the current proposals of the European Commission for the harmonisation of alcohol excises across the European Union are critically appraised, and are shown to be grounded on no logical economic principles, but instead, enshrine protection for European vinicultures. The rationale for alcohol taxation is considered de novo, concluding that within the United Kingdom & across the European Union, at a minimum all alcoholic beverages should be taxed on an equal basis according to alcoholic content, at a level sufficient to cover an estimate of the negative externalities associated with alcohol consumption.
Mindful of the importance of the whisky industry to the Scottish economy, it is revealed that in times past, the public authorities have been proactive in intervening to secure the continuing prosperity of the whisky industry, and it is contended that such a stance may be required of the present government. The dissertation concludes by advocating a set of reforms to the structure of alcohol excises in the United Kingdom.
An approximate halving of the excise applied to spirits, such that all alcoholic beverages are taxed equally according to alcoholic content, would ensure that the whisky industry & government could lobby with credibility for comparable structures to be adopted overseas, particularly in any revised proposals for European excise harmonisation. In addition, it is suggested that the fillip such a reform would give to domestic sales of whisky would mitigate the negative effects upon the whisky industry & regional Scottish economy of losing the intra-EU duty free concession in 1999.

The local environment shapes refugee integration : evidence from post-war Germany
http://hdl.handle.net/10023/10983
This paper studies how the local environment in receiving counties affected the economic, social, and political integration of the eight million expellees who arrived in West Germany after World War II. We first document that integration outcomes differed dramatically across West German counties. We then show that more industrialized counties and counties with low expellee inflows were much more successful in integrating expellees than agrarian counties and counties with high inflows. Religious differences between native West Germans and expellees had no effect on labor market outcomes, but reduced inter-marriage rates and increased the local support for anti-expellee parties.
The research in this paper was funded by Deutsche Forschungsgemeinschaft (grant no. BR 4979/1-1, “Die volkswirtschaftlichen Effekte der Vertriebenen und ihre Integration in Westdeutschland, 1945-70”).
Tue, 30 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/10983
Braun, Sebastian Till; Dwenger, Nadja

Labor market returns to college major specificity
http://hdl.handle.net/10023/10982
This paper explores the definition and measurement of college major specificity and estimates its labor market return over a worker’s life cycle. After reviewing the variety of measures which have been used to measure specialization, we propose a new approach: a Theil measure based on the transferability of skills across occupations. We calculate and compare representative measures using data from the American Community Survey, National Longitudinal Survey of Youth, and the Baccalaureate and Beyond. We then use these measures to estimate the return to specialized higher education. Our consistent finding is that the most "general" majors are the ones that pay off the most over time. While there is an initial earnings premium to majors with a tight connection to the labor market and to those classified as "vocational", this fades by age 30. Meanwhile, majors that teach versatile, transferable skills earn the most at every age. Employment returns are largely consistent with these earnings estimates. While vocational majors display a persistent employment premium over the life cycle, most other measures suggest that graduates from general majors work more hours, are more likely to be employed, and are more likely to be employed full time. Overall, major specificity explains 22% of the variation across majors in earnings and 28% of the variation in work hours.
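The Theil measure named in the abstract can be illustrated with a minimal sketch. This is an illustrative entropy-based construction, not the paper's exact measure (which additionally weights by skill transferability across occupations); the majors and occupation shares are hypothetical.

```python
import math

def theil_concentration(occupation_shares):
    """Theil-style concentration of a major's graduates across occupations.

    0 = perfectly general (uniform spread across occupations);
    ln(n) = fully specialized (all graduates in one occupation).
    Illustrative only; the paper's measure also weights by skill
    transferability across occupations.
    """
    positive = [s for s in occupation_shares if s > 0]
    n = len(occupation_shares)
    entropy = -sum(s * math.log(s) for s in positive)
    return math.log(n) - entropy

# Hypothetical "vocational" major: most graduates in one occupation.
nursing = [0.85, 0.05, 0.05, 0.05]
# Hypothetical "general" major: graduates spread evenly.
liberal_arts = [0.25, 0.25, 0.25, 0.25]

assert theil_concentration(liberal_arts) < 1e-9              # uniform -> 0
assert theil_concentration(nursing) > theil_concentration(liberal_arts)
```

Under this construction a "specific" major scores high and a "general" major scores near zero, matching the paper's distinction between vocational and general majors.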
Tue, 16 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/10982
Leighton, Margaret; Speer, Jamin

Choosing on Influence
http://hdl.handle.net/10023/10876
Interaction, the act of mutual influence, is an essential part of daily life and economic decisions. This paper presents an individual decision procedure for interacting individuals. According to our model, individuals seek influence from each other for those issues that they cannot solve on their own. Following a choice-theoretic approach, we provide simple properties that help to detect interacting individuals. Revealed preference analysis recovers not only the underlying preferences but also the influence acquired.
Financial support by the Spanish Ministry of Science and Innovation through Grant ECO2008-04756 (Grupo Consolidado-C), FEDER and also the Scottish Institute of Research in Economics (SIRE) is acknowledged.
Fri, 26 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/10876
Cuhadaroglu, Tugce

Rationally inattentive preferences and hidden information costs
http://hdl.handle.net/10023/10875
We show how information acquisition costs can be identified using observable choice data. Identifying information costs from behavior is especially relevant when these costs depend on factors–such as time, effort and cognitive resources–that are difficult to observe directly, as in models of rational inattention. Using willingness-to-pay data for opportunity sets–which require more or less information to make choices–we establish a set of canonical properties that are necessary and sufficient to identify information costs. We also provide an axiomatic characterization of the induced rationally inattentive preferences, and show how they reveal the amount of information a decision maker acquires.
Fri, 26 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/10875
de Oliveira, Henrique; Denti, Tommaso; Mihm, Maximilian; Ozbek, Kemal

A model of self-discipline
http://hdl.handle.net/10023/10844
In this paper, we propose a model of self-discipline where a decision-maker balances the benefits of regulating her moods against a cost of self-discipline effort. Self-discipline is beneficial as it reduces the chances of internal conflict, yet it is a costly effort to undertake. We provide an axiomatic characterization of our model in a menu-choice framework, and show how costs of self-discipline can be elicited and compared across individuals. Our model generalizes well-known models of temptation-driven behavior by viewing temptations as the endogenous outcome of a self-discipline choice problem.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/10844
Mihm, Maximilian; Ozbek, Kemal

Coordinating the household retirement decision
http://hdl.handle.net/10023/10801
This paper explores the sources of retirement synchronisation in dual career households. Empirical evidence suggests that the majority of couples exit the labor force within a short period of time, too short to be explained by age differences alone. This retirement coordination is frequently attributed to the complementarity of the spouses’ leisure. Contrary to this view, my estimates suggest that in a household with CES preferences the quantities of leisure consumed by husbands and wives are gross substitutes. Looking for alternative explanations, I develop a dynamic programming model of optimal retirement and labor supply decisions with uncertainty about the household structure, survival, future health status and income. Apart from leisure complementarity, four other channels may generate coordinated retirement in the model: correlated shocks to individual health and wages, joint response to the shocks received by the household, correlated tastes for leisure due to sorting on unobservables captured by the household fixed effects, and spousal benefits provided by the Social Security. The model generates a distribution of optimal retirement timing that closely mimics the outcomes observed in the data. A counterfactual designed to shut down the family-based provisions of the Social Security Act shows that most of the observed coordination can be explained by the existing Social Security policy.
Mon, 21 Aug 2017 00:00:00 GMT
http://hdl.handle.net/10023/10801
Merkurieva, Irina

Solving asset pricing models with stochastic volatility
http://hdl.handle.net/10023/10725
This paper provides a closed-form solution for the price-dividend ratio in a standard asset pricing model with stochastic volatility. The growth rate of the endowment is a first-order Gaussian autoregression, while the stochastic volatility innovations can be drawn from any distribution for which the moment-generating function exists. The solution is useful in allowing comparisons among numerical methods used to approximate the nontrivial closed form. The closed-form solution reveals that, when using perturbation methods around the deterministic steady state, the approximate solution needs to be sixth-order accurate in order for the parameter capturing the conditional standard deviation of the stochastic volatility process to be present. Published by Elsevier B.V.
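The role of the moment-generating function in such closed forms can be seen in a simpler benchmark. The sketch below prices a Lucas tree with i.i.d. lognormal growth and CRRA utility — a deliberately stripped-down case, not the paper's AR(1)-plus-stochastic-volatility model — where the price-dividend ratio reduces to a geometric sum whose common ratio is built from the normal MGF.

```python
import math

def normal_mgf(t, mu, sigma):
    # Moment-generating function of N(mu, sigma^2): E[exp(t * X)].
    return math.exp(mu * t + 0.5 * (sigma * t) ** 2)

def price_dividend_iid(beta, gamma, mu, sigma):
    """Price-dividend ratio in a Lucas-tree model with i.i.d. lognormal
    growth g and CRRA risk aversion gamma (a benchmark case, not the
    paper's model): P/D = kappa / (1 - kappa), with
    kappa = beta * E[g^(1-gamma)] computed from the normal MGF.
    Converges only if kappa < 1."""
    kappa = beta * normal_mgf(1.0 - gamma, mu, sigma)
    if kappa >= 1.0:
        raise ValueError("no finite price-dividend ratio (kappa >= 1)")
    return kappa / (1.0 - kappa)

# Log utility (gamma = 1): the MGF term drops out and P/D = beta / (1 - beta).
assert abs(price_dividend_iid(0.95, 1.0, 0.02, 0.02) - 19.0) < 1e-9
```

The same logic — replacing an expectation of exponentials of shocks with an MGF evaluation — is what lets the paper accommodate any stochastic-volatility innovation distribution for which the MGF exists.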
Sun, 01 Mar 2015 00:00:00 GMT
http://hdl.handle.net/10023/10725
de Groot, Oliver

Cost of borrowing shocks and fiscal adjustment
http://hdl.handle.net/10023/10464
Do capital markets impose fiscal discipline? To answer this question, we estimate the fiscal response to a change in the interest rate paid by 14 European governments over four decades in a panel VAR, using sign restrictions to identify structural shocks. A jump in the cost of borrowing leads to an improvement in the primary balance although insufficient to prevent a rise in the debt-to-GDP ratio. Adjustment mainly takes place via rising revenues rather than falling primary expenditures. For EMU countries, the primary balance response was stronger after 1992, when the Maastricht Treaty was signed, suggesting an important interaction between market discipline and fiscal rules.
Tue, 01 Dec 2015 00:00:00 GMT
http://hdl.handle.net/10023/10464
de Groot, Oliver; Holm-Hadulla, F.; Leiner-Killinger, N.

Dirty little secrets : inferring fossil-fuel subsidies from patterns in emission intensities
http://hdl.handle.net/10023/10412
I develop a unique database of international fossil-fuel subsidies by examining country-specific patterns in carbon emission-to-GDP ratios, known as emission intensities. For most - but not all - countries, intensities tend to be hump-shaped with income. I construct a model of structural transformation that generates this hump-shaped intensity and then show that deviations from this pattern must be driven by distortions to sectoral productivity and/or fossil-fuel prices. Finally, I use the calibrated model to measure these distortions for 170 countries for 1980-2010. This methodology reveals that fossil-fuel price distortions are large, increasing and often hidden. Furthermore, they are major contributors to higher carbon emissions and lower GDP.
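The object of interest — the emission-to-GDP ratio and deviations from a hump-shaped benchmark — can be sketched directly. The country path and the "wedge" interpretation below are hypothetical illustrations of the abstract's logic, not the paper's calibrated model.

```python
import math

def emission_intensity(co2, gdp):
    """Carbon emission-to-GDP ratio, the paper's 'emission intensity'."""
    return co2 / gdp

def log_wedge(observed_intensity, benchmark_intensity):
    """Deviation of observed intensity from a hump-shaped benchmark,
    read (in the spirit of the paper) as evidence of price or
    productivity distortions. Positive = dirtier than predicted.
    The benchmark here would come from the calibrated model."""
    return math.log(observed_intensity / benchmark_intensity)

# Hypothetical country path as income grows (units arbitrary):
co2 = [0.5, 5.0, 14.0, 20.0]
gdp = [1.0, 5.0, 20.0, 50.0]

intensities = [emission_intensity(c, g) for c, g in zip(co2, gdp)]
# Intensity first rises with income, then falls: the hump shape.
assert intensities[1] == max(intensities)
assert intensities[0] < intensities[1] and intensities[2] > intensities[3]

# A country 2x dirtier than the benchmark shows a positive wedge.
assert log_wedge(1.0, 0.5) > 0
```

In the paper, it is the sign and size of such wedges — after the structural-transformation hump is accounted for — that identify hidden fossil-fuel subsidies.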
Mon, 06 Mar 2017 00:00:00 GMT
http://hdl.handle.net/10023/10412
Stefanski, Radoslaw

Lower tax for minimum wage earners
http://hdl.handle.net/10023/10362
We show that minimum wage earners should pay a lower tax than high earners. Though intuitive, this idea is not supported by the existing literature. The optimal maximin tax curve and two-band taxes are usually decreasing. Since decreasing marginal taxes would be unpopular, by continuity a flat tax seems to be superior to increasing marginal taxes and should be a second best solution. However, using a simple utility function and a general income distribution, we find that lowering the marginal tax for minimum wage earners not only dominates the optimal flat tax under maximin, but also makes everyone better off.
Wed, 01 Feb 2017 00:00:00 GMT
http://hdl.handle.net/10023/10362
FitzRoy, Felix; Jin, Jim

Higher tax for top earners
http://hdl.handle.net/10023/10361
The literature can justify increasing and decreasing marginal taxes (IMT & DMT) on top income under different social objectives and income distributions. Even if DMT are optimal, they are often politically infeasible. Then a flat tax seems to be a constrained optimal solution. We show, however, that if we want to maximize the utility of a poor majority, any flat tax can be inferior to some IMT. We provide a sufficient condition for (two-band) IMT to dominate any flat tax and further generalize this result to allow different welfare weights, declining elasticity of labour supply and more tax bands.
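The mechanics of a two-band IMT dominating a flat tax for a poor majority can be sketched on a toy example. This is a purely arithmetic illustration with fixed incomes — it ignores the behavioral responses, welfare weights and labour-supply elasticities the paper models — and all numbers (incomes, threshold, rates) are hypothetical, chosen so the two schedules raise equal revenue.

```python
def two_band_tax(y, k, t_low, t_high):
    """Two-band schedule: rate t_low on income up to threshold k,
    rate t_high on income above k (increasing marginal tax if
    t_high > t_low)."""
    return t_low * min(y, k) + t_high * max(y - k, 0.0)

def flat_tax(y, t):
    return t * y

# Hypothetical economy: a poor majority of three, one top earner.
incomes = [10.0, 10.0, 10.0, 100.0]

# A 20% flat tax vs. a two-band IMT (10% up to 20, 26.25% above),
# with the IMT rates chosen to raise the same total revenue.
flat_revenue = sum(flat_tax(y, 0.20) for y in incomes)               # 26.0
imt_revenue  = sum(two_band_tax(y, 20.0, 0.10, 0.2625) for y in incomes)
assert abs(flat_revenue - imt_revenue) < 1e-9

# Holding revenue fixed, each member of the poor majority pays less
# under the IMT (1.0 instead of 2.0), at the expense of the top earner.
assert two_band_tax(10.0, 20.0, 0.10, 0.2625) < flat_tax(10.0, 0.20)
```

With fixed labour supply this dominance for the poor majority is mechanical; the paper's contribution is a sufficient condition under which it survives once labour-supply responses are taken into account.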
Wed, 01 Feb 2017 00:00:00 GMT
http://hdl.handle.net/10023/10361
FitzRoy, Felix; Jin, Jim

Financial intermediation, resource allocation, and macroeconomic interdependence
http://hdl.handle.net/10023/10342
This paper studies the role of the financial sector in affecting domestic resource allocation and cross-border capital flows. I develop a quantitative, two-country, macroeconomic model in which banks face endogenous and occasionally binding leverage constraints. Banks lend funds to be invested in tradable or non-tradable sector capital and there is international financial integration in the market for bank liabilities. I focus on news about economic fundamentals as the key source of fluctuations. Specifically, in the case of positive news on the valuation of non-traded sector capital that turn out to be incorrect at a later date, the model generates an asymmetric, belief-driven boom-bust cycle that reproduces key features of the recent Eurozone crisis. Bank balance sheets amplify and propagate fluctuations through three channels when leverage constraints bind: First, amplified wealth effects induce jumps in import-demand (demand channel). Second, changes in the value of non-tradable sector assets alter bank lending to tradable sector firms (intra-national spillover channel). Third, domestic and foreign households re-adjust their savings in domestic banks, and capital flows further amplify fluctuations (international spillover channel). A common central bank’s unconventional policies of private asset purchases and liquidity facilities in response to unfulfilled expectations are successful at ameliorating the economic downturn.
Financial support from the Henry T. Buechel Fellowship is acknowledged.
Fri, 17 Feb 2017 00:00:00 GMT
http://hdl.handle.net/10023/10342
Ozhan, G. Kemal

Inferring cognitive heterogeneity from aggregate choices
http://hdl.handle.net/10023/10338
We study the problem of recovering the distribution of cognitive characteristics in a population of boundedly rational agents from the aggregate choices they make from a fixed menu of alternatives. Two models of limited attention are examined from this point of view, and it is shown that both “consideration probability” and “consideration capacity” distributions are substantially identified by aggregate choice shares. These models are applied to data on over-the-counter painkiller sales, yielding concurrent estimates that on average two or three out of the eight available products are considered in this market.
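The link from consideration probabilities to aggregate choice shares can be sketched for a random-consideration model in the spirit of Manzini and Mariotti: an item is chosen when it is considered and no preferred item is considered. The products and probabilities below are hypothetical, and this homogeneous-agent sketch omits the heterogeneity across agents that the paper identifies.

```python
def choice_shares(gammas, preference_order):
    """Aggregate choice shares in a random-consideration model:
    item a is chosen with probability
        gamma_a * prod over preferred items b of (1 - gamma_b),
    i.e. a is noticed and everything better goes unnoticed.
    preference_order lists items from best to worst."""
    shares = {}
    prob_better_unnoticed = 1.0
    for item in preference_order:
        g = gammas[item]
        shares[item] = g * prob_better_unnoticed
        prob_better_unnoticed *= (1.0 - g)
    return shares

# Hypothetical painkiller market; gamma = consideration probability.
gammas = {"brand_A": 0.6, "brand_B": 0.5, "generic": 0.9}
shares = choice_shares(gammas, ["brand_A", "brand_B", "generic"])

# brand_A: 0.6; brand_B: 0.5 * 0.4 = 0.2; generic: 0.9 * 0.4 * 0.5 = 0.18
assert abs(shares["brand_B"] - 0.2) < 1e-12
# Shares sum to less than one: the residual is the no-purchase default.
assert sum(shares.values()) < 1.0
```

Inverting this mapping — from observed shares back to the gamma distribution — is the identification exercise the paper carries out, extended to heterogeneous consideration probabilities and capacities.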
Manzini and Mariotti thank the ESRC for the financial support provided under grant ES/J012513/1.
Thu, 25 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/10338
Dardanoni, Valentino; Manzini, Paola; Mariotti, Marco; Tyson, Christopher J.

Spatial competition and social welfare in the presence of non-monotonic network effects
http://hdl.handle.net/10023/10041
We study a spatial duopoly and extend the literature by giving joint consideration to non-monotonic network effects and endogenous firm location decisions. We show that the presence of network effects (capturing, for example, in-store rather than online sales) improves welfare whenever the total market size is not too large. This effect is lost if network effects are specified in a monotonic fashion, in which case isolating consumers from one another always reduces welfare. We also provide a new rationale for a duopoly to be welfare-preferred to monopoly: in large markets, splitting demand between two firms can reduce utility losses due to crowding.
Fri, 02 Jun 2017 00:00:00 GMT
http://hdl.handle.net/10023/10041
Savorelli, Luca; Seifert, Jacob

Revisiting the optimal linear income tax with categorical transfers
http://hdl.handle.net/10023/10018
When individuals differ in both productivity and some categorical attribute, optimal linear/piecewise-linear tax expressions are written to capture cases where it is suboptimal to eliminate inequality in the average social marginal value of income between categorical groups. Simulations provide examples.
This work was supported by AXA Research Fund.
Tue, 01 Sep 2015 00:00:00 GMT
http://hdl.handle.net/10023/10018
Slack, Sean Edward

The impact of World Bank and International Monetary Fund programme lending on health care delivery, health conditions and health status in sub-Saharan Africa, 1980 to 1992
http://hdl.handle.net/10023/10013
The World Bank and the International Monetary Fund have been active in Africa for several decades. In the early 1980s both institutions expanded the role that they play in the restructuring of African economies through the introduction of structural adjustment loans. These programme loans sought to provide the basis for sustainable economic expansion following a period of near economic collapse in the region. In the case of the Fund, public expenditure reducing and expenditure switching policies were encouraged. The Bank, also, was active in these areas and focused on long-term measures to restore efficiency to the ailing economies. These policies, although not novel, were implemented on a large scale and were perceived to have a pervasive influence on the economic and social performance of African countries.

It was theorised by some that such programme lending would have a long-run beneficial impact on social development. However, other authors, observers and researchers have criticised the activities of the Bretton Woods institutions. First, the loans have been heavily criticised in the past for the supposedly heavy-handed manner in which Bank and Fund staff implement their programmes. The main idea is that the institutions have too much leverage when bargaining with African governments to undertake reforms. Second, it has been said that the use of programme loans will have adverse consequences for national welfare. UNICEF, the main critic, has pointed out, and provided evidence to indicate, that vulnerable groups in society may suffer under adjustment schemes.

This thesis looks at the areas of macroeconomic reforms and the impact that they may have on one part of the social area: the health sector. The thesis examines the pre-adjustment situation in Sub-Saharan Africa and reviews the role and the tools that the Bank and the Fund have at their disposal to tackle economic problems. The thesis then moves on to explore the linkages between these policy weapons and changes in health care development. In order to fully understand the implications for Africa, considerable attention is devoted to exploring the health problems that the region faces and the health care delivery systems and health conditions that are prevalent in many of the countries. The last part of the thesis provides an aggregate study and a case study analysis of the impact of adjustment in Africa. Although it is determined that the impact, overall, has not been unfavourable, recommendations for the future design of adjustment programmes are offered in the conclusion.
Sun, 01 Jan 1995 00:00:00 GMT
http://hdl.handle.net/10023/10013
Evans, Christopher J.

Government size, misallocation and the resource curse
http://hdl.handle.net/10023/9909
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/9909
Stefanski, Radoslaw Lucjan

Subjective well-being, peer comparisons and optimal income taxation
http://hdl.handle.net/10023/9871
Empirical evidence suggests that an important determinant of subjective well-being is how an individual’s consumption compares with that of their immediate peers. We introduce peer comparisons into the standard optimal tax framework and demonstrate that the optimal linear tax expression is adjusted in three key ways, the latter two of which are novel to this paper and act to lower the tax rate. First, the dependence of well-being on peer income introduces an externality that distorts labour supply above that which individuals would choose were they to recognise the interplay between their own choices and the Nash equilibrium level of peer consumption. The optimal tax rate is adjusted upwards to (partially) correct this distortion. Second, if individual labour supply is a function of peer consumption, there are ‘Keeping up with the Joneses’ multiplier effects that raise the Nash compensated labour supply elasticity above the individual labour supply elasticity. This implies a lower tax rate on efficiency grounds. Third, Nash indirect well-being is decreasing in the wage rate for workers with wages close to the reservation wage. To the extent that this lowers the covariance between gross earnings and the net social marginal value of income, this will act to lower the optimal tax rate.
Wed, 23 Nov 2016 00:00:00 GMT
http://hdl.handle.net/10023/9871
Ulph, David; Slack, Sean Edward

Impact of risk aversion and countervailing tax in oligopoly
http://hdl.handle.net/10023/9837
The literature recognizes the qualitative effects of risk aversion on oligopolistic market performance, but less is known about their magnitudes. We quantitatively evaluate these effects in Cournot and Bertrand oligopolies where firms maximize mean-variance utilities under linear demand and costs. The impacts are very similar for the two types of oligopoly, but have opposite signs. The impacts of a firm’s risk aversion on outputs, prices, consumer surplus and social welfare can be expressed via potentially observable variables. Since these impacts resemble the effects of firms’ cost changes, a regulator can reduce or eliminate undesirable effects of risk aversion by changing firms’ costs with appropriate countervailing taxes.
Kobayashi’s work was partially supported by JSPS KAKENHI Grant Number JP15K03462.
Thu, 01 Dec 2016 00:00:00 GMT
http://hdl.handle.net/10023/9837
Jin, Jim Yongtao; Kobayashi, Shinji

Into the mire : A closer look at fossil-fuel subsidies
http://hdl.handle.net/10023/9833
Threatened by climate change, governments the world over are attempting to nudge markets in the direction of less carbon-intensive energy. Perversely, many of these governments continue to subsidize fossil fuels, distorting markets and raising emissions. Determining how much money is involved is difficult, as neither the providers nor the recipients of those subsidies want to own up to them. This paper builds on a unique method to extract fossil fuel subsidies from patterns in countries’ carbon emission-to-GDP ratios. This approach is useful since it: 1) overcomes the problem of scarce data; 2) derives a wider and more comparable measure of subsidies than existing measures; and 3) allows for the performance of counterfactuals which help measure the impact of subsidies on emissions and growth. The resultant 170-country, 30-year database shows that the financial and the environmental costs of such subsidies are enormous, especially in China and the US. The overwhelming majority of the world’s fossil fuel subsidies stem from China, the US and the ex-USSR; as of 2010, this figure was $712 billion, or nearly 80 per cent of the total world value of subsidies. For its part, Canada has been subsidizing rather than taxing fossil fuels since 1998. By 2010, Canadian subsidies sat at $13 billion, or 1.4 per cent of GDP. In that same year, the total direct and indirect financial costs of all such subsidies amounted to $1.82 trillion, or 3.8 per cent of global GDP. Aside from the money saved, in 2010 a world without subsidies would have had carbon emissions 36 per cent lower than they actually were. Any government looking to ease strained budgets and make a significant (and cheap) contribution to the fight against climate change must consider slashing fossil fuel subsidies.
As the data show, this is a sound decision – fiscally and environmentally.
Tue, 01 Mar 2016 00:00:00 GMT
http://hdl.handle.net/10023/9833
Stefanski, Radoslaw (Radek)

East side story : historical pollution and persistent neighborhood sorting
http://hdl.handle.net/10023/9784
Why are the East sides of former industrial cities like London or New York poorer and more deprived? We argue that this observation is the most visible consequence of the historically unequal distribution of air pollutants across neighborhoods. In this paper, we geolocate nearly 5,000 industrial chimneys in 70 English cities in 1880 and use an atmospheric dispersion model to recreate the spatial distribution of pollution. First, individual-level census data show that pollution induced neighborhood sorting during the course of the nineteenth century. Historical pollution patterns explain up to 15% of within-city deprivation in 1881. Second, these equilibria persist to this day even though the pollution that initially caused them has waned. A quantitative model shows the role of non-linearities and tipping-like dynamics in such persistence.
Tue, 01 Nov 2016 00:00:00 GMT
http://hdl.handle.net/10023/9784
Heblich, Stephan; Trew, Alex; Zylberberg, Yanos

Aid and growth in Malawi
http://hdl.handle.net/10023/9783
We study the impact of foreign aid flows on growth in districts of Malawi over the period 2000–13. To isolate a causal impact on growth, we employ two exogenous determinants of within-country aid disbursement: first, the ethnic affinity of a district with the sitting President; second, the portion of Parliamentarians in a district that are susceptible to induced political defections. Using these instruments, alone or together, we identify a robust and quantitatively significant role for aid flows in causing higher growth in light density. We find a hump-shaped growth response over the course of three years. Bilateral aid appears to be more effective in causing growth than multilateral aid, while grants have more impact than loans.
Mon, 12 Jun 2017 00:00:00 GMT
http://hdl.handle.net/10023/9783
Khomba, Daniel Chris; Trew, Alex

Relative equity market valuation conditions and acquirers’ gains
http://hdl.handle.net/10023/9776
We examine whether the relative equity market valuation conditions (EMVCs) in the countries of merging firms help acquirers’ managers to time the announcements of both domestic and foreign targets. After controlling for several deal- and merging firm-specific features we find that the number of acquisitions and acquirers’ gains are higher during periods of high-EMVCs at home, irrespective of the domicile of the target. We also find that the higher gains of foreign target acquisitions realized during periods of high-EMVCs at home stem from acquiring targets based in the RoW (=World-G7), rather than the G6 (=G7-UK) group of countries. We argue that this is due to the low correlation of EMVCs between the UK (home) and the RoW group of countries. However, these gains disappear or even reverse during the post-announcement period. Moreover, acquisitions of targets domiciled in the RoW (G6) countries yield higher (lower) gains than acquisitions of domestic targets during periods of high-EMVCs at home. This suggests that the relative EMVCs between the merging firms’ countries allow acquirers’ managers to time the market and acquire targets at a discount, particularly in countries in which acquirers’ stocks are likely to be more overvalued than the targets’ stocks.
Sun, 01 Oct 2017 00:00:00 GMT
http://hdl.handle.net/10023/9776
Barbopoulos, Leonidas; Andriosopoulos, Dimitris

Fiscal policy multipliers in an RBC model with learning
http://hdl.handle.net/10023/9677
Using the standard real business cycle model with lump-sum taxes, we analyze the impact of fiscal policy when agents form expectations using adaptive learning rather than rational expectations (RE). The output multipliers for government purchases are significantly higher under learning, and fall within empirical bounds reported in the literature, which is in sharp contrast to the implausibly low values under RE. Positive effects of fiscal policy are demonstrated during times of economic stress like the recent Great Recession. Finally it is shown how learning can lead to consumption and investment dynamics empirically documented during some episodes of “fiscal consolidations.”
This work was supported by ESRC Grant RES-062-23-2617 and National Science Foundation Grant no. SES-1025011
Tue, 11 Jul 2017 00:00:00 GMT
http://hdl.handle.net/10023/9677
Mitra, Kaushik; Evans, George W.; Honkapohja, Seppo

Meritocracy, egalitarianism and the stability of majoritarian organizations
http://hdl.handle.net/10023/9533
Egalitarianism and meritocracy are competing principles to distribute the joint benefits of cooperation. We examine the consequences of letting members of society vote between those two principles, in a context where individuals must join with others into coalitions of a certain size to become productive. Our setup induces a hedonic game of coalition formation. We study the existence of core stable partitions (organizational structures) of this game. We show that the inability of voters to commit to one distributional rule or another is a potential source of instability. But we also prove that, when stable organizational structures exist, they may be rich in form, and different than those predicted by alternative models of coalition formation. Non-segregated coalitions may arise within core stable structures. Stability is also compatible with the coexistence of meritocratic and egalitarian coalitions. These phenomena are robust, and persist under alternative variants of our initial model.
Fri, 01 May 2015 00:00:00 GMT
http://hdl.handle.net/10023/9533
Barberà, Salvador; Beviá, Carmen; Ponsatí, Clara

Essays on international portfolio choices and capital flows
http://hdl.handle.net/10023/9489
The goal of this thesis is to study the international portfolio choices of countries
in an asymmetric world. In practice, this corresponds to the salient facts of
country portfolios and the underlying structural asymmetries between developing
and developed countries in a financially integrated world. In the three main
chapters of the thesis, frameworks are developed to advance our understanding
of the way various country asymmetries contribute to the emergence of these
persistent phenomena in international capital markets.
The first essay studies the question of why developing countries experience net
equity inflows and bond outflows while developed countries experience net equity
outflows and bond inflows, the so-called ‘two-way capital flows’. The analysis is
based on an open-economy New Keynesian model of endogenous country portfolios
with representative agents in each country. The model is general enough to allow
an assessment of the roles of a wide range of country asymmetries in determining
the pattern of two-way capital flows.
While steady-state net country portfolios are zero in the first essay, the second
and third essays consider the situations where this is not true. The second essay
presents an OLG model of an endowment economy with a country asymmetry in
households’ patience. Global imbalances in net positions emerge. Gross portfolio
positions are obtained as the sum of standard self-hedging and additional
hedging due to external imbalances. The valuation effects of external adjustments
between creditor and debtor countries are rationalized.
By introducing non-tradable risks, the third essay models a production OLG
economy with a country asymmetry in wealth division. Global imbalances in
net positions again arise. Gross portfolio positions are composed of self-hedging,
hedging of non-tradable income and hedging of external interest payments, which
accounts for the reality of asymmetric asset home bias, i.e. although assets are
locally biased everywhere, the pattern is more pronounced in creditor countries.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/9489
Zhang, Ning

Four essays on UK takeovers : evidence from matching analysis
http://hdl.handle.net/10023/9488
In four empirical chapters, matching analysis is employed to estimate the effects of specific
contractual and regulatory arrangements on particular deal outcomes in the UK takeover
market. The first chapter highlights the positive effect of earnout financing on the acquiring
firms' returns in private target acquisitions. Furthermore, this chapter offers a detailed example
of how the non-parametric Propensity Score Matching approach, despite its growing
popularity in financial research, can lead to inaccurate inferences when relevant
private-target-specific factors are omitted from the analysis. The second chapter provides the first empirical
examination of the effect of the earnout's terms on the premium offered to the target firm's
shareholders, and how information asymmetry concerns influence this premium. Additionally,
the findings indicate that increases in the premia are negatively interpreted by the market in
non-earnout financed deals. However, this negative effect is neutralised in comparable earnout
financed deals. The third chapter provides the first empirical contribution that highlights the
deal- and firm-related factors that contribute to the growing reliance on the Scheme of
Arrangement, as a substitute for the Contractual Offer, in conducting UK public target deals.
Despite the concerns raised in the legal literature about the limited bargaining power of the
target shareholders under the Scheme, the robust conclusions indicate that such shareholders
manage to receive premia that are at least as high as the premia received by shareholders in
comparable Offer deals. The fourth chapter employs a hand-collected dataset that covers the
incidences of termination fee use in the UK takeover market. The main result is that, in the
period preceding the ban that The Panel on Takeovers and Mergers had imposed on termination
fees, the inclusion of these fees had a beneficial, or at worst neutral, effect on target
shareholders' wealth. Consequently, it is recommended that the Panel ends its ban.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/9488
Adra, Samer

Three essays on the wealth effects of deferred payments in corporate takeovers
http://hdl.handle.net/10023/9464
In three papers, I employ parametric and nonparametric methods in order to further examine the
determinants of value creation in M&A deals financed with contingent earnout payments. The
first paper investigates the short-run wealth effects of earnouts in deals in which financial
advisors are counseling the acquiring firms. The results suggest that relative to using non-earnout
payments, acquirers enjoy higher abnormal returns from earnout use only when consulting
financial advisors. Specifically, once accounting for potential selection bias, advised earnout-financed deals significantly outperform deals that are financed with: (a) earnouts without the
involvement of financial advisors and (b) non-earnouts regardless of the involvement of financial
advisors. Thus, the likely ability of financial advisors to efficiently address the inherent
complexities of the design of earnouts leads to greater acquirer gains. The second paper
examines the impact of the acquiring firm’s informational environment on the announcement
period wealth effects of earnout-financing. The results suggest that under increased information
asymmetry over the acquiring firm, the market’s reaction to an earnout-financed deal mainly
reflects its inference that the acquirer’s stock is underpriced, rather than the deal’s synergy
potential. Indeed, earnout acquirers are shown to be relatively undervalued prior to the
deal’s announcement. In contrast, the selection of earnouts by big acquirers with low information
asymmetry sends a strong signal for value creation that also prevents market participants from
inducing a size-related discount. Lastly, the third paper investigates the wealth effects of
earnouts in international changes of corporate control. The results suggest that when firms
choose to join a multinational network through the acquisition of a foreign company,
earnout financing offers a major value-creating opportunity, yielding greater
announcement-period abnormal returns to acquirers relative to domestic and remaining
cross-border deals. In contrast, the likely presence of agency problems and monitoring
costs appears to erode the expected synergy gains from non-initial earnout-financed
international M&As.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/9464
Alexakis, Dimitrios

Essays in competition policy, innovation and banking regulation
http://hdl.handle.net/10023/9456
This thesis investigates the optimal enforcement of competition policy in innovative
industries and in the banking sector. Chapter 2 analyses the welfare impact
of compulsory licensing in the context of unilateral refusals to license intellectual
property. When the risk-free rate is low, compulsory licensing is shown unambiguously
to increase consumer surplus. Compulsory licensing has an ambiguous
effect on total welfare, but is more likely to increase total welfare in industries
that are naturally less competitive. Compulsory licensing is also shown to be an
effective policy to protect competition per se. The chapter also demonstrates the
robustness of these results to alternative settings of R&D competition.
Chapter 3 develops a much more general framework for the study of optimal
competition policy enforcement in innovative industries. A major contribution of
this chapter is to separate carefully a firm's decision to innovate from its decision to
take some generic anti-competitive action. This allows us to differentiate between
firms' counterfactual behaviour, according to whether or not they would have
innovated in the absence of any potentially anti-competitive conduct. In contrast
to the existing literature, it is shown that optimal policy will be harsher towards firms
that have innovated in addition to taking a given anti-competitive action.
Chapter 4 develops a framework for competition policy in the banking sector,
which takes explicit account of capital regulation. In particular, conditions are
derived under which increases in the capital requirement increase the incentives of
banks to engage in a generic abuse of dominance in the loan market, and to exploit
depositors through the sale of ancillary financial products. Thus the central contribution
of this chapter is to clarify the conditions under which stability-focused
capital regulation conflicts with competition and consumer protection policy in
the banking sector.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10023/9456
Seifert, Jacob

Velocity in the long run : money and structural transformation
http://hdl.handle.net/10023/9282
Monetary velocity declines as economies grow. We argue that this is due to the process of structural transformation - the shift of workers from agricultural to non-agricultural production associated with rising income. A calibrated, two-sector model of structural transformation with monetary and non-monetary trade accurately generates the long run monetary velocity of the US between 1869 and 2013 as well as the velocity of a panel of 92 countries between 1980 and 2010. Three lessons arise from our analysis: 1) Developments in agriculture, rather than non-agriculture, are key in driving monetary velocity; 2) Inflationary policies are disproportionately more costly in richer than in poorer countries; and 3) Nominal prices and inflation rates are not 'always and everywhere a monetary phenomenon': the composition of output influences money demand and hence the secular trends of price levels.
Thu, 28 Jul 2016 00:00:00 GMT
http://hdl.handle.net/10023/9282
Mele, Antonio; Stefanski, Radoslaw (Radek)

Nominal Stability and Financial Globalization
http://hdl.handle.net/10023/9200
Over the past four decades, there has been a substantial increase in financial globalization, that is, rapid growth in gross external portfolio positions. There has also been a substantial fall in the variability of inflation. Many economists have conjectured that financial globalization contributed to the improved inflation performance. This paper explores the causal link running in the opposite direction. Using an open economy model with endogenous portfolio choice, it is shown that a monetary rule that reduces inflation variability tends to increase the size of gross external asset positions. This result appears to be robust across different modeling specifications.
Fri, 01 Aug 2014 00:00:00 GMT
http://hdl.handle.net/10023/9200
Devereux, Michael B.; Senay, Ozge; Sutherland, Alan

Legal uncertainty, competition law enforcement procedures and optimal penalties
http://hdl.handle.net/10023/8942
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. A first contribution, of more general interest, is to formalise the concept of “legal uncertainty”, relying on ideas in the literature on Law and Economics, but associating legal uncertainty with the information structure of what firms know about the process by which potentially harmful actions are treated by competition authorities. What firms know is clearly distinct from, though influenced by, the phenomenon of decision errors made by authorities. We use this framework to show that information structures with legal uncertainty need not imply lower welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard and penalties were exogenously fixed. Here we allow (a) for different information structures under an Effects-Based standard and (b) for endogenous penalties. We obtain two main results. Under all information structures (including complete legal uncertainty) an Effects-Based legal standard dominates a Per Se standard. Moreover, optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty. These conclusions run counter to a number of prescriptions by legal scholars in the recent literature.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/8942
Katsoulacos, Yannis; Ulph, David Tregear

Stabilization and commitment : forward guidance in economies with rational expectations
http://hdl.handle.net/10023/8901
We construct a theory of forward guidance in economic policy making in order to provide a framework for explaining the role and strategic advantages of including forward guidance as an explicit part of policy design. We do this by setting up a general policy problem in which forward guidance plays a role, and then examine the consequences for performance when that guidance is withdrawn. We show that forward guidance provides enhanced controllability and stabilizability—especially where such properties may not otherwise be available. As a by-product, we find that forward guidance limits the scope and incentives for time-inconsistent behavior in an economy whose policy goals are ultimately reachable. It also adds to the credibility of a set of policies.
Tue, 05 Apr 2016 00:00:00 GMT
http://hdl.handle.net/10023/8901
Hughes Hallett, Andrew; Acocella, Nicola

Asymmetric dominance, deferral and status quo bias in a behavioral model of choice
http://hdl.handle.net/10023/8895
This paper proposes and axiomatically characterizes a model of choice that builds on the criterion of partial dominance and allows for two types of avoidant behavior: *choice deferral* and *status quo bias*. These phenomena are explained in a unified way that allows for a clear theoretical distinction between them to be made. The model also explains the *strengthening of the attraction effect* that has been observed when deferral is permissible. Unlike other models of status quo biased behavior, the one analyzed in this paper builds on a *unique*, reference-independent preference relation that is acyclic and generally incomplete. When this relation is complete, the model reduces to rational choice.
Mon, 01 Feb 2016 00:00:00 GMT
http://hdl.handle.net/10023/8895
Gerasimou, Georgios

Has the financial crisis changed the business cycle characteristics of the GIIPS countries?
http://hdl.handle.net/10023/8878
Since the financial crisis erupted in 2008, the governments of Portugal, Ireland, Italy, Greece and Spain (GIIPS) have found themselves in a position where financing their debts has become increasingly difficult. As a result, these governments reduced government expenditure and/or increased taxes in order to reduce their deficits. Hence, whilst other countries in the Eurozone – notably Germany – enjoyed a recovery from the financial crisis, the GIIPS countries have only just started to recover. It is therefore no surprise that the business cycles of the northern and southern European countries diverged, and there was and still is a real fear of deflation. This poses a risk for the Eurozone, as it makes the common monetary policy less effective. In this paper we analyse these business cycles in detail. We ask whether the financial crisis has changed the characteristics of the business cycles of the GIIPS countries. For example, the austerity measures in Greece may lead to a convergence of government spending between Germany and Greece and to greater convergence of business cycles in both countries. If they do, then there is some hope that the common monetary policy will return to being effective in the future. But it may not. The austerity measures could also lead to greater divergence between Greece and Germany, in which case leaving the monetary union would not only be beneficial for Greece. It might be unavoidable.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10023/8878
Hughes Hallett, Andrew; Richter, Christian

Boom goes the price : giant resource discoveries and real exchange rate appreciation
http://hdl.handle.net/10023/8864
We estimate the effect of giant oil and gas discoveries on bilateral real exchange rates. The size and plausibly exogenous timing of such discoveries make them ideal for identifying the effects of an anticipated resource boom on prices. We find that a giant discovery with the value of a country's GDP increases the real exchange rate by 14% within 10 years following the discovery. The appreciation is nearly exclusively driven by an appreciation of the prices of non-tradable goods. We show that these empirical results are qualitatively and quantitatively in line with a calibrated model with forward looking behaviour and Dutch disease dynamics.
Sat, 21 May 2016 00:00:00 GMT
http://hdl.handle.net/10023/8864
Harding, Torfinn; Stefanski, Radoslaw (Radek); Toews, Gerhard

Social promotion in primary school : immediate and cumulated effects on attainment
http://hdl.handle.net/10023/8863
Does social promotion perpetuate shortfalls in student achievement, or can low-achieving students catch up with their peers when they are pushed ahead? Using data from Brazilian primary schools, this paper presents evidence of substantial catch-up among socially promoted students. After documenting sorting across schools in response to the policy, in particular away from gated-promotion private schools, we show that social promotion cycles have no significant effect on municipality enrolment figures or on the percentage of students dropping out mid-year. Cohorts of students exposed to episodes of social promotion display higher rates of age-appropriate study than their peers who faced the threat of repetition each year: by age eleven, 5.6 fewer students out of 100 have fallen behind in their studies, while 5.1 fewer students out of 100 are two or more years delayed. These gains, which arise mechanically during the period of social promotion, are highly persistent over time – even through educational stages which are typically high-stakes. This evidence suggests that, absent the social promotion policy, retention rates in Brazilian primary schools are inefficiently high: many promoted students successfully pass gateway exams after being pushed ahead, and go on to complete junior primary school on time.
Tue, 26 Apr 2016 00:00:00 GMT
http://hdl.handle.net/10023/8863
Leighton, Margaret Alice; Souza, Priscila; Straub, Stéphane

Representing a democratic constituency in negotiations : delegation versus ratification
http://hdl.handle.net/10023/8812
We consider negotiations where one of the parties is a group that must send a representative to the bargaining table. We examine the trade-offs that the group faces in choosing between two different regimes for this representation: (i) Delegation where the representative is granted full authority to reach an agreement, and (ii) Ratification, where any agreement reached by the representative requires a posterior ratification vote. We show that when the group has flexibility—to select the delegate or to set the majority threshold for ratification—the majority of the group favors delegation. Only when the flexibility is limited or delegates are (sufficiently) unreliable will the majority of the group prefer ratification.
The authors acknowledge financial support from the Generalitat de Catalunya through grant SGR2009-1051, and from the Ministerio de Economía y Competitividad through grants ECO2009-08820 and ECO2012-34046
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/8812
Cardona, D.; Ponsatí, C.

Local currency pricing, foreign monetary shocks and exchange rate policy
http://hdl.handle.net/10023/8811
The implications of local currency pricing (LCP) for monetary regime choice are analysed for a country facing foreign monetary shocks. In this analysis expenditure switching is potentially welfare reducing. This contrasts with the existing LCP literature, which focuses on productivity shocks and thus analyses a world where expenditure switching is welfare enhancing. This paper shows that, when home and foreign producers follow LCP, expenditure switching is absent and a floating rate is preferred by the home country. But when only home producers follow LCP, expenditure switching is present and a fixed rate can be welfare enhancing for the home country.
This research was supported by ESRC [grant number ES/I024174/1]
Tue, 01 Sep 2015 00:00:00 GMT
http://hdl.handle.net/10023/8811
Senay, Ozge; Sutherland, Alan

Transaction costs and institutions : investments in exchange
http://hdl.handle.net/10023/8722
This paper proposes a simple model for understanding transaction costs – their composition, size and policy implications. We distinguish between investments in institutions that facilitate exchange and the cost of conducting exchange itself. Institutional quality and market size are determined by the decisions of risk averse agents and conditions are discussed under which the efficient allocation may be decentralized. We highlight a number of differences with models where transaction costs are exogenous, including the implications for taxation and measurement issues.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/8722
Nolan, Charles; Trew, Alex William

Consumer behaviour with environmental and social externalities : implications for analysis and policy
http://hdl.handle.net/10023/8687
In this paper we summarise some of our recent work on consumer behaviour, drawing on recent developments in behavioural economics, particularly linked to sociology as much as psychology, in which consumers are embedded in a social context, so their behaviour is shaped by their interactions with other consumers. For the purpose of this paper we also allow consumption to cause environmental damage. Analysing the social context of consumption naturally lends itself to the use of game theoretic tools. We shall be concerned with two ways in which social interactions affect consumer preferences and behaviour: socially-embedded preferences, where the behaviour of other consumers affect an individual’s preferences and hence consumption (we consider two examples: conspicuous consumption and consumption norms) and socially-directed preferences where people display altruistic behaviour. Our aim is to show that building links between sociological and behavioural economic approaches to the study of consumer behaviour can lead to significant and surprising implications for conventional economic analysis and policy prescriptions, especially with respect to environmental policy.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/8687
Dasgupta, P.; Southerton, D.; Ulph, A.; Ulph, D.

State dependent choice
http://hdl.handle.net/10023/8662
We propose a theory of choices that are influenced by the psychological state of the agent. The central hypothesis is that the psychological state controls the urgency of the attributes sought by the decision maker in the available alternatives. While state dependent choice is less restricted than rational choice, our model does have empirical content, expressed by simple ‘revealed preference’ type of constraints on observable choice data. We demonstrate the applicability of simple versions of the framework to economic contexts. We show in particular that it can explain widely researched anomalies in the labour supply of taxi drivers.
Tue, 01 Sep 2015 00:00:00 GMT
http://hdl.handle.net/10023/8662
Manzini, Paola; Mariotti, Marco

The implications of unintended pregnancies for mental health in later life
http://hdl.handle.net/10023/8624
Despite decades of research on unintended pregnancies, we know little about the health implications for the women who experience them. Moreover, no study has examined the implications for women whose pregnancies occurred before Roe v. Wade was decided—nor whether the mental health consequences of these unintended pregnancies continue into later life. Using the Wisconsin Longitudinal Study, a 60-year ongoing survey, we examined associations between unwanted and mistimed pregnancies and mental health in later life, controlling for factors such as early life socioeconomic conditions, adolescent IQ, and personality. We found that in this cohort of mostly married and White women, who completed their pregnancies before the legalization of abortion, unwanted pregnancies were strongly associated with poorer mental health outcomes in later life.
Tue, 01 Mar 2016 00:00:00 GMT
http://hdl.handle.net/10023/8624
Herd, Pamela; Higgins, Jenny; Sicinski, Kamil; Merkurieva, Irina

Late career job loss and the decision to retire
http://hdl.handle.net/10023/8615
This paper provides an empirical analysis of the effect of involuntary job loss on the lifetime income and labor supply of older workers. I develop and estimate a dynamic programming model of retirement and savings with costly job search and exogenous layoffs. The structural estimates from the Health and Retirement Study data show that older displaced workers lose up to one and a half years of pre-displacement earnings over the remaining lifetime. Most of this loss (80%) is due to the permanent wage penalty following displacement, while the rest is explained by search frictions. Involuntary job loss makes an average worker retire fifteen months earlier. However, workers who were approaching retirement at the onset of the Great Recession will increase their labor supply by approximately five months in response to the joint impact of changes in the value of household assets and the probabilities of losing and finding a job.
My work with restricted data was supported by the Social Sciences Research Services at UW-Madison.
Tue, 12 Apr 2016 00:00:00 GMT
http://hdl.handle.net/10023/8615
Merkurieva, Irina

Welfare economics and bounded rationality : The case for model-based approaches
http://hdl.handle.net/10023/8577
In this paper we examine the problems facing a policy maker who observes inconsistent choices made by agents who are boundedly rational. We contrast a model-less and a model-based approach to welfare economics. We make the case for the model-based approach and examine its advantages as well as some problematic issues associated with it.
Mariotti gratefully acknowledges financial support through a Leverhulme Fellowship. Both authors acknowledge financial support from the ESRC through grant RES-000-22-3474.
Mon, 01 Dec 2014 00:00:00 GMT
http://hdl.handle.net/10023/8577
Manzini, Paola; Mariotti, Marco

Examining monetary policy transmission in the People's Republic of China – structural change models with a Monetary Policy Index
http://hdl.handle.net/10023/8576
This paper estimates augmented versions of the Investment–Saving curve for the People's Republic of China in an attempt to examine the relationship between monetary policy and the real economy. It endeavors to account for any structural break, nonlinearity, or asymmetry in the transmission process by estimating a breakpoint model and a Markov switching model. The Investment–Saving curve equations are estimated using a Monetary Policy Index, which has been calculated using the Kalman filter. This index accounts for the various monetary policy tools, both quantitative and qualitative, that the People's Bank of China has used over the period 1991–2014. The results of this paper suggest that monetary policy has an asymmetric effect depending on the level of output in relation to potential, and that the People's Republic of China's exchange rate policy has restricted the effectiveness of the People's Bank of China's monetary policy response.
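The abstract says the Monetary Policy Index is extracted with the Kalman filter but gives no state-space details. As a hedged illustration only, the sketch below filters a latent level out of a noisy observed series with a generic local-level model; the variances `q` and `r` and the synthetic data are assumptions, not the paper's specification or estimates.

```python
import numpy as np

def kalman_local_level(obs, q=0.1, r=1.0):
    """Minimal univariate local-level Kalman filter: the latent index mu_t
    follows a random walk with innovation variance q, and each observation
    equals mu_t plus measurement noise with variance r."""
    mu, p = 0.0, 1e6              # diffuse prior on the initial state
    filtered = []
    for y in obs:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        mu = mu + k * (y - mu)    # update with the new observation
        p = (1 - k) * p
        filtered.append(mu)
    return np.array(filtered)
```

Fed a constant series, the filtered index converges to that constant; fed a noisy policy indicator, it returns a smoothed latent path of the kind an index construction like the paper's would build on.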
The financial support of the Irish Research Council and The Paul Tansey Economics Postgraduate Research Scholarship is greatly appreciated.
Tue, 01 Mar 2016 00:00:00 GMT
http://hdl.handle.net/10023/8576
Egan, Paul Gerard; Leddin, Anthony J.

Dual Random Utility Maximisation
http://hdl.handle.net/10023/8488
Dual Random Utility Maximisation (dRUM) is Random Utility Maximisation when utility depends on only two states. This class has many relevant behavioural interpretations and practical applications. We show that dRUM is (generically) the only stochastic choice rule that satisfies Regularity and two new properties: Constant Expansion (if the choice probability of an alternative is the same across two menus, then it is the same in the merged menu), and Negative Expansion (if the choice probability of an alternative is less than one and differs across two menus, then it vanishes in the merged menu). We extend the theory to menu-dependent state probabilities. This accommodates prominent violations of Regularity such as the attraction, similarity and compromise effects.
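The two expansion properties can be checked numerically on a toy dRUM. The sketch below is an illustration only: the rankings `u1` and `u2` and the state probability `alpha` are hypothetical choices, not examples from the paper.

```python
# Toy dRUM: with probability alpha the agent maximises ranking u1,
# with probability 1 - alpha she maximises ranking u2.
def drum_prob(menu, u1, u2, alpha):
    """Choice probability of each alternative in `menu` under a two-state RUM."""
    best1 = max(menu, key=lambda x: u1[x])
    best2 = max(menu, key=lambda x: u2[x])
    return {x: alpha * (x == best1) + (1 - alpha) * (x == best2) for x in menu}

# Hypothetical rankings: u1 orders b > a > c, u2 orders c > a > b.
u1, u2, alpha = {"b": 3, "a": 2, "c": 1}, {"c": 3, "a": 2, "b": 1}, 0.7

# Constant Expansion: p(b|{b,c}) = p(b|{a,b}) = alpha, and merging preserves it.
p1 = drum_prob({"b", "c"}, u1, u2, alpha)["b"]        # alpha
p2 = drum_prob({"a", "b"}, u1, u2, alpha)["b"]        # alpha
p3 = drum_prob({"a", "b", "c"}, u1, u2, alpha)["b"]   # still alpha

# Negative Expansion: p(a|{a,c}) = alpha and p(a|{a,b}) = 1 - alpha are both
# below one and differ, so a is never chosen from the merged menu.
q1 = drum_prob({"a", "c"}, u1, u2, alpha)["a"]        # alpha
q2 = drum_prob({"a", "b"}, u1, u2, alpha)["a"]        # 1 - alpha
q3 = drum_prob({"a", "b", "c"}, u1, u2, alpha)["a"]   # 0
```

In the merged menu, `a` tops neither ranking (it is beaten by `b` under `u1` and by `c` under `u2`), which is exactly why its choice probability vanishes.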
Sun, 12 Mar 2017 00:00:00 GMT
http://hdl.handle.net/10023/8488
Manzini, Paola; Mariotti, Marco

Size invariant measures of association : characterization and difficulties
http://hdl.handle.net/10023/8443
A measure of association on cross-classification tables is row-size invariant if it is unaffected by the multiplication of all entries in a row by the same positive number. It is class-size invariant if it is unaffected by the multiplication of all entries in a class (i.e., a row or a column). We prove that every class-size invariant measure of association assigns to each cross-classification table a number which depends only on the cross-product ratios of its 2×2 subtables. We submit that the degree of association should increase when mass is shifted from cells containing a proportion of observations lower than what is expected under statistical independence to cells containing a proportion higher than expected–provided that total mass in each class remains unchanged. We prove that no continuous row-size invariant measure of association satisfies this monotonicity axiom if there are at least four rows.
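The cross-product ratio of a 2×2 subtable with entries a, b in one row and c, d in the other is (a·d)/(b·c), and its invariance to rescaling a whole row can be verified directly. The sketch below (the example table is hypothetical) computes all such ratios and checks that multiplying one row by a positive constant leaves every ratio unchanged.

```python
from itertools import combinations

def cross_product_ratios(table):
    """Cross-product ratio (a*d)/(b*c) of every 2x2 subtable of a
    cross-classification table, given as a list of rows of positive counts."""
    nrows, ncols = len(table), len(table[0])
    ratios = []
    for r1, r2 in combinations(range(nrows), 2):
        for c1, c2 in combinations(range(ncols), 2):
            a, b = table[r1][c1], table[r1][c2]
            c, d = table[r2][c1], table[r2][c2]
            ratios.append((a * d) / (b * c))
    return ratios

# Hypothetical 3x3 table and a row-size change: multiply row 0 by 5.
t = [[10, 20, 5],
     [4, 8, 12],
     [6, 3, 9]]
scaled = [[5 * x for x in t[0]]] + [row[:] for row in t[1:]]
```

Any ratio touching the scaled row picks up the factor 5 in both numerator and denominator, so the full list of ratios is identical for `t` and `scaled`.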
Sprumont acknowledges support from the Fonds de Recherche sur la Société et la Culture of Québec.
Fri, 01 May 2015 00:00:00 GMT
http://hdl.handle.net/10023/8443
Negri, Margherita; Sprumont, Yves

Partially dominant choice
http://hdl.handle.net/10023/8427
This paper proposes and analyzes a model of context-dependent choice with stable but incomplete preferences that is based on the idea of partial dominance: An alternative is chosen from a menu if it is not worse than anything in the menu and is also better than something else. This choice procedure provides a simple explanation of the attraction/decoy effect. It reduces to rational choice when preferences are complete in two ways that are made precise. Some preference identification and choice consistency properties associated with this model are analyzed, and certain ways in which its predictions differ from those of other recently proposed models of the attraction effect are also discussed.
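The partial-dominance rule as described here is easy to state in code: an alternative is chosen iff nothing in the menu is strictly better than it and it is strictly better than something else. The sketch below is an illustration under assumed preferences (the alternatives, the "better-than" map, and the decoy are all hypothetical), and reproduces the attraction/decoy effect on a three-element example.

```python
# better[x] = set of alternatives that x is strictly better than
# (a strict partial order; incomplete, so some pairs are incomparable).
def partially_dominant_choice(menu, better):
    """Choose x iff no y in the menu is strictly better than x,
    and x is strictly better than at least one other y in the menu."""
    chosen = set()
    for x in menu:
        not_worse = all(x not in better.get(y, set()) for y in menu)
        beats_something = any(y in better.get(x, set()) for y in menu if y != x)
        if not_worse and beats_something:
            chosen.add(x)
    return chosen

# Hypothetical preferences: a and b are incomparable; decoy d is worse than a only.
better = {"a": {"d"}}

no_decoy = partially_dominant_choice({"a", "b"}, better)        # empty: deferral
with_decoy = partially_dominant_choice({"a", "b", "d"}, better)  # {"a"}
```

From {a, b} nothing partially dominates, so nothing is chosen; adding the decoy gives `a` something to beat while leaving it undominated, so `a` alone is chosen.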
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/8427
Gerasimou, Georgios

Country Portfolios, collateral constraints and optimal monetary policy
http://hdl.handle.net/10023/8131
Recent literature shows that, when international financial trade is absent, optimal policy deviates significantly from strict inflation targeting, but when there is trade in equities and bonds, optimal policy is close to strict inflation targeting. A separate line of literature shows that collateral constraints can imply that cross-border portfolio holdings act as a shock transmission mechanism which significantly undermines risk sharing. This raises an important question: does asset trade in the presence of collateral constraints imply a greater role for monetary policy as a risk sharing device? This paper finds that the combination of asset trade with collateral constraints does imply a potentially large welfare gain from optimal policy (relative to inflation targeting). However, the welfare gain of optimal policy is even larger when there is no international asset trade (but collateral constraints bind within each country). In other words, the risk sharing role of asset trade tends to reduce the welfare gains from policy optimisation even when collateral constraints act as a shock transmission mechanism. This is true even when there are large and persistent collateral constraint shocks.
This research is supported by ESRC Award Number ES/I024174/1.
Fri, 29 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/8131
Senay, Ozge; Sutherland, Alan James

Optimal substantive standards for competition authorities
http://hdl.handle.net/10023/8121
Recent years have witnessed a significant resurgence in the debate concerning the optimal substantive standard to be used in the enforcement of competition law. One of the arguments proposed for using a Consumer Surplus standard is that, when firms can choose from a number of mutually exclusive actions, it may induce firms to adopt actions that lead to a higher level of total welfare than would a Total Welfare standard. This important basic insight, initially due to Lyons (2002), has been discussed and extended in the recent literature, always in the context of mergers. In this paper we generalise and re-examine this argument for any potentially anti-competitive action – we have in mind, in particular, actions often challenged as attempted monopolisation (abuse of dominance) or vertical restraints, taken by firms in different environments. We show that in the absence of any efficiencies the two standards lead to exactly the same outcomes, but a choice between them becomes important in the presence of efficiencies. With positive marginal-cost-reducing efficiencies we confirm the presence of what we term a Lyons-effect in our more general setting. We then examine how the choice of standard depends on a number of relevant parameters. Most important in terms of their policy implications are the results that the Consumer Surplus standard will be the optimal choice when the extant market power is significant, when the size of marginal-cost-reducing efficiency effects is large, and when the difference in the market power raising effects of mutually exclusive actions is large. These results are important since they suggest that in all cases where significant extant market power is a prerequisite for the enforcement of Competition Law it is best to use a Consumer Surplus standard.
Initial research was funded by an ESRC grant RES-052-23-221I “Optimal Enforcement and Decision Structures for Competition Policy” and subsequently it has been co-financed by the European Union (European Social Fund – ESF) and Greek National funds through the Operational Program “Education and Lifelong Learning” of the National Strategic reference Framework (NSRF) – Research funding program: ARISTEIA – Competition, Law Enforcement and Growth.
Thu, 01 Sep 2016 00:00:00 GMT
http://hdl.handle.net/10023/8121
Katsoulacos, Yannis; Metsiou, Eleni; Ulph, David Tregear

Testing for mild explosivity and bubbles in LME non-ferrous metals prices
http://hdl.handle.net/10023/8118
This paper applies the mildly explosive/multiple bubbles testing methodology developed by Phillips, Shi and Yu (2015a, International Economic Review, forthcoming) to examine the recent time series behaviour of the six main London Metal Exchange (LME) non-ferrous metals prices. We detect periods of mild explosivity in the cash and three-month futures price series in each of copper, nickel, lead, zinc and tin, but not in aluminium. We argue that convenience yield, though the formal counterpart to dividend yield in commodity markets, is not a useful basis on which to assess whether observed explosivity is indicative of bubbles (namely, departures of prices from their fundamental values). We construct other measures that provide evidence that suggests the observed explosivity in the non-ferrous metals market can be associated with tight physical markets.
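The Phillips–Shi–Yu machinery is built on right-tailed ADF statistics taken over expanding windows. The sketch below is a heavily simplified illustration of that idea only: no lag augmentation, no bootstrapped or tabulated critical values, and simulated data rather than LME prices, so it is not the paper's implementation.

```python
import numpy as np

def adf_tstat(y):
    """Right-tailed ADF t-statistic (no lags, with constant):
    regress dy_t on [1, y_{t-1}] and return the t-stat on y_{t-1}."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sadf(y, min_window=30):
    """Sup-ADF statistic: the supremum of ADF t-stats over forward-expanding
    windows; large positive values signal mildly explosive behaviour."""
    return max(adf_tstat(y[:n]) for n in range(min_window, len(y) + 1))

rng = np.random.default_rng(0)
n = 200
rw = np.cumsum(rng.normal(size=n))   # unit-root benchmark series
expl = np.empty(n)                   # mildly explosive: y_t = 1.03 * y_{t-1} + e_t
expl[0] = 1.0
for t in range(1, n):
    expl[t] = 1.03 * expl[t - 1] + rng.normal()
```

On these simulated series the sup-ADF statistic of the explosive process exceeds that of the random walk, which is the ordering the test exploits; real applications compare the statistic against simulated critical values.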
Figuerola-Ferretti thanks the Spanish Ministry of Education and Science for support under grants MICINN ECO2010-19357, ECO2012-36559 and ECO2013-46395; McCrorie thanks The Carnegie Trust for the Universities of Scotland for support under grant no. 31935.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/8118
Figuerola-Ferretti, Isabel; Gilbert, Christopher L.; McCrorie, Roderick

Optimal monetary policy, exchange rate misalignments and incomplete financial markets
http://hdl.handle.net/10023/8096
Recent literature on monetary policy in open economies shows that, when international financial trade is restricted to a single non-contingent bond, there are significant internal and external trade-offs that prevent optimal policy from simultaneously closing all welfare gaps. This implies an optimal policy which deviates from inflation targeting in order to offset real exchange rate misalignments. These simple models are, however, not good representations of modern financial markets. This paper therefore develops a more general and realistic two-country model of incomplete markets, where, in the presence of a wide range of stochastic shocks, there is international trade in nominal bonds denominated in the currencies of the two countries and equity claims on profit streams in the two countries. The analysis shows that, as in the recent literature, optimal policy deviates from inflation targeting in order to offset exchange rate misalignments, but the welfare benefits of optimal policy relative to inflation targeting are quantitatively smaller than found in simpler models of financial incompleteness. It is nevertheless found that optimal policy implies quantitatively significant stabilisation of the real exchange rate gap and trade balance gap compared to inflation targeting.
This research is supported by ESRC award number ES/I024174/1.
Wed, 27 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/8096
Senay, Ozge; Sutherland, Alan James

Measuring the effectiveness of anti-cartel interventions : a conceptual framework
http://hdl.handle.net/10023/8052
This paper develops a model of the birth and death of cartels in the presence of enforcement activities by a Competition Authority (CA). We distinguish three sets of interventions: (a) detecting, prosecuting and penalising cartels; (b) actions that aim to stop cartel activity in the short-term, immediately following successful prosecution; (c) actions that aim to prevent the re-emergence of prosecuted cartels in the longer term. The last two intervention activities have not been analysed in the existing literature. In addition we take account of the structure and toughness of penalties. In this framework the enforcement activity of a CA causes industries in which cartels form to oscillate between periods of competitive pricing and periods of cartel pricing. We determine the impact of CA activity on deterred, impeded, and suffered harm. We derive measures of both the total and the marginal effects on welfare resulting from competition authority interventions and show how these break down into measures of the Direct Effect of interventions (i.e. the effect due to cartel activity being impeded) and two Indirect/Behavioural Effects – on Deterrence and Pricing. Finally, we calibrate the model and estimate the fraction of the harm that CAs remove as well as the magnitude of total and marginal welfare effects of anti-cartel interventions.
Mon, 21 Dec 2015 00:00:00 GMT
http://hdl.handle.net/10023/8052
Katsoulacos, Yannis; Motchenkova, Evgenia; Ulph, David Tregear

Endogenous price flexibility and optimal monetary policy
http://hdl.handle.net/10023/8047
Much of the literature on optimal monetary policy uses models in which the degree of nominal price flexibility is exogenous. There are, however, good reasons to suppose that the degree of price flexibility adjusts endogenously to changes in monetary conditions. This article extends the standard new Keynesian model to incorporate an endogenous degree of price flexibility. The model shows that endogenizing the degree of price flexibility tends to shift optimal monetary policy towards complete inflation stabilization, even when shocks take the form of cost-push disturbances. This contrasts with the standard result obtained in models with exogenous price flexibility, which show that optimal monetary policy should allow some degree of inflation volatility to stabilize the welfare-relevant output gap.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10023/8047
Senay, Ozge; Sutherland, Alan

Endogenous infrastructure development and spatial takeoff
http://hdl.handle.net/10023/8013
Infrastructure development can affect the spatial distribution of economic activity and, by consequence, aggregate structural transformation and growth. The growth of trade and specialization of regions, in turn, affects the demand for infrastructure. This paper develops a model in which the evolution of the transport sector occurs alongside the growth in trade and output of agricultural and manufacturing firms. Simulation output captures aspects of the historical record of England and Wales over c.1710-1881. A number of counterfactuals demonstrate the role that the timing and spatial distribution of infrastructure development plays in determining the timing and pace of takeoff. There can be a role for policy in accelerating takeoff through improving infrastructure, but the spatial distribution of that improvement matters.
I am grateful for support from the Institute for New Economic Thinking grant # INO15-00025. Current status: Revise and Resubmit at AEJ Macro.
Fri, 17 Nov 2017 00:00:00 GMT
http://hdl.handle.net/10023/8013
Trew, Alex William

Procedures for eliciting time preferences
http://hdl.handle.net/10023/8000
We study three procedures to elicit attitudes towards delayed payments: the Becker-DeGroot-Marschak procedure; the second price auction; and the multiple price list. The payment mechanisms associated with these methods are widely considered as incentive compatible, thus if preferences satisfy Procedure Invariance, which is also widely (and often implicitly) assumed, they should yield identical time preference distributions. We find instead that the monetary discount rates elicited using the Becker-DeGroot-Marschak procedure are significantly lower than those elicited with a multiple price list. We show that the behavior we observe is consistent with an existing psychological explanation of preference reversals.
Precursors of this paper were originally circulated under the titles ‘The elicitation of time preferences’ and ‘A Case of Framing Effects: The Elicitation of Time Preferences’ - these received partial financial support through the ESRC grant RES-000-22-1636 (Manzini and Mariotti). Further funding was provided by CEEL.
Tue, 20 Oct 2015 00:00:00 GMThttp://hdl.handle.net/10023/80002015-10-20T00:00:00ZFreeman, DavidManzini, PaolaMariotti, MarcoMittone, LuigiA characterization of risk-neutral and ambiguity-averse behaviorhttp://hdl.handle.net/10023/7992
This paper studies a decision maker who chooses monetary bets/investment portfolios under pure uncertainty. Necessary and sufficient conditions on his preferences over these objects are provided for his choice behavior to be guided by the maxmin expected value rule, and therefore to exhibit both "risk neutrality" and ambiguity aversion. This result is obtained as an extension of a simple re-characterization of de Finetti's theorem on maximization of subjective expected value.
Wed, 09 Dec 2015 00:00:00 GMThttp://hdl.handle.net/10023/79922015-12-09T00:00:00ZGerasimou, GeorgiosEssays on corruption and development issueshttp://hdl.handle.net/10023/7784
Corruption is widely considered to have adverse effects on economic development through its negative impact on the volume and quality of public investment and the efficiency of government services. Conversely, many of these macro variables are determinants of corruption. However, there are few studies of this two-way interaction at the macro level. This thesis aims to extend the current literature on corruption and development by explicit investigation of two diverse channels through which corruption and economic development interact, namely women's share in politics and pollution. For each variable, the thesis presents a theoretical model in which corruption and economic development are determined endogenously in a dynamic general equilibrium framework. We have four main results. First, female bureaucrats commit fewer corrupt acts than male bureaucrats because they have lower incentives to be corrupt. Second, corruption affects pollution directly by reducing pollution abatement resources and indirectly through its impact on development. As pollution and development appear to have an inverse U-shaped relationship, the total effect of corruption on pollution depends on the economy's level of income. Third, we confirm a simultaneous relationship between corruption and development. Fourth, for sufficiently low income levels, corruption and poverty may be permanent features of the economy. In addition to the two theoretical models, the thesis also presents an empirical investigation of the causal effect of women's share in parliament on corruption using panel data and gender quotas as instruments for women's share in parliament. Our results overturn the consensus since we find no causal effect of women's share in parliament on corruption, except in a particular case of Africa with reserved seats quotas.
Thu, 01 Jan 2015 00:00:00 GMThttp://hdl.handle.net/10023/77842015-01-01T00:00:00ZLauw, ErvenPartial knowledge restrictions on the two-stage threshold model of choicehttp://hdl.handle.net/10023/7774
In the context of the two-stage threshold model of decision making, with the agent’s choices determined by the interaction of three “structural variables,” we study the restrictions on behavior that arise when one or more variables are exogenously known. Our results supply necessary and sufficient conditions for consistency with the model for all possible states of partial knowledge, and for both single- and multi-valued choice functions.
Thu, 05 Mar 2015 00:00:00 GMThttp://hdl.handle.net/10023/77742015-03-05T00:00:00ZManzini, PaolaMariotti, MarcoTyson, Christopher J.A case of framing effects : the elicitation of time preferenceshttp://hdl.handle.net/10023/7773
We compare three methods for the elicitation of time preferences in an experimental setting: the Becker-DeGroot-Marschak procedure (BDM); the second price auction; and the multiple price list format. The first two methods have been used rarely to elicit time preferences. All methods used are perfectly equivalent from a decision theoretic point of view, and they should induce the same ‘truthful’ revelation in dominant strategies. In spite of this, we find that framing does matter: the money discount rates elicited with the multiple price list tend to be higher than those elicited with the other two methods. In addition, our results shed some light on attitudes towards time, and they permit a broad classification of subjects depending on how the size of the elicited values varies with the time horizon.
Mon, 21 Jul 2014 00:00:00 GMThttp://hdl.handle.net/10023/77732014-07-21T00:00:00ZManzini, PaolaMariotti, MarcoStochastic complementarityhttp://hdl.handle.net/10023/7771
The Hicksian definition of complementarity and substitutability may not apply in contexts in which agents are not utility maximisers or where price or income variations, whether implicit or explicit, are not available. We look for tools to identify complementarity and substitutability satisfying the following criteria: they are behavioural (based only on observable choice data); model-free (valid whether the agent is rational or not); and they do not rely on price or income variation. We uncover a conflict between properties that any complementarity notion should intuitively possess. We discuss three different possible resolutions of the conflict.
Revised March 2016, September 2016 and March 2017
Fri, 30 Sep 2016 00:00:00 GMThttp://hdl.handle.net/10023/77712016-09-30T00:00:00ZManzini, PaolaMariotti, MarcoÜlkü, LevantThe major decision : Labor market implications of the timing of specialization in collegehttp://hdl.handle.net/10023/7770
College students in the United States choose their major much later than their counterparts in Europe. American colleges also typically allow students to choose when they wish to make their major decision. In this paper we estimate the benefits of such a policy: specifically, whether additional years of multi-disciplinary education help students make a better choice of specialization, and at what cost in foregone specialized human capital. We first document that, in the cross section, students who choose their major later are more likely to change fields on the labor market. We then build and estimate a dynamic model of college education where the optimal timing of specialization reflects a tradeoff between discovering comparative advantage and acquiring occupation-specific skills. Multi-disciplinary education allows students to learn about their comparative advantage, while specialized education is more highly valued in occupations related to that field. Estimates suggest that delaying specialization is informative, although noisy. Working in the field of comparative advantage accounts for up to 20% of a well-matched worker’s earnings. While education is transferable across fields with only a 10% penalty, workers who wish to change fields incur a large, one-time cost. We then use these estimates to compare the current college system to one which imposes specialization at college entry. In this counterfactual, the number of workers who switch fields drops from 24% to 20%; however, the share of workers who are not working in the field of their comparative advantage rises substantially, from 23% to 30%. Overall, expected earnings fall by 1.5%.
Fri, 16 Oct 2015 00:00:00 GMThttp://hdl.handle.net/10023/77702015-10-16T00:00:00ZBridet, LucLeighton, Margaret AliceFree trade vs. autarky under asymmetric Cournot oligopolyhttp://hdl.handle.net/10023/7769
The paper compares free trade with autarky in an asymmetric multi-country world with Cournot competition, constant returns and linear demand. We first derive conditions for free trade to hurt a country’s consumers, to benefit its firms, to induce it to export, to increase its output, and to raise its welfare. We further show these conditions are linked in a clear order, with each one implying the next. We then demonstrate that with different reservation prices trade can reduce world output and total consumer surplus as well as world welfare and correct oversights in earlier findings by Dong and Yuan (2010).
Thu, 01 Oct 2015 00:00:00 GMThttp://hdl.handle.net/10023/77692015-10-01T00:00:00ZAmir, RabahJin, Jim YongtaoTröge, MichaelA basic income can raise employment and welfare for a majorityhttp://hdl.handle.net/10023/7768
With growing interest in a universal basic income (BI), we provide new results for a majority to benefit from replacing (some) unemployment benefits with BI. Given any income distribution and an extensive margin, such a replacement always benefits those remaining unemployed, raises utilitarian welfare, and benefits a poor - or even a working - majority. Similar results follow with involuntary unemployment, and joint distributions of wages and costs of work. Moreover, using quasi-linear utility with intensive margins, marginal introduction of BI can still benefit a large proportion of the poor whose productivities are below the average, without raising unemployment.
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/10023/77682015-06-01T00:00:00ZFitzRoy, Felix RJin, Jim YongtaoOn the microeconomic foundations of linear demand for differentiated productshttp://hdl.handle.net/10023/7767
This paper provides a thorough exploration of the microeconomic foundations for the multivariate linear demand function for differentiated products that is widely used in industrial organization. A key finding is that strict concavity of the quadratic utility function is critical for the demand system to be well defined. Otherwise, the true demand function may be quite complex: multi-valued, non-linear and income-dependent. The solution of the first order conditions for the consumer problem, which we call a local demand function, may have quite pathological properties. We uncover failures of duality relationships between substitute products and complementary products, as well as the incompatibility between high levels of complementarity and concavity. The two-good case emerges as a special case with strong but non-robust properties. A key implication is that all conclusions derived via the use of linear demand that does not satisfy the law of demand ought to be regarded with some suspicion.
Thu, 09 Jul 2015 00:00:00 GMThttp://hdl.handle.net/10023/77672015-07-09T00:00:00ZAmir, RabahEricksonz, PhilipJin, Jim YongtaoBanking and industrializationhttp://hdl.handle.net/10023/7529
We exploit employment data from 10,528 parishes across nineteenth century England and Wales and find that a one standard deviation increase in finance employment increases the annualized growth rate of secondary labour by 0.8 percentage points. An endogenous growth model with finance and structural transformation motivates the empirical approach. Since initial banking access in 1817 may have been endogenously determined, we use instrumental variables to predict the location of country banks founded before the industrial take-off could possibly be expected. Distance and subsectoral analysis suggest that the effect of finance is highly localized and particularly strong for intermediate secondary sectors.
Fri, 01 Dec 2017 00:00:00 GMThttp://hdl.handle.net/10023/75292017-12-01T00:00:00ZHeblich, StephanTrew, Alex WilliamA comment on "Can relaxation of beliefs rationalize the winner's curse? An experimental study"http://hdl.handle.net/10023/7366
Ivanov, Levin, and Niederle (2010) use a common-value second-price auction experiment to reject beliefs-based explanations for the winner's curse. ILN's conclusion, however, stems from the misuse of theoretical arguments. Beliefs-based models are even compatible with some observations from ILN's experiment.
Date of Acceptance: 01/06/2014 (Manuscript Revised)
Thu, 01 Jan 2015 00:00:00 GMThttp://hdl.handle.net/10023/73662015-01-01T00:00:00ZCosta-Gomes, M.A.Shimoji, M.Measuring Tax Complexityhttp://hdl.handle.net/10023/7270
Mon, 10 Aug 2015 00:00:00 GMThttp://hdl.handle.net/10023/72702015-08-10T00:00:00ZUlph, David TregearRegional development of the Aswan region of Egypt with special reference to the Aswan High Damhttp://hdl.handle.net/10023/7127
This study is concerned with the problems of regional development. In modern times, the different institutions within the nation-state have multiplied in number and increased in size and complexity, so that it is becoming more and more difficult for these institutions, functioning centrally, to achieve economic and social progress and to create efficient political and administrative systems. Local diversities and interests as well as national goals need to be observed and coordinated in order to achieve the required progress. Accordingly, many countries are now tending to develop regional systems to suit their particular conditions, the aim being to lessen the risk of the central institutions' monopolizing political, economic and social powers, and at the same time to keep individual regions integrated into a single coherent unit for the good of the nation as a whole and for the good of the regions themselves.
The present work comprises two parts. Part One deals with definitions and some general problems of regional development. For the purpose of exemplifying these generalisations, as well as glancing at the background of Aswan Region, we shall refer at this stage to some cases from Egypt.
Part Two deals with regional development in the Aswan Region of Egypt. This Region may provide a useful example of economic and social development related to planned growth. The Aswan High Dam and the intensive development programmes in the Region play an important role in the changes that are taking place both in that Region and in the rest of Egypt. Part Two will also examine the background of Aswan Region, describing the High Dam and evaluating its consequences, then evaluating the regional development of Aswan Region and considering how far the concept of regional planning is applicable to the activities taking place there.
The study, it is emphasized, is intended to be primarily a descriptive and analytical one, and no attempt is made to construct mathematical regional and interregional models.
Mon, 01 Jan 1973 00:00:00 GMThttp://hdl.handle.net/10023/71271973-01-01T00:00:00ZHammouda, I. S.Choosing on influencehttp://hdl.handle.net/10023/6791
Interaction, the act of mutual influence between two or more individuals, is an essential part of daily life and economic decisions. Yet the micro-foundations of interaction are unexplored. This paper presents a first attempt at providing them. We study a decision procedure for interacting agents. According to our model, interaction occurs because individuals seek influence on those issues that they cannot solve on their own. Following a choice-theoretic approach, we provide simple properties that help detect interacting individuals. In this case, revealed preference analysis grants not only the underlying preferences but also the influence acquired. Our baseline model is based on two interacting individuals, though we extend the analysis to multi-individual environments.
Tue, 07 Apr 2015 00:00:00 GMThttp://hdl.handle.net/10023/67912015-04-07T00:00:00ZCuhadaroglu, TugceThe inventor balance and the functional specialization in global inventive activitieshttp://hdl.handle.net/10023/6789
We study the functional specialization whereby some countries contribute relatively more inventors vs. organizations in the production of inventions at a global scale. We propose a conceptual framework to explain this type of functional specialization, which posits the presence of feedbacks between two distinct sub-systems, each one providing inventors and organizations. We quantify the phenomenon by means of a new metric, the “inventor balance”, which we compute using patent data. We show that the observed imbalances, which are often conspicuous, are determined by several factors: the innovativeness of a country relative to its level of economic development, relative factor endowments, the degree of technological specialization and, last, cultural traits. We argue that the “inventor balance” is a useful indicator for policy makers, and its routine analysis could lead to better informed innovation policies.
Mon, 19 Jan 2015 00:00:00 GMThttp://hdl.handle.net/10023/67892015-01-19T00:00:00ZSavorelli, LucaPicci, LucioFactors affecting the financial success of motion pictures : what is the role of star power?http://hdl.handle.net/10023/6786
In the mid-1940s, the American film industry was on its way to its golden era as studios started mass-producing iconic feature films. Academics actively suggested that the escalating popularity of Hollywood stars was directly linked to box office success. Using data collected in 2007, this paper carries out an empirical investigation of how different factors, including star power, affect the revenue of ‘home-run’ movies in Hollywood. Due to the subjective nature of star power, two different approaches were used: (1) the number of Academy Award nominations and wins by the key players, and (2) the average lifetime gross revenue of films involving the key players preceding the sample year. It is found that the number of Academy Award nominations and wins was not statistically significant in generating box office revenue, whereas star power based on the second approach was statistically significant. Other significant factors were critics’ reviews, screen coverage and top distributor, while number of Academy Awards, MPAA rating, seasonality, being a sequel and popular genre were not statistically significant.
Mon, 12 Jan 2015 00:00:00 GMT | http://hdl.handle.net/10023/6786 | Selvaretnam, Geethanjali; Yang, Jen-Yuan
Measuring tax complexityhttp://hdl.handle.net/10023/6755
This paper critically examines a number of issues relating to the measurement of tax complexity. It starts with an analysis of the concept of tax complexity, distinguishing tax design complexity and operational complexity. It considers the consequences/costs of complexity, and then examines the rationale for measuring complexity. Finally it applies the analysis to an examination of an index of complexity developed by the UK Office of Tax Simplification (OTS).
Mon, 01 Dec 2014 00:00:00 GMT | http://hdl.handle.net/10023/6755 | Ulph, David Tregear
Choice, deferral and consistencyhttp://hdl.handle.net/10023/6754
In this paper we study decision making in situations where the individual's preferences are not assumed to be complete. First, we identify conditions that are necessary and sufficient for choice behavior in general domains to be consistent with maximization of a possibly incomplete preference relation. In this model of maximally dominant choice, the agent defers/avoids choosing at those and only those menus where a most preferred option does not exist. This allows for simple explanations of conflict-induced deferral and choice overload. It also suggests a criterion for distinguishing between indifference and incomparability based on observable data. A simple extension of this model also incorporates decision costs and provides a theoretical framework that is compatible with the experimental design that we propose to elicit possibly incomplete preferences in the lab. The design builds on the introduction of monetary costs that induce choice of a most preferred feasible option if one exists and deferral otherwise. Based on this design we found evidence suggesting that a quarter of the subjects in our study had incomplete preferences, and that these made significantly more consistent choices than a group of subjects who were forced to choose. The latter effect, however, is mitigated once data on indifferences are accounted for.
Gerasimou and Costa-Gomes gratefully acknowledge financial support from the British Academy (Grant SG122338)
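The model of maximally dominant choice described above has a simple algorithmic reading: choose an option that is weakly preferred to every other option on the menu, and defer when no such option exists (as can happen when preferences are incomplete). The function and the toy preference relation below are hypothetical illustrations, not the paper's formalism.

```python
def choose_or_defer(menu, weakly_prefers):
    """Maximally dominant choice: return an option weakly preferred to every
    other option in the menu; return None (defer) if none exists, e.g.
    because the preference relation is incomplete on this menu."""
    for x in menu:
        if all(weakly_prefers(x, y) for y in menu if y != x):
            return x
    return None  # deferral: no most preferred option

# Toy relation: a is preferred to b, c is preferred to b, but a and c
# are incomparable, so the three-item menu triggers deferral.
order = {("a", "b"), ("c", "b"), ("a", "a"), ("b", "b"), ("c", "c")}
wp = lambda x, y: (x, y) in order

print(choose_or_defer(["a", "b"], wp))       # 'a'
print(choose_or_defer(["a", "b", "c"], wp))  # None -> defer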
Fri, 26 Dec 2014 00:00:00 GMT | http://hdl.handle.net/10023/6754 | Costa-Gomes, Miguel; Cueva, Carlos; Gerasimou, Georgios
Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densitieshttp://hdl.handle.net/10023/6539
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred.
The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
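The round-off propagation the thesis describes is easy to reproduce in miniature. The sketch below uses Python's standard decimal module (not the thesis's Mathematica code) to evaluate exp(-30) by its Taylor series: at roughly double precision, catastrophic cancellation between the large alternating terms destroys the answer, while extra working digits recover it.

```python
from decimal import Decimal, getcontext

def exp_series(x, terms=120):
    """Evaluate exp(x) by its Taylor series in the current Decimal precision.
    For large negative x the alternating series suffers catastrophic
    cancellation unless enough working digits are carried."""
    total, term = Decimal(1), Decimal(1)
    for n in range(1, terms):
        term *= x / n
        total += term
    return total

x = Decimal(-30)

getcontext().prec = 16   # roughly double precision: cancellation dominates
low = exp_series(x)

getcontext().prec = 50   # extra working digits absorb the round-off
high = exp_series(x)

# exp(-30) ~ 9.36e-14; the 16-digit result is swamped by round-off error
# (the largest series terms are ~1e12), the 50-digit one is accurate.
print(low, high)
```

The same principle, carrying enough arithmetic precision that round-off never contaminates the leading digits, is what lets the thesis compare the competing Asian-option algorithms on a level footing.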
Mon, 01 Dec 2014 00:00:00 GMT | http://hdl.handle.net/10023/6539 | Cao, Liang
Testing the tunnel effect : comparison, age and happiness in UK and German panelshttp://hdl.handle.net/10023/6518
In contrast to previous results combining all ages, we find positive effects of comparison income on happiness for the under 45s and negative effects for those over 45. In the UK, these coefficients are several times the magnitude of own income effects. In West Germany, they cancel out to give no effect of comparison income on life satisfaction in the whole sample when controlling for fixed effects, time-in-panel, and age-groupings. Pooled OLS estimation gives the usual negative comparison effect in the whole sample for both West Germany and the UK. The residual age-happiness relationship is hump-shaped in all three countries. Results are consistent with a simple life cycle model of relative income under uncertainty.
Tue, 30 Dec 2014 00:00:00 GMT | http://hdl.handle.net/10023/6518 | FitzRoy, Felix R; Nolan, Michael; Steinhardt, Max; Ulph, David Tregear
Conventional and unconventional monetary policy in a DSGE model with an interbank market frictionhttp://hdl.handle.net/10023/6372
This thesis examines both conventional and unconventional monetary policies in a DSGE model with an interbank market friction. The crisis of 2007-2009 affected economies worldwide and forced central banks to implement not just conventional monetary policies but also direct interventions in financial markets. We investigate a DSGE model with financial frictions to test conventional and unconventional monetary policies.
The thesis starts from the modelling framework of Gertler and Kiyotaki (2010) to examine eight different shocks under imperfect interbank market conditions. Unlike Gertler and Kiyotaki (2010), who consider only the two extreme cases for the banking system, I first extend the analysis to a case in between the two extremes that they examined. The shocks considered include supply and demand shocks as well as two shocks from the financial system itself (an interbank market shock and a shock to the deposit market). It is found that a negative shock to the interbank market has only a moderate impact on the banking system, whereas a shock to the deposit market has a much stronger impact. Even though the impacts of these shocks are not large, it is shown that the financial frictions magnify the effects of other shocks.
The model is extended to include price stickiness. A modified Taylor rule is analysed to test how conventional monetary policy should respond to the shocks in the presence of financial frictions; specifically, the credit spread is added as a third term in the monetary policy rule. The stabilising properties of the policy rule are analysed and a welfare analysis is conducted. The model is further developed to include unconventional monetary policy in the form of direct lending to private sector firms from the central bank. A policy rule for unconventional policy is tested and its stabilising and welfare properties are analysed.
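An augmented Taylor rule of the kind described, with the credit spread entering as a third term, can be sketched as follows. The coefficients and the sign of the spread response are illustrative assumptions, not the thesis's calibration.

```python
def taylor_rule(inflation, output_gap, credit_spread,
                r_star=2.0, pi_star=2.0,
                phi_pi=1.5, phi_y=0.5, phi_s=0.5):
    """Modified Taylor rule: the policy rate responds to inflation and the
    output gap as usual, plus a third term that leans against the credit
    spread. All coefficients here are illustrative placeholders."""
    return (r_star + inflation
            + phi_pi * (inflation - pi_star)
            + phi_y * output_gap
            - phi_s * credit_spread)   # a widening spread lowers the rate

# With inflation on target and no gap or spread, the rule gives the neutral rate:
print(taylor_rule(2.0, 0.0, 0.0))  # 4.0
# A 1-point widening of the credit spread pulls the prescribed rate down:
print(taylor_rule(2.0, 0.0, 1.0))  # 3.5
```

The negative sign on the spread term encodes the stabilisation logic being tested: when financial frictions widen spreads, the central bank eases to offset the tightening of credit conditions.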
Fri, 27 Jun 2014 00:00:00 GMT | http://hdl.handle.net/10023/6372 | Chen, Jinyu
Keeping up with the Joneses : who loses out?http://hdl.handle.net/10023/5771
This paper investigates how well-being varies with individual wage rates when individuals care about relative consumption and so there are Veblen effects – Keeping up with the Joneses – leading individuals to over-work. In the case where individuals compare themselves with their peers – those with the same wage-rate - it is shown that Keeping up with the Joneses leads some individuals to work who otherwise would have chosen not to. Moreover for these individuals well-being is a decreasing function of the wage rate - contrary to standard theory. So those who are worst-off in society are no longer those on the lowest wage.
Mon, 01 Dec 2014 00:00:00 GMT | http://hdl.handle.net/10023/5771 | Ulph, David Tregear
Beliefs and actions in the trust game : creating instrumental variables to estimate the causal effecthttp://hdl.handle.net/10023/5701
In many economic contexts, an elusive variable of interest is the agent's belief about relevant events, e.g. about other agents' behavior. A growing number of surveys and experiments ask participants to state beliefs explicitly but little is known about the causal relation between beliefs and actions. This paper discusses the possibility of creating exogenous instrumental variables for belief statements, by informing the agent about exogenous manipulations of the relevant events. We conduct trust game experiments where the amount sent back by the second player (trustee) is exogenously varied. The procedure allows detecting causal links from beliefs to actions under plausible assumptions. The IV-estimated effect is significant, confirming the causal role of beliefs.
We are grateful for financial support from the U.K. Economic and Social Research Council (ESRC-RES-1973), the European Research Council (ERC-263412) and the ELSE centre at UCL.
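The identification idea, using the exogenous manipulation as an instrument for stated beliefs, can be sketched with a simple IV estimator on synthetic data. The data-generating process below is hypothetical and is not the paper's experimental data; it only illustrates why OLS is biased when a confounder drives both beliefs and actions, while the instrument recovers the causal effect.

```python
import numpy as np

def iv_2sls(y, x, z):
    """IV estimate with one endogenous regressor x and one instrument z:
    beta_IV = cov(z, y) / cov(z, x), computed on demeaned variables."""
    z = z - z.mean(); x = x - x.mean(); y = y - y.mean()
    return (z @ y) / (z @ x)

rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                 # exogenous manipulation (instrument)
u = rng.normal(size=n)                 # unobserved confounder
x = z + u + rng.normal(size=n)         # stated belief (endogenous)
y = 0.5 * x + u + rng.normal(size=n)   # action; true causal effect is 0.5

xd, yd = x - x.mean(), y - y.mean()
print(iv_2sls(y, x, z))    # close to the true effect 0.5
print((xd @ yd) / (xd @ xd))  # naive OLS, biased upward by the confounder
```

Here the confounder u inflates the OLS coefficient (toward roughly 0.83 under these assumed variances), while the IV estimate converges to 0.5, mirroring the paper's motivation for exogenously varying the trustee's back-transfer.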
Sat, 01 Nov 2014 00:00:00 GMT | http://hdl.handle.net/10023/5701 | Costa-Gomes, Miguel; Huck, Steffan; Weizsaecker, Georg
Moral behaviour, altruism and environmental policyhttp://hdl.handle.net/10023/5577
Free-riding is often associated with self-interested behaviour. However if there is a global mixed pollutant, free-riding will arise if individuals calculate that their emissions are negligible relative to the total, so total emissions and hence any damage that they and others suffer will be unaffected by whatever consumption choice they make. In this context consumer behaviour and the optimal environmental tax are independent of the degree of altruism. For behaviour to change, individuals need to make their decisions in a different way. We propose a new theory of moral behaviour whereby individuals recognise that they will be worse off by not acting in their own self-interest, and balance this cost off against the hypothetical moral value of adopting a Kantian form of behaviour, that is by calculating the consequences of their action by asking what would happen if everyone else acted in the same way as they did. We show that: (a) if individuals behave this way, then altruism matters and the greater the degree of altruism the more individuals cut back their consumption of a ’dirty’ good; (b) nevertheless the optimal environmental tax is exactly the same as that emerging from classical analysis where individuals act in self-interested fashion.
Marc Daube gratefully acknowledges financial support from the Economic and Social Research Council, grant number ES/J500136/1
Mon, 01 Feb 2016 00:00:00 GMT | http://hdl.handle.net/10023/5577 | Daube, Marc Philip Klaus; Ulph, David Tregear
Consumption decisions when people value conformityhttp://hdl.handle.net/10023/5568
In this paper we assume that for some commodities individuals may wish to adjust their levels of consumption from their normal Marshallian levels so as to match the consumption levels of a group of other individuals, in order to signal that they conform to the consumption norms of that group. Unlike Veblen’s concept of conspicuous consumption this can mean that some individuals may reduce their consumption of the relevant commodities. We model this as a three-stage game in which individuals first decide whether or not they wish to adhere to a norm, then decide which norm they wish to adhere to, and finally decide their actual consumption. We present a number of examples of the resulting equilibria, and then discuss the potential policy implications of this model.
Wed, 01 Oct 2014 00:00:00 GMT | http://hdl.handle.net/10023/5568 | Ulph, David Tregear; Ulph, Alistair
Penalizing cartels : the case for basing penalties on price overchargehttp://hdl.handle.net/10023/5567
In this paper we set out the welfare economics based case for imposing cartel penalties on the cartel overcharge rather than on the more conventional bases of revenue or profits (illegal gains). To do this we undertake a systematic comparison of a penalty based on the cartel overcharge with three other penalty regimes: fixed penalties, penalties based on revenue, and penalties based on profits. Our analysis is the first to compare these regimes in terms of their impact on both (i) the prices charged by those cartels that do form; and (ii) the number of stable cartels that form (deterrence). We show that the class of penalties based on profits is identical to the class of fixed penalties in all welfare-relevant respects. For the other three types of penalty we show that, for those cartels that do form, penalties based on the overcharge produce lower prices than those based on profits, while penalties based on revenue produce the highest prices. Further, in conjunction with the above result, our analysis of cartel stability (and thus deterrence) shows that penalties based on the overcharge out-perform those based on profits, which in turn out-perform those based on revenue in terms of their impact on each of the following welfare criteria: (a) average overcharge; (b) average consumer surplus; (c) average total welfare.
Wed, 24 Sep 2014 00:00:00 GMT | http://hdl.handle.net/10023/5567 | Ulph, David Tregear; Katsoulacos, Yannis; Motchenkova, Evgenia
Keeping up with the Joneses : who loses out?http://hdl.handle.net/10023/5566
This paper investigates how well-being varies with individual wage rates when individuals care about relative consumption and so there are Veblen effects – Keeping up with the Joneses – leading individuals to over-work. In the case where individuals compare themselves with their peers – those with the same wage-rate - it is shown that Keeping up with the Joneses leads some individuals to work who otherwise would have chosen not to. Moreover for these individuals well-being is a decreasing function of the wage rate - contrary to standard theory. So those who are worst-off in society are no longer those on the lowest wage.
Sat, 20 Sep 2014 00:00:00 GMT | http://hdl.handle.net/10023/5566 | Ulph, David Tregear
Optimal universal and categorical benefits with classification errors and imperfect enforcementhttp://hdl.handle.net/10023/5565
We determine the optimal combination of a universal benefit, B, and categorical benefit, C, for an economy in which individuals differ in both their ability to work – modelled as an exogenous zero quantity constraint on labour supply – and, conditional on being able to work, their productivity at work. C is targeted at those unable to work, and is conditioned in two dimensions: ex-ante an individual must be unable to work to be awarded the benefit, whilst ex-post a recipient must not subsequently work. However, the ex-ante conditionality may be imperfectly enforced due to Type I (false rejection) and Type II (false award) classification errors, whilst, in addition, the ex-post conditionality may be imperfectly enforced. If there are no classification errors – and thus no enforcement issues – it is always optimal to set C>0, whilst B=0 only if the benefit budget is sufficiently small. However, when classification errors occur, B=0 only if there are no Type I errors and the benefit budget is sufficiently small, while the conditions under which C>0 depend on the enforcement of the ex-post conditionality. We consider two discrete alternatives. Under No Enforcement, C>0 only if the test administering C has some discriminatory power. In addition, social welfare is decreasing in the propensity to make each type of error. However, under Full Enforcement, C>0 for all levels of discriminatory power, including that of no discriminatory power. Furthermore, whilst social welfare is decreasing in the propensity to make Type I errors, there are certain conditions under which it is increasing in the propensity to make Type II errors. This implies that there may be conditions under which it would be welfare enhancing to lower the chosen eligibility threshold – supporting the suggestion by Goodin (1985) to “err on the side of kindness”.
Sun, 03 Aug 2014 00:00:00 GMT | http://hdl.handle.net/10023/5565 | Ulph, David Tregear; Slack, Sean Edward
Legal uncertainty, competition law enforcement procedures and optimal penaltieshttp://hdl.handle.net/10023/5564
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. The first (which is of general interest beyond competition policy) is to clarify the concept of “legal uncertainty”, relating it to ideas in the literature on Law and Economics, but formalising the concept through various information structures which specify the probability that each firm attaches – at the time it takes an action – to the possibility of its being deemed anti-competitive were it to be investigated by a Competition Authority. We show that the existence of Type I and Type II decision errors by competition authorities is neither necessary nor sufficient for the existence of legal uncertainty, and that information structures with legal uncertainty can generate higher welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard and also penalties were exogenously fixed. Here we allow for (a) different information structures under an Effects-Based standard and (b) endogenous penalties. We obtain two main results: (i) considering all information structures a Per Se standard is never better than an Effects-Based standard; (ii) optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty.
Tue, 01 Jul 2014 00:00:00 GMT
http://hdl.handle.net/10023/5564
Ulph, David Tregear; Katsoulacos, Yannis

Moral behaviour, altruism and environmental policy
http://hdl.handle.net/10023/5563
Free-riding is often associated with self-interested behaviour. However, if there is a global mixed pollutant, free-riding will arise if individuals calculate that their emissions are negligible relative to the total, so total emissions, and hence any damage that they and others suffer, will be unaffected by whatever consumption choice they make. In this context, consumer behaviour and the optimal environmental tax are independent of the degree of altruism. For behaviour to change, individuals need to make their decisions in a different way. We propose a new theory of moral behaviour whereby individuals recognise that they will be worse off by not acting in their own self-interest, and balance this cost against the hypothetical moral value of adopting a Kantian form of behaviour, that is, by calculating the consequences of their action by asking what would happen if everyone else acted in the same way as they did. We show that: (a) if individuals behave this way, then altruism matters, and the greater the degree of altruism the more individuals cut back their consumption of a ‘dirty’ good; (b) nevertheless, the optimal environmental tax is exactly the same as that emerging from classical analysis where individuals act in self-interested fashion.
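The contrast between self-interested and Kantian reasoning can be sketched numerically. The functional forms below are illustrative assumptions, not the paper's model: a quadratic private benefit v(x) = a·x − b·x²/2, a constant damage d per unit of aggregate emissions, N identical consumers, and an altruism weight on the damage suffered by the other N − 1 consumers.

```python
# Hedged sketch of self-interested vs Kantian consumption of a 'dirty' good.
# All parameters (a, b, d, n, altruism) are illustrative, not from the paper.

def selfish_choice(a, b):
    # A self-interested consumer treats own emissions as negligible relative
    # to the total, so damage drops out of the first-order condition:
    # v'(x) = a - b*x = 0.
    return a / b

def kantian_choice(a, b, d, n, altruism):
    # Kantian reasoning asks "what if everyone consumed as I do?", so the
    # consumer internalises own damage plus the altruism-weighted damage
    # imposed on the other n-1 consumers:
    # a - b*x - d - altruism*(n - 1)*d = 0.
    x = (a - d - altruism * (n - 1) * d) / b
    return max(x, 0.0)

a, b, d, n = 10.0, 1.0, 0.05, 100
x_selfish = selfish_choice(a, b)
x_low = kantian_choice(a, b, d, n, altruism=0.2)
x_high = kantian_choice(a, b, d, n, altruism=0.8)
# Under the selfish rule altruism is irrelevant; under the Kantian rule,
# greater altruism means a larger cutback -- result (a) in the abstract.
print(x_selfish, x_low, x_high)
```

In this toy parameterisation the self-interested choice ignores damage entirely, while the Kantian choice falls as the altruism weight rises, consistent with result (a).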
This work was supported by funding from the ESRC.
Mon, 03 Feb 2014 00:00:00 GMT
http://hdl.handle.net/10023/5563
Ulph, David Tregear; Daube, Marc Philip Klaus

Decision errors, legal uncertainty and welfare : a general treatment
http://hdl.handle.net/10023/5562
This paper provides a general treatment of the implications for welfare of legal uncertainty. We distinguish legal uncertainty from decision errors: though the former can be influenced by the latter, the latter are neither necessary nor sufficient for the existence of legal uncertainty. We show that an increase in decision errors will always reduce welfare. However, for any given level of decision errors, information structures involving more legal uncertainty can improve welfare. This always holds, even when there is complete legal uncertainty, provided sanctions on socially harmful actions are set at their optimal level. This radically transforms one’s perception of the “costs” of legal uncertainty. We also provide general proofs for two results previously established under restrictive assumptions. The first is that Effects-Based enforcement procedures may welfare-dominate Per Se (or object-based) procedures, and will always do so when sanctions are optimally set. The second is that optimal sanctions may well be higher under enforcement procedures involving more legal uncertainty.
Sat, 01 Feb 2014 00:00:00 GMT
http://hdl.handle.net/10023/5562
Ulph, David Tregear; Katsoulacos, Yannis

Consumer behaviour in a social context : implications for environmental policy
http://hdl.handle.net/10023/5561
In this paper we summarise some of our recent work on consumer behaviour, drawing on recent developments in behavioural economics, in which consumers are embedded in a social context, so their behaviour is shaped by their interactions with other consumers. For the purpose of this paper we also allow consumption to cause environmental damage. Analysing the social context of consumption naturally lends itself to the use of game theoretic tools, and indicates that we seek to develop links between economics and sociology rather than economics and psychology, which has been the more predominant field for work in behavioural economics. We shall be concerned with three sets of issues: conspicuous consumption, consumption norms and altruistic behaviour. Our aim is to show that building links between sociological and economic approaches to the study of consumer behaviour can lead to significant and surprising implications for conventional economic policy prescriptions, especially with respect to environmental policy.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10023/5561
Ulph, David Tregear; Dasgupta, Partha; Southerton, Dale; Ulph, Alistair

Liquidity traps and expectation dynamics : fiscal stimulus or fiscal austerity?
http://hdl.handle.net/10023/5541
We examine global dynamics under infinite-horizon learning in New Keynesian models where the interest-rate rule is subject to the zero lower bound. The intended steady state is locally but not globally stable. Unstable deflationary paths emerge after large pessimistic shocks to expectations. For large expectation shocks that push interest rates to the zero bound, a temporary fiscal stimulus, or in some cases a policy of fiscal austerity, will insulate the economy from deflation traps if the policy is appropriately tailored in magnitude and duration. A fiscal stimulus "switching rule," which automatically kicks in without discretionary fine-tuning, can be equally effective.
Financial support from National Science Foundation Grant no. SES-1025011 is gratefully acknowledged.
Fri, 01 Aug 2014 00:00:00 GMT
http://hdl.handle.net/10023/5541
Benhabib, Jess; Evans, George W; Honkapohja, Seppo

Stability and competitive equilibrium in trading networks
http://hdl.handle.net/10023/5519
We introduce a model in which agents in a network can trade via bilateral contracts. We find that when continuous transfers are allowed and utilities are quasilinear, the full substitutability of preferences is sufficient to guarantee the existence of stable outcomes for any underlying network structure. Furthermore, the set of stable outcomes is essentially equivalent to the set of competitive equilibria, and all stable outcomes are in the core and are efficient. By contrast, for any domain of preferences strictly larger than that of full substitutability, the existence of stable outcomes and competitive equilibria cannot be guaranteed.
Kominers thanks the National Science Foundation (grant CCF-1216095 and a graduate research fellowship), the Yahoo! Key Scientific Challenges Program, the John M. Olin Center (a Terence M. Considine Fellowship), the American Mathematical Society, and the Simons Foundation for support. Nichifor thanks the Netherlands Organisation for Scientific Research (grant VIDI-452-06-013) and the Scottish Institute for Research in Economics for support. Ostrovsky thanks the Alfred P. Sloan Foundation for support. Westkamp thanks the German Science Foundation for support.
Tue, 01 Oct 2013 00:00:00 GMT
http://hdl.handle.net/10023/5519
Hatfield, John William; Kominers, Scott Duke; Nichifor, Alexandru; Ostrovsky, Michael; Westkamp, Alexander

Finance and balanced growth
http://hdl.handle.net/10023/5354
We study the relationships between various concepts of financial development and balanced economic growth. A model of endogenous growth that incorporates roles for both financial efficiency and access to financial services permits a better understanding of the relationship between the size of the financial sector (value added) and growth. Higher financial value added results from some, but not all, kinds of finance-driven growth. If greater access rather than greater efficiency generates higher growth, then value added and growth can be positively correlated. We present some preliminary empirical results that support the importance of access alongside efficiency in explaining cross-country variations in growth.
Sun, 01 Jun 2014 00:00:00 GMT
http://hdl.handle.net/10023/5354
Trew, Alex William

Essays on governance, public finance, and economic development
http://hdl.handle.net/10023/5282
This thesis is composed of three distinct but related essays. The first essay studies the role of the size of the economy in mitigating the impact of public sector corruption on economic development. The analysis is based on a dynamic general equilibrium model in which growth occurs endogenously through the invention and manufacture of new intermediate goods that are used in the production of output. Potential innovators decide to enter the market considering the fraction of future profits that may be lost to corruption. We find that, depending on the number of times bribes are demanded, the size of the economy may be an important factor in determining the effects of corruption on innovation and economic growth.

The second essay presents an occupational choice model in which a household can choose formal entrepreneurship, informal entrepreneurship, or subsistence livelihood. Credit market constraints and initial wealth conditions (bequest) determine an agent’s occupational choice. Corruption arises when bureaucrats exchange investment permits for bribes, and worsens credit market constraints. Equilibrium with corruption is characterised by an increase (decrease) in informal (formal) entrepreneurship and a decrease in formal entrepreneurship wealth. Since corruption-induced credit-constrained households choose informal entrepreneurship as opposed to subsistence livelihood income in the formal sector, the informal economy is shown to mitigate the extent of income inequality.

The third essay explains the role of bureaucratic corruption in undermining public service delivery, public finance, and economic development through incentivising tax evasion. The analysis is based on a dynamic general equilibrium model in which a taxable household observes the quality of public services and decides whether or not to fulfil its tax obligation. Bureaucratic corruption compromises the quality of public services such that a taxable household develops incentives to evade tax payment. We show that corruption-induced tax evasion increases the likelihood of a budget deficit, renders increases in tax payable counter-productive, and aggravates the negative effect of bureaucratic corruption on economic development.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10023/5282
Okumu, Ibrahim Mike

Dominance solvable games with multiple payoff criteria
http://hdl.handle.net/10023/5102
Two logically distinct and permissive extensions of iterative weak dominance are introduced for games with possibly vector-valued payoffs. The first, iterative partial dominance, builds on an easy-to-check condition but may lead to solutions that do not include any (generalized) Nash equilibria. However, the second and intuitively more demanding extension, iterative essential dominance, is shown to be an equilibrium refinement. The latter result includes Moulin's (1979) classic theorem as a special case when all players' payoffs are real-valued. Therefore, essential dominance solvability can be a useful solution concept for making sharper predictions in multicriteria games that feature a plethora of equilibria.
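For the real-valued special case mentioned above (Moulin's setting, not the paper's vector-valued extensions), iterative weak dominance can be sketched directly. The bimatrix representation and the particular elimination order below are illustrative assumptions; note that, unlike strict dominance, the outcome of iterative weak dominance can in general depend on the order of elimination.

```python
# Hedged sketch: iterated elimination of weakly dominated pure strategies
# in a two-player game with real-valued payoffs (the scalar special case).
# payoffs_row[i][j] / payoffs_col[i][j]: payoffs when row plays i, col plays j.

def iterated_weak_dominance(payoffs_row, payoffs_col):
    rows = list(range(len(payoffs_row)))
    cols = list(range(len(payoffs_row[0])))

    def weakly_dominated(own, others, payoff):
        # Return an own-strategy s dominated by some d: d is never worse
        # against any surviving opponent strategy, and strictly better
        # against at least one.
        for s in own:
            for d in own:
                if d == s:
                    continue
                if all(payoff(d, t) >= payoff(s, t) for t in others) and \
                   any(payoff(d, t) > payoff(s, t) for t in others):
                    return s
        return None

    changed = True
    while changed:
        changed = False
        s = weakly_dominated(rows, cols, lambda i, j: payoffs_row[i][j])
        if s is not None:
            rows.remove(s)
            changed = True
            continue
        s = weakly_dominated(cols, rows, lambda j, i: payoffs_col[i][j])
        if s is not None:
            cols.remove(s)
            changed = True
    return rows, cols

# Prisoner's dilemma (strict dominance implies weak): solvable to (defect, defect).
rows, cols = iterated_weak_dominance([[3, 0], [5, 1]], [[3, 5], [0, 1]])
print(rows, cols)
```

A game is dominance solvable in this sense when the surviving sets are singletons; in games with no dominated strategies (e.g. matching pennies) the procedure leaves everything intact.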
Fri, 25 Jul 2014 00:00:00 GMT
http://hdl.handle.net/10023/5102
Gerasimou, Georgios

University funding : impact on teaching and research
http://hdl.handle.net/10023/4823
We address the following question: how does a higher education funding system influence the trade-off that universities make between research and teaching? We do so by constructing a model that allows universities to choose actively the quality of their teaching and research when faced with different funding systems characterised by the pivotal role of the university funding budget constraint. In particular, we derive the feasible sets that face universities under such systems and show how, as the parameters of the system (the research block grant element, the research quality premium and the incentives-triggering quality threshold) are varied, the nature of the university system itself changes. Different ‘cultures’ of the university system emerge such as the ‘research elite’ and the ‘binary divide’.
Thu, 12 Jan 2012 00:00:00 GMT
http://hdl.handle.net/10023/4823
Beath, John A; Poyago-Theotoky, Joanna; Ulph, David Tregear

Optimal climate change policies when governments cannot commit
http://hdl.handle.net/10023/4818
We analyse the optimal design of climate change policies when a government wants to encourage the private sector to undertake significant immediate investment in developing cleaner technologies, but the relevant carbon taxes (or other environmental policies) that would incentivise such investment by firms will be set in the future. We assume that the current government cannot commit to long-term carbon taxes, and so both it and the private sector face the possibility that the government in power in the future may give different (relative) weight to environmental damage costs. We show that this lack of commitment has a significant asymmetric effect: it increases the incentive of the current government to have the investment undertaken, but reduces the incentive of the private sector to invest. Consequently the current government may need to use additional policy instruments – such as R&D subsidies – to stimulate the required investment.
Tue, 01 Oct 2013 00:00:00 GMT
http://hdl.handle.net/10023/4818
Ulph, Alistair; Ulph, David Tregear

The liberal ethics of non-interference and the Pareto principle
http://hdl.handle.net/10023/4665
We analyse the liberal ethics of noninterference applied to social choice. A liberal principle capturing noninterfering views of society, inspired by John Stuart Mill's conception of liberty, is examined. The principle captures the idea that society should not penalise agents after changes in their situation that do not affect others. An impossibility for liberal approaches is highlighted: every social decision rule that satisfies unanimity and a general principle of noninterference must be dictatorial. This raises some important issues for liberal approaches in social choice and political philosophy.
Tue, 01 Apr 2014 00:00:00 GMT
http://hdl.handle.net/10023/4665
Mariotti, Marco; Veneziani, Roberto

Competing for attention : is the showiest also the best?
http://hdl.handle.net/10023/4664
We introduce attention games. Alternatives ranked by quality (producers, politicians, sexual partners...) desire to be chosen and compete for the imperfect attention of a chooser by investing in their own salience. We prove that if alternatives can control the attention they get, then "the showiest is the best": the equilibrium ordering of salience (weakly) reproduces the quality ranking and the best alternative is the one that gets picked most often. This result also holds under more general conditions. However, if those conditions fail, then even the worst alternative can be picked most often.
Tue, 01 Apr 2014 00:00:00 GMT
http://hdl.handle.net/10023/4664
Manzini, Paola; Mariotti, Marco

Nominal stability and financial globalization
http://hdl.handle.net/10023/4615
Over the past four decades, advanced economies experienced a large growth in gross external portfolio positions. This phenomenon has been described as Financial Globalization. Over roughly the same time frame, most of these countries also saw a substantial fall in the level and variability of inflation. Many economists have conjectured that financial globalization contributed to the improved performance in the level and predictability of inflation. In this paper, we explore the causal link running in the opposite direction. We show that a monetary policy rule which reduces inflation variability leads to an increase in the size of gross external positions, both in equity and bond portfolios. This appears to be a robust prediction of open economy macro models with endogenous portfolio choice. It holds across different modeling specifications and parameterizations. We also present preliminary empirical evidence which shows a negative relationship between inflation volatility and the size of gross external positions.
Mon, 30 Sep 2013 00:00:00 GMT
http://hdl.handle.net/10023/4615
Devereux, Michael B.; Senay, Ozge; Sutherland, Alan

A salience theory of choice errors
http://hdl.handle.net/10023/4551
We study a psychologically based foundation for choice errors. The decision maker applies a preference ranking after forming a ‘consideration set’ prior to choosing an alternative. Membership of the consideration set is determined both by the alternative-specific salience and by the rationality of the agent (his general propensity to consider all alternatives). The model turns out to include a logit formulation as a special case. In general, it has a rich set of implications both for exogenous parameters and for a situation in which alternatives can affect their own salience (salience games). Such implications are relevant to assess the link between ‘revealed’ preferences and ‘true’ preferences: for example, less rational agents may paradoxically express their preference through choice more truthfully than more rational agents.
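One standard way such consideration-set models are formalised (a sketch, not necessarily the paper's exact specification) is: each alternative is considered independently with a salience-driven attention probability, and the agent then picks the preference-best considered alternative. The attention values, the preference relation, and the default outcome below are all illustrative assumptions.

```python
# Hedged sketch of a random-consideration-set choice rule.
# attention[x]: probability alternative x enters the consideration set
# (driven by salience and the agent's general rationality).
# prefers(b, t): True if b is strictly preferred to t.

def choice_probability(target, alternatives, attention, prefers):
    # target is chosen iff it is considered AND every strictly preferred
    # alternative fails to be considered (draws are independent).
    p = attention[target]
    for b in alternatives:
        if b != target and prefers(b, target):
            p *= 1.0 - attention[b]
    return p

alts = ["a", "b"]
attention = {"a": 0.5, "b": 0.5}
prefers = lambda x, y: x == "a" and y == "b"  # a is preferred to b

p_a = choice_probability("a", alts, attention, prefers)  # 0.5
p_b = choice_probability("b", alts, attention, prefers)  # 0.5 * 0.5 = 0.25
# The remaining mass (here 0.25) is the event that nothing is considered,
# which must go to some default option in a fully specified model.
print(p_a, p_b)
```

The sketch makes the abstract's point concrete: a less preferred but equally salient alternative can still be chosen with substantial probability, purely because the better alternative sometimes escapes attention.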
Thu, 01 Apr 2010 00:00:00 GMT
http://hdl.handle.net/10023/4551
Manzini, Paola; Mariotti, Marco

Delay and haircuts in sovereign debt : recovery and sustainability
http://hdl.handle.net/10023/4550
One of the striking aspects of recent sovereign debt restructurings is that, conditional on default, delay length is positively correlated with the size of the 'haircut', i.e. the size of creditor losses. In this paper, we develop an incomplete information model of debt restructuring in which the prospect of uncertain economic recovery and signalling about sustainability concerns together generate multi-period delay. Our analysis shows that delay length and the size of the haircut are positively correlated, a result supported by the evidence. We show that the Pareto ranking of equilibria, conditional on default, can be altered once we take into account the ex ante incentives of the sovereign debtor. We use our results to evaluate proposals advocated to ensure the orderly resolution of sovereign debt crises.
Sun, 01 Aug 2010 00:00:00 GMThttp://hdl.handle.net/10023/45502010-08-01T00:00:00ZGhosal, SayantanMiller, MarcusThampanishvong, KannikaOrdering policy rules with an unconditional welfare measurehttp://hdl.handle.net/10023/4549
The unconditional expectation of social welfare is often used to assess alternative macroeconomic policy rules in applied quantitative research. It is shown that it is generally possible to derive a linear-quadratic problem that approximates the exact non-linear problem where the unconditional expectation of the objective is maximised and the steady-state is distorted. Thus, the measure of policy performance is a linear combination of second moments of economic variables which is relatively easy to compute numerically, and can be used to rank alternative policy rules. The approach is applied to a simple Calvo-type model under various monetary policy rules.
Submitted
Tue, 01 Mar 2011 00:00:00 GMThttp://hdl.handle.net/10023/45492011-03-01T00:00:00ZDamjanovic, VladislavDamjanovic, TatianaNolan, CharlesImproving Students’ Learning Aspirations Beyond Post-Primary Education : A First Account of Two Non-Formal Education Programmes in Middle-Income Countrieshttp://hdl.handle.net/10023/4548
Non-formal education programmes are active in a number of developing countries. These programmes offer vulnerable students an opportunity to continue their education after having been excluded, for various reasons, from the formal education system. This paper examines the impact of two programmes (one in Mauritius and one in Thailand) on their participants’ aspirations towards learning. We develop a methodology to measure students’ perceptions of their learning experience. More than a third of them, for example, believe that there is no barrier to their education. Most acknowledge the role of their teachers in raising their aspirations towards educational achievement. Compared to male students, female students seem to place greater value on their education.
Thu, 01 Jul 2010 00:00:00 GMThttp://hdl.handle.net/10023/45482010-07-01T00:00:00ZArico, Fabio RiccardoLasselle, LaurenceThampanishvong, KannikaOptimal climate change policies when governments cannot commithttp://hdl.handle.net/10023/4547
We analyse the optimal design of climate change policies when a government wants to encourage the private sector to undertake significant immediate investment in developing cleaner technologies, but the relevant carbon taxes (or other environmental policies) that would incentivise such investment by firms will be set in the future. We assume that the current government cannot commit to long-term carbon taxes, and so both it and the private sector face the possibility that the government in power in the future may give different (relative) weight to environmental damage costs. We show that this lack of commitment has a significant asymmetric effect: it increases the incentive of the current government to have the investment undertaken, but reduces the incentive of the private sector to invest. Consequently the current government may need to use additional policy instruments – such as R&D subsidies – to stimulate the required investment.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10023/45472011-01-01T00:00:00ZUlph, David TregearUlph, AlistairAge, life-satisfaction and relative income : insights from the UK and Germanyhttp://hdl.handle.net/10023/4546
We first confirm previous results with the German Socio-Economic Panel by Layard et al. (2010), and obtain strong negative effects of comparison income. However, when we split the sample by age, we find quite different results for reference income. The effects on life-satisfaction are positive and significant for those under 45, consistent with Hirschman’s (1973) ‘tunnel effect’, and only negative (and larger than in the full sample) for those over 45, when relative deprivation dominates. Thus for young respondents, reference income’s signalling role, indicating potential future prospects, can outweigh relative deprivation effects. Own-income effects are also larger for the older sample, and of greater magnitude than the comparison income effect. In East Germany the reference income effects are insignificant for all. With data from the British Household Panel Survey, we confirm standard results when encompassing all ages, but reference income loses significance in both age groups, and most surprisingly, even own income becomes insignificant for those over 45, while education has significant negative effects.
Sat, 01 Oct 2011 00:00:00 GMThttp://hdl.handle.net/10023/45462011-10-01T00:00:00ZFitzRoy, Felix RNolan, MichaelSteinhardt, Max F.Spatial takeoff in the first industrial revolutionhttp://hdl.handle.net/10023/4536
Using the framework of Desmet and Rossi-Hansberg (forthcoming), we present a model of spatial takeoff that is calibrated using spatially-disaggregated occupational data for England in c.1710. The model predicts changes in the spatial distribution of agricultural and manufacturing employment which match data for c.1817 and 1861. The model also matches a number of aggregate changes that characterise the first industrial revolution. Using counterfactual geographical distributions, we show that the initial concentration of productivity can matter for whether and when an industrial takeoff occurs. Subsidies to innovation in either sector can bring forward the date of takeoff, while subsidies to the use of land by manufacturing firms can significantly delay a takeoff because they decrease the spatial concentration of activity.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10023/45362014-01-01T00:00:00ZTrew, Alex WilliamImperfect attention and menu evaluationhttp://hdl.handle.net/10023/4535
We model the choice behaviour of an agent who suffers from imperfect attention but is otherwise von Neumann-Morgenstern rational. We define inattention axiomatically through preferences over menus and endowed alternatives: an agent is inattentive if it is better to be endowed with an alternative a than to be allowed to pick a from a menu in which a is the best alternative. This property and vNM rationality on the domain of menus and alternatives imply that the agent notices each alternative with a given menu-dependent probability (attention parameter) and maximises a menu-independent utility function over the alternatives he notices. Preference for flexibility restricts the model to menu-independent attention parameters as in Manzini and Mariotti (2013). Our theory explains anomalies (e.g. the attraction effect) that other prominent stochastic choice theories cannot accommodate.
Original version created Oct 2013
Sat, 01 Mar 2014 00:00:00 GMThttp://hdl.handle.net/10023/45352014-03-01T00:00:00ZManzini, PaolaMariotti, MarcoConsumption inequality and discount rate heterogeneityhttp://hdl.handle.net/10023/4534
Although standard incomplete market models can account for the magnitude of the rise in consumption inequality over the life cycle, they generate unrealistically concave age profiles of consumption inequality and unrealistically little wealth inequality. In this paper, I investigate the role of discount rate heterogeneity on consumption inequality in the context of incomplete market life cycle models. The distribution of discount rates is estimated using moments from the wealth distribution. I find that the model with heterogeneous income profiles (HIP) and discount rate heterogeneity can successfully account for the empirical age profile of consumption inequality, both in its magnitude and in its non-concave shape. Generating realistic wealth inequality, this simulated model also highlights the importance of ex ante heterogeneities as main sources of lifetime inequality.
Fri, 01 Mar 2013 00:00:00 GMThttp://hdl.handle.net/10023/45342013-03-01T00:00:00ZSun, GangTesting the Tunnel Effect : comparison, age and happiness in UK and German panelshttp://hdl.handle.net/10023/4533
In contrast to previous results combining all ages, we find positive effects of comparison income on happiness for the under 45s, and negative effects for those over 45. In the BHPS these coefficients are several times the magnitude of own income effects. In GSOEP they cancel to give no effect of comparison income on life satisfaction in the whole sample, when controlling for fixed effects, time-in-panel, and flexible age-group dummies. The residual age-happiness relationship is hump-shaped in all three countries. Results are consistent with a simple life cycle model of relative income under uncertainty.
Sat, 01 Jun 2013 00:00:00 GMThttp://hdl.handle.net/10023/45332013-06-01T00:00:00ZFitzRoy, Felix RNolan, Michael A.Steinhardt, Max F.Ulph, David TregearA behavioural model of choice in the presence of decision conflicthttp://hdl.handle.net/10023/4532
This paper proposes a model of choice that does not assume completeness of the decision maker’s preferences. The model explains in a natural way, and within a unified framework, four behavioural phenomena that arise when preference-incomparable options are present: the attraction effect, choice deferral, the strengthening of the attraction effect when deferral is permissible, and status quo bias. The key element in the proposed decision rule is that an individual chooses an alternative from a menu if it is worse than no other alternative in that menu and is also better than at least one. Utility-maximising behaviour is included as a special case when preferences are complete. The relevance of the partial dominance idea underlying the proposed choice procedure is illustrated with an intuitive generalisation of weakly dominated strategies and their iterated deletion in games with vector payoffs.
Wed, 01 May 2013 00:00:00 GMThttp://hdl.handle.net/10023/45322013-05-01T00:00:00ZGerasimou, GeorgiosASEAN Free Trade Area (AFTA) : how far have we come? : analysis and evidence on effects of AFTAhttp://hdl.handle.net/10023/4475
This thesis addresses issues concerning the trade effects of a particular RTA: AFTA. In the first part of the thesis, two different but related gravity frameworks are constructed to evaluate the independent effects of AFTA on relevant countries’ trade flows. The first paper examines ‘AFTA-effects’ on members’ trade, specifically within the AFTA context, aiming to distinguish the trade effects that AFTA has had on early and delayed members’ trading patterns. A panel gravity model is constructed to control for several biases commonly observed in cross-section models. Although the results imply that early members share trade benefits from AFTA more than non-members, the overall ‘AFTA-effects’ on the membership’s trade have not been benign. A second paper measures ‘AFTA-effects’ on both members’ and non-members’ trade, aiming to assess whether AFTA has served as an export base for the international market. In this case, ‘AFTA-effects’ appear positive. Such effects are driven by an enhancement in extra-export bias, suggesting that the membership’s exports to outside destinations have increased post-AFTA. The last paper provides a theoretical framework addressing the incidence of RTA-membership expansion. The fact that AFTA was established gradually, together with empirical results indicating AFTA’s impacts on members and non-members, suggests that bloc-membership expansion can plausibly be explained by the economic effects these countries have received. A potential member’s gains from trade and welfare levels, with and without RTA membership, shape its decision on membership. Even though the welfare effects are not always greater, RTA membership delivers greater gains from trade to members than to non-members. This helps explain the widespread regionalism worldwide and why joining an RTA is often seen as a safe-haven strategy for a country.
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/10023/44752013-01-01T00:00:00ZNiyomsuk, OrachatAntitrust penalties and the implications of empirical evidence on cartel overchargeshttp://hdl.handle.net/10023/4310
In this paper we provide a number of extensions to the theory of antitrust fines and we use these, with existing and new datasets, to contribute to a better understanding of the current fining policies of Competition Authorities. In particular, we extend the theory linking cartel overcharges to optimal fines by introducing a number of additional considerations that authorities should take into account in setting fines and that are ignored by the existing literature. We then use existing empirical evidence on cartels and a new dataset relating to Abuse of Dominance cases to show that existing levels of fines are within the range supported by calculations of optimal fines. We then examine the reverse issue of how the toughness of the antitrust regime affects the level of cartel overcharges. We show that the effects are highly ambiguous, thus questioning some of the recent empirical findings on this issue, and the potential benefits of raising penalties.
Research for this paper was supported by a grant from the Economic and Social Research Council RES-062-23-2211
Fri, 01 Nov 2013 00:00:00 GMThttp://hdl.handle.net/10023/43102013-11-01T00:00:00ZKatsoulacos, YannisUlph, David TregearCoordination in public good provision : how individual volunteering is impacted by the volunteering of othershttp://hdl.handle.net/10023/4223
In this analysis, we examine the relationship between an individual's decision to volunteer and the average level of volunteering in the community where the individual resides. Our theoretical model is based on a coordination game, in which volunteering by others is informative regarding the benefit from volunteering. We demonstrate that the interaction between this information and one's private information makes it more likely that he or she will volunteer, given a higher level of contributions by his or her peers. We complement this theoretical work with an empirical analysis using Census 2000 Summary File 3 and Current Population Survey (CPS) 2004-2007 September supplement file data. We control for various individual and community characteristics, and employ robustness checks to verify the results of the baseline analysis. We additionally use an innovative instrumental variables strategy to account for reflection bias and endogeneity caused by selective sorting by individuals into neighborhoods, which allows us to argue for a causal interpretation. The empirical results in the baseline, as well as all robustness analyses, verify the main result of our theoretical model, and we employ a more general structure to further strengthen our results.
An older version is published as a Carlo Alberto Notebook (ISSN: 2279-9362) #209. This version is currently under peer review by an academic journal
Mon, 01 Jul 2013 00:00:00 GMThttp://hdl.handle.net/10023/42232013-07-01T00:00:00ZDiasakos, TheodorosNeymotin, FlorenceRationalizable suicides : evidence from changes in inmates' expected sentence lengthhttp://hdl.handle.net/10023/4222
Published as a Carlo Alberto Notebook (ISSN: 2279-9362) #247
Wed, 01 Aug 2012 00:00:00 GMThttp://hdl.handle.net/10023/42222012-08-01T00:00:00ZCampanello, NadiaDiasakos, TheodorosMastrobuoni, GiovanniLabour market frictions, monetary policy and durable goodshttp://hdl.handle.net/10023/4100
The standard two-sector monetary business cycle model suffers from an important deficiency. Since durable good prices are more flexible than non-durable good prices, optimising households build up the stock of durable goods at low cost after a monetary contraction. Consequently, sectoral outputs move in opposite directions. This paper finds that labour market frictions help to understand the so-called sectoral “comovement puzzle”. Our benchmark model with staggered Right-to-Manage wage bargaining closely matches the empirical elasticities of output, employment and hours per worker across sectors. The model with Nash bargaining, in contrast, predicts that firms adjust employment exclusively along the extensive margin.
Wed, 06 Jun 2012 00:00:00 GMThttp://hdl.handle.net/10023/41002012-06-06T00:00:00ZDi Pace, Federico NicolasHertweck, MatthiasThe technological specialization of countries : an analysis of patent datahttp://hdl.handle.net/10023/4099
New methods of analysis of patent statistics allow assessing country profiles of technological specialization for the period 1990-2006. We witness a modest decrease in levels of specialization, which we show to be negatively influenced by country size and degree of internationalization of inventive activities.
Mon, 01 Jul 2013 00:00:00 GMThttp://hdl.handle.net/10023/40992013-07-01T00:00:00ZPicci, LucioSavorelli, LucaTobacco taxes and smoking bans impact differently on obesity and eating habitshttp://hdl.handle.net/10023/4098
Policy interventions aimed at affecting a specific behavior may also indirectly affect individual choices in other domains. In this paper we study the direct effect of tobacco excise taxes and smoking bans on smoking behavior, and the indirect effect on eating behavior and body weight. Using very detailed clinical data on individual health, smoking, and dietary habits, we show that antismoking policies are effective in reducing smoking, but their consequences for eating behavior dramatically depend on the specific policy implemented. Increasing excise taxes on tobacco decreases body weight and caloric intake, and it improves the quality of eaten food. Smoking bans, instead, do not significantly affect body weight, although they affect diet composition. Smoking bans in restaurants induce a significant rise in the quality of food and in daily caloric intake. Conversely, smoking bans in bars negatively affect the quality of the daily diet, as individuals eat more fats and fewer fibers, and drink more alcohol and caffeine.
Mon, 01 Apr 2013 00:00:00 GMT
http://hdl.handle.net/10023/4098
Dragone, Davide; Manaresi, Francesco; Savorelli, Luca
Obesity and smoking : can we catch two birds with one tax?
http://hdl.handle.net/10023/4097
The debate on tobacco and fat taxes often treats smoking and eating as independent behaviors. However, the available evidence shows that they are interdependent, which implies that policies against smoking or obesity may have larger scope than expected. To address this issue, we propose a dynamic rational model where eating and smoking are simultaneous choices that jointly affect body weight and addiction to smoking. Focusing on direct and cross-price effects, we compare tobacco taxes and food taxes and we show that a single policy tool can reduce both smoking and body weight. In particular, food taxes can be more effective than tobacco taxes at simultaneously fighting obesity and smoking.
Mon, 01 Jul 2013 00:00:00 GMT
http://hdl.handle.net/10023/4097
Dragone, Davide; Manaresi, Francesco; Savorelli, Luca
Complete markets strikes back : Revisiting risk sharing tests under discount rate heterogeneity
http://hdl.handle.net/10023/4093
Recent risk sharing tests strongly reject the hypothesis of complete markets because, in the data, (1) individual consumption comoves with income and (2) consumption dispersion increases over the life cycle. In this paper, I revisit the implications of these risk sharing tests in the context of a complete market model with discount rate heterogeneity, extended to include individual choices of effort in education. I find that a complete market model with discount rate heterogeneity can pass both types of risk sharing tests. The endogenous positive correlation between income growth rate and patience makes individual consumption comove with income, even if markets are complete. I also show that this model can quantitatively account for both the observed comovement of consumption and income and the increase of consumption dispersion over the life cycle.
Fri, 01 Feb 2013 00:00:00 GMT
http://hdl.handle.net/10023/4093
Sun, Gang
Stochastic choice and consideration sets
http://hdl.handle.net/10023/4092
We model a boundedly rational agent who suffers from limited attention. The agent considers each feasible alternative with a given (unobservable) probability, the attention parameter, and then chooses the alternative that maximises a preference relation within the set of considered alternatives. We show that this random choice rule is the only one for which the impact of removing an alternative on the choice probability of any other alternative is asymmetric and menu independent. Both the preference relation and the attention parameters are identified uniquely by stochastic choice data.
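The attention-filtered choice rule described in this abstract can be sketched in a few lines: an alternative is chosen when it is noticed and every preferred alternative in the menu is overlooked. The function name, the preference list, and the attention values below are illustrative, not taken from the paper.

```python
def choice_prob(a, menu, pref, gamma):
    """Probability that `a` is chosen from `menu` under limited attention.

    `pref` lists alternatives best-first; `gamma[x]` is x's (unobservable)
    attention parameter. `a` must be noticed (prob gamma[a]) while every
    alternative in the menu preferred to `a` is overlooked (prob 1 - gamma[b]).
    """
    rank = {x: i for i, x in enumerate(pref)}
    p = gamma[a]
    for b in menu:
        if rank[b] < rank[a]:      # b is preferred to a
            p *= 1.0 - gamma[b]
    return p
```

The sketch makes the identifying property visible: deleting an alternative b from the menu scales a's choice probability by 1/(1 - gamma[b]) when b is preferred to a, and leaves it unchanged when b is worse — the asymmetric, menu-independent impact that pins down both the preference and the attention parameters from stochastic choice data.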
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/10023/4092
Manzini, Paola; Mariotti, Marco
A consistent nonparametric bootstrap test of exogeneity
http://hdl.handle.net/10023/4091
This paper proposes a novel way of testing exogeneity of an explanatory variable without any parametric assumptions in the presence of a "conditional" instrumental variable. A testable implication is derived that if an explanatory variable is endogenous, the conditional distribution of the outcome given the endogenous variable is not independent of its instrumental variable(s). The test rejects the null hypothesis with probability one if the explanatory variable is endogenous and it detects alternatives converging to the null at a rate n^{-1/2}. We propose a consistent nonparametric bootstrap test to implement this testable implication. We show that the proposed bootstrap test can be asymptotically justified in the sense that it produces asymptotically correct size under the null of exogeneity, and it has unit power asymptotically. Our nonparametric test can be applied to the cases in which the outcome is generated by an additively non-separable structural relation or in which the outcome is discrete, which has not been studied in the literature.
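The testable implication above — under exogeneity, the conditional distribution of the outcome given the explanatory variable is independent of the instrument — can be illustrated with a toy resampling-based independence check. This sketch uses a simple correlation statistic with a permutation null; it is not the paper's bootstrap procedure, and all names and tuning choices are illustrative.

```python
import random


def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


def permutation_test(u, z, B=999, seed=0):
    """p-value for H0: u is independent of z.

    Uses |corr| as the test statistic and random permutations of z to
    simulate draws from the null of independence.
    """
    rng = random.Random(seed)
    t0 = abs(corr(u, z))
    zz = list(z)
    hits = 0
    for _ in range(B):
        rng.shuffle(zz)
        if abs(corr(u, zz)) >= t0:
            hits += 1
    return (hits + 1) / (B + 1)
```

A small p-value flags dependence between the outcome residual and the instrument, which is the kind of evidence the paper's (far more general, distribution-level) test formalizes without parametric assumptions.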
Sun, 01 Sep 2013 00:00:00 GMT
http://hdl.handle.net/10023/4091
Lee, Jinhyun
Sharp bounds on heterogeneous individual treatment responses
http://hdl.handle.net/10023/4090
This paper discusses how to identify individual-specific causal effects of an ordered discrete endogenous variable. The counterfactual heterogeneous causal information is recovered by identifying the partial differences of a structural relation. The proposed refutable nonparametric local restrictions exploit the fact that the pattern of endogeneity may vary across the level of the unobserved variable. The restrictions adopted in this paper impose a sense of order on an unordered binary endogenous variable. This allows for a unified structural approach to studying various treatment effects when self-selection on unobservables is present. The usefulness of the identification results is illustrated using data on Vietnam-era veterans. The empirical findings reveal that when other observable characteristics are identical, military service had positive impacts for individuals with low (unobservable) earnings potential, while it had negative impacts for those with high earnings potential. This heterogeneity would not be detected by average effects, which would underestimate the actual effects because the different signs would cancel out. This partial identification result can be used to test homogeneity in response. When homogeneity is rejected, many parameters based on averages may deliver misleading information.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10023/4090
Lee, Jinhyun
Efficient Nash equilibrium under adverse selection
http://hdl.handle.net/10023/4089
This paper revisits the problem of adverse selection in the insurance market of Rothschild and Stiglitz (QJE, 1976). We propose a simple extension of the game-theoretic structure in Hellwig (EER, 1987) under which Nash-type strategic interaction between the informed customers and the uninformed firms always results in a particular separating equilibrium. The equilibrium allocation is unique and Pareto-efficient in the interim sense subject to incentive-compatibility and individual rationality. In fact, it is the unique neutral optimum in the sense of Myerson (ECMA, 1983).
Thu, 01 Aug 2013 00:00:00 GMT
http://hdl.handle.net/10023/4089
Diasakos, Theodoros; Koufopoulos, Kostas
Complexity and bounded rationality in individual decision problems
http://hdl.handle.net/10023/4088
I develop a model of endogenous bounded rationality due to search costs, arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred from revealed preferences through her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler or as the benefits of deeper search become larger relative to its costs, the choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model explains, therefore, why the disparity, between observed choices and those prescribed under rationality, varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.
Tue, 01 Oct 2013 00:00:00 GMT
http://hdl.handle.net/10023/4088
Diasakos, Theodoros
Comparative statics of asset prices : the effects of other assets' risk
http://hdl.handle.net/10023/4087
Currently, financial economics is unable to predict changes in asset prices with respect to changes in the underlying risk factors, even when an asset's dividend is independent of a given factor. This paper takes steps towards addressing this issue by highlighting a crucial component of wealth effects on asset prices hitherto ignored by the literature. Changes in wealth do not only alter an agent's risk aversion, but also her perceived ``riskiness'' of a security. The latter enhances significantly the extent to which market-clearing leads to endogenously-generated correlation across asset prices, over and above that induced by correlation between payoffs, giving the appearance of ``contagion.''
Revision & Resubmission requested by the Review of Asset Pricing Studies (ISSN: 2045-9920)
Thu, 01 Aug 2013 00:00:00 GMT
http://hdl.handle.net/10023/4087
Diasakos, Theodoros
Beliefs and actions in the trust game : Creating instrumental variables to estimate the causal effect
http://hdl.handle.net/10023/4086
In many economic contexts, an elusive variable of interest is the agent’s belief about relevant events, e.g. about other agents’ behavior. A growing number of surveys and experiments ask participants to state beliefs explicitly but little is known about the causal relation between beliefs and other behavioral variables. This paper discusses the possibility of creating exogenous instrumental variables for belief statements, by informing the agent about exogenous manipulations of the relevant events. We conduct trust game experiments where the amount sent back by the second player (trustee) is exogenously varied. The procedure allows detecting causal links from beliefs to actions under plausible assumptions. The IV-estimated effect is significant, confirming the causal role of beliefs. It is only slightly and insignificantly smaller than in estimations without instrumentation, consistent with a mild effect of social norms or other omitted variables.
Wed, 01 Aug 2012 00:00:00 GMT
http://hdl.handle.net/10023/4086
Costa-Gomes, Miguel; Huck, Steffen; Weizsacker, Georg
A simple characterization of dynamic completeness in continuous time
http://hdl.handle.net/10023/4085
This paper investigates dynamic completeness of financial markets in which the underlying risk process is a multi-dimensional Brownian motion and the risky securities' dividends geometric Brownian motions. A sufficient condition, that the instantaneous dispersion matrix of the relative dividends is non-degenerate, was established recently in the literature for single-commodity, pure-exchange economies with many heterogeneous agents, under the assumption that the intermediate flows of all dividends, utilities, and endowments are analytic functions. For the current setting, a different mathematical argument in which analyticity is not needed shows that a slightly weaker condition suffices for general pricing kernels. That is, dynamic completeness obtains irrespective of preferences, endowments, and other structural elements (such as whether or not the budget constraints include only pure exchange, whether or not the time horizon is finite with lump-sum dividends available on the terminal date, etc.).
Under review (second round) by Mathematical Finance (Online ISSN: 1467-9965)
Sun, 01 Sep 2013 00:00:00 GMT
http://hdl.handle.net/10023/4085
Diasakos, Theodoros
Nominal stability and financial globalization
http://hdl.handle.net/10023/4080
Over the past four decades, advanced economies experienced a large growth in gross external portfolio positions. This phenomenon has been described as Financial Globalization. Over roughly the same time frame, most of these countries also saw a substantial fall in the level and variability of inflation. Many economists have conjectured that financial globalization contributed to the improved performance in the level and predictability of inflation. In this paper, we explore the causal link running in the opposite direction. We show that a monetary policy rule which reduces inflation variability leads to an increase in the size of gross external positions, both in equity and bond portfolios. This appears to be a robust prediction of open economy macro models with endogenous portfolio choice. It holds across different modeling specifications and parameterizations. We also present preliminary empirical evidence which shows a negative relationship between inflation volatility and the size of gross external positions.
Mon, 30 Sep 2013 00:00:00 GMT
http://hdl.handle.net/10023/4080
Devereux, Michael B; Senay, Ozge; Sutherland, Alan
Anticipation, learning and welfare : the case of distortionary taxation
http://hdl.handle.net/10023/4067
We study the impact of anticipated fiscal policy changes in a Ramsey economy where agents form long-horizon expectations using adaptive learning. We extend the existing framework by introducing distortionary taxes as well as elastic labour supply, which makes agents' decisions non-predetermined but more realistic. We find that the dynamic responses to anticipated tax changes under learning have oscillatory behaviour that can be interpreted as self-fulfilling waves of optimism and pessimism emerging from systematic forecast errors. Moreover, we demonstrate that these waves can have important implications for the welfare consequences of fiscal reforms.
Revise and resubmit in Journal of Economic Dynamics and Control
Mon, 26 Aug 2013 00:00:00 GMT
http://hdl.handle.net/10023/4067
Gasteiger, Emanuel; Zhang, Shoujian
The market for 'rough diamonds' : information, finance and wage inequality
http://hdl.handle.net/10023/4066
During the past four decades both between- and within-group wage inequality increased significantly in the US. I provide a microfounded justification for this pattern by introducing private employer learning in a model of signaling with credit constraints. In particular, I show that when financial constraints relax, talented individuals can acquire education and leave the uneducated pool; this decreases unskilled-inexperienced wages and boosts wage inequality. This explanation is consistent with US data from 1970 to 1997, indicating that the rise of the skill and the experience premium coincides with a fall in unskilled-inexperienced wages, while at the same time skilled or experienced wages remain flat. The model accounts for: (i) the increase in the skill premium despite the growing supply of skills; (ii) the understudied aspect of rising inequality related to the increase in the experience premium; (iii) the sharp growth of the skill premium for inexperienced workers and its moderate expansion for the experienced ones; (iv) the puzzling coexistence of an increasing experience premium within the group of unskilled workers and its stable pattern among the skilled ones. The results hold under various robustness checks and provide some interesting policy implications about the potential conflict between inequality of opportunity and substantial economic inequality, as well as the role of minimum wage policy in determining the equilibrium wage inequality.
Tue, 01 Oct 2013 00:00:00 GMT
http://hdl.handle.net/10023/4066
Koutmeridis, Theodore
My group beats your group : evaluating non-income inequalities
http://hdl.handle.net/10023/4065
This paper proposes a new methodology, the Domination Index, to evaluate non-income inequalities between social groups, such as inequalities of educational attainment, occupational status, health or subjective well-being. The Domination Index does not require specific cardinalisation assumptions, but only uses the ordinal structure of these non-income variables. We take an axiomatic approach and show that a set of desirable properties for a group inequality measure, when the variable of interest is ordinal, characterizes the Domination Index up to a positive scalar transformation. Moreover, we use the Domination Index to explore the relation between inequality and segregation and show how these two concepts are related theoretically.
Mon, 12 Aug 2013 00:00:00 GMT
http://hdl.handle.net/10023/4065
Cuhadaroglu, Tugce
A social network for trade and inventories of stock during the South Sea Bubble
http://hdl.handle.net/10023/4006
A social network of stock trading is defined for the notorious South Sea Bubble. Complete market trade in East India Company and Bank of England shares is described in a flow network. Intermediation is treated as a form of network centrality, which can be analysed using measures of pass-through, inventories and immediacy. New features of the South Sea Bubble are documented: i) the crisis suffered by goldsmith bankers may have pre-dated the Bubble; ii) yet the depth and immediacy of intermediation was maintained throughout the Bubble; iii) a gradual trend towards dis-intermediation occurred after the Bubble; and iv) there was a switch from intermediation based upon brokerage to intermediation based upon dealership.
Thu, 01 Mar 2012 00:00:00 GMT
http://hdl.handle.net/10023/4006
Mays, Andrew; Shea, Gary S
The British Industrial Revolution in Global Perspective : A book review
http://hdl.handle.net/10023/3998
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10023/39982012-01-01T00:00:00ZShea, Gary SPolicy change and learning in the RBC modelhttp://hdl.handle.net/10023/3973
What is the impact of surprise and anticipated policy changes when agents form expectations using adaptive learning rather than rational expectations? We examine this issue using the standard stochastic real business cycle model with lump-sum taxes. Agents combine knowledge about future policy with econometric forecasts of future wages and interest rates. Both permanent and temporary policy changes are analyzed. Dynamics under learning can have large impact effects and a gradual hump-shaped response, and tend to be prominently characterized by oscillations not present under rational expectations. These fluctuations reflect periods of excessive optimism or pessimism, followed by subsequent corrections.
This work was supported by the Economic and Social Research Council (ESRC) [Grant number RES-062-23-2617]
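The adaptive-learning mechanism discussed in the abstract can be illustrated with a minimal sketch (hypothetical code, not the paper's RBC model): instead of holding rational expectations, the agent revises a belief toward each realised outcome, and with a decreasing gain of 1/t the belief tracks the recursive sample mean.

```python
def adaptive_learning_path(observations, gain=None):
    """Decreasing-gain adaptive learning of a constant (illustrative
    sketch, not the paper's model): the agent's forecast phi is
    revised toward each realised outcome.  With the default gain 1/t
    this reproduces the recursive sample mean; passing a small
    constant gain instead discounts older data.
    """
    phi = 0.0
    path = []
    for t, y in enumerate(observations, start=1):
        g = (1.0 / t) if gain is None else gain
        phi += g * (y - phi)          # revise belief toward the outcome
        path.append(phi)
    return path

# with gain 1/t the belief after each observation is the running mean
beliefs = adaptive_learning_path([2.0, 4.0, 6.0])
```

The oscillations the paper documents arise when many such agents feed these revised forecasts back into prices and wages, so that periods of over-optimism are followed by corrections.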
Tue, 01 Oct 2013 00:00:00 GMThttp://hdl.handle.net/10023/39732013-10-01T00:00:00ZMitra, KaushikEvans, George WHonkapohja, SeppoCost and policy implications of agricultural pollution, with special reference to pesticideshttp://hdl.handle.net/10023/3725
Modern commercial agricultural practices involving chemical inputs such as fertilisers and pesticides have been associated with increases in food production never witnessed before and, in the case of cereal production (especially wheat) under Green Revolution technology, have recorded spectacular growth. As statistics show, production and productivity have increased. However, the heavy usage of fertilisers and pesticides to bring about these increases in food production is not without problems. A visible parallel correlation between higher productivity, high artificial input use and environmental degradation and human health effects is evident in many countries where commercial agriculture is widespread. The high usage of these chemical inputs has caused numerous pollution problems impacting on human health, agricultural land, other production processes, wildlife and the environment in general. The private and external costs are very high. Such a production path is clearly unsustainable. This Ph.D. study focuses on estimating the private costs of illnesses arising from direct exposure to pesticides during handling and spraying by farmers on their farms in Sri Lanka. For this purpose three valuation techniques are used: the contingent valuation, cost of illness and aversive behaviour approaches. Multiple regression analyses are also carried out to establish several relationships involving pesticide handling/spraying and direct exposure to pesticides. Policy implications of the regression analyses are then discussed. A health production model showing the relationships between the three approaches used for estimating the private costs of ill health, and thereby inferring the willingness to pay for pollution control, is presented. The theoretical background to agricultural pollution, drawing examples mostly from Asia, is also dealt with in this thesis.
Data for this Ph.D. study were obtained from a field survey carried out in the summer of
1996. During this survey, 227 subsistence farmers handling and spraying pesticides on a
regular basis were interviewed to gather the necessary data. For the analysis of data, only 203 samples are used.
Fri, 01 Jan 1999 00:00:00 GMThttp://hdl.handle.net/10023/37251999-01-01T00:00:00ZWilson, ClevoEssays on housing and monetary policyhttp://hdl.handle.net/10023/3681
This thesis, motivated by my reflections about the failings of monetary policy implementation as a cause of the sub-prime crisis, attempts to answer the following inquiries: (i) whether interest rates have played a major role in generating the house price fluctuations in the U.S., (ii) what are the effects of accommodative monetary policy on the economy given banks' excessive risk-taking, and (iii) whether an optimal monetary policy rule can be found for curbing credit-driven economic volatilities in the model economy with unconventional transmission channels operating.
By using a decomposition technique and regression analysis, it can be shown that short-term interest rates exert the most potent influence on the evolution of the volatile components of housing prices. One possible explanation for this is that low policy rates for a prolonged period tend to encourage bankers to take on more risk in lending. This transmission channel, labelled as the risk-taking channel, accounts for the gap to some extent between the forecast and the actual impact of monetary policy on the housing market and the overall economy. A looser monetary policy stance can also shift the preference of economic agents toward housing as theoretically and empirically corroborated in the context of choice between durable and nondurable goods. This transmission route is termed the preference channel. If these two channels are operative in the economy, policy makers need to react aggressively to rapid credit growth in order to stabilize the paths of housing prices and output. These findings provide meaningful implications for monetary policy implementation. First of all, central bankers should strive to identify in a timely fashion newly emerging and state-dependent transmission channels of monetary policy, and accurately assess the impact of policy decisions transmitted through these channels. Secondly, the intervention of central banks in the credit or housing market by adjusting policy rates can be optimal, relative to inaction, in circumstances where banks' risk-taking and the preference for housing are overly exuberant.
Tue, 19 Mar 2013 00:00:00 GMThttp://hdl.handle.net/10023/36812013-03-19T00:00:00ZNam, Min-HoTwo-stage threshold representationshttp://hdl.handle.net/10023/3572
We study two-stage choice procedures in which the decision maker first preselects the alternatives whose values according to a criterion pass a menu-dependent threshold, and then maximizes a second criterion to narrow the selection further. This framework overlaps with several existing models that have various interpretations and impose various additional restrictions on behavior. We show that the general class of procedures is characterized by acyclicity of the revealed "first-stage separation relation."
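The two-stage procedure can be sketched numerically. In this hypothetical operationalisation (the threshold rule, the tolerance parameter and both utility maps are illustrative assumptions, not taken from the paper), the menu-dependent threshold is the best first-criterion value in the menu minus a tolerance:

```python
def two_stage_choice(menu, u1, u2, tolerance):
    """Two-stage threshold procedure (illustrative sketch).

    Stage 1: keep every alternative whose first-criterion value comes
    within `tolerance` of the best available in the menu, a
    menu-dependent threshold since it moves with the menu's maximum.
    Stage 2: among the survivors, maximise the second criterion.
    """
    threshold = max(u1[a] for a in menu) - tolerance
    shortlist = [a for a in menu if u1[a] >= threshold]
    return max(shortlist, key=lambda a: u2[a])

# hypothetical criteria: u1 might be quality, u2 cheapness
u1 = {"a": 10, "b": 9, "c": 4}
u2 = {"a": 1, "b": 5, "c": 9}
# from the full menu, {a, b} pass stage 1 (threshold 8) and b wins on u2;
# from the menu {a, c}, the threshold is again 8 and only a survives
```

Because the threshold depends on the menu, removing an alternative can change which options reach the second stage, which is what the revealed "first-stage separation relation" tracks.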
Sun, 01 Sep 2013 00:00:00 GMThttp://hdl.handle.net/10023/35722013-09-01T00:00:00ZManzini, PaolaMariotti, MarcoTyson, ChristopherMulti-task research and research joint ventureshttp://hdl.handle.net/10023/3497
The paper shows that, whenever the completion of a research project requires the overcoming of more than one research obstacle, then Research Joint Ventures enjoy an intrinsic advantage relative to independent firms. This advantage, which has hitherto escaped attention in the RJV literature, relates to the RJV’s ability to organize research more efficiently than independent firms. The fact that RJVs can be both more profitable and yield higher expected net welfare than independent firms is surprising because it is derived from a model in which RJVs do not optimize over R&D investment. The paper exploits a basic result in systems reliability theory to establish the organizational superiority of RJVs.
Mon, 01 Apr 2013 00:00:00 GMThttp://hdl.handle.net/10023/34972013-04-01T00:00:00ZLa Manna, Manfredi M AStochastic choice and consideration setshttp://hdl.handle.net/10023/3457
We model a boundedly rational agent who suffers from limited attention. The agent considers each feasible alternative with a given (unobservable) probability, the attention parameter, and then chooses the alternative that maximises a preference relation within the set of considered alternatives. We show that this random choice rule is the only one for which the impact of removing an alternative on the choice probability of any other alternative is asymmetric and menu independent. Both the preference relation and the attention parameters are identified uniquely by stochastic choice data.
Forthcoming in Econometrica
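One way to operationalise the consideration-set rule sketched in the abstract (an illustrative reading, not the authors' code): an alternative is chosen only if it is itself considered and every preferred alternative in the menu is overlooked, with consideration draws independent across alternatives.

```python
def choice_probability(chosen, menu, gamma, ranking):
    """Probability that `chosen` is picked from `menu` when each
    alternative enters the consideration set independently with
    attention parameter gamma[alternative], and the agent then
    maximises the strict preference given by `ranking` (earlier
    entries preferred).  Illustrative reading of the model.
    """
    better = [b for b in menu if ranking.index(b) < ranking.index(chosen)]
    p = gamma[chosen]          # the chosen option must itself be considered...
    for b in better:
        p *= 1 - gamma[b]      # ...and every preferred option overlooked
    return p

gamma = {"x": 0.8, "y": 0.5, "z": 0.3}   # hypothetical attention parameters
ranking = ["x", "y", "z"]                 # x preferred to y preferred to z
menu = ["x", "y", "z"]
```

Note the asymmetry the abstract refers to: removing x changes the choice probability of y (y no longer needs x to be overlooked), while removing y leaves the probability of x unchanged.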
Fri, 01 Mar 2013 00:00:00 GMThttp://hdl.handle.net/10023/34572013-03-01T00:00:00ZManzini, PaolaMariotti, MarcoThe timing of asset trade and optimal policy in dynamic open economieshttp://hdl.handle.net/10023/3418
Using a standard open economy DSGE model, it is shown that the timing of asset trade relative to policy decisions has a potentially important impact on the welfare evaluation of monetary policy at the individual country level. If asset trade in the initial period takes place before the announcement of policy, a national policymaker can choose a policy rule which reduces the work effort of households in the policymaker’s country in the knowledge that consumption is fully insured by optimally chosen international portfolio positions. But if asset trade takes place after the policy announcement, this insurance is absent and households in the policymaker’s country bear the full consumption consequences of the chosen policy rule. The welfare incentives faced by national policymakers are very different between the two cases. Numerical examples confirm that asset market timing has a significant impact on the optimal policy rule.
Sun, 01 Dec 2013 00:00:00 GMThttp://hdl.handle.net/10023/34182013-12-01T00:00:00ZSenay, OzgeSutherland, AlanContracting institutions and developmenthttp://hdl.handle.net/10023/3307
The quality of contracting institutions has been thought to be of second-order importance next to the impact that good property rights institutions can have on long- run growth. Using a large range of proxies for each type of institution, we find a robust negative link between the quality of contracting institutions and long-run growth when we condition on property rights and a number of additional macroeconomic variables. Although the result remains something of a puzzle, we present evidence which suggests that only when property rights institutions are good do contracting institutions appear also to be good for development. Good contracting institutions can reduce long-run growth when property rights are not secured, presumably because the gains from the (costly) contracting institutions cannot be realised. This suggests that contracting institutions can benefit growth, and that the sequence of institutional change can matter.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10023/33072012-01-01T00:00:00ZTrew, Alex WilliamMarkets and how they work: a comparative analysis of fieldwork evidence on globalisation, corporate governance, institutional structure and competition in Russia, India and China, supported by a quantitative worldwide cross-section study of market anomalieshttp://hdl.handle.net/10023/3233
This thesis examines the efficacy of markets, using both quantitative and
qualitative methods in a complementary way. Specifically, it starts (in Part II) by using the results from a quantitative analysis of initial public offering
(IPO) underpricing as a barometer for corporate governance failure. This
quantitative work identified Russia, China and India as extreme outliers. The
data set used for this work was the cross-section sample of 45 countries developed by Loughran, Ritter & Rydqvist (2008). More broadly (in Part III), the thesis takes the lead of the quantitative evidence to examine, in a qualitative framework, possible sources of corporate governance failure in China, India and Russia. This was done categorically, under the headings of Globalisation, Corporate Governance, Institutional Structure and Competitive Strategy. Data were gathered by fieldwork in China, India and Russia, and these findings were then benchmarked against findings from further fieldwork in the United Kingdom.
This created a unique 56,000 word database, which was used for both cross-site and within-site analysis. This indicates how both unique attributes (e.g. rule of law, transparency, regulation, etc.), and common attributes (e.g. transition from a socialist/Marxist regime, market immaturity, asymmetric information etc.), combine to explain the different morphologies of corporate governance in these three countries.
The quantitative analysis (Part II) consists of exploratory data analysis (EDA) and econometric work. The exploratory data analysis establishes, through graphical means and regression techniques, a negative correlation between IPO underpricing and globalisation (as measured by the KOF index, see Dreher, 2006). Building on this, the subsequent econometric modelling suggests that economic, demographic and institutional factors are all significant determinants of IPO underpricing.
The qualitative analysis carried out in Part III of the thesis, builds on and extends the quantitative analysis of Part II. This is consistent with the multiple method approach, which combines both quantitative and qualitative analysis to achieve a synthesis of findings. The qualitative analysis uses evidence from semi-structured interviews with finance professionals and opinion makers, as well as evidence from additional primary and secondary sources, which was also made available through fieldwork contacts. This analysis emphasises the especial importance of board composition, information flows, the judicial system, the stock exchanges, and financial regulators for forms of corporate governance.
Fri, 30 Nov 2012 00:00:00 GMThttp://hdl.handle.net/10023/32332012-11-30T00:00:00ZDyrmose, MortenEmpirical investigations into stock market integration and risk monitoring of the emerging Chinese stock marketshttp://hdl.handle.net/10023/3208
The degree of stock market integration has important implications for cross-border portfolio
diversification, for which Mainland China has become an attractive destination,
particularly following the gradual opening-up of its A-share market to foreign institutional
investors. The first part of this thesis explores the various aspects of stock market integration
taking place in Mainland China, in an attempt to resolve the ambiguity between extant
empirical and anecdotal evidence on the issue. The evidence drawn from different statistical
perspectives collectively establishes that the Mainland Chinese stock market is in a process of
further integrating with a selection of the world’s developed stock markets. Nevertheless, such
increased integration should not preclude foreign institutional investors from diversifying into
the Chinese A-share market, as the current integration is far from being complete.
Adopting appropriate risk monitoring techniques for venturing into the volatile Chinese A-share market is another imperative issue faced by foreign institutional investors, whose risk
practices and economic capital are largely regulated by the Basel Accord. The second leg of
this thesis addresses this problem through an evaluation of various volatility forecasting
models for Value-at-Risk (VaR) reporting. Our results highlight the importance of adopting
heterogeneous risk monitoring models in different investment environments for the purpose
of regulatory compliance and optimal economic capital allocation.
Overall, the studies contained in this thesis should add knowledge to the burgeoning literature
on international financial integration at large, while serving the interests of institutional
investors and financial regulatory authorities alike.
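As a concrete illustration of the VaR reporting step evaluated in the thesis (a standard textbook computation, not the thesis's own models), a one-day parametric VaR can be obtained from any volatility forecast under an assumed normal return distribution:

```python
from statistics import NormalDist

def parametric_var(position_value, sigma_forecast, confidence=0.99):
    """One-day parametric Value-at-Risk under an assumed normal return
    distribution: the loss level exceeded with probability
    1 - confidence.  `sigma_forecast` is the one-day return volatility
    produced by whichever forecasting model is under evaluation
    (e.g. GARCH or EWMA); the figures below are hypothetical.
    """
    z = NormalDist().inv_cdf(confidence)   # one-sided normal quantile
    return position_value * sigma_forecast * z

# a 1,000,000 position with a 1.5% daily volatility forecast, at 99% confidence
var_99 = parametric_var(1_000_000, 0.015, 0.99)
```

The choice of volatility model enters only through `sigma_forecast`, which is why backtesting competing forecasting models against realised losses, as in the thesis, is the natural way to compare them for regulatory reporting.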
Tue, 19 Jun 2012 00:00:00 GMThttp://hdl.handle.net/10023/32082012-06-19T00:00:00ZChen, XingA study of financial distress and R&D in Chinese enterpriseshttp://hdl.handle.net/10023/3204
Over the past 30 years, the Chinese economy has been going through a complex transformation from a centrally planned towards a market economy. The reform of enterprises has played an important part in this transformation, alongside macroeconomic reforms and changes in the institutional framework.
The thesis examines the implications of macroeconomic change, ownership structure and comprehensive institutional framework reform for Chinese enterprises' survival and R&D activities.
I study the impact of both microeconomic factors and the macro economy on
the financial distress of Chinese listed companies over a period of massive economic
transition, 1995 to 2006. Using hazard regression analysis, I find substantial effects of
firm level covariates (age, size, cash flow and gearing) on financial distress, but also
significant roles for macroeconomic stability and institutional effects. Business exits in my data on Chinese quoted firms are vanishingly rare, arguably because of active state protection for failing firms. I investigate firms' innovation activity and efficiency across different ownership
sectors. The influence of ownership on R&D investment and efficiency is estimated, using a productivity frontier function, for a sample of large and medium-sized Chinese industrial
enterprises from 2000-2007. I find that the presence of state ownership is positively related to R&D investment, but negatively related to R&D performance. Foreign firms are technical leaders in Chinese industries and have an advantage in R&D efficiency. My results also show significant cross-industry differences in R&D effort and technical level. These findings indicate that the firms possessing more innovation resources and government support are not the ones performing better technically.
I extend my study to a more general mixed duopoly model in which a welfare-maximizing public firm competes with a profit-maximizing private firm in R&D. I assume that different operating strategies influence firms' tolerance of R&D spillovers, which plays a key role in their R&D investment levels and technological efficiency. I prove that a public firm is more likely to share the fruits of its R&D, and that its higher R&D investment is accompanied by lower efficiency.
Overall, the effects of the macroeconomy on firm survival, and of ownership structure on firm innovation, are channels through which to understand Chinese economic reform. Because conditions in China were in many ways similar to those in other transition economies, these results provide important information about the process of economic transformation more generally.
Fri, 30 Nov 2012 00:00:00 GMT http://hdl.handle.net/10023/3204 2012-11-30T00:00:00Z Han, Jie

On taxes, labour market distortions and product market imperfections http://hdl.handle.net/10023/3053
This thesis aims to provide new and useful insights into the effects that various
tax, labour and product market reforms have on overall economic performance.
It also aims to provide insights into optimal monetary
and fiscal policy behaviour within an economy characterized by various real
labour market frictions.
We analyze the benefits of tax reforms and their effectiveness relative to product
or other labour market reforms. A general equilibrium model with imperfect
competition, wage bargaining and different forms of tax distortions is applied in
order to analyze these issues. We find that structural reforms imply short run costs
but long run gains; that the long run gains outweigh the short run costs; and that
the financing of such reforms will be the main stumbling block. We also find that
the effectiveness of various reform instruments depends on the policy maker's ultimate
objective. More precisely, tax reforms are more effective for welfare gains,
but market liberalization is more valuable for generating employment.
In order to advance our understanding of the tax and product market reform
processes, we then develop a dynamic stochastic general equilibrium model which
incorporates search-matching frictions, costly firing and endogenous job destruction
decisions, as well as a distortionary progressive wage tax and a flat payroll tax.
We confirm the negative effects of marginal tax distortions on the overall economic
performance. We also find a positive effect of an increase in the wage tax progressivity
and product market liberalization on employment, output and consumption.
Following a positive technology shock, the volatility of employment, output and
consumption turns out to be lower in the reformed economy, whereas the impact
effect on inflation is more pronounced. Following a positive government spending
shock the volatility of employment, output and consumption is again lower in the
reformed economy, but the inflation response is stronger over the whole adjustment
path. We also find detrimental effects on employment and output of a tax reform
which keeps the marginal tax wedge unchanged by partially offsetting a decrease
in the payroll tax by an increase in the wage tax rate. If this reform is anticipated
one period in advance, the negative effects persist along the entire transition path.
We investigate the optimal monetary and fiscal policy implications of the
New-Keynesian setup enriched with search-matching frictions. We show that the optimal policy features deviations from strict price stability, and that the Ramsey
planner uses both inflation and taxes in order to fully exploit the benefits of the
productivity increase following a positive productivity shock. We also find that the
optimal tax rate and government liabilities inherit the time series properties of the
underlying shocks. Moreover, we identify a certain degree of overshooting in inflation and tax rates following a positive productivity shock, and a certain degree
of undershooting following a positive government spending shock, as a consequence
of the policy maker's assumed commitment.
Fri, 01 Jan 2010 00:00:00 GMT http://hdl.handle.net/10023/3053 2010-01-01T00:00:00Z Bokan, Nikola

An economic and business strategy analysis of joint ventures between Greek enterprises and enterprises in the Balkan countries and Russia : from the Greek parent company perspective http://hdl.handle.net/10023/2963
This thesis analyses joint ventures which have been established
between Greek enterprises and enterprises from Albania, Bulgaria, Romania
and Russia. An international joint venture (IJV) is an enterprise established
between two or more companies, one of which exercises its entrepreneurial
activities in a foreign country.
The core set of questions that this thesis addresses concerns the motives
for the establishment of joint ventures, partner selection criteria, control and
conflict inside a joint venture, stability, and performance. Another issue
addressed is that of the problems which joint venturers face, as identified by
Greek businessmen and academics.
This framework is deployed upon an extensive body of primary source
data gathered in 1994 by fieldwork methods, using an administered
questionnaire largely within the Greek parent companies. Our research
relates to evidence on 44 Greek enterprises, groups of companies, or
individuals who established joint ventures with Eastern European partners
after 1989. The questionnaire design is based on the notion that the
expansion of the domestic boundaries of the firm abroad, and its decision to
establish an IJV are the outcomes of strategic, financial and country specific
motives.
The key results of the thesis are that successful joint ventures in
Eastern Europe have the following characteristics:
1. Dominant control by the Greek partner over the IJV, when the Eastern
European partner is a bureaucrat.
2. Low perceived conflict as regards intensity and frequency over dimensions
like transfer of knowledge.
3. High stability as measured by the percentage increase in share capital held
by the Greek partner and by resistance to transformation to wholly owned
subsidiary status of the IJV.
4. Good perceived financial performance.
5. Evolution of the IJV such that the Eastern European partner increasingly
takes on a managerial role and becomes attuned to managerial modes of
behaviour.
6. Shared decision making between partners to the IJV beyond the infant
stage.
Wed, 01 Jan 1997 00:00:00 GMT http://hdl.handle.net/10023/2963 1997-01-01T00:00:00Z Ioannis-Dionysios, Salavrakos

The impact of industrialization on adult mortality in Eastern Scotland, c. 1810-1861 http://hdl.handle.net/10023/2917
This study investigates the links between economic and demographic variables by
examining the impact of industrialization on adult mortality in eastern Scotland, c. 1810-61.
Using the concept of the urban hierarchy, sixteen parishes in the counties of Angus and
Fife were selected to represent different degrees of industrialization. Patterns of adult
mortality in these parishes between 1810 and 1854 are then examined using data on burials
from the parish registers. The results are checked by comparing them with the results
obtained from an analysis of vital registration data on deaths for the period 1855-61. Thus
overall trends in adult mortality are identified and then disaggregated by age, sex, cause of
death and occupation.
The results show that adult mortality was generally higher in the most industrialized
areas. Furthermore, rates in these parishes generally increased over the period whilst in the
less industrialized areas they fell. Overall most people died from infectious diseases but
deaths from these causes (including tuberculosis) fell over the period. The increase in
mortality appears to be in part due to a rise in deaths from respiratory diseases (especially
amongst textile workers in the main industrial centres) and food- and water-borne illnesses.
This suggests that industrialization had a negative impact on adult mortality rates, causing a
short-term rise in mortality in the early to mid-nineteenth century. This was in part due to
the direct effect industrialization had, with the shift towards textile employment probably
leading to increased mortality from respiratory diseases especially amongst factory
workers. The impact of industrialization also appears to have operated indirectly via the
impetus it gave to urbanization and changes in the spatial distribution of the population that
resulted in worsening sanitary conditions and increased exposure to infection.
Mon, 01 Jan 1996 00:00:00 GMT http://hdl.handle.net/10023/2917 1996-01-01T00:00:00Z Ball, Emma

Measures of solvency in the regulation of the UK life assurance industry http://hdl.handle.net/10023/2892
The problem of designing appropriate solvency regulations is addressed with respect
to the U. K. life assurance industry using various theoretical and methodological
techniques. These alternative approaches to the measurement of insurer solvency
are explored in order to provide a framework for assessing regulations. Reviews of
the current insurance regulatory environment as well as an extensive statistical and
economic analysis of the life assurance industry provide a practical backdrop to
subsequent model building.
Building on these reviews, a 'Monte-Carlo' simulation model of an insurer portfolio
is constructed to demonstrate additional considerations relevant to solvency
regulation. The hypothetical insurance company is assumed to maximise the
expected utility of 'ultimate surplus', which is taken as an indicator of end-of-period
wealth. Five asset classes are used and liabilities are assumed fixed. The simulated
run-off performance of the portfolio is evaluated in terms of the probability of
insolvency, demonstrating a 'U'-shaped relationship between the risk preference of
the insurer and the insolvency probability.
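A minimal version of such a 'Monte-Carlo' run-off exercise can be sketched as follows; the asset-class moments, the ten-year horizon, the fixed liability level, and the function name are all illustrative assumptions, not the thesis's actual calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def insolvency_probability(weights, means, cov, liability, horizon=10, n_sims=10_000):
    """Estimate P(terminal assets < liability) for a run-off portfolio.

    weights: allocation across the asset classes (sums to 1)
    means, cov: annual return means and covariance of the asset classes
    liability: fixed liability falling due at the end of the horizon
    """
    insolvent = 0
    for _ in range(n_sims):
        assets = 1.0  # initial asset value, normalized
        for _ in range(horizon):
            returns = rng.multivariate_normal(means, cov)
            assets *= 1.0 + weights @ returns  # compound one year's portfolio return
        if assets < liability:
            insolvent += 1
    return insolvent / n_sims

# Five hypothetical asset classes, mirroring the abstract's five-class setup
w = np.array([0.3, 0.3, 0.2, 0.1, 0.1])
mu = np.array([0.05, 0.06, 0.08, 0.03, 0.04])
cov = np.diag([0.05, 0.08, 0.18, 0.02, 0.10]) ** 2  # independent classes for simplicity
p = insolvency_probability(w, mu, cov, liability=1.2)
```

Repeating this estimate across portfolios chosen by insurers with different risk preferences is what traces out the 'U'-shaped relationship described above.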
Implications for the design of regulatory constraints are also assessed with respect to
the simulations. In particular, the contrast between ex ante and ex post measures of
insurer solvency is highlighted, with the conclusion that current regulations
might gain further insight into the underlying solvency performance of insurance
companies if they were to use ex ante solvency measures. This subsequent policy
prescription is qualified by two factors: first, that the value of simulations and
forecasting as an ex ante measure of performance is only as good as the models used
to forecast ex ante; and second, that any proposed regulatory shift must be assessed
within a cost-benefit analysis. Overall, the simulation analysis suggests that current
regulations provide an incomplete picture of the solvency performance of the U. K.
life assurance industry.
Fri, 01 Jan 1999 00:00:00 GMT http://hdl.handle.net/10023/2892 1999-01-01T00:00:00Z Gully, Benjamin R.

The determinants of competitive advantage: a critical appraisal http://hdl.handle.net/10023/2887
The thesis deals with the means whereby a firm can gain
a competitive advantage over its rivals. After considering
how this issue is dealt with in the management literature,
the thesis focuses on two possible routes to competitive
advantage. The first is largely internal to the firm, and
concerns the design of managerial contracts to provide
managers with the incentives to act in the best interests of
shareholders. The second route is external, involving
strategic market moves in relation to rival firms. These two
possible routes to competitive advantage are appraised in
the light of recent theoretical developments in principal-agent
analysis (the internal route) and the new industrial
economics (the external route). The final section of the
thesis is empirical and deals with the share price
experience of the top 100 U. K. companies since 1970. The
econometric notion of cointegration is employed to test for
the existence of sustained competitive advantage. The
tentative conclusion reached is that while companies may be
able to achieve a sustained competitive advantage, the
compensation contracts employed have not been a successful
means of obtaining such advantage. The suggestion is that
external routes to competitive advantage might be more
effective.
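The cointegration test mentioned above can be illustrated with a minimal two-step Engle-Granger sketch. A proper test would compare the residual's unit-root statistic against ADF critical values, which this toy check omits, and the data below are simulated, not share prices:

```python
import numpy as np

def engle_granger_sketch(y, x):
    """Two-step Engle-Granger check for cointegration between y and x.

    Step 1: OLS of y on x gives the candidate long-run relationship.
    Step 2: if the residuals revert to zero (AR(1) coefficient well
    below 1), the two series share a common stochastic trend.
    Returns (beta, rho): the long-run slope and residual persistence.
    """
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    # AR(1) persistence of the residual: rho near 1 suggests no cointegration
    rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
    return coef[1], rho

# toy example: x is a random walk, y tracks it plus stationary noise
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500))
y = 2.0 * x + rng.normal(size=500)
beta, rho = engle_granger_sketch(y, x)
```

In the thesis's application, persistent deviations of a firm's share price from the cross-sectional relationship would be the signature of sustained competitive advantage.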
Tue, 01 Jan 1991 00:00:00 GMT http://hdl.handle.net/10023/2887 1991-01-01T00:00:00Z Allan, Andrew C.

The political economy of discrimination and underdevelopment in Rhodesia, with special reference to African workers, 1940-73 http://hdl.handle.net/10023/2812
The study begins by examining the orthodox theory of discrimination as a possible model with which to evaluate the socio-economic basis of race relations in Rhodesia. It is argued that it is inadequate because the theory is not grounded in the settler-colonial system, its historiography and the peripheral capitalist social formation in Rhodesia, in which a complex articulation between modes of production is found.
A critique is then undertaken of the principal theories hitherto used to explain the course of Rhodesian economic development, namely dualism and those termed neo-Marxist. It is argued that Barber's modification of the dualistic model of W. A. Lewis is deficient, particularly because of the absence of a theory of 'primitive accumulation' and also the lack of an analysis of the political economy of the 'labour transfer', the basis of peripheral capitalist development. Arrighi's 'neo-Marxist' analysis of Rhodesian development is also criticized for its inadequate theory of 'primitive accumulation' and the lack of attention paid to the labour mobilization process. An analytical alternative is proposed, based on an explanation of 'primitive capital accumulation' and the specific forms of labour utilization found in Rhodesia in association with particular modes of production existent during the period under review. An attempt is made to specify these modes and the social relations related thereto.
The labour structures found in the economic system are then examined in the context of the income distribution pattern, the class structure of the social formation and the primary 'dynamic' of Rhodesian postwar development: industrialization. It is suggested that changes in labour policy in various modes of production were essentially concerned with ensuring the maintenance of a system of cheap labour whereby employers acquired labour-power below the cost of its own reproduction. The development of peripheral capitalism under conditions of settler colonialism required changes to labour policy. These modifications left the basic structures of the socio-economic system intact, although they gave rise to substantial pressures for change, e.g. from unions and African nationalism. The State has been particularly significant in containing these socio-economic and political pressures, especially in the field of labour policy.
An attempt is made to identify the changes in labour mobilization that have taken place, to assess their impact on the nature of discrimination and underdevelopment, and to point out some of the more important features of the class formation process that have resulted from the development of capitalism in Rhodesia.
Wed, 01 Jan 1975 00:00:00 GMT http://hdl.handle.net/10023/2812 1975-01-01T00:00:00Z Clarke, Duncan Godfrey

Sequential action and beliefs under partially observable DSGE environments http://hdl.handle.net/10023/2744
This paper introduces a classification of DSGEs from a Markovian perspective, and positions the class of POMDPs (Partially Observable Markov Decision Processes) at the center of a generalization of linear rational expectations models. The analysis of the POMDP class builds on previous developments in dynamic control of linear systems, and derives a solution algorithm by formulating an equilibrium as a fixed point of an operator that maps what we observe into what we believe.
Revised and resubmitted at Computational Economics
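The fixed-point idea, an operator mapping observations into beliefs iterated to convergence, can be illustrated with a standard special case: the steady-state Kalman filter covariance of a linear state space. This is an illustration of the general principle, not the paper's actual operator:

```python
import numpy as np

def steady_state_belief_cov(A, C, Q, R, tol=1e-10, max_iter=10_000):
    """Iterate the Riccati map to a fixed point.

    The map sends the current belief covariance P (uncertainty about the
    hidden state) through one predict-update cycle of the Kalman filter;
    its fixed point is the stationary belief uncertainty implied by what
    the agent observes. A, C, Q, R are the state transition, observation,
    state noise and observation noise matrices of a linear state space.
    """
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(max_iter):
        # predict: propagate uncertainty through the state dynamics
        P_pred = A @ P @ A.T + Q
        # update: condition on the new observation
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        P_new = (np.eye(n) - K @ C) @ P_pred
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new
    return P

# scalar example: persistent hidden state, noisy observation
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
P_star = steady_state_belief_cov(A, C, Q, R)
```

The paper's equilibrium concept replaces this filtering map with an operator defined over the whole model, but the solution method is the same in spirit: iterate the observation-to-belief map until it stops moving.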
Thu, 01 Dec 2011 00:00:00 GMT http://hdl.handle.net/10023/2744 2011-12-01T00:00:00Z Kim, Seong-Hoon

Foreign aid, economic development and the indebtedness problem, with special reference to the Sudan http://hdl.handle.net/10023/2630
In the task of promoting both economic growth and development
of the developing countries, both theory and development experience
suggest that international co-operation in a broad sense has a vital
role to play. For most developing countries, foreign trade is, and is
likely to remain, the most important ingredient of such co-operation,
although in the absence of a so-called new international economic
order, its benefits may be smaller than most developing countries
consider equitable. But despite the overwhelming importance of
trade, resource transfers from the more advanced and rich countries
have a significant and, in many cases, a decisive role to play
in augmenting economic development. Resource transfers include
foreign investment, financial aid and technical assistance.
The present study principally examines the role of foreign aid -
including both financial and technical assistance - in economic
development with particular reference to the Sudan. This focus on aid
is not intended to under-rate the significance of other forms of
co-operation between advanced and developing countries in promoting
the latter's development. This study falls into three main parts which
together cover most of the principal issues related to foreign aid, and
examine the situation in the Sudan.
Part I is a critical review of the theoretical literature on aid
and of the controversies that have arisen in the light of the different
empirical investigations which have been attempted to establish its
impact upon recipient economies. It also examines the rationale
behind the provision of aid and the requirements which are to be
satisfied if it is to be used effectively.
Part II is an attempt to apply the conceptual framework of the
previous part to an elucidation of the role of aid in the Sudan's economic
development. It begins with a brief description of the structure of the
Sudanese economy and a survey of the trends in available resources.
In the light of this analysis, a number of key issues are examined:
in particular the source, composition and end-use of aid funds; the
significance of Arab capital; the structure of aid management, and the
role of technical assistance in supplementing domestic skills. Apart
from these largely qualitative appraisals, the study also attempts to
apply Weisskopf's behavioural model to evaluate the contribution of
foreign aid to the Sudanese economy. Part II includes an examination
of the limitations of such econometric studies.
Part III examines the so-called debt problem of developing
countries and its extent. Since foreign aid is not wholly provided in
grant form, its inflow into developing countries has been accompanied
by a growing debt. Part III contains a critical appraisal of the
indebtedness issue of developing countries in the light of recent debates.
Its prime concern is, however, to identify the causes and to
demonstrate the immediate as well as the long-term implications of
debt difficulties. This is followed by a scrutiny of the debt position
of the Sudan, using for this purpose both published and unpublished
data.
Finally, a concluding section summarizes some of the most
important propositions arrived at in the dissertation.
Thu, 01 Jan 1981 00:00:00 GMT | http://hdl.handle.net/10023/2630 | Abuel Nour, Abuel Gasim Mohamed

Compensatory and noncompensatory decision strategies in a monopolistic screening model | http://hdl.handle.net/10023/2602
A monopolist supplies a multi-attribute good and does not know whether the consumer makes or avoids tradeoffs between attributes. We illustrate a form of exploitation to which the tradeoff-avoiding consumer is vulnerable and draw some policy-relevant conclusions.
Submitted to the B.E. Journal of Theoretical Economics
Wed, 23 Nov 2011 00:00:00 GMT | http://hdl.handle.net/10023/2602 | Papi, Mauro

The timing of asset trade and optimal monetary policy in dynamic open economies | http://hdl.handle.net/10023/2601
Using a standard open economy DSGE model, it is shown that the timing of asset trade relative to policy decisions has a potentially important impact on the welfare evaluation of monetary policy at the individual country level. If asset trade in the initial period takes place before the announcement of policy, a national policymaker can choose a policy rule which reduces the work effort of households in the policymaker’s country in the knowledge that consumption is fully insured by optimally chosen international portfolio positions. But if asset trade takes place after the policy announcement, this insurance is absent and households in the policymaker’s country bear the full consumption consequences of the chosen policy rule. The welfare incentives faced by national policymakers are very different between the two cases. Numerical examples confirm that asset market timing has a significant impact on the optimal policy rule.
Fri, 01 Jan 2010 00:00:00 GMT | http://hdl.handle.net/10023/2601 | Senay, Ozge; Sutherland, Alan

Sequential action and beliefs under partially observable DSGE environments | http://hdl.handle.net/10023/2599
This paper introduces a classification of DSGEs from a Markovian perspective, and positions the class of POMDPs (Partially Observable Markov Decision Processes) at the center of a generalization of linear rational expectations models. The analysis of the POMDP class builds on previous developments in dynamic control of linear systems, and derives a solution algorithm by formulating an equilibrium as a fixed point of an operator that maps what we observe into what we believe.
Sun, 01 Jan 2012 00:00:00 GMT | http://hdl.handle.net/10023/2599 | Kim, Seong-Hoon

Satisficing choice procedures | http://hdl.handle.net/10023/2595
Standard choice theory assumes that the budget set is known to the decision-maker in advance. In contrast, we develop a model in which alternatives are examined sequentially and decision-makers exhibit `satisficing' attitudes. We axiomatically characterize our model and investigate behavioral definitions of satisfaction, attention, and preference under various choice domains. Moreover, we relate our framework to several well-known existing models.
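The sequential, threshold-based search that the abstract describes can be illustrated with a minimal sketch. The function name, the numeric utilities and the fallback rule for when nothing satisfices are all illustrative assumptions here, not the paper's axiomatic model:

```python
def satisficing_choice(alternatives, utility, aspiration):
    """Examine alternatives in the order given and return the first one
    whose utility meets the aspiration level; if none satisfices, fall
    back to the best alternative examined (one common convention)."""
    best = None
    for a in alternatives:
        if utility(a) >= aspiration:
            return a  # stop searching: this alternative satisfices
        if best is None or utility(a) > utility(best):
            best = a
    return best

# With aspiration level 4, the second alternative (utility 5) is chosen
# even though a better one (utility 9) would have come later.
utilities = {"a": 3, "b": 5, "c": 9, "d": 7}
print(satisficing_choice(["a", "b", "c", "d"], utilities.get, 4))  # b
```

Note that the chosen alternative depends on the order of examination, which is one way such a procedure can depart from standard choice axioms.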
Financial support from the Scottish Institute for Economic Research is gratefully acknowledged.
Sat, 01 Sep 2012 00:00:00 GMT | http://hdl.handle.net/10023/2595 | Papi, Mauro

Calibration of the chaotic interest rate model | http://hdl.handle.net/10023/2568
In this thesis we establish a relationship between the Potential Approach to interest rates and the Market Models. This relationship allows us to derive the dynamics of forward LIBOR rates and forward swap rates by modelling the state price density, which means that we can secure the arbitrage-free condition and the positive-interest-rate feature when we model the volatility drifts of those dynamics.
On the other hand, we develop the Potential Approach, particularly the Hughston-Rafailidis Chaotic Interest Rate Model. The earlier argument enables us to infer that the Chaos Models belong to the Stochastic Volatility Market Models.
In particular, we propose One-variable Chaos Models with the application of exponential polynomials. This maintains the generality of the Chaos Models and performs well for yield curves compared with the Nelson-Siegel Form and the Svensson Form. Moreover, we calibrate the One-variable Chaos Model to European Caplets and European Swaptions. We show that the One-variable Chaos Models can reproduce the humped shape of the term structure of caplet volatility and also the volatility smile/skew curve. The calibration errors are small compared with those of the Lognormal Forward LIBOR Model, the SABR Model, traditional Short Rate Models, and other models under the Potential Approach. After the calibration, we introduce some new interest rate models under the Potential Approach. In particular, we suggest a new framework in which the volatility drifts can be indirectly modelled from the short rate via the state price density.
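The abstract benchmarks fitted yield curves against the Nelson-Siegel Form; for reference, that form can be sketched as follows (the parameter values below are hypothetical, chosen only to show the limiting behaviour):

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel zero-coupon yield at maturity tau (in years):
    a level term beta0 plus slope and curvature terms whose loadings
    are driven by the decay parameter lam."""
    x = tau / lam
    loading = (1 - math.exp(-x)) / x  # slope loading: -> 1 as tau -> 0
    return beta0 + beta1 * loading + beta2 * (loading - math.exp(-x))

# Short-maturity yields approach beta0 + beta1; long-maturity yields
# approach the level parameter beta0.
for tau in (0.25, 2, 10, 30):
    print(tau, round(nelson_siegel(tau, 0.04, -0.02, 0.01, 1.8), 4))
```

The Svensson Form mentioned alongside it adds a second curvature term with its own decay parameter.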
Tue, 30 Nov 2010 00:00:00 GMT | http://hdl.handle.net/10023/2568 | Tsujimoto, Tsunehiro

Choice by lexicographic semiorders | http://hdl.handle.net/10023/2413
In Tversky’s (1969) model of a lexicographic semiorder, a preference is generated via the sequential application of numerical criteria by declaring an alternative x better than an alternative y if the first criterion that distinguishes between x and y ranks x higher than y by an amount exceeding a fixed threshold. We generalize this idea to a fully fledged model of boundedly rational choice. We explore the connection with sequential rationalizability of choice (Apesteguia and Ballester 2010, Manzini and Mariotti 2007) and we provide axiomatic characterizations of both models in terms of observable choice data.
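The preference rule stated in the abstract is concrete enough to sketch directly; the criteria, threshold and data below are hypothetical illustrations, not the paper's characterization:

```python
def lexicographic_semiorder_prefers(x, y, criteria, threshold):
    """Return True if x is declared better than y: the first criterion
    whose values for x and y differ by more than the threshold decides,
    in favour of the higher-ranked alternative (Tversky's rule)."""
    for c in criteria:
        diff = c(x) - c(y)
        if abs(diff) > threshold:  # this criterion distinguishes x and y
            return diff > 0
    return False  # no criterion separates them by more than the threshold

# Two criteria, threshold 1: the price-score gap (2.5) decides before
# quality is ever consulted, so x wins despite its lower quality.
x = {"price_score": 9.0, "quality": 2.0}
y = {"price_score": 6.5, "quality": 8.0}
criteria = [lambda a: a["price_score"], lambda a: a["quality"]]
print(lexicographic_semiorder_prefers(x, y, criteria, 1.0))  # True
```

Because sub-threshold differences are ignored, the induced preference can be intransitive, which was Tversky's original point.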
Sun, 01 Jan 2012 00:00:00 GMT | http://hdl.handle.net/10023/2413 | Manzini, Paola; Mariotti, Marco

Disinflation and exchange-rate pass-through | http://hdl.handle.net/10023/2296
This paper analyzes exchange-rate dynamics following a money-based disinflation under different degrees of exchange-rate pass-through. Using a microfounded dynamic general equilibrium model with imperfect competition and nominal rigidities, it is shown that a monetary slowdown causes an appreciation of the exchange rate and a short-run fall in employment. Varying the degree of pass-through, however, significantly alters the magnitudes of these effects. As the degree of pass-through is reduced, the extent of the short-run appreciation of the exchange rate increases and the short-run impact of the disinflation on employment falls.
Tue, 01 Apr 2008 00:00:00 GMT | http://hdl.handle.net/10023/2296 | Senay, Ozge

Interest rate rules and welfare in open economies | http://hdl.handle.net/10023/2286
This paper analyses the welfare performance of a set of five alternative interest rate rules in an open economy stochastic dynamic general equilibrium model with nominal rigidities. A rule with a lagged interest rate term, high feedback on inflation and low feedback on output is found to yield the highest welfare for a small open economy. This result is robust across different degrees of openness, different sources of home and foreign shocks, alternative foreign monetary rules and different specifications for price-setting behaviour. The same rule emerges as both the Nash and cooperative equilibria in a two-country version of the model.
Tue, 01 Jul 2008 00:00:00 GMT | http://hdl.handle.net/10023/2286 | Senay, Ozge

Fiscal policy and learning | http://hdl.handle.net/10023/2260
Using the standard real business cycle model with lump-sum taxes, we analyze the impact of fiscal policy when agents form expectations using adaptive learning rather than rational expectations (RE). The output multipliers for government purchases are significantly higher under learning, and fall within empirical bounds reported in the literature (in sharp contrast to the implausibly low values under RE). Effectiveness of fiscal policy is demonstrated during times of economic stress like the recent Great Recession. Finally it is shown how learning can lead to dynamics empirically documented during episodes of "fiscal consolidations."
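Adaptive learning replaces model-consistent expectations with a recursive updating rule. A minimal constant-gain sketch is below; the gain value and the data stream are illustrative, and the paper's agents update full forecasting regressions (typically by recursive least squares) rather than a single mean:

```python
def constant_gain_update(belief, observation, gain):
    """One adaptive-learning step: revise the belief part of the way
    toward the latest observation, with the gain setting the speed."""
    return belief + gain * (observation - belief)

# Starting from a belief of 0, repeated observations of 1.0 pull the
# belief toward the data-generating value at a geometric rate.
belief = 0.0
for obs in [1.0] * 50:
    belief = constant_gain_update(belief, obs, gain=0.1)
print(round(belief, 3))  # 0.995
```

Under rational expectations the belief would jump to 1.0 immediately; the gradual revision is what generates the transitional dynamics the paper exploits.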
Sun, 01 Jan 2012 00:00:00 GMT | http://hdl.handle.net/10023/2260 | Mitra, Kaushik; Evans, George W.; Honkapohja, Seppo

Endogenous price flexibility and optimal monetary policy | http://hdl.handle.net/10023/2240
Much of the literature on optimal monetary policy uses models in which the degree of nominal price flexibility is exogenous. There are, however, good reasons to suppose that the degree of price flexibility adjusts endogenously to changes in monetary conditions. This paper extends the standard New Keynesian model to incorporate an endogenous degree of price flexibility. The model shows that endogenising the degree of price flexibility tends to shift optimal monetary policy towards complete inflation stabilisation, even when shocks take the form of cost-push disturbances. This contrasts with the standard result obtained in models with exogenous price flexibility, which show that optimal monetary policy should allow some degree of inflation volatility in order to stabilise the welfare-relevant output gap.
Mon, 01 Nov 2010 00:00:00 GMT | http://hdl.handle.net/10023/2240 | Senay, Ozge; Sutherland, Alan

A producer theory with business risks | http://hdl.handle.net/10023/2173
In this paper, we consider a producer who faces uninsurable business risks due to incomplete spanning of asset markets over stochastic goods market outcomes, and examine how the presence of these uninsurable business risks affects the producer's optimal pricing and production behaviours. We find three key, inter-related results: (1) optimal prices in goods markets comprise a 'markup' reflecting the extent of market power and a 'premium' determined by the shadow price of the risks; (2) price inertia of the kind we observe in the data can be explained by the joint working of the risk-neutralization motive and the marginal-cost-equalization condition; (3) the relative responsiveness of risk neutralization and marginal cost equalization at the optimum is central to the cyclical variation of markups, providing a consistent explanation for both procyclical and countercyclical movements. Through these results, the proposed theory of the producer carries important implications, both micro and macro, and both empirical and theoretical.
Sun, 01 Jan 2012 00:00:00 GMT | http://hdl.handle.net/10023/2173 | Kim, Seong-Hoon; Moon, Seongman

Essays on responsible investment, research output analyses and investment performance evaluation | http://hdl.handle.net/10023/2130
This thesis includes four essays, each of which comprises two original contributions. Based on these eight contributions, we add knowledge or understanding to the literatures on responsible investment, research output analyses and investment performance evaluation. First, we develop the first generic, reliable approach to benchmarking research area output (e.g. journal articles or books), which we expect to appeal to governments' increasing interest in monitoring their research funding investments. Second, we apply this approach to the research area of responsible investment, which is currently backed by an industry of about $7 trillion. We find that the (quality-weighted) quantity of responsible investment's research output is statistically significantly under-proportional compared with peer research areas. One of several explanations for this result lies in the intransparency of the current responsible investment literature. Third, we develop an approach to research synthesis which improves a research area's transparency without many of the weaknesses of conventional literature reviews; we title this approach Influential Literature Analysis (ILA). Fourth, we apply ILA to the relatively intransparent responsible investment literature. One of our many findings is that responsible assets, with their ceteris paribus under-proportional total risk, might appear artificially unattractive when assessed by the most common investment performance measure, the Sharpe ratio, which is biased in favour of high-risk assets due to its currently unsolved negative excess return problem. Fifth, we develop a generic, reliable and robust solution to the negative Sharpe ratio problem, which investors can customise according to their specific increasing incremental disutility-of-risk functions. Sixth, we generalise our solution to the negative Sharpe ratio problem, which allows us to solve the negative (excess) return problems of over twenty other investment performance measures. Seventh, we develop independent, statistically sophisticated tests of the sufficiency and quality of suggested solutions to the negative Sharpe ratio problem, since all existing tests a priori assume the superiority of a specific solution. In contrast, our tests are based only on the Sharpe ratio itself and two basic axioms of investment theory, and are hence conceptually unrelated to our solutions. Eighth, we apply these tests, using two different data samples, to all existing solutions to the negative Sharpe ratio problem. We find that investors are best advised to use our solutions, the H⁶-, H⁷- or H⁸-measure, in their evaluation of investment performance from a Sharpe-ratio-like perspective.
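The negative excess return problem mentioned in this abstract is easy to demonstrate: when excess returns are negative, dividing by a larger standard deviation yields a less negative, hence higher, Sharpe ratio, so extra risk is rewarded. The thesis's H-measures are not reproduced here; the sketch below shows the problem and one published remedy, Israelsen's (2005) exponent adjustment:

```python
def sharpe(excess_return, stdev):
    """Ordinary Sharpe ratio: excess return per unit of total risk."""
    return excess_return / stdev

def israelsen_sharpe(excess_return, stdev):
    """Israelsen's (2005) modification: the exponent ER/|ER| flips the
    divisor to 1/stdev when the excess return is negative, so higher
    risk is penalised; for positive excess returns it reduces to the
    ordinary Sharpe ratio."""
    if excess_return == 0:
        return 0.0
    return excess_return / stdev ** (excess_return / abs(excess_return))

# Both funds lost 2% against the risk-free rate, but fund B was twice
# as volatile. The plain ratio ranks the riskier fund B higher
# (-0.1 > -0.2); the modified ratio restores the intuitive ordering
# (-0.002 > -0.004).
print(sharpe(-0.02, 0.10), sharpe(-0.02, 0.20))
print(israelsen_sharpe(-0.02, 0.10), israelsen_sharpe(-0.02, 0.20))
```

Israelsen's adjustment is only one of several proposed fixes; tests like those described in the abstract are what adjudicate between such candidates.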
Fri, 01 Jan 2010 00:00:00 GMT | http://hdl.handle.net/10023/2130 | Hoepner, Andreas G. F.

Three empirical essays on determinants of industry and investment location patterns in the context of economic transition and regional integration: the evidence from Central and Eastern European countries | http://hdl.handle.net/10023/2098
The factor determinants of industry and investment location patterns in transition economies can be expected to differ from those frequently observed in developed countries. Historically, centrally planned economies have suffered from inefficient industrial policies that are generally assumed to have had distortive effects on the spatial location of industry. The process of economic transition and regional integration that followed the demise of socialist structures is assumed to have subsequently affected the geographical distribution of economic activities within and between countries of the region. Given the above, this thesis capitalises on this quasi-natural experiment setting to further explore industry and investment location decisions in transition economies.

In particular, the research presented here follows three main objectives. First, it intends to provide a comprehensive picture of changes in industry location patterns over time. Second, it aims to contribute to the debate on factor determinants of industry location at various levels of spatial aggregation. Third, it seeks to explore the location determinants of foreign direct investors in particular, given their pivotal role in the economic development of transition economies. In all instances, the research is geared towards a better understanding of the role of institutional factors, such as reforms and policies, in affecting the distribution of economic activity across space. Thus, the work conducted qualifies as a further contribution to the analysis of the structural changes that have affected the economies under examination. In broad terms, the findings presented here point towards significant changes in the spatial location patterns of industry and investments that are leading to increased polarisation of the economic landscape over time. Nonetheless, we find evidence that certain institutional factors qualify as viable policy levers, thereby providing ample scope for policy makers to influence existing location patterns of economic activity.
Wed, 01 Jun 2011 00:00:00 GMT | http://hdl.handle.net/10023/2098 | Šerić, Adnan

Empirical market microstructure of the FTSEurofirst index futures | http://hdl.handle.net/10023/1975
This thesis is among the first market microstructure studies of an index futures market
with designated market makers in the academic literature. The purpose of this thesis is to
investigate intraday patterns of key variables, the relative size of the components of the quoted
bid-ask spread, and the order decisions of uninformed traders, in a continuous dealer market for
index futures with market makers. Overall, our findings aim to contribute to a better
understanding of the roles of market makers and public customers in price formation. Intraday
patterns of financial market variables such as trade price, volume, trade size, quoted spreads,
depth, and volatility separately for designated market makers and public customers are
examined.
The lack of relevant and appropriate data, as noted by Hasbrouck (2003) and Kurov (2005), has inhibited the growth of market microstructure research on futures markets.
Individual orders, quotes, trader identification, and transactions from June 2003 to December
2004, for FTSEurofirst 80 and 100 index futures are used in the study. Inclusion of the parties to
order execution distinguishes this data set from most other futures microstructure sources. As
this thesis is the first known academic study of the extant market microstructure of the
FTSEurofirst index futures, the institutional aspects of the trading process for the FTSEurofirst
index futures are also explored. An alternative method for estimating three cost components as a
proportion of the bid-ask spread is developed. A framework is developed for the order decision
process of an uninformed trader for the first time in a futures market with market makers. The
results of this thesis may have implications for other financial markets and the field of market
microstructure.
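As a hedged illustration of spread measurement (not the thesis's own three-component decomposition, which is not reproduced here), the sketch below computes the classic Roll (1984) covariance estimator of the effective spread from a synthetic price series with bid-ask bounce; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def roll_spread(prices):
    """Roll (1984) estimator: s = 2*sqrt(-Cov(dp_t, dp_{t-1}))."""
    dp = np.diff(prices)
    gamma = np.cov(dp[1:], dp[:-1])[0, 1]   # first-order autocovariance
    return 2 * np.sqrt(-gamma) if gamma < 0 else float("nan")

# Toy series: constant efficient price of 100 plus bid-ask bounce with
# half-spread 0.05, so the true effective spread is 0.10.
q = rng.choice([-1, 1], size=50_000)        # trade direction
prices = 100 + 0.05 * q

est = roll_spread(prices)
print(est)  # close to 0.10
```

The estimator exploits the negative first-order autocovariance that bid-ask bounce induces in price changes; on real data the covariance can turn positive, in which case the estimator is undefined (here it returns NaN).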
Sat, 01 May 2010 00:00:00 GMT | http://hdl.handle.net/10023/1975 | Faciane, Kirby

Lower inflation : ways and incentives for central banks
http://hdl.handle.net/10023/1719
This thesis is a technical inquiry into remedies for high inflation. At its centre lies the usual tradeoff between inflation aversion on the one hand and some benefit from inflation via Phillips curve effects on the other. The most remarkable and pioneering work for our purposes is the famous Barro-Gordon model (Barro & Gordon 1983a, 1983b), parts of which form the basis of our work here. Although it is well known that the discretionary equilibrium is suboptimal, the question arises how to overcome this. We introduce four different models, each giving a different perspective and way of thinking. Each model shows a (sometimes slightly) different way in which a central banker might deliver lower inflation than the one-shot Barro-Gordon game would at first glance suggest. In short, we provide a number of reasons for believing that the purely discretionary equilibrium may be rarely observed in real life.
The thesis further provides new insights for derivative pricing theories. In particular, the potential role of financial markets and instruments is a major focus. We investigate how such instruments can be used for monetary policy; conversely, these financial securities strongly influence the behaviour of the central bank. Taking this into account, in chapters 3 and 4 we develop a new method of pricing inflation-linked derivatives. To the best of our knowledge this has never been done before: (Persson, Persson & Svensson 2006), one of the very few economic works taking financial markets into account, is focused purely on the social planner's problem.
A purely game-theoretic approach is taken in chapter 2 to modify the original Barro-Gordon model. Here we deviate from purely rational and purely one-period thinking. Finally, in chapter 5 we model an asymmetric-information situation in which the central banker faces a tradeoff between his current objective and the benefit arising from imperfectly informed agents. In that sense the central bank is also concerned about its reputation.
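The one-shot Barro-Gordon inflation bias referred to above can be illustrated numerically. The sketch below is a minimal textbook formulation, not the thesis's own code, and the parameter values are arbitrary: it iterates the central bank's best response to private-sector expectations until the discretionary fixed point is reached, and compares it with the zero-inflation commitment outcome.

```python
# One-shot Barro-Gordon sketch (hypothetical parameters).
# Loss: L = 0.5*pi**2 + 0.5*lam*(y - k)**2, output: y = b*(pi - pi_e).
b, lam, k = 1.0, 0.5, 2.0   # Phillips curve slope, output weight, output target

def best_response(pi_e):
    # FOC of the loss w.r.t. pi, taking expectations pi_e as given:
    # pi + lam*b*(b*(pi - pi_e) - k) = 0
    return lam * b * (b * pi_e + k) / (1.0 + lam * b**2)

pi_e = 0.0
for _ in range(200):         # iterate expectations to the fixed point
    pi_e = best_response(pi_e)

discretion = pi_e            # equilibrium inflation under discretion
commitment = 0.0             # with credible commitment, pi = 0
print(discretion)            # inflation bias lam*b*k = 1.0
```

At the fixed point expectations are fulfilled and output is at its natural level, yet inflation carries the bias `lam*b*k`: this is the suboptimality of the discretionary equilibrium that the thesis's four models try to overcome.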
Sat, 01 Jan 2011 00:00:00 GMT | http://hdl.handle.net/10023/1719 | Geissler, Johannes

The economics of trade secrets : evidence from the Economic Espionage Act
http://hdl.handle.net/10023/1632
This thesis reports on the economic analysis of trade secrets via data collected from prosecutions under the U.S. Economic Espionage Act (EEA). Enacted in 1996, the EEA increases protection for trade secrets by criminalizing their theft. The empirical basis of the thesis is a unique database constructed from EEA prosecutions from 1996 to 2008. A critical and empirical analysis of these cases provides insight into the use of trade secrets.
The increase in the criminal culpability of trade secret theft has important impacts on the use of trade secrets and the incentives for would-be thieves. A statistical analysis of the EEA data suggests that trade secrets are used primarily in manufacturing and construction. A cluster analysis suggests three broad categories of EEA cases based on the type of trade secret and the sector of the owner. A series of illustrative case studies demonstrates these clusters.
A critical analysis of the damages valuation methods in trade secret cases demonstrates the highly variable estimates of trade secret value. Given the criminal context of EEA cases, these valuation methods play an important role in sentencing and affect the incentives of the owners of trade secrets. The analysis of the lognormal distribution of the observed values is furthered by a statistical analysis of the EEA valuations, which suggests that the methods can result in very different estimates for the same trade secret.
A regression analysis examines the determinants of trade secret intensity at the firm level. This econometric analysis suggests that trade secret intensity is negatively related to firm size. Collectively, this thesis presents an empirical analysis of trade secrets.
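The lognormal pattern in observed valuations mentioned above can be checked with a simple moment fit. The sketch below is purely illustrative and uses synthetic valuation data, not the EEA database: it fits a lognormal by taking the mean and standard deviation of log-values, the standard maximum-likelihood-style estimates for this family.

```python
import math
import random
import statistics

random.seed(42)

# Synthetic "valuations" drawn lognormally (an illustrative stand-in
# for the EEA damages data, which is not reproduced here).
values = [math.exp(random.gauss(12.0, 1.5)) for _ in range(5_000)]

# Fit the lognormal on the log scale.
logs = [math.log(v) for v in values]
mu_hat = statistics.fmean(logs)       # lognormal location parameter
sigma_hat = statistics.stdev(logs)    # lognormal scale parameter

print(mu_hat, sigma_hat)  # close to the true 12.0 and 1.5
```

A heavy right tail with a roughly normal log-value histogram is the signature of this distribution; the large `sigma_hat` is what makes different valuation methods produce such divergent estimates for the same secret.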
Mon, 01 Nov 2010 00:00:00 GMT | http://hdl.handle.net/10023/1632 | Searle, Nicola C.

Stability and cycles in a cobweb model with heterogeneous expectations
http://hdl.handle.net/10023/1534
We investigate the dynamics of a cobweb model with heterogeneous beliefs, generalizing the example of Brock and Hommes (1997). We examine situations where agents form expectations using either rational expectations or a type of adaptive expectations with limited memory, defined from the last two prices. We specify conditions that generate cycles. These conditions depend on a set of factors that includes the intensity of switching between beliefs and the adaptation parameter. We show that both a flip bifurcation and a Neimark-Sacker bifurcation can occur as the primary bifurcation when the steady state is unstable.
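A cobweb economy of this kind can be sketched in a few lines. The simulation below is a hypothetical illustration, not the paper's model: it uses linear demand and supply, a naive predictor versus a two-price adaptive predictor, and logit switching with intensity beta; all functional forms and parameter values are my own choices, set here in the stable region so the price converges.

```python
import math

# Hypothetical linear cobweb with two predictor types and logit switching.
a, b, s = 2.0, 1.0, 0.5      # demand: a - b*p; supply: s * expected price
beta, w = 2.0, 0.7           # switching intensity, adaptive weight
p = [1.0, 1.2]               # initial price history
n1 = 0.5                     # share of agents using the naive predictor

for t in range(200):
    e1 = p[-1]                          # naive: last price
    e2 = w * p[-1] + (1 - w) * p[-2]    # adaptive: last two prices
    # market clearing: a - b*p_t = s*(n1*e1 + (1-n1)*e2)
    p_new = (a - s * (n1 * e1 + (1 - n1) * e2)) / b
    # update shares from (negative) squared forecast errors via logit
    u1, u2 = -(e1 - p_new) ** 2, -(e2 - p_new) ** 2
    n1 = math.exp(beta * u1) / (math.exp(beta * u1) + math.exp(beta * u2))
    p.append(p_new)

print(round(p[-1], 4))  # converges to the steady state a/(b+s) = 1.3333
```

Raising the supply slope `s` above the demand slope `b`, or the switching intensity `beta`, is the kind of parameter change that pushes such systems through the bifurcations the paper analyses.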
Tue, 01 Nov 2005 00:00:00 GMT | http://hdl.handle.net/10023/1534 | Lasselle, Laurence; Svizzero, S.; Tisdell, C.

Explaining medium run swings in unemployment : shocks, monetary policy and labour market frictions
http://hdl.handle.net/10023/974
The literature trying to link the increase in unemployment in many western European countries since the middle of the 1970s to an increase in labour market rigidity has
run into a number of problems. In particular, changes in labour market institutions do
not seem to be able to explain the evolution of unemployment across time.
We conclude that a new theory of medium run unemployment swings should explain
the increase in unemployment in many European countries and the lack thereof in the
United States. Furthermore, it should also help to explain the high degree of endogenous unemployment persistence in many European countries, as well as findings suggesting a link between disinflationary monetary policy and subsequent increases in the NAIRU.
To address these issues, we first develop an endogenous growth sticky price model.
We subject the model to an uncorrelated cost-push shock, in order to mimic a scenario akin to the one faced by central banks at the end of the 1970s. Monetary policy implements a disinflation by following an interest rate feedback rule calibrated to an estimate of a Bundesbank reaction function. Forty quarters after the shock has vanished, unemployment is still about 1.8 percentage points above its steady state. The model also partly explains cross-country differences in the evolution of unemployment by drawing on differences in the size of the disinflation, the monetary policy reaction function, and wage setting.
We then draw some conclusions about optimal monetary policy in the presence of
endogenous growth and find that optimal policy is substantially less hawkish than in
an identical economy without endogenous growth.
The second model introduces duration dependent skill decay among the unemployed
into a New-Keynesian model with hiring frictions developed by Blanchard/Gali (2008).
If the central bank responds only to inflation and quarterly skill decay is above a threshold level, determinacy requires a coefficient on inflation smaller than one. The threshold level is plausible with little steady-state hiring and firing ("Continental European calibration") but implausibly high in the opposite case ("American calibration"). Neither
interest rate smoothing nor responding to the output gap helps to restore determinacy
if skill decay exceeds the threshold level. However, a modest response to unemployment
guarantees determinacy.
Moreover, under indeterminacy, both an adverse sunspot shock and an adverse
technology shock increase unemployment extremely persistently.
Fri, 01 Jan 2010 00:00:00 GMT | http://hdl.handle.net/10023/974 | Rannenberg, Ansgar

Determinacy and learning stability of economic policy in asymmetric monetary union models
http://hdl.handle.net/10023/972
This thesis examines determinacy and E-stability of economic policy in monetary union models. Monetary policy takes the form of either a contemporaneous or a forecast-based interest rate rule, while fiscal policy follows a contemporaneous government spending rule. In the absence of asymmetries, the results from
the closed economy literature on learning are retained. However, when introducing
asymmetries into monetary union frameworks, the determinacy and E-stability conditions for economic policy differ from both the closed and open economy cases.
We find that a monetary union with heterogeneous price rigidities is more
likely to be determinate and E-stable. Specifically, the Taylor principle, a key stability condition for the closed economy, is now relaxed. Furthermore, an interest
rate rule that stabilizes the terms of trade in addition to output and inflation, is more
likely to induce determinacy and local stability under RLS learning. If monetary
policy is sufficiently aggressive in stabilizing the terms of trade, then determinacy
and E-stability of the union economy can be achieved without direct stabilization
of output and inflation.
A fiscal policy rule that supports demand for domestic goods following a shock to competitiveness can destabilize the union economy, regardless of the interest rate rule employed by the union central bank. In this case, determinacy and
E-stability conditions have to be simultaneously and independently met by both
fiscal and monetary policy for the union economy to be stable. When fiscal policy instead stabilizes domestic output gaps while monetary policy stabilizes union
output and inflation, fiscal policy directly affects the stability of monetary policy.
A contemporaneous monetary policy rule has to be more aggressive to satisfy the
Taylor principle, the more aggressive fiscal policy is. On the other hand, when
monetary policy is forward looking, an aggressive fiscal policy rule can help induce
determinacy.
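Determinacy conditions of this kind are typically verified by counting unstable eigenvalues (Blanchard-Kahn). As a hedged illustration only — using the textbook closed-economy New Keynesian model rather than this thesis's monetary union model, with illustrative parameter values — the Taylor principle can be checked numerically:

```python
import numpy as np

# Textbook NK model (closed economy, illustrative only):
#   x_t  = E[x_{t+1}] - (1/sigma)*(i_t - E[pi_{t+1}]),  i_t = phi*pi_t
#   pi_t = beta*E[pi_{t+1}] + kappa*x_t
# Written as E[z_{t+1}] = A z_t with z = (x, pi). With two
# non-predetermined variables, determinacy requires both eigenvalues
# of A to lie outside the unit circle.

def unstable_eigs(phi, beta=0.99, kappa=0.1, sigma=1.0):
    A = np.array([
        [1 + kappa / (sigma * beta), (phi - 1 / beta) / sigma],
        [-kappa / beta,              1 / beta],
    ])
    return int(np.sum(np.abs(np.linalg.eigvals(A)) > 1))

print(unstable_eigs(1.5))  # 2 -> determinate (Taylor principle satisfied)
print(unstable_eigs(0.8))  # 1 -> indeterminate
```

In the asymmetric union models of the thesis the state vector is larger and the cutoff on the inflation coefficient shifts away from one, but the counting logic is the same.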
Fri, 01 Jan 2010 00:00:00 GMT | http://hdl.handle.net/10023/972 | Boumediene, Farid Jimmy

Four essays in dynamic macroeconomics
http://hdl.handle.net/10023/941
The dissertation contains essays concerning the linkages between the macroeconomy and financial markets, and the conduct of monetary policy, via DSGE modelling. It addresses the question of fitting macroeconomic models to the data, and so contributes to our understanding of the driving forces of fluctuations in macroeconomic and financial variables.
Chapter one offers an introduction to my thesis and outlines in detail the main results and methodologies.
In Chapter two I introduce a statistical measure for model evaluation and selection based on the full information in sample second moments of the data. A model is said to outperform its counterpart if it produces a simulated-data variance-covariance matrix closer to that of the actual data. The "distance method" is generally feasible and simple to conduct. A flexible-price two-sector open economy model is studied to match the observed puzzles in international finance data. The statistical distance approach favours a model in which the dominant role is played by expectational errors in the foreign exchange market, which break international interest rate parity.
Chapter three applies the distance approach to a New Keynesian model augmented with habit formation and a backward-looking component of pricing behaviour. A macro-finance model of the yield curve is developed to showcase the dynamics of implied forward yields. This exercise, using the distance approach, reiterates the inability of macro models to explain yield curve dynamics. The method also reveals a remarkable interconnection between real quantities and the bond yield slope.
In Chapter four I study a general equilibrium business cycle model with sticky prices and labour market rigidities. With costly matching on the labour market, output responds in a hump-shaped and persistent manner to monetary shocks, and the resulting Phillips curve seems to radically change the scope for monetary policy because (i) there are speed limit effects for policy and (ii) there is a cost channel for monetary policy. Labour reforms such as those in the mid-1980s UK can make monetary policy more effective. Research on monetary policy should pay greater attention to output when labour market adjustments are persistent.
Chapter five analyzes the link between money and financial spreads, which is often missed in specifications of monetary policy analysis. When liquidity provision by banks dominates the demand for money from the real economy, money may contain information about future output and inflation through its impact on financial spreads. I use a sign-restriction Bayesian VAR estimation to separate the liquidity provision impact from money market equilibrium. The decomposition exercise shows that supply shocks dominate the money-price nexus in the short to medium term. It also uncovers the distinctive policy stances of two central banks.
Finally, Chapter six concludes with a brief summary of the research as well as a discussion of potential limitations and possible directions for future research.
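The "distance method" of Chapter two can be illustrated with a toy example. The sketch below is my own construction, not the dissertation's code: the metric (a Frobenius norm on sample covariance matrices) and the synthetic data are purely illustrative. It scores two candidate "models" by how close the covariance matrix of their simulated data comes to that of the "actual" data.

```python
import numpy as np

rng = np.random.default_rng(0)

def cov_distance(actual, simulated):
    """Frobenius distance between sample variance-covariance matrices."""
    return np.linalg.norm(np.cov(actual.T) - np.cov(simulated.T))

# "Actual" data: two strongly correlated series (illustrative).
true_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
actual = rng.multivariate_normal([0, 0], true_cov, size=10_000)

# Model A reproduces the comovement; model B ignores it.
sim_a = rng.multivariate_normal([0, 0], true_cov, size=10_000)
sim_b = rng.multivariate_normal([0, 0], np.eye(2), size=10_000)

d_a, d_b = cov_distance(actual, sim_a), cov_distance(actual, sim_b)
print(d_a < d_b)  # True: model A is selected
```

The appeal of such a metric is that it uses all second moments at once, rather than a handful of hand-picked moments, when ranking candidate models.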
Fri, 25 Jun 2010 00:00:00 GMT | http://hdl.handle.net/10023/941 | Sun, Qi

The determinants of corporate growth
http://hdl.handle.net/10023/918
Corporate growth is a concept that has been widely treated, both on its own and as part of strategy theories, in definitions and in econometric models, and it has been studied from many different aspects and approaches. The author describes in depth the main variables affecting corporate growth and the underlying business processes.
This empirical research focuses on sales, profit-cash flow, risk, created shareholder value, market value, and overall performance econometric models. These panel data models are based on the 500 companies of the Standard & Poor's 500. The methodology has been very strict in identifying exogenous variables, walking through the alternative econometric models, discussing results and, in the end, describing the practical implications for corporate management today.
We basically assume that functions/departments act independently within the same company, often with different objectives, and that in this situation clear processes are key to clarifying situations, roles, and responsibilities. We also assume that growth implies interactions among the different functions in a company, and that the CEO acts to lead and coach his immediate Directors, refereeing key conflicts through his operating mechanism.
The objective of this PhD dissertation is to clarify the business priorities and identify the most relevant variables in every process, leading to the highest efficiency in reaching sustainable and profitable growth. It addresses the lack of academic studies on the nature and specific driving factors of corporate growth and provides a working framework for entrepreneurs and management leading to the company's success.
Wed, 23 Jun 2010 00:00:00 GMT | http://hdl.handle.net/10023/918 | Rosique, Francisco
Endogenous Price Flexibility and Optimal Monetary Policy
http://hdl.handle.net/10023/905
Much of the literature on optimal monetary policy uses models in which the degree of nominal price flexibility is exogenous. There are, however, good reasons to suppose that the degree of price flexibility adjusts endogenously to changes in monetary conditions. This paper extends the standard New Keynesian model to incorporate an endogenous degree of price flexibility. The model shows that endogenising the degree of price flexibility tends to shift optimal monetary policy towards complete inflation stabilisation, even when shocks take the form of cost-push disturbances. This contrasts with the standard result obtained in models with exogenous price flexibility, which show that optimal monetary policy should allow some degree of inflation volatility in order to stabilise the welfare-relevant output gap.
Fri, 01 Jan 2010 00:00:00 GMT | http://hdl.handle.net/10023/905 | Sutherland, Alan; Senay, Ozge
Market segmentation and dual-listed stock price premium - an empirical investigation of the Chinese stock market
http://hdl.handle.net/10023/894
This thesis comprises, firstly, a careful and detailed description of the institutional workings of the Chinese stock market; secondly, a literature review of the Chinese segmented markets and the dual-listed shares price premium; and thirdly, three evidence-based contributions designed to cast new light on the Chinese A-shares premium puzzle. Publicly-listed firms in China, under certain criteria, can issue two different types of shares, namely A-shares and B-shares, to local and foreign investors respectively. These shares carry the same rights and obligations, but are nevertheless priced differently due to market segmentation. After a review of the literature on determinants of the premium, the first contribution offers a complementary explanation. I propose that the premium reflects the difference in valuation preferences between local and foreign investors: local investors pay more attention to stock liquidity, while foreign investors pay more attention to a firm’s intrinsic value, so firms with more favourable fundamentals tend to have lower premia. The second contribution examines the controversial question of which investor group is better informed about local assets, by testing the direction of information flows between the A- and B-shares markets. Both time series methods and panel data techniques, the latter used for the first time in this context, are employed in order to obtain a more distinct and insightful picture than the current literature provides. The third contribution compares and contrasts the institutional settings of China, Singapore and Thailand, which have similar market segmentation and dual-listing systems; examines whether or not the premia in the three countries are caused by the same factors; and tries to answer why foreign investors in China pay less for identical assets, rather than more, as is commonly observed in other segmented markets. It provides the first cross-country comparison evidence after 1999 with updated data.
Tue, 08 Dec 2009 00:00:00 GMT | http://hdl.handle.net/10023/894 | Liang, Jing
Application of stochastic differential games and real option theory in environmental economics
http://hdl.handle.net/10023/893
This thesis presents several problems based on papers written jointly by the author and Dr. Christian-Oliver Ewald. Firstly, the author extends the model presented by Fershtman and Nitzan (1991), which studies a deterministic differential public good game. Two types of volatility are considered: in the first case the volatility of the diffusion term depends on the current level of the public good, while in the second case it depends on the current rate of public good provision by the agents. The result in the latter case is qualitatively different from the first. These results are discussed in detail, along with numerical examples. Secondly, two existing lines of research in game theoretic studies of fisheries are combined and extended. The first line of research is the inclusion of the aspect of predation and the consideration of multi-species fisheries within classical game theoretic fishery models. The second line of research includes continuous time and uncertainty. This thesis considers a two-species fishery game and compares the results with several benchmark cases. Thirdly, a model of a fishery is developed in which the dynamics of the unharvested fish population are given by the stochastic logistic growth equation and it is assumed that the fishery harvests the fish population following a constant-effort strategy. Explicit formulas for optimal fishing effort are derived for the problems considered, and the effects of uncertainty, risk aversion and mean reversion speed on fishing effort are investigated. Fourthly, a Dixit and Pindyck type irreversible investment problem in continuous time is solved, using the assumption that the project value follows a Cox-Ingersoll-Ross process. This solution differs from the two classical cases of geometric Brownian motion and geometric mean reversion, and these differences are examined. The aim is to find the optimal stopping time, which can be applied to the problem of extracting resources.
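For concreteness, the two stochastic building blocks named above can be written in their standard textbook forms; the thesis's exact parameterisation may differ, so the following is a sketch. Here r is the intrinsic growth rate, K the carrying capacity, E the (constant) fishing effort, q the catchability coefficient, and W_t a standard Brownian motion.

```latex
% Stochastic logistic growth with constant-effort harvesting at rate qE:
dX_t = \left[\, r X_t \Bigl(1 - \frac{X_t}{K}\Bigr) - q E X_t \,\right] dt
       + \sigma X_t \, dW_t .

% Cox-Ingersoll-Ross dynamics for the project value V_t
% (mean reversion at speed \kappa towards level \theta):
dV_t = \kappa (\theta - V_t)\, dt + \sigma_V \sqrt{V_t}\, dW_t .
```

The square-root diffusion term is what distinguishes the CIR case from the geometric Brownian motion and geometric mean reversion benchmarks mentioned in the abstract.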
Wed, 23 Dec 2009 00:00:00 GMT | http://hdl.handle.net/10023/893 | Wang, Wen-Kai
Economics of entry into marriage
http://hdl.handle.net/10023/721
This thesis contains three studies on the economics of entry into marriage: a life event that has been shown to have significant implications for the well-being (economic and otherwise) of men, women and their children.
The first study examines the effect of family background on the timing of first marriage of 7,853 individuals born in 1970 in Great Britain. Hazard model analysis reveals that high levels of parental resources serve to delay entry into marriage for both males and females, although this effect fades as a young adult ages. Consistent with theories of “resource dilution”, a greater number of siblings present in the household during adolescence is associated with early marriage for both sexes. It is also found that the presence of a younger sibling in the household hastens marriage for males, while the presence of a younger brother is associated with early marriage for both sexes.
The second study investigates how changes in abortion policy in Eastern Europe during the late-eighties and early-nineties may have affected female first-marriage rates. Previous studies have suggested that more liberal abortion laws should lead to a decrease in marriage rates among young women as ‘shotgun weddings’ are no longer necessary. Empirical evidence from the United States lends support to that hypothesis. This study presents an alternative theory of abortion access and marriage based on the cost of search that suggests that more liberal abortion laws may actually promote young marriage. An empirical examination of marriage data from Eastern Europe shows that countries that liberalized their abortion laws during the late-eighties and early-nineties saw an increase in marriage rates among non-teenage women.
The third study uses a unique and comprehensive panel of 2441 U.S. counties spanning from 1970 to 1999 to examine the relationship between the cost of owner-occupied housing and entry into marriage. It is found that the burden of housing costs negatively affects the marriage rate. Further, it is reported that the greater the difference between the annual cost of owning a house and the annual cost of renting, the lower the marriage rate. These are important findings since they imply that government policies designed to reduce the cost of housing (such as tax advantages to owner-occupiers) have the potential to encourage entry into marriage.
Fri, 26 Jun 2009 00:00:00 GMT | http://hdl.handle.net/10023/721 | Bowmaker, Simon W.
Welfare, growth and environment: a sceptical review of the skeptical environmentalist (Bjørn Lomborg, Cambridge University Press, 2001)
http://hdl.handle.net/10023/659
In his wide-ranging attempt to review the literature on economic development and welfare in relation to the environment, Lomborg claims balance and objectivity, but actually presents a thoroughly misleading picture of environmental prospects and research, global economic development, and the real determinants of human welfare. Statistician Lomborg blatantly distorts the evidence by systematically selecting statistics to support his claims that global welfare is generally improving and environmental policy is unnecessary, while denying catastrophic risks such as prolonged drought in major food-growing areas (though such events cannot be ruled out by climate models). In spite of its numerous errors and biases, "the Lomborg scam" (as leading biologist E. O. Wilson aptly calls it) has been welcomed by gullible or like-minded journalists and politicians.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000052/; March 2002. Forthcoming as a review article in the Scottish Journal of Political Economy
Tue, 01 Jan 2002 00:00:00 GMT | http://hdl.handle.net/10023/659 | FitzRoy, Felix; Smith, Ian
Universities and fundamental research: reflections on the growth of university-industry partnership
http://hdl.handle.net/10023/658
The recent rise in university-industry partnerships has stimulated an important public policy debate regarding how these relationships affect fundamental research. In this paper, we examine the antecedents and consequences of policies to promote university-industry alliances. Although the preliminary evidence appears to suggest that these partnerships have not had a deleterious effect on the quantity and quality of basic research, some legitimate concerns have been raised about these activities that require additional analysis. We conclude that additional research is needed to provide a more accurate assessment of the optimal level of commercialisation.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000053/; [Originally] November 2001. This version January 2002. Forthcoming in Oxford review of economic policy
Tue, 01 Jan 2002 00:00:00 GMT | http://hdl.handle.net/10023/658 | Poyago-Theotoky, Joanna; Beath, John; Siegel, Donald S.
The cost of political intervention in monetary policy
http://hdl.handle.net/10023/657
Data from a unique monetary ‘experiment’ conducted in the UK during the period 1994-97 are used to investigate the cost of political intervention in monetary policy. The paper finds that the difference between government bond yields in Germany (but not the US) and the UK was systematically related to an index of the credibility of monetary policy, constructed from the frequency of agreements/disagreements between the Minister of Finance, who took the decisions on interest rates, and the Bank of England, whose recommendations were published with a lag; disagreements caused an increase in the yield differential.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000055/; Revised November 2001
Mon, 01 Jan 2001 00:00:00 GMT | http://hdl.handle.net/10023/657 | Cobham, David; Papadopoulos, Athanasios; Zis, George
Taxation, unemployment and working time in models of economic growth
http://hdl.handle.net/10023/656
This paper combines collective bargaining over wages and working time with models of endogenous and neoclassical growth. Public expenditure is funded by taxes on capital and labour supplied by infinitely-lived households in a closed economy. Taxes on labour are generally inefficient in both growth models, there is a “dynamic Laffer curve”, and employment is increased by a reduction of working hours below the collective bargaining level - except in the case of a monopoly union. Although growth is maximised by competitive (efficient) hours, welfare-optimal working time is below the collective bargain when unions are ‘too weak’, and vice versa.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000056/; Revised August 2001
Wed, 01 Aug 2001 00:00:00 GMT | http://hdl.handle.net/10023/656 | FitzRoy, Felix; Funke, Michael; Nolan, Michael A.
Heterogeneous beliefs and instability
http://hdl.handle.net/10023/655
While Rational Expectations have dominated the paradigm of expectations formation, they have more recently been challenged on empirical grounds, for instance in the dynamics of the exchange rate. This challenge has led to the introduction of heterogeneous expectations in economic modelling. More specifically, the forecasts of market participants are drawn from competing views. Two behaviours are usually considered: agents are either fundamentalists or chartists. Moreover, the possibility of switching from one behaviour to the other is also assumed.
In a simple cobweb model, we study the dynamics associated with different endogenous switching processes based on the path of prices. We provide an example with an asymmetric endogenous switching process built on the dynamics of past prices. This example confirms the widespread belief that fundamentalist market behaviour, as compared with that of chartists, tends to promote market stability.
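The mechanism described above - a cobweb market where fundamentalists stabilise prices that trend-chasing chartists would otherwise destabilise - can be sketched in a toy simulation. The demand/supply parameters, the chartist gain, and the asymmetric switching rule below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate(T=200, a=10.0, b=0.9, c=1.0, g=0.5, band=0.5,
             p0=4.0, p1=4.2, switching=True):
    """Cobweb market with linear demand a - c*p and supply b*p_e, so the
    clearing price is p_t = (a - b * p_e) / c.  Chartists extrapolate the
    last price change with gain g; fundamentalists expect the steady state
    p* = a / (b + c).  The switching rule is an illustrative asymmetric one:
    once the price strays more than `band` from p*, all agents turn
    fundamentalist; inside the band the two groups get equal weight."""
    pstar = a / (b + c)
    prices = [p0, p1]
    for _ in range(T):
        trend = prices[-1] + g * (prices[-1] - prices[-2])  # chartist forecast
        if switching:
            n_fund = 1.0 if abs(prices[-1] - pstar) > band else 0.5
        else:
            n_fund = 0.0  # pure-chartist benchmark
        p_e = n_fund * pstar + (1.0 - n_fund) * trend
        prices.append((a - b * p_e) / c)
    return np.array(prices), pstar

mixed, pstar = simulate(switching=True)
chartist, _ = simulate(switching=False)
print(f"p* = {pstar:.3f}, with switching the price ends at {mixed[-1]:.3f}; "
      f"pure chartists end at {chartist[-1]:.3e}")
```

With these (assumed) parameters, the pure-chartist market has an explosive root and diverges, while the switching rule keeps the price converging to the steady state, illustrating the stabilising role of fundamentalist behaviour.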
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000057/
Mon, 01 Jan 2001 00:00:00 GMT | http://hdl.handle.net/10023/655 | Lasselle, Laurence; Svizzero, Serge; Tisdell, Clem
Renormalization method and its economic applications
http://hdl.handle.net/10023/654
The purpose of this paper is to give new insights into the method of Helleman (1980) in the context of macrodynamics. This method explains how a difference equation can be studied locally via the Feigenbaum equation in the case of a constant Jacobian matrix. First, we introduce the technique. Second, we apply it in two models: the model of Matsuyama (1999) and the model of Kaldor (1957). Finally, we present an extension of the technique to the case of a non-constant (linear) Jacobian matrix and apply this extension to the model of Médio (1992).
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000059/
Mon, 01 Jan 2001 00:00:00 GMT | http://hdl.handle.net/10023/654 | Briec, Walter; Lasselle, Laurence
Growing through subsidies
http://hdl.handle.net/10023/653
We consider an overlapping generations model based on Matsuyama (1999) and show that, whenever actual capital accumulation falls below its balanced growth path, subsidising innovators by taxing consumers has stabilising effects and increases welfare. Further, if the steady state is unstable under laissez faire, the introduction of the subsidy can make the steady state stable. Such a policy has positive welfare effects as it fosters output growth along the transitional adjustment path. Therefore, fast-growing economies, in which high factor accumulation plays a crucial role alongside innovative sectors that enjoy temporary monopoly rents, should follow an unorthodox approach to stabilisation: namely, taxing consumers and reallocating resources to the innovative sectors.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000060/
Mon, 01 Jan 2001 00:00:00 GMT | http://hdl.handle.net/10023/653 | Aloi, Marta; Lasselle, Laurence
On the persistence of output fluctuations in high technology sectors
http://hdl.handle.net/10023/652
Fatás (2000) argues that, in a cross-section of countries, there exists a positive correlation between long-term growth rates and the persistence of output fluctuations. The current paper extends this line of research by examining the manufacturing sectors of an economy that can be characterised by two levels of technology: a low level and a high level. Analysis of the data reveals a positive correlation between long-term growth rates and the persistence of output fluctuations in ‘high-tech’ sectors. This empirical analysis is further supported by reformulating the model of Matsuyama (1999b) in a stochastic environment. Within this framework the model is able to capture the two main theories of growth, namely the Solow model and the Romer model. The stochastic nature of the long-run output trend is endogenous and based on technological shocks. Despite the cyclical nature of the shocks, we are able to show that output fluctuations are more persistent in ‘high-tech’ sectors.
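The notion of "persistence of output fluctuations" can be illustrated with a toy AR(1) comparison. The persistence coefficients (0.3 for a stylised low-tech sector, 0.9 for a stylised high-tech one) are made up for illustration and are not estimates from the paper.

```python
import numpy as np

def ar1_persistence(y):
    """OLS estimate of rho in y_t = c + rho * y_{t-1} + e_t, a simple
    summary measure of how persistent fluctuations in y are."""
    x, z = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta[1]

rng = np.random.default_rng(1)
T = 5000
e = rng.standard_normal(T)

# Two stylised 'sectors' hit by the same shocks but with different propagation:
low = np.zeros(T)    # low-tech: shocks die out quickly (true rho = 0.3)
high = np.zeros(T)   # high-tech: shocks die out slowly (true rho = 0.9)
for t in range(1, T):
    low[t] = 0.3 * low[t - 1] + e[t]
    high[t] = 0.9 * high[t - 1] + e[t]

print(ar1_persistence(low), ar1_persistence(high))
```

A higher estimated rho means a given shock keeps affecting output for longer, which is the sense in which fluctuations in the high-tech series are "more persistent".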
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000061/
Sat, 01 Jan 2000 00:00:00 GMT | http://hdl.handle.net/10023/652 | Lasselle, Laurence; Aloi, Marta; McMillan, David G.
On the determinants of initial public offering underpricing
http://hdl.handle.net/10023/575
The initial public offering (IPO) underpricing phenomenon has frequently been noticed and generally is accepted as a puzzle in financial economics. Some of the new theories, such as behavioural finance, take the underpricing puzzle as one important form of evidence. However, some aspects of IPO underpricing have not yet been fully documented and discussed in the existing literature. This thesis tries to contribute in the following three specific areas.
First, we focus on the time-series properties of the level of underpricing of IPO shares, documenting the Hong Kong IPO market from 1999 to 2005. In the data sample, strong autocorrelation in the level of underpricing is discovered. Evidence suggests that initial selling volume plays an important role in this relationship. The links between underpricing and the clustering of IPOs within industries are weak, suggesting that underpricing is related to market liquidity rather than to industry-specific risk characteristics.
Second, we investigate underwriting networks to explore the relationship between the underwriting business and IPO-related puzzles. We find that in repeated IPOs, underwriters build up reputation and accumulate knowledge of their underwriting services. One of the great advantages of top-ranked underwriters is their network of relationships with other underwriters and institutional investors. We perform a careful examination of the underwriter syndicate and investigate how the structure of the syndicate relates to IPO performance. Moreover, the pattern of distribution in syndicate size is identified and found to be significantly related to IPO performance. The research shows that the underwriter-syndicate perspective is not only interesting but also necessary to understand IPOs.
Third, we analyse the coordination problem in the IPO. In the research, we model the auction method as one-stage selling and the bookbuilding method as two-stage selling. The model suggests that the relationship between the level of underpricing and the quality of IPO shares is non-monotone. This implication is consistent with empirical observations. In addition, with regard to issuers' proceeds, the auction method outperforms the bookbuilding method in both the noisy and the noisy vanishing equilibria. The bookbuilding method may be helpful in other ways, such as maintaining liquidity or price support in the secondary market.
By studying liquidity, business networks and the coordination problem, the thesis not only complements existing research by providing unique explanations for IPO underpricing and other related puzzles, but also opens some interesting avenues for future research.
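The persistence in underpricing described in the first study can be checked with a plain sample-autocorrelation computation. The sketch below is a minimal illustration on a simulated AR(1)-style series; the series, its persistence parameter and the lag choice are invented for the example and are not the thesis's actual Hong Kong data.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation of a 1-D series at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

# Illustrative persistent underpricing series (simulated, not the thesis data).
rng = np.random.default_rng(0)
u = np.zeros(500)
for t in range(1, 500):
    u[t] = 0.6 * u[t - 1] + rng.normal()

acf = sample_acf(u, 5)
# A clearly positive first-order autocorrelation that decays with the lag
# is the kind of persistence pattern the abstract refers to.
```

A significance band of roughly ±2/√n around zero is the usual quick check for whether an estimated autocorrelation is distinguishable from noise.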
Thu, 27 Nov 2008 00:00:00 GMT
http://hdl.handle.net/10023/575
Qiao, Yongyuan

Sustainable monetary policy : lessons and evidence from the bank suspension period, 1797-1821
http://hdl.handle.net/10023/555
This thesis re-examines the suspension of the gold standard rule in Britain between 1797 and 1821 within the framework of the theory of credible and time consistent monetary policy. By combining both historical and theoretical analysis the thesis challenges the prevailing theory in which the gold standard is considered as a contingent rule and the suspension as an exogenously credible regime.
Firstly, the thesis analyses what made the suspension credible in the absence of the gold standard rule. It is proposed that the suspension was a credible regime, because the resumption of the gold standard at the old par value in the future was a sustainable plan. It is shown that monetary policy during the bad state -- such as war -- can still be time consistent in the absence of the formal commitment rule, if the policy maker's plan is to resume the original commitment rule when the economy returns to the good state. The equilibrium is based on trigger strategies where private agents retaliate if a policy maker deviates from its policy plan to resume the gold standard rule.
Secondly, the thesis aims to establish why the gold standard rule was suspended for twenty-four years. Both historical analysis and a dynamic general equilibrium model demonstrate that the gold standard was a shock amplifier when the shocks became persistent in the 1790s, and suspension was used to restore monetary stability during the French Wars. As the suspension of cash payments was a credible regime, it maintained the value and circulation of paper currency that in turn stabilised production and consumption. Suspension increased the degree of flexibility in the economic policy as the monetary authority had an opportunity to stimulate the economy by issuing fiat money during the war, on the understanding that the fiat money so issued would be withdrawn from circulation before the gold standard resumed.
Finally, it is explained why the gold standard was resumed after the relatively successful Suspension Period. The gold standard was seen as a solution to the problem that arose from the Bank of England's ambiguous role as a public and private institution. Rules were considered to be better than discretion, and gold convertibility was a transparent principle which maximised the long-run welfare of society. The thesis demonstrates how, already in the eighteenth century, commitment to the gold standard rule had increased the efficiency of capital markets and enabled Britain to finance its eighteenth-century wars through deficit finance. Maintaining these abilities through the gold standard was desirable.
Fri, 25 Jul 2008 00:00:00 GMT
http://hdl.handle.net/10023/555
Newby, Elisa Maria Susanna

High technology firm performance, innovation, and networks : an empirical analysis of firms in Scottish high technology clusters
http://hdl.handle.net/10023/539
This thesis is an empirical analysis of the performance, innovation and networks of high
technology firms. It is conducted at the micro-economic level, based on new empirical
evidence by fieldwork methods, from the primary source data on firms in the five Scottish
hi-tech clusters. The questionnaire design is cross-sectional, to which was added a time
series element, and involves many unique features. It enabled the gathering of rich
quantitative and qualitative data on all stages of the dynamic innovation process. The
database was used in cross-sectional analysis of many key hypotheses in the hi-tech
context, by robust econometric models of export, innovation (e.g. Schumpeterian
hypothesis), and growth (e.g. Gibrat's Law of Proportionate Effect) performance. The hi-tech firm's networks, internationalisation and embeddedness are analysed using novel measures.
A structural simultaneous equations model is developed to explain the relationship between
networks, innovation and performance, by establishing a link between the innovation input,
the innovation output, and performance, based on the empirical knowledge production
function model. The two-stage, four-equation model (using Heckman's procedure) deals with both simultaneity and sample selection bias. Robust estimation techniques (I3SLS, Tobit)
are used for estimation.
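The Heckman procedure named above can be illustrated with a minimal two-step sketch on simulated data: a probit for the selection equation, then OLS on the selected sample augmented with the inverse Mills ratio. All coefficient values, variable names and the artificial sample below are invented for the example; this is not the thesis's actual model or data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
w = np.column_stack([np.ones(n), rng.normal(size=n)])  # selection covariates
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # outcome covariates

# Correlated errors (rho = 0.7) are what create the selection bias.
u = rng.normal(size=n)
eps = 0.7 * u + np.sqrt(1 - 0.7**2) * rng.normal(size=n)
selected = (w @ np.array([0.5, 1.0]) + u) > 0          # selection equation
y = x @ np.array([1.0, 2.0]) + eps                     # outcome equation

# Step 1: probit for the selection equation, by maximum likelihood.
def probit_nll(g):
    p = np.clip(norm.cdf(w @ g), 1e-10, 1 - 1e-10)
    return -(selected * np.log(p) + (~selected) * np.log(1 - p)).sum()

g_hat = minimize(probit_nll, np.zeros(2)).x

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio.
xb = w[selected] @ g_hat
mills = norm.pdf(xb) / norm.cdf(xb)
X2 = np.column_stack([x[selected], mills])
beta_hat = np.linalg.lstsq(X2, y[selected], rcond=None)[0]
# beta_hat[:2] estimate the outcome coefficients; beta_hat[2] absorbs
# rho * sigma, so a non-zero value signals that selection correction matters.
```

Running naive OLS of `y[selected]` on `x[selected]` alone would yield a biased intercept here, which is exactly the bias the Mills-ratio term removes.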
The results highlight the simultaneity and selectivity issue. The hi-tech firms with
aggressive innovation strategies, international markets and global products, still find it vital
to be embedded in local networks, which in turn raise their performance. Technology-push
factors, research networks, knowledge spillovers from markets, and a firm’s radical
innovation attempts determine its innovation input intensity. Firms are unable to attain
innovation success through innovation investments alone; integration of internal and
external resources is important. Innovation sales intensity is not determined by
innovation input, but by the demand-pull factors like customer networks, exporting, and
market expansion strategies. This also applies to their export intensity. Lack of internal
resources, capabilities, and government support are the major obstacles to
commercialisation of innovation.
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10023/539
Ujjual, Vandana

Monetary frameworks in developing countries : central bank independence and exchange rate arrangements
http://hdl.handle.net/10023/476
The objective of the thesis was to study monetary policy frameworks in developing
countries. The thesis focused on three aspects of the monetary framework: the
degree of central bank independence, the monetary policy strategy and the exchange rate regime. The research applied quantitative empirical analysis and in-depth
case studies on Egypt, Jordan and Lebanon.
The empirical research investigated three areas: 1) the phenomenon of ‘fear of
floating’ and the correlation between exchange rate and macroeconomic volatility;
2) the degree of monetary policy independence in developing countries in the
context of their increased integration into the global economic system; and 3) the
degree of central bank independence and how it impacts both ‘fear of floating’ and
monetary policy independence. The case studies allowed for an in-depth
understanding of the process of setting monetary policy and the constraints under
which it is formulated in developing countries.
The results that emerged from the quantitative analysis highlight the impact of central bank independence in influencing the other aspects of the monetary framework, as it can mitigate fear of floating and contribute to increased monetary policy independence of world interest rates in developing countries.
The case studies detailed the evolution of monetary frameworks in three countries
with varying degrees of central bank independence. The degree of central bank
independence increased in Egypt and Jordan as a result of severe currency crises in
each country, while Lebanon provides a very different example of a developing
country with an independent central bank since its inception.
The conclusions that emerged from the cases suggest that central bank independence is critical in achieving exchange rate and price stability; however, developing countries should avoid focusing on exchange rate stability at the expense of other considerations for extended periods of time. In this regard, the results point to the benefits of proactively and pre-emptively managing the exchange rate regime. The cases also highlight the importance of coordination between fiscal and monetary policies, as conditions of fiscal profligacy can undermine even the
most independent central bank.
Sun, 01 Jun 2008 00:00:00 GMT
http://hdl.handle.net/10023/476
Maziad, Samar

Macroeconomic variables and the stock market : an empirical comparison of the US and Japan
http://hdl.handle.net/10023/464
In this thesis, extensive research regarding the relationship between macroeconomic variables and the stock market is carried out. For this purpose the two largest stock markets in the world, namely the US and Japan, are chosen. As a proxy for the US stock market we use the S&P500, and for Japan the Nikkei225. Although there are many empirical investigations of the US stock market, Japan has lagged behind. In particular, the severe boom and bust sequence in Japan is unique in the developed world in recent economic history, and it is important to shed more light on the causes of this development. First, we investigate the long-run relationship between selected macroeconomic variables and the stock market in a cointegration framework. As expected, we can support existing findings for the US, whereas Japan does not follow the same relationships. Further econometric analysis reveals a structural break in Japan in the early 1990s. Before the break, the long-run relationship is comparable to that in the US; after the break, this relationship breaks down. We believe that a liquidity trap in a deflationary environment might have caused the normal relationship to break down. Secondly, we enlarge the variable set and apply a non-linear estimation technique to investigate non-linear behaviour between macroeconomic variables and the stock market. We find that the non-linear models have better in- and out-of-sample performance than the corresponding linear models. Thirdly, we test a particular non-linear model in which noise traders interact with arbitrage traders in the dividend yield for the US and Japanese stock markets. A two-regime switching model is supported, with an inner random or momentum regime and an outer mean-reversion regime. Overall, we advise investors and policymakers to be aware that a liquidity trap in a deflationary environment could also cause a severe downturn in the US if appropriate measures are not implemented.
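The cointegration framework used in the first step of this analysis can be illustrated with a minimal Engle-Granger two-step sketch: regress one series on the other, then run a Dickey-Fuller-style regression on the residuals. The simulated series and all parameters below are invented for the example, and the t-statistic must be judged against Engle-Granger critical values, not standard normal ones.

```python
import numpy as np

def engle_granger_stat(y, x):
    """Two-step Engle-Granger sketch: OLS of y on x, then a Dickey-Fuller
    t-statistic on the residuals (no lag augmentation; critical values
    come from Engle-Granger tables, not standard DF tables)."""
    X = np.column_stack([np.ones(len(x)), x])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    de = np.diff(resid)           # first differences of the residuals
    lag = resid[:-1]              # lagged residual level
    rho = np.dot(lag, de) / np.dot(lag, lag)
    se = np.sqrt(np.sum((de - rho * lag) ** 2) / (len(de) - 1) / np.dot(lag, lag))
    return rho / se  # large negative values reject "no cointegration"

# Illustrative cointegrated pair: a shared stochastic trend plus stationary gaps.
rng = np.random.default_rng(2)
trend = np.cumsum(rng.normal(size=1000))
x = trend + rng.normal(size=1000)
y = 2.0 + 1.5 * trend + rng.normal(size=1000)
t_stat = engle_granger_stat(y, x)
# A strongly negative statistic here reflects the stationary residual,
# i.e. the two series share a long-run relationship.
```

Replacing `y` with an independent random walk would leave the residuals non-stationary and push the statistic toward zero, which is the "relationship breaks down" case described for post-break Japan.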
Wed, 25 Jun 2008 00:00:00 GMT
http://hdl.handle.net/10023/464
Humpe, Andreas

Optimal monetary and fiscal policy in economies with multiple distortions
http://hdl.handle.net/10023/438
This thesis aims to contribute towards a better understanding of the optimal coordination of monetary and fiscal policy in complex economic environments.
We analyze the characteristics of optimal dynamics in an economy in which neither prices nor wages adjust instantaneously and lump-sum taxes are unavailable as a source of government finance. We then propose that monetary and fiscal policy should be coordinated to satisfy a pair of simple 'specific targeting rules', a rule for inflation and a rule for the growth of real wages. We show that such simple rule-based conduct of policy can do remarkably well in replicating the dynamics of the economy under optimal policy following a given shock.
We study optimal policy coordination in the context of an economy where a constant proportion of agents lacks access to the asset market. We find that the optimal economy moves along an analogue of a conventional inflation-output variance frontier in response to a government spending shock, as the population share of non-Ricardian agents rises. The optimal output response rises, while inflation volatility subsides. There is little evidence that increased government spending would crowd in private consumption in the optimal economy.
We investigate the optimal properties and wider implications of a macroeconomic policy framework aimed at meeting an unconditional debt target. We show that the best stationary policy in terms of an unconditional welfare measure is characterized by highly persistent debt dynamics, less history-dependence in the conduct of policy, less reliance on debt finance and more short-term volatility following a government spending shock compared with the non-stationary 'timelessly optimal' plan.
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10023/438
Horvath, Michal

Sunk cost accounting and entrapment in corporate acquisitions and financial markets : an experimental analysis
http://hdl.handle.net/10023/427
Sunk cost accounting refers to the empirical finding that individuals tend to let their
decisions be influenced by costs made at an earlier time in such a way that they are
more risk seeking than they would be had they not incurred these costs. Such
behaviour violates the axioms of economic theory, which state that individuals should
only consider incremental costs and benefits when executing investments. This
dissertation is concerned with whether the pervasive sunk cost phenomenon extends to
corporate acquisitions and financial markets. 122 students from the University of St
Andrews participated in three experiments exploring the use of sunk costs in
interactive negotiation contexts and financial markets. Experiment I elucidates that
subjects value the sunk cost issue higher than other issues in a multi-issue negotiation.
Experiment II illustrates that bidders are influenced by the sunk costs of competing
bidders in a first-price, sealed-bid, common-value auction. In financial markets there
exists an analogous concept to sunk cost accounting known as the disposition effect.
This explains the tendency of investors to sell “winning” stocks and hold “losing”
stocks. Experiment III demonstrates that trading strategies in an experimental equity
market are influenced by a pre-trading brokerage cost. Not only are subjects
influenced in the direction that reduces the disposition effect but also trading is
diminished. Without the brokerage cost there was a significant disposition effect.
JEL-Classifications
C70, C90, D44, D80, D81, G11
Sun, 01 Jun 2008 00:00:00 GMT
http://hdl.handle.net/10023/427
Kelly, Benjamin

Factors which affect the dynamics of privately-owned Chinese firms: an interdisciplinary empirical evaluation
http://hdl.handle.net/10023/372
The thesis focuses on the factors which affect firm growth in the setting of the Chinese transition economy, such as size, age, entrepreneurship, resources, and environment. Given the complexity of the business expansion mechanism, an interdisciplinary approach combining the fields of economics and management is adopted. Using fieldwork methods, new data were gathered in face-to-face interviews with 83 owner-managers of Chinese privately owned firms in P. R. China in 2004, as well as in follow-up telephone interviews in 2006. This unique body of qualitative and quantitative data on firm operation, human resources management, finance, technology and innovation, enterprise culture and competitive environment was collected with a specially designed survey instrument, and enabled a number of new hypotheses to be tested in both economic and managerial respects.
With respect to the modern developments of Gibrat’s Law (1931) and Jovanovic’s Learning Theory (1982) in economics, the effects of two “stylized factors”, namely size and age, along with a vector of firm-specific, environmental and selection bias variables, on firm growth, were examined in Heckman’s (1979) two-step selection model with the correction for sample selection bias and heteroscedasticity. The results indicated that the “stylized facts” that smaller and younger firms grew faster were also valid in the setting of China.
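For readers unfamiliar with the method, Heckman's (1979) two-step selection estimator referenced above can be sketched as follows. This is a generic textbook sketch, not the thesis's actual specification; the variable names and simulated regressors are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def heckman_two_step(y, X, d, Z):
    """Heckman (1979) two-step sample-selection correction (sketch).

    d : 0/1 selection indicator; Z : selection-equation regressors
    y : outcome (meaningful only where d == 1); X : outcome regressors
    Step 1: probit of d on Z, yielding the inverse Mills ratio (lambda).
    Step 2: OLS of y on [X, lambda] over the selected sample.
    Returns the step-2 coefficients; the last one is the selection term.
    """
    # Step 1: probit by maximum likelihood
    def negll(g):
        p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
        return -(d * np.log(p) + (1 - d) * np.log(1 - p)).sum()
    gamma = minimize(negll, np.zeros(Z.shape[1]), method="BFGS").x
    zb = Z @ gamma
    mills = norm.pdf(zb) / norm.cdf(zb)  # inverse Mills ratio
    # Step 2: OLS on the selected sample with the Mills ratio appended
    sel = d == 1
    Xa = np.column_stack([X[sel], mills[sel]])
    beta, *_ = np.linalg.lstsq(Xa, y[sel], rcond=None)
    return beta
```

A significant coefficient on the appended Mills-ratio term is the usual diagnostic that selection bias would have contaminated a naive OLS regression on the selected sample.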
This thesis also explored managerial factors contributing to firm growth – viz. entrepreneurship theory, the resource-based view in strategic management, and contingency theory in organizational behaviour. A variety of statistical methods were utilized to operationalize entrepreneurial orientation (EO), intangible assets (IA), and contingency factors (e.g. structure, environment, strategy), and econometric models were estimated to examine their relationship with firm dynamics. The evidence suggested that IA might be more capable of facilitating firm growth than EO; however, when both were disaggregated into a lower level of attributes, the influences on growth varied. Further, contingency theory, originally proposed for the case of larger firms in the West, was also validated in this study on the sampled Chinese firms. The combination of organizational forms and contingency configurations had greater power to explain business expansion. This implied that “the good fit” of contingency factors influenced firm dynamics only in a moderate way, whereas “the badness of fit” in configuration could engender either the highest or the lowest firm growth, depending on the organizational structure.
Fri, 30 Nov 2007 00:00:00 GMThttp://hdl.handle.net/10023/3722007-11-30T00:00:00ZXu, Zhibin
Productivity trends in the Thai manufacturing sector: the pre- and post-crisis evidence relating to the 1997 economic crisishttp://hdl.handle.net/10023/369
The principal aim of this thesis is to examine the validity of the claim that low productivity led to a decline in Thailand’s competitiveness, and hence, to the 1997 economic crisis. For a decade from 1985 to 1995, Thailand was one of the world’s fastest-growing economies, with average real annual GDP growth of 8.4 percent. However, such growth was criticized as being simply the result of large inward investment and rapid accumulation of capital, yielding very little productivity growth and therefore being unsustainable in the long run. Worse still, the later surges of capital inflows came in mainly as speculative flows rather than as foreign direct investment in production and businesses. Hence, as predicted, the boom came to a sudden end in 1997. The statistics recorded a severe economic contraction: financial markets collapsed, the currency was battered, domestic demand slumped, severe excess capacity emerged, employment deteriorated, personal and corporate incomes diminished, inflation and the cost of living mounted, and finally, poverty surged.
This thesis utilizes a stochastic production frontier approach to verify the claim that low productivity lessened Thailand’s competitiveness. Unlike the standard econometric approach, this approach allows for the existence of technical inefficiency in the production process. Unlike non-parametric approaches, it also recognizes that such inefficiency can sometimes arise from external factors outside the firms’ direct control, such as statistical errors and random shocks. The period covered in this thesis is 1990 to 2002, divided into two sub-periods: the pre-crisis period (1990–1996) and the post-crisis period (1997–2002). The estimation results indicate a structural shift in the Thai manufacturing sector, from being labour intensive in the pre-crisis period to being capital intensive in the post-crisis period. The productivity level also improved post-crisis, compared to the pre-crisis level, and is shown to follow an increasing trend. The low level of productive investment in the pre-crisis period is identified as having led to the decline in the manufacturing sector’s efficiency. The thesis concludes that this low productivity level did indeed lead to the decline in Thailand’s competitiveness, and hence to the decline of export growth, which was at that time the main source of Thailand’s economic growth; in turn, playing an important role in precipitating the 1997 economic crisis.
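A stochastic production frontier of the kind referenced above is typically the normal/half-normal model of Aigner, Lovell and Schmidt (1977), in which log output deviates from the frontier by symmetric noise v minus a one-sided inefficiency term u. The following is a minimal maximum-likelihood sketch of that generic model, not the thesis's actual specification or data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def frontier_negll(params, y, X):
    """Negative log-likelihood of the normal/half-normal stochastic
    frontier y = X beta + v - u, with v ~ N(0, s_v^2) and u half-normal.
    Parameterized by beta, log(sigma), log(lambda), where
    sigma^2 = s_u^2 + s_v^2 and lambda = s_u / s_v.
    """
    k = X.shape[1]
    beta = params[:k]
    sigma, lam = np.exp(params[k]), np.exp(params[k + 1])
    eps = y - X @ beta
    ll = (np.log(2.0 / sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

def fit_frontier(y, X):
    """Fit by maximum likelihood; returns (beta, sigma, lambda)."""
    k = X.shape[1]
    b0, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS starting values
    x0 = np.concatenate([b0, [0.0, 0.0]])
    res = minimize(frontier_negll, x0, args=(y, X), method="BFGS")
    return res.x[:k], np.exp(res.x[k]), np.exp(res.x[k + 1])
```

A large estimated lambda means most of the composed error is one-sided inefficiency rather than random shocks, which is the distinction the abstract draws between inefficiency under firms' control and external noise.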
Fri, 13 Jul 2007 00:00:00 GMThttp://hdl.handle.net/10023/3692007-07-13T00:00:00ZArunsawadiwong, Suwannee
Towards the microfoundations of finance and growthhttp://hdl.handle.net/10023/331
We take a critical view of the standard approach to finance and growth. The mapping between the theory and empirics is shown to be poorly understood, and this is traced to deficiencies in our understanding of the microeconomics at play. By looking at both primary and secondary historical evidence we argue that issues of aggregation are critical, and that spatial factors are also prevalent. Further, we suggest that these disaggregated elements can change over the course of an industrial revolution. A model in the spirit of standard finance and growth theories is extended to consider these further effects, and we calibrate the model to data on historical growth paths.
In order to advance our understanding of the microeconomic factors that cause the observed phenomena in the finance-growth nexus, we develop a general equilibrium theory of financial intermediation in which exchange costs are endogenously determined by technologies, endowments and preferences. We suggest that incomplete contracts might be central to these phenomena. We link this framework to an understanding of power and political economy in a setting with heterogeneous agents. We develop these results numerically, showing a number of interesting interactions between markets, exchange costs and institutions in economies with different levels of wealth.
The model of endogenous exchange costs can be thought of in terms of the findings coming out of our historical analysis. We outline in some detail the further steps that need to be taken before we can speak of the microfoundations of finance and growth with any confidence. First, a fully dynamic model of markets and coalitions must be embedded within a story of economic growth that can match the dynamic observations. Second, we must develop our conception of incomplete contracting and the link with institutions and political economy. The thesis thus opens a number of interesting avenues for future research.
Wed, 20 Jun 2007 00:00:00 GMThttp://hdl.handle.net/10023/3312007-06-20T00:00:00ZTrew, Alex
Environment and health in Central Asia: quantifying the determinants of child survivalhttp://hdl.handle.net/10023/330
The impact of environmental degradation on well-being is largely ignored in accounting for the economic costs of development. Due in large part to measurement difficulties, the role of the environment in the daily welfare of the world's poorest remains inadequately accounted for in development policies. The aim of this work is, therefore, to advance our understanding of the relationship between the environment and human health. Anthropogenic activities in Central Asia have severely disrupted the natural environment. The poorest, most vulnerable members of society are at an increased risk of mortality and a lifetime of illness associated with worsening ecological conditions in the region. The work is inter-disciplinary by nature and draws on many social sciences in an attempt to provide new insight into the role of long-term environmental degradation and its impact on social welfare.
There are three main original contributions of this work. Firstly, the research demonstrates that the traditional emphasis in the literature on socioeconomic factors in explaining high rates of child mortality in Central Asia is inadequate. Secondly, for the first time in an international cross-section examining the determinants of child survival, the macro-level environment is put forth as a key determinant of excess child mortality in Central Asia. An improved measure of income is used for the first time in such a study to control for important distributional effects within and between countries. The results confirm the hypothesis that traditional determinants do not account for the endemically high rates of mortality in the region. Thirdly, using administrative (oblast) data from Uzbekistan, Chapter 6 presents the first study of its kind to incorporate important geographic as well as socioeconomic information in explaining variation in infant mortality likely due to ecological degradation. Ultimately, the findings demonstrate that the environment must be adequately considered in all policy making aimed at improving health outcomes in the region.
Wed, 20 Jun 2007 00:00:00 GMThttp://hdl.handle.net/10023/3302007-06-20T00:00:00ZFranz, Jennifer Sue