OUPblog: Academic insights for the thinking world.

Is there a place for the arts in health?
17 August 2017
In a utopian world of abundant health budgets and minimal health challenges, it is probably fair to say that few would object to including the arts within hospitals or promoting them as a part of healthy lifestyles. Certainly, we have a long history of incorporating the arts into health (stretching back around 40,000 years), so it’s a concept many people are familiar with. But in an era of austerity, the value that the arts can bring comes under much closer scrutiny: do they really bring value or could they even be a distraction or a drain on limited funds? In fact, research is suggesting the opposite. In times of austerity, it is more important than ever that we look for creative ways to improve health.

The arts in medicine
The research evidence around the effects of the arts in healthcare settings is much larger than many realise. Far from being merely ‘nice to have’, a range of art forms has been shown to have tangible health benefits. For example, music has helped patients with Parkinson’s disease to walk, practising magic tricks has been found to improve hand function in children with cerebral palsy, the design of emergency departments has been shown to reduce violence towards staff, background music prior to surgery has been found to reduce the quantity of morphine needed post-surgery, video games have been shown to increase medication adherence in children with cancer, and songs have been found to improve cognitive function in people with dementia. Some of the most effective programmes involve ‘problem-based solution’ models, whereby clinicians and health professionals highlight key challenges in their daily work and artists devise creative solutions to these problems. Such work shows that the arts are not just a means of entertainment: they play functional roles within hospitals, hospices, and care homes.

The arts in public health
We find a similar, growing evidence base for the impact of the arts in preventing ill health. An article in the BMJ in 1996 showed that people who engage with the arts and culture only occasionally are more likely to die prematurely than people who engage regularly, even when accounting for factors such as socio-economic status. This finding has since been replicated eight further times in different population databases, not just for all-cause mortality but also for cancer mortality and cardiovascular mortality. But why do we find such an effect? There is a range of mechanisms at play here: psychological, physiological, social, and behavioural. For example, engaging with the arts supports mental health, reducing negative symptoms of mental illness such as anxiety and depression and enhancing wellbeing. The arts also affect cognition, supporting temporal and spatial abilities, language, and memory, and reducing our risk of dementia. The arts also support social identities and cohesion within society. And they can increase our agency and self-esteem, which in turn increase our ability to change behaviours in other aspects of our lives. As the emphasis on preventative health increases, there is a growing opportunity to harness these effects to help people live happier, healthier lives.

The arts in health communication
However, even if we acknowledge the role of the arts in public health and medicine, it can seem a stretch to suggest they have a role in extreme situations. Is there really a place for the arts in the context of global emergencies, such as epidemics? The Ebola outbreak of 2013–2016 demonstrated precisely how important the arts are in such situations. During the epidemic, there were abundant rumours and misunderstandings about the disease amongst locals, instances of people who were affected hiding from medical staff, Ebola survivors being outcast from their societies, and even healthcare workers being murdered. To combat this misinformation and raise public understanding, short films based on oral traditions of storytelling, radio serial dramas, and even electro-dance rap songs were created and went viral. Sensitive to local dialects and cultural traditions, these art forms became a route to communicate critical information in a way that people could trust, combatting the misinformation and helping health workers to carry out their work. Across the world there are programmes harnessing this power of the arts in health communication, from theatre productions to educate about diabetes, to school crafts workshops to explain the risk factors for cancer. The arts can be direct and accessible ways of communicating crucial messages to which people from a wide range of backgrounds can relate.


The health economics of the arts
The examples given here are by no means the only ways that the arts can play a role in health: work in medical humanities, healthcare design, arts therapies, and arts-based training for staff provides further arguments for their importance. Yet despite the evidence, there can still be caution about the cost of engaging the arts in health. Should we be spending money on the arts when such tight budgets mean there are staffing shortages or a lack of beds? Surprisingly, much of the spend on the arts in healthcare does not come from healthcare budgets; arts organisations, charities, and philanthropists fund a plethora of work around the UK and internationally. Further, many arts in health programmes have been shown to save money in health budgets. People who engage with the arts and culture in their communities are 65% more likely to report good health, even when statistically controlling for socio-demographic and health-related factors. These people also make less use of health services. In fact, data from the Department for Culture, Media and Sport in the UK from 2015 show that the estimated cost saving to the NHS for GP visits and psychotherapy alone, as a result of people engaging with the arts and therefore using health services less, is £695 million per year. In light of data such as these, Clinical Commissioning Groups in England have been trialling social prescribing and cultural commissioning programmes involving arts projects. Gloucester Clinical Commissioning Group runs a programme called Artlift whereby, for certain health conditions such as chronic pain and low-level mental health conditions, GPs have been referring people not for medication or psychotherapy but to participatory arts workshops. The evaluation of the programme found that participants not only reported improvements in their health, but over the following year their numbers of GP consultations and hospital admissions dropped.
This equated to a saving of £576 per patient, compared to a cost of just £360. The savings from 90 patients alone in the 12 months following involvement were over £42,000. These are just two examples. There are growing numbers of health economic evaluations that are providing more insight into how and where the arts can play a role in supporting health budgets, suggesting that far from being a drain on resources, the arts could be an additional means of support to healthcare.

The World Health Organisation defined health as a ‘state of complete physical, mental and social wellbeing and not merely the absence of disease or infirmity’. In times of austerity, it can be easy for us to forget everything that seems peripheral to the core mission of avoiding disease or infirmity. But keeping sight of the importance of creativity, arts, culture, and community engagement is critical. The arts are a powerful tool we should be harnessing.

The world of Jane Austen [timeline]
17 August 2017
Jane Austen was a British author whose six novels quietly revolutionized world literature. She is now considered one of the greatest writers of all time (with frequent comparisons to Shakespeare) and hailed as the first woman to earn inclusion in the established canon of English literature. Despite Austen’s current fame, her life is notable for its lack of traditional ‘major’ events. She did not marry, although she had several suitors, and any references to private intimacies or griefs were excised from Jane’s letters by her sister Cassandra after the author’s death. Austen struggled to get many of her novels published, and some of her best-loved writings (including Northanger Abbey and Persuasion) were published posthumously.

Despite this, recent biographers have tended to see Austen’s life through the prism of significant and shaping emotional events — personal distresses and disappointments as well as literary accomplishments. With this in mind, we have taken a look at some of the key events in Austen’s life: her unhappy move to Bath, the influence of family ties and the French Revolution, publishing deals, and subsequent critical responses. Discover Austen’s world, and its impact on her writing….

Featured image credit: Three young women sitting at a table in a garden having afternoon tea, by Kate Greenaway, from Wellcome Images. CC-BY-SA 4.0 via Wikimedia Commons.

We’re not singing a hillbilly elegy: challenging stereotypes in contemporary Appalachian song
17 August 2017
In the run-up to the 2016 U.S. presidential election, Appalachia took center stage as a potent symbol of the many ways that decades of economic globalization have marginalized the country’s white working-class voters. Liberal and conservative commentators alike were quick to point to the decline of places like McDowell County, West Virginia, where unemployment and opioid addiction rates have skyrocketed in recent years, as evidence that their preferred policy interventions were desperately needed. Both Hillary Clinton and Donald Trump visited Appalachian communities during their respective presidential campaigns; Trump even famously posed before the media donning a miner’s safety helmet during a May 2016 rally in Charleston, West Virginia.

Aside from the current resident of the White House, perhaps no one has profited more from the politicization of Appalachia than author J.D. Vance, whose Hillbilly Elegy: A Memoir of a Family and Culture in Crisis (Harper, 2016) rocketed to the top of the New York Times bestseller list in the months leading up to the election. Recounting his own troubled family life in Ohio’s Miami Valley and his struggles to find stability in a household marred by addiction, precarious employment, and abuse, Vance was quick to point to what he sees as a decline in “hillbilly” values—including adherence to a Protestant Christian faith, an unrestrained and violent hypermasculinity, and a powerful work ethic—as the primary cause of the challenges he faced. Appalachia, he argued, is a place in decline that can only be saved by reinvigorating “traditional” values.

Hillbilly Elegy was quickly picked up by a host of media outlets seeking insights into the minds of Trump voters, and several of them conducted their own reporting to add depth and color to Vance’s narrative of decline. Reviews in the New York Times and The Atlantic were generally positive (but with caveats), and Vance was interviewed by numerous publications to help explain the widespread appeal of Donald Trump among Appalachian voters. Several universities—including those that attract significant segments of their student bodies from Appalachia—have adopted the book for their campus reading programs and have invited Vance to speak on their campuses. Director Ron Howard—who famously portrayed one of Appalachia’s most beloved television characters, Opie Taylor, on The Andy Griffith Show during the 1960s—has also announced his intentions to develop the book into a major motion picture.

Yet, as many Appalachian studies scholars, commentators, and community activists have pointed out, Vance’s book might be a rather uninspiring account of his own difficult circumstances, but it is far from an accurate representation of daily life in Appalachia, and its efforts to blame poor and working-class whites for their economic and health struggles ignore the widespread structural challenges that have been detrimental to their well-being. Rather, Hillbilly Elegy traffics in familiar stereotypes of Appalachian laziness, inebriation, and fecundity that have been circulating since the late nineteenth century. Just as troubling is Vance’s seeming obliviousness to the many local and regional efforts being undertaken by Appalachian residents to sustain various cultural traditions—including foodways, storytelling, and arts and crafts—and to develop a vibrant and diverse economy throughout the region, one that seeks to replace the dwindling resource extraction and manufacturing bases that sustained the region for several generations.

Not surprisingly, music plays a key role in the ongoing cultural and economic vibrancy of many Appalachian communities. From old-time jam sessions held at restaurants that attract tourists to nationally and internationally syndicated programs like West Virginia Public Radio’s Mountain Stage and Kentucky’s WoodSongs Old-Time Radio Hour, musical activities throughout the region often seek to challenge stereotypes and to push against the narrative of Appalachian decline that has been circulating for more than a century.

Some of the most exciting contributions to these broad musical efforts have come from young musicians who have immersed themselves in the various traditional musics of Appalachia—the blues, Anglo-American balladry, fiddle and banjo tunes, and bluegrass—and used them as a jumping-off point for their own original creations. Many, but not all, of these musicians also identify as activists who work to challenge received narratives such as those offered by Vance and to build a more inclusive Appalachia.

One of the leaders of this movement is Saro Lynch-Thomason, a ballad singer, songwriter, visual artist, storyteller, and activist who lives in Asheville, North Carolina. A native of Nashville, Tennessee, Lynch-Thomason gained national attention in 2012 with an adventurous album, website, and lecture series called Blair Pathways: A Musical Exploration of America’s Largest Labor Uprising. Growing out of her work as an environmental activist fighting the widespread use of mountaintop removal mining techniques in the Appalachian coalfields, Blair Pathways is a detailed musical exploration of the place of Blair Mountain, West Virginia, in the long history of coal mining in Appalachia, from the early twentieth-century efforts to unionize the coalfields (a period in West Virginia history known as “the Mine Wars”) to recent efforts to protect the mountain from mountaintop removal. The album features Lynch-Thomason’s powerful singing voice along with those of former Carolina Chocolate Drops member Dom Flemons, labor singer Elaine Purkey, and ballad singer Elizabeth LaPrelle. Lynch-Thomason followed the release of this extensive album with a series of lecture-performances that took her to universities, community centers, and churches around the United States to tell of the intersections of environmental and labor struggles in the region. More recently, her song “More Waters Rising” has garnered national attention as a powerful song of solidarity in the face of contemporary political challenges.

Similarly, old-time banjoist and songwriter Sam Gleaves, of Wytheville, Virginia, has drawn significant national attention not only for his exceptional takes on traditional string band music, but also for his willingness to highlight the stories of LGBT people in Appalachia. His 2015 song “Ain’t We Brothers” recounts the story of Sam Williams, a gay West Virginia coal miner who faced extensive backlash in his community. Gleaves, who is also openly gay, told NPR Music’s Jewly Hight that “traditional music and traditional art really appeals to queer people, because in a lot of ways[,] it’s the music of a struggle; it’s the music of people who have fought against oppression.”

Vance’s Hillbilly Elegy offers few solutions to overcome the rather significant economic, public health, and environmental challenges that have emerged following the decline of coal. Rather, he offers a critique of modern Appalachian life that is grounded in romantic visions of an idyllic Appalachian past. But modern Appalachia requires modern interventions to solve its problems, and the region’s musicians are making significant inroads toward building a more inclusive and compassionate Appalachia, an Appalachia that can be transformed by creative problem-solving and a willingness to join together in community.

On nuts and nerds
16 August 2017
For decades the English-speaking world has been wondering where the word nerd came from. The Internet is full of excellent essays: the documentation is complete, and all the known hypotheses have been considered, refuted, or cautiously endorsed. I believe one of the proposed etymologies to be convincing (go on reading!), but first let me say something about nut. Although its etymology and history are puzzling, even mysterious, no one seems to care. Slang is a flower growing on a huge dunghill. Not unexpectedly, people tend to pay attention to the flower and disregard the dung.

Nut is an old word with excellent connections, and yet its etymology is obscure. The Old English for nut was hnutu. Many modern words beginning with n and l once had an h before those resonants. German Nuss and Dutch noot are obvious congeners of nut. Yet trouble begins early enough. German Nuss also means “a slap,” most often as an element in the compound Kopfnuss “a (light) slap on the back of the head” (Kopf means “head”). Opinions are divided on whether Nuss1 and Nuss2 are related. They may well be. But how are nuts connected with beating? There was indeed an Old English verb hnītan “to thrust, knock, come into collision,” with the uncertain by-form hnēotan. Did nuts regularly fall on people’s heads? More likely, nuts have always been associated with cracking (compare a hard nut to crack); hence the idea of beating and striking. If hnītan and nut are related, both must go back to some vague sound-imitating (onomatopoeic) base hVn, in which V stands for any vowel. (Direct ties between hnītan and hnutu cannot be established, because hnēotan may be a ghost word, while ī and short u belong to different ablaut series.)

The Oxford Etymologist in disguise.

The plot thickens once we remember that the Latin for “nut” is nux, properly, nuk-s, whose root is only too familiar to us from nucleus and nuclear (nut “kernel”). Both the beginning and the end of the Latin word are “wrong” from the Germanic point of view. Since Old Engl. hnutu and its cognate began with an h, we should expect kn– in Latin (by the well-known law of the First Consonant Shift: Germanic f, th, and h correspond to non-Germanic p, t, and k), but find no k. Also, the root of hnut-u ends in t, while the last radical consonant of nuk-s is k. Therefore, strict etymologists deny the affinity between nut and nux. But special pleading is a tempting procedure, and the closeness of nut and nux is so great that all kinds of rules have been proposed to keep the two words in the same family. It would be nicer if the Latin noun began with a k (knux), but, of course, if words for knocking came from saying kn-kn-kn and if hnutu-nux are offspring of this root, regularity in this area can hardly be expected (for a similar problem search for kl-words in this blog).

It won’t surprise anyone that nut acquired various metaphorical meanings. Not unexpectedly, at one time, all kinds of round, especially small round, objects began to be called nuts (as seen, among others, in nuts and bolts). Nuts “testicles,” nut “head,” and even nut “a trifling object” need no additional explanation. The baffling move is from nut “head” to nut “blockhead, numskull” and other non-trivial metaphorical senses. Those senses were recorded extremely late, as the evidence in the OED shows. First, we find nuts “a source of pleasure or delight” and for nuts “for amusement, for fun” (1625), an isolated example (apparently, this slang existed underground for centuries). In 1895, not to be able to do a thing for nuts “to be incompetent” was attested (bean shared a similar fate). In 1917, the nuts “an excellent or first-rate person or thing” surfaced in a printed text. I have a suspicion that nuts “testicles” prompted all such uses and that they were current long before they appeared in books. Jocular (crude or simply humorous) references to the male genitals as a source of strength or joy are ubiquitous in uncensored speech. My suspicion is borne out by the existence of the adjective nutty “amorous (!); fond, enthusiastic.”

From nutty we may perhaps move to nut “fop, masher” (on masher see this blog for 12 January 2011). In England, the word enjoyed great popularity in the decades (or at least in the last decade) before the First World War. Most aptly, one of those who contributed to the discussion of nut “fop” in Notes and Queries quoted Lafeu’s (or Lafew’s) remark on Parolles in All’s Well that Ends Well: “There can be no kernel in this light nut; the soul of this man is his clothes” (II: 5, 44). Besides, let us not forget the semantic and phonetic surroundings of this nut, namely, natty “neatly smart” and neat (the latter is a possible source of natty). Not only birds but also words of a feather stick together and influence one another. A nut of that epoch was someone who made a fool of himself in the eyes of non-nuts but who was also an analog of today’s cool dude. Since the nut of a century ago, like his sibling masher, was keen on impressing women, he obviously needed “nuts.” Already then some wits (wags, cards) spelled nuts as knuts, pronounced the first consonant, and joked that King Cnut (Canute, Knútr) was the first nut. One of the indefensible etymologies of nerd traces this word to knurd (drunk, if read backwards).

Supposedly, the kernels of this squirrel’s nuts are pure gold.

It is the sense of nut “nitwit, madman, etc.,” which also became common only in the nineteenth century, that is the hardest to explain. Should we again return to Parolles’s “the light nut”? Was this the forgotten path to the metaphor: from “nut” to “light nut” and ultimately to “a dim-witted, dotty character”? Nut “the amount of money required for a venture; any amount of money” was coined in the US. Was it a product of the gold rush? Nuggets of gold must have been called nuts more than once. (Nug “lump,” the putative etymon of nugget, is another word of unknown origin.) I am slightly familiar with the old slang of Russian gold mining and find my guess not improbable.

A nerd. Isn’t he a dear?

There can be little doubt that nut has been an expressive word for centuries, and, as such, it could and did have expressive forms. There seems to be a consensus that the American coinage nertz ~ nerts “nonsense,” recorded only in 1929, is indeed an expressive variant of nuts. In this function, the syllable er is not uncommon (at least so in American English). Dr. Ari Hoptman called my attention to the pronunciation lurve for love in one of Woody Allen’s old movies. If nuts can be the etymon of nerts, I see no reason why nerd could not have the same source. To be sure, this idea has occurred to many people before me, but I wonder why it has not been accepted, why people keep pounding on an open door and say “etymology dubious, disputed, uncertain, unknown.” The etymology of such a word can never be “known,” but a sound hypothesis need not be listed for the sake of good manners along with all kinds of fanciful suggestions. Also, Dr. Seuss, who by chance coined his own nerd, should be left in peace amid his zoo. Nerd, like geek, wimp, and square, was launched as a derogatory term. With time, it acquired some endearing overtones. After all, not every intellectual is an old fogey or a social moron. But it is the origin, rather than the word’s later development, that interested us in this post.

Travelling with Shakespeare
Wed, 16 Aug 2017 08:30:12 +0000
William Shakespeare is celebrated as one of the greatest Englishmen who has ever lived and his presence in modern Britain is immense. His contributions to the English language are extraordinary, helping not only to standardize the language as a whole but also inspiring terms still used today (a prime example being “swag” derived from “swagger” first seen in the plays Henry V and A Midsummer Night’s Dream). Shakespeare’s image even graced the British £20 note from 1970 to 1993, and people from all over the world come to visit the playwright’s birthplace in Stratford-upon-Avon and the famous Globe Theatre in London.

However, it is not only in England that Shakespeare has left his literary mark: “All the world’s a stage” begins a monologue from As You Like It, and Shakespeare sets up his fictional stages all over the early modern world, most frequently in Europe and on the Mediterranean shores. From Macbeth’s dreary Scotland to the “Kronborg” of Hamlet’s Denmark or the sunnier climes of “fair Verona”, there is a celebrated internationality in Shakespeare’s work. Often based on historic events or legends deeply embedded in Europe’s continental and regional histories, Shakespearean settings influenced viewers and readers not only in Shakespeare’s time, but right up to the present day.

Delve deeper into Shakespeare’s Europe with the interactive map below, looking at some of his most famous plays and their settings. Where will the bard take you?

Featured image credit: “Carte d’Europe” by Jean-Claude Dezauche, 1789. Public Domain via Wikimedia Commons.
A prison without walls? The Mettray reformatory
Tue, 15 Aug 2017 11:30:12 +0000
The Mettray reformatory was founded in 1839, some ten kilometres from Tours in the quiet countryside of the Loire Valley. Over almost a hundred years the reformatory imprisoned juvenile delinquent boys aged 7 to 21, particularly from Paris. It quickly became a model imitated by dozens of institutions across the Continent, in Britain and beyond. Mettray’s most celebrated inmate, the gay thief Jean Genet, depicts it in his influential novel Miracle of the Rose [1946], and it also features at the climax of philosopher Michel Foucault’s history of modern imprisonment, Discipline and Punish [1975]. The boys worked nine-hour days in the institution’s workshops – making brushes and other basic household implements; they dug its fields and broke stones in its quarries. Such labour was thought conducive to moral reform, but it also enabled the institution – which was run by a profit-making private company – to balance its books. In his seventies, Jean Genet suspected that the reformatory was doing rather more than this: making, then concealing, huge profits from its forced labour. He set about, with a small team of helpers, trying to prove this by plundering the institution’s archives as well as reading widely in the historical sources. The result, The Language of the Wall, was a script for a three-part historical documentary drama for television which retells the story of Mettray from its foundation to its closure as a prison in 1937. Genet could find no hard evidence of profiteering, but he dramatizes his own search for it in the script and retells the life of the institution over the hundred years by intertwining often violent incidents from the daily lives of prisoners with scenes from the corridors of power, showing how closely the private institution worked alongside the state and how it survived successive changes of regime.

My return to Mettray’s archives in Tours was guided by Genet’s still unpublished script, versions of which are held at the regional archives in Tours and at IMEC. Although Genet rightly ridicules him for his reactionary politics, it is difficult not to admire the technical accomplishment of Mettray’s principal founder, the devout Frédéric-Auguste Demetz. Demetz had been sent to the United States by the French government with prison architect Abel Blouet in 1836 to follow up Alexis de Tocqueville’s earlier visit, the official purpose of which had been to investigate American prisons. Demetz and Blouet were to obtain more precise technical information about them, including detailed drawings.

Blouet subsequently produced drawings for Mettray’s chapel building which show cells in the crypt, as well as in the building behind, a unit which the institution ran as a money-making enterprise by encouraging middle-class families to send wayward sons for a short spell of “paternal correction,” during which they experienced individual tuition and could also hear Mass in the adjacent chapel from the privacy of their cells without having to mix with the working-class delinquents in the chapel. Demetz was a brilliant publicist for his institution, marketing it and carefully managing its visibility to the outside world. He boasted that Mettray was “sans grilles ni murailles” (without bars or walls) and in one sense this was true: there was no perimeter wall, yet in addition to the punishment cells concealed in the chapel and elsewhere there were frequent roll-calls and escapees were rounded up by the local peasantry, alerted by the ringing of the chapel bell, and incentivized by the payment of a reward. Sometimes they killed escapees. Demetz welcomed philanthropic tourists but only on Sundays, when they witnessed the institution’s weekly military parade, La Revue du Dimanche.

The institution provided a hotel and even postcards for visitors to spread the news of its success in taking wayward criminal children and remoulding them into disciplined servants of the established order: inmates could leave Mettray a few years early if they joined the armed forces, so many did, becoming troops in the colonial armies. Genet’s script is accurate in showing the presence of soldiers from Mettray at key moments in the colonisation of Algeria, as well as in Mexico and Indochina. Demetz was an expert carceral entrepreneur who governed not only the prison but also the surrounding local population, enlisting them as – in effect – auxiliary prison guards to substitute for the absent perimeter wall. In Paris he garnered financial support from the governing classes in a similar way. By capitalising on the fear of crime to further his own institutional agenda, Demetz anticipated the ubiquitous work of today’s “(in)security professionals,” to use Didier Bigo’s suggestive formulation.

Abel Blouet, Maison Paternelle et Chapelle de la Colonie plan élévation, courtesy of Les Archives Départementales d’Indre-et-Loire, 114J173

Cosmic ripples
Tue, 15 Aug 2017 10:30:32 +0000
Michael Faraday transformed our understanding of the physical world when he realised that electromagnetic forces are carried by a field permeating the whole of space. This idea was formalized by James Clerk Maxwell, who constructed a unified theory of electromagnetism in which beams of light are undulations in the electromagnetic field. Maxwell’s theory implies that visible light is just one part of the electromagnetic spectrum. Heinrich Hertz confirmed this experimentally in 1887 by generating and detecting radio waves. The invention of radio followed, along with television, radar, mobile phones, and many other applications. Electromagnetic waves are emitted whenever electrically charged objects, such as electrons, are shaken.

The gravitational field

When Einstein formulated his new theory of gravity – general relativity – he aimed to explain gravity as a theory of fields. In this he was successful. Remarkably, it turned out the appropriate field is spacetime itself.

In general relativity, spacetime is analogous to the electromagnetic field and mass is analogous to electric charge. One implication of the theory is that vigorously whirling large masses around will generate gravitational waves, and as gravity is described as the warping and curvature of spacetime, these gravitational waves are simply ripples in the fabric of space.

Detecting electromagnetic waves is easy. We do it whenever we open our eyes, turn on the television, use Wi-Fi, or heat a cup of tea in a microwave oven. Detecting gravitational waves is rather more difficult, because gravity is incredibly weak compared to the electromagnetic force.

We live in an environment where gravity is very important and this gives a false impression of its strength. But it takes a planet-sized amount of matter pulling together for gravity to have a significant effect, and even then it is easy to pick up metal objects with a small magnet, defying the gravitational attraction of the entire Earth.

Gravity is so weak that even shaking huge masses generates barely the tiniest gravitational ripple. Only the most violent cosmic events produce waves that could conceivably be detected; these include supernova explosions, neutron star collisions, and black hole mergers. Any instrument sensitive enough to detect them must measure changes in distance between two points several kilometres apart by less than one thousandth of the diameter of a proton. Incredibly, such instruments now exist.
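That figure can be checked with a back-of-the-envelope calculation. The sketch below (plain Python; the proton diameter of roughly 1.7 × 10⁻¹⁵ m is an approximate assumed value, and the 4 km baseline is the LIGO arm length mentioned in the next section) converts the required displacement into a dimensionless strain:

```python
# Rough strain estimate: a length change of one thousandth of a proton
# diameter, measured across a kilometres-long baseline.
proton_diameter = 1.7e-15   # metres (approximate)
arm_length = 4.0e3          # metres, one LIGO arm

delta_L = proton_diameter / 1000    # required displacement sensitivity
strain = delta_L / arm_length       # dimensionless strain h = dL / L

print(f"displacement ~ {delta_L:.1e} m, strain ~ {strain:.1e}")
```

The result, a strain of a few parts in 10²², matches the order of magnitude of the signals LIGO actually records.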

Detecting the ripples

In the centenary year of Einstein’s general relativity, researchers achieved their first success. It had taken decades to develop the technology to build LIGO (Laser Interferometer Gravitational-wave Observatory) consisting of two facilities 3000 km apart in the United States, at Hanford, Washington and Livingston, Louisiana. (Two well-separated detectors are required to distinguish true gravitational wave events from the inevitable local background disturbances.)

The facilities are L-shaped with two perpendicular 4 km arms housed within an ultrahigh vacuum. A laser is directed at a beam-splitter sending half the beam down each arm. The light travels 1,600 km, bouncing back and forth 400 times between two mirrors in each arm, before the two half-beams are recombined. The apparatus is designed so that the recombined half-beams completely cancel, with the peaks in the light waves of one beam meeting the troughs in the other, and no light passes to the photodetector. Whenever a passing gravitational wave ripples through the apparatus, however, the lengths of the arms alter very slightly, so the distances travelled by the half-beams changes and their phases shift (by much less than a single wavelength). There is no longer perfect cancellation and some light arrives at the photodetector. The sensitivity of LIGO is extraordinary, as it must be if there is any chance of detecting gravitational waves.
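The dark-fringe idea can be illustrated with a toy model. The snippet below treats the detector as an idealized folded Michelson interferometer (the real instrument uses a more subtle optical arrangement); the 1064 nm laser wavelength is an assumption for illustration, while the 400 bounces are the figure quoted above:

```python
import math

wavelength = 1.064e-6   # m, assumed infrared laser wavelength
bounces = 400           # round trips per arm, as quoted above

def output_fraction(delta_L):
    """Fraction of the input light reaching the photodetector when the
    two arms differ in length by delta_L; folding the beam multiplies
    the accumulated phase shift by the number of bounces."""
    phase = 4 * math.pi * bounces * delta_L / wavelength
    return math.sin(phase / 2) ** 2

print(output_fraction(0.0))     # equal arms: perfect cancellation, no light
print(output_fraction(1e-18))   # a passing ripple leaks a trace of light through
```

Even a sub-proton-scale arm-length difference shifts the phases out of perfect cancellation, which is exactly the leakage the photodetector watches for.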

Extreme violence in the depths of space

The upgraded LIGO programme was scheduled to begin on 18 September 2015. Four days before the official start something wonderful happened. An unmistakable and identical signal was measured by the detectors in Hanford and Livingston within a few milliseconds of each other.

The first ever gravitational wave signal detected by the LIGO observatories. Still taken from a video recording the sound of two black holes colliding. Public domain via Caltech/MIT/LIGO Lab.

Researchers have studied computer models of black hole mergers and other violent cosmic processes so they can recognise the signatures of events detected by LIGO. According to the models, binary black holes produce a continuous stream of gravitational waves that drains energy from the binary system and the black holes gradually spiral together. In the final moments of inspiral the amplitude of the waves increases dramatically. Initially, the newly merged black hole is rather asymmetrical, but it rapidly settles down with a final blast of gravitational waves known as the ring-down.

Much information has been extracted from the brief signal detected by LIGO. It came from an event 1.3 billion light years away that was detonated by two merging black holes during their final inspiral and ring-down. The masses of the black holes are deduced to be 29 and 36 solar masses, and they coalesced into a rapidly spinning black hole of 62 solar masses. What is truly staggering is that during the merger process three times the mass of the sun was converted into pure energy in the form of gravitational waves.
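The energy figure follows directly from E = mc². A quick check, using standard constants and the masses stated above:

```python
M_sun = 1.989e30   # kg, solar mass
c = 2.998e8        # m/s, speed of light

radiated_mass = (29 + 36) - 62      # solar masses lost in the merger
E = radiated_mass * M_sun * c**2    # energy carried off by gravitational waves

print(f"{radiated_mass} solar masses -> E ~ {E:.1e} J")
```

Around 5 × 10⁴⁷ joules, radiated in a fraction of a second.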

This was the first ever detection of a binary black hole system and the most direct observation of black holes ever made. It also confirmed that gravitational waves travel at the speed of light, as expected.

The difference in arrival time at the two observatories indicates the direction towards the event, at least roughly. When further gravitational wave observatories come on line around the world, this will allow more precise determinations of the sources of gravitational waves.
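The geometry behind this is simple triangulation. A wave travelling along the line joining the two sites arrives at most about 10 ms apart, and the measured delay fixes the angle between that baseline and the source direction. In the sketch below, the 7 ms delay is an illustrative value, roughly that reported for the first event:

```python
import math

c = 2.998e8        # m/s, speed of light
baseline = 3.0e6   # m, Hanford-Livingston separation (~3000 km)

max_delay = baseline / c     # wave arriving exactly along the baseline
measured_delay = 7e-3        # s, illustrative value
angle = math.degrees(math.acos(c * measured_delay / baseline))

print(f"max delay ~ {max_delay * 1e3:.0f} ms")
print(f"7 ms delay -> source ~ {angle:.0f} degrees off the baseline")
```

A single time delay constrains the source only to a ring on the sky, which is why the direction is known "at least roughly" and why additional detectors sharpen the localization so much.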

GOTO Observatory

Earlier this month a new wide-field telescope was inaugurated on the Roque de los Muchachos, high above La Palma in the Canary Islands. It is known as the Gravitational-wave Optical Transient Observer (GOTO) and will seek the source of any gravitational waves detected by LIGO, providing further valuable insights into their origin.

Since 2015, two more signals have been detected by LIGO. Both are due to black hole mergers. The era of gravitational wave astronomy has begun.

]]>http://feeds.feedblitz.com/~/435140078/_/oupblog/feed/0*Featured,Science & Medicine,Gravitational Waves,An Inspirational Tour of Fundamental Physics,OUPScience,The Physical World,Books,black hole,Nicholas Mee,relativity,Physics & Chemistry,particle physics,LIGO,space,gravityMichael Faraday transformed our understanding of the physical world when he realised that electromagnetic forces are carried by a field permeating the whole of space. This idea was formalized by James Clerk-Maxwell who constructed a unified theory of electromagnetism in which beams of light are undulations in the electromagnetic field. Maxwell’s theory implies that visible light is just one part of the electromagnetic spectrum. Heinrich Hertz confirmed this experimentally in 1887 by generating and detecting radio waves. The invention of radio followed, along with television, radar, mobile phones, and many other applications. Electromagnetic waves are emitted whenever electrically charged objects, such as electrons, are shaken.
The gravitational field
When Einstein formulated his new theory of gravity – general relativity – he aimed to explain gravity as a field theory. In this he was successful. Remarkably, the appropriate field turned out to be spacetime itself.
In general relativity, spacetime is analogous to the electromagnetic field and mass is analogous to electric charge. One implication of the theory is that vigorously whirling large masses around will generate gravitational waves, and as gravity is described as the warping and curvature of spacetime, these gravitational waves are simply ripples in the fabric of space.

Schematic illustration of a binary black hole system generating gravitational waves. Copyright: Nicholas Mee. Used with permission.
Detecting electromagnetic waves is easy. We do it whenever we open our eyes, turn on the television, use Wi-Fi, or heat a cup of tea in a microwave oven. Detecting gravitational waves is rather more difficult, because gravity is incredibly weak compared to the electromagnetic force.
We live in an environment where gravity is very important and this gives a false impression of its strength. But it takes a planet-sized amount of matter pulling together for gravity to have a significant effect, and even then it is easy to pick up metal objects with a small magnet, defying the gravitational attraction of the entire Earth.
Gravity is so weak that even shaking huge masses generates barely the tiniest gravitational ripple. Only the most violent cosmic events produce waves that could conceivably be detected: supernova explosions, neutron star collisions, and black hole mergers. Any instrument sensitive enough to detect them must measure changes in distance between two points several kilometres apart by less than one thousandth of the diameter of a proton. Incredibly, such instruments now exist.
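The sensitivity just quoted is usually expressed as a dimensionless strain, h = ΔL/L, the standard figure of merit for interferometers. A rough conversion (the proton diameter and 4 km arm length below are approximate values supplied here, not taken from the article) looks like this:

```python
# Express "a thousandth of a proton over a few kilometres" as a strain h = dL / L.
PROTON_DIAMETER = 1.7e-15   # metres, approximate
ARM_LENGTH = 4.0e3          # metres, one interferometer arm

delta_l = PROTON_DIAMETER / 1000   # a thousandth of a proton diameter
strain = delta_l / ARM_LENGTH      # dimensionless

print(f"h ~ {strain:.0e}")
```

The answer is of order 10⁻²², which is the strain scale at which the detectors described below must operate.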
Detecting the ripples
In the centenary year of Einstein’s general relativity, researchers achieved their first success. It had taken decades to develop the technology to build LIGO (Laser Interferometer Gravitational-wave Observatory) consisting of two facilities 3000 km apart in the United States, at Hanford, Washington and Livingston, Louisiana. (Two well-separated detectors are required to distinguish true gravitational wave events from the inevitable local background disturbances.)
The facilities are L-shaped with two perpendicular 4 km arms housed within an ultrahigh vacuum. A laser is directed at a beam-splitter sending half the beam down each arm. The light travels 1,600 km, bouncing back and forth 400 times between two mirrors in each arm, before the two half-beams are recombined. The apparatus is designed so that the recombined half-beams completely cancel, with the peaks in the light waves of one beam meeting the troughs in the other, and no light passes to the photodetector. Whenever a passing gravitational wave ripples through the apparatus, however, the lengths of the arms alter very slightly, so the distances travelled by the half-beams change and their phases shift (by much less than a single wavelength). There is no longer perfect cancellation and some light arrives at the photodetector.

10 facts about the Indian economy
15 August 2017 marks the 70th anniversary of the end of British colonial rule over India, which made it one of the first countries to gain independence in the post-war wave of decolonization. Since then it has become the sixth largest economy in the world and is categorised as one of the major G20 economies. India is seen as a newly industrialised country and was the world’s fastest growing major economy in 2014. With a population of 1.2 billion, the second largest in the world, its economy has a lot to offer and has evolved over the years.

To mark the occasion we have compiled a wide array of facts around the Indian economy pre and post-independence.

1. India’s economy can be considered a paradoxical one. Despite being one of the fastest growing economies in the world, around one-third of the population live below the poverty line.

2. Before the age of European colonization, India accounted for about 25% of the world’s manufactured goods. In the 13th century, India emerged with a great trading capacity and was able to achieve a state of economic dominance within the wider Indian Ocean world. The main manufacture was cotton textiles, which were produced in all parts of the country for both domestic and export trade.

3. The East India Company was formed in Britain to pursue trade with the East Indies and Southeast Asia. Instead, it conducted most of its trade within the Indian subcontinent and China. Some of the most popular items of trade were tea, silk, cotton, and opium.

4. India is one of the BRIC countries as its economy experienced fast growth in the 2000s and is predicted to surpass many of the world’s largest economies by 2050. It shares this place along with Brazil, Russia, and China.

5. India is well-known as the home of spices and is the world’s largest producer and exporter of spices. It also accounts for half of the trading in spices globally.

6. The rupee is the currency of India, and its denomination appears on banknotes in 17 of the Indian languages. The rupee is also the currency of the Maldives, Mauritius, Nepal, Pakistan, Seychelles, and Sri Lanka.

7. The agricultural economy in India during the 1930s was greatly impacted by the Great Depression. It affected some of the most important export staples, such as jute, tea, and cotton.

8. In November 2016, Narendra Modi, the current Prime Minister of India, announced that all 500 and 1,000 rupee notes would be demonetised in order to tackle illicit cash holdings and illegal activity.

9. India is a major exporter of IT and software services and this sector is considered to be one of the fastest growing within the economy. Its net IT exports grew from virtually nothing in 1990 to around $70 billion two decades later.

10. 58% of rural households in India depend on agriculture as their main source of income, and the sector is one of the largest contributors to India’s Gross Domestic Product (GDP).

Featured image credit: ancient antique army art by Pexels. Public domain via Pixabay.

Do your job, part 1
In The Beat Stops Here (TBSH), there is a chapter devoted to expectations, from and of both the ensemble and the conductor, of each other and of themselves. Built around a worksheet entitled “Orchestral Bill of Rights and Responsibilities,” I attempt therein to design a framework for a long-overdue discussion about what our actual jobs are, how we perceive them and how our neighbors in the orchestral community perceive them, divisions of labor, and what we have the “right to expect” from each other. That noted, it is time to take the next step, a step prompted, for me at least, by a situation in which every assumption I have made about orchestral playing has been challenged.

I am working presently with an orchestra in China, individually a group of fine players, whose expectations of me are totally different from anything I have experienced up to now, even after some 35 years on the podium. In this situation, the orchestra is performing a work they have never done from a set of new, unmarked, unbowed parts. The tradition of the orchestra is that they do not prepare in any form for the first reading; they do not listen to the work, they literally sight-read their parts at the first rehearsal. Their basic concept of rehearsal differs from mine; for this ensemble, “rehearsal” is repeating a passage over and over again, slowly, then more quickly, until it is “learned.”

Expressing surprise and some dismay over the state of the parts and the bowings (or lack thereof), I dove into the first reading following my usual method: go through the work, then start rehearsing. It soon became obvious that the orchestra had no idea how the piece went (Falla’s Three-Cornered Hat ballet, complete), and they expected me to stop every time a tempo or meter changed and explain what I was going to beat in and what the tempo would be. I suggested, gently at first, then more forcefully, that I wasn’t going to work that way; that the language of my conducting, assuming people were looking at all, would readily be apparent if they just looked up. This assumption was faulty, as it did not take into account the second of TBSH’s immutable three-part truths: “If the orchestra doesn’t know the piece, it doesn’t make any difference where you put your hands.”

Furthermore, this orchestra was unusually vocal about what they expected of me. The question, “How can we play if we don’t know what you are beating in?,” came up, from a principal wind player. And it was not posed politely, I might add. I demonstrated (a bit snarkily, I confess) what 1, 2, and 3 patterns looked like, and said, “This is what conducting IS. Just look at it.”

That didn’t go over well. The response was, “Well, we are just sight reading it for the first time!” Worse. My response, “Why? What would you think of me if I came into the first rehearsal sight-reading the score?” And worse. The remainder of the first rehearsal was, well, chilly. My assistant urged me to reconsider my choice not to tell the orchestra what I was going to “beat” in, and I said no. To have done so would have violated my core beliefs about what my job is, what the orchestra’s job is, and what conducting is.

Later on, in a relatively simple passage, the strings were not together at all. I said “Let’s play together, please.” The concertmaster responded, “Maestro, this is our first time seeing the piece,” and I said, testily (by this point, I was frankly ticked), “I don’t see how that is my problem.” The rules of orchestra playing don’t change – looking, listening to the person next to you, communicating with the principal stands, keeping in touch with the conductor. UNLESS. Unless the “rules” are broken from the outset; unless the “rules” never existed in the first place, which clearly they didn’t here.

Ah. They weren’t breaking any rules; the rules with which I am familiar, under which I function, literally never existed here.

What to do? How can we move forward with mutual respect and purpose? How will we resolve the impasse?

The saga is not over, it is playing out day to day. I am happy to report that yesterday went much better; we will see what today holds. But I leave it to the reader to consider the scenario offered above; it is neither hypothetical nor theoretical, it is part of the here and now. It is a situation that every conductor may face (or have faced). How will you respond? How have you dealt with it?

Part 2 of “DO YOUR JOB” is forthcoming shortly, as soon as I figure out how “art” emerges from our process this week. In the meantime, consider the scenario and ask yourself, “What would I do? What is my job under these circumstances? How and where do we find ‘art’ under these conditions?”

The power of vision in the age of climate change
Helen Keller once said, “The only thing worse than being blind is having sight but no vision.” The sustainability revolution is unstoppable. Signs are everywhere; policy makers and the private sector are veering towards a decarbonized development model. The adoption of the Paris Agreement on Climate Change in December 2015 marked the political turning point. This moment was only possible, however, due to the previous surge in momentum from non-state and sub-national actors.

Perhaps the greatest achievement of the Paris Agreement is that all 197 Parties to the United Nations Framework Convention on Climate Change (UNFCCC) agreed to hold the increase in the global average temperature to well below 2°C above pre-industrial levels, and to pursue efforts to limit this increase to 1.5°C. As a long-term goal, Parties also agreed to reach global peaking of greenhouse gases (GHG) as soon as possible, and to undertake rapid reductions afterwards so as to achieve a balance between the level of emissions and of removals by sinks (zero net emissions) by the second half of this century. The biggest change brought by the Agreement to the UNFCCC regime is that all Parties must pursue mitigation efforts – not only developed countries.

Parties were also aware of the fact that some climate change impacts are already inevitable. With this in mind they agreed to increase the ability to adapt to the adverse impacts of climate change, and foster resilience and low greenhouse gas emissions development. Importantly, they agreed to make finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development. The Paris Agreement also mandates Parties to strengthen capacities and transfer technology of those Parties in need, so that they, in turn, can also lower GHG emissions and adapt to climate change.

The provisions of the Paris Agreement are however not enough to achieve the desired goal of limiting the temperature increase to 1.5°C by the end of this century. This is because the pledges presented by the Parties to the UNFCCC in their Intended Nationally Determined Contributions (INDCs) for the years 2020 to 2025/2030 are not ambitious enough to reach the goal (under the Agreement, Parties are free to determine the nature and extent of their pledge).

“U.S. Secretary of State John Kerry participates in the event on the UN Paris Agreement Entry into Force at the United Nations” by U.S. Department of State. Public Domain via Wikimedia Commons.

The Paris Agreement also follows a more managerial and procedural approach. That is, it sets strong transparency obligations on Parties to present their national inventories of GHG and the information needed to track progress in the implementation of their pledges under a transparency mechanism. This transparency mechanism is accompanied by two additional mechanisms: a mechanism to facilitate and promote compliance, and a global stocktake. The global stocktake mechanism is a review and ratchet process where Parties review all pledges every five years, assess the progress towards achieving the purpose and long-term goals of the Agreement, and then set more ambitious ones. Unfortunately, waiting for this procedural machinery to start working and for the results of the next global stocktake in 2023 before increasing ambition would be too late.

According to the latest scientific research, in order to keep the increase in average global temperature below 2°C by the end of the century (let alone 1.5°C), GHG emissions should peak by 2020. According to Mission2020, convened by the former UNFCCC Executive Secretary, Christiana Figueres, several additional milestones should be achieved by 2030 if the long-term goal of decarbonization by the end of the century is to become a reality:

- Renewables should make up at least 30% of the world’s electricity supply;
- cities and states should be running and sufficiently funding programs to achieve the decarbonization of buildings and infrastructure;
- at least 15% of all new cars sold globally should be electric, accompanied by an increase in decarbonized public transportation in cities, as well as by more fuel efficiency and lower GHG emissions from aviation;
- deforestation should stop, and afforestation and reforestation should increase to such a degree as to create the necessary carbon sinks to reach net zero emissions;
- heavy industry should be developing plans for halving emissions before 2050; and
- the finance sector (private and public sources) should be mobilizing at least 1 trillion US dollars a year for climate action.

Surely, these milestones are challenging but also possible and desirable.

Several countries and regions are already on the vanguard, pushing for a successful transition towards a decarbonized economy. Iceland, Norway, and Costa Rica have announced their desire to become climate neutral. Bhutan goes a step further and is the only carbon negative country in the world (absorbing more greenhouse gases than it produces), with the intent of remaining so. The Gambia and Morocco’s pledges are also very ambitious.

A second group of countries has presented pledges that are acceptable but do not contribute enough to the common effort. Their pledges have been rated as “medium” by Climate Action Tracker (which rates countries’ pledges and policies against whether they are consistent with a country’s fair share of the effort to hold warming below 2°C). The most important of these (in light of their share in global emissions) are the European Union, China, Brazil, India, and Mexico. These countries and regions should step up their efforts and policies, as by doing so they have much to gain.

A last group of countries has announced mitigation goals that are clearly insufficient. Among these are other big emitters such as Canada, Australia, Russia, Japan, Saudi Arabia, South Africa, Ukraine, South Korea, and the United Arab Emirates. The greatest disappointment and laggard in this third group is the United States, whose current federal government has sadly chosen to follow a path in the wrong direction.

It is clear – even for those with limited vision – that the countries and private companies now betting on decarbonization will be the great winners of the future. First of all, they will provide the knowledge and technology necessary for the transformation. Secondly, they will substantially improve the living conditions of their inhabitants. And thirdly, the economically stronger of them will be the leaders of tomorrow’s global economy.

The race towards decarbonization is picking up speed. Hopefully we will be able to agree with Rudi Dornbusch, who said that “things take longer to happen than you think they will, and then they happen faster than you ever thought they could”. We must transform our economy, but most importantly, we must transform our minds. We should learn to see the world through “carbon eyeglasses” and measure all our activities according to their carbon imprint on the world. The recipe for success is simple to remember: all of us – states, regions, cities, companies, and individuals – should cut our carbon footprint in half by the end of each of the coming three decades. We can do it, and the vision of a better, cleaner future will become our reality.
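The arithmetic of that recipe is worth spelling out. Halving the footprint each decade compounds quickly (the normalised starting value below is just for illustration):

```python
# Halve emissions each decade for three decades, starting from today's footprint.
footprint = 1.0   # today's footprint, normalised to 1
for decade in (2030, 2040, 2050):
    footprint /= 2
    print(decade, footprint)
```

Three successive halvings leave the footprint at one eighth (12.5%) of today’s level by mid-century, which is the scale of reduction the decarbonization pathway requires.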

]]>http://feeds.feedblitz.com/~/434987178/_/oupblog/feed/0*Featured,environmental sustainability,climate change,International Environmental Law,Books,environment & energy law,Paris Agreement,public international law,Law,maría pía carazo,the paris agreement on climate change,sustainability,environmental law,United Nations Framework Convention on Climate Change,the paris agreement on climate change: analysis and commentaryHelen Keller once said, “The only thing worse than being blind is having sight but no vision.” The sustainability revolution is unstoppable. Signs are everywhere; policy makers and the private sector are veering towards a decarbonized development model. The adoption of the Paris Agreement on Climate Change on December 2015 marked the political turning point. This moment was only possible, however, due to the previous surge in momentum from non-state and sub-national actors.
Perhaps the greatest achievement of the Paris Agreement is that all 197 Parties to the United Nations Framework Convention on Climate Change (UNFCCC) agreed to hold the increase in the global average temperature to well below 2°C above pre-industrial levels, and to pursue efforts to limit this increase to 1.5°C. As a long-term goal, Parties also agreed to reach global peaking of greenhouse gases (GHG) as soon as possible, and to undertake rapid reductions afterwards so as to achieve a balance between the level of emissions and of removals by sinks (zero net emissions) by the second half of this century. The biggest change brought by the Agreement to the UNFCCC regime, is that all Parties must pursue mitigation efforts – not only developed countries.
Parties were also aware of the fact that some climate change impacts are already inevitable. With this in mind they agreed to increase the ability to adapt to the adverse impacts of climate change, and foster resilience and low greenhouse gas emissions development. Importantly, they agreed to make finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development. The Paris Agreement also mandates Parties to strengthen capacities and transfer technology of those Parties in need, so that they, in turn, can also lower GHG emissions and adapt to climate change.
The provisions of the Paris Agreement are however not enough to achieve the desired goal of 1.5°C temperature increase by the end of this century. This is so because the pledges presented by the Parties to the UNFCCC in their Intended Nationally Determined Contributions (INDCs) for the years 2020 to 2025/2030 are not ambitious enough to reach the goal (under the Agreement, Parties are free to determine the nature and extent of their pledge). “U.S. Secretary of State John Kerry participates in the event on the UN Paris Agreement Entry into Force at the United Nations” by U.S. Department of State. Public Domain via Wikimedia Commons.
The Paris Agreement also follows a more managerial and procedural approach. This is, it sets strong transparency obligations on Parties to present their national inventories of GHG and the information needed to track progress in the implementation of their pledges under a transparency mechanism. This transparency mechanism is accompanied by two additional mechanisms: a mechanism to facilitate and promote compliance, and a global stocktake. The global stocktake mechanism is a review and ratchet process where Parties review all pledges every five years, assess the progress towards achieving the purpose and long-term goals of the Agreement, and then set more ambitious ones. Unfortunately, waiting for this procedural machinery to start working and for the results of the next global stocktake in 2023 before increasing ambition would be too late.
According to the latest scientific research, in order to achieve the objective of maintaining the increase in average global temperature below 2°C by the end of the century (let alone 1.5°C), GHG emissions should peak by 2020. According to Mission2020, convened by the former UNFCCC Executive Secretary, Christiana Figueres, several additional milestones should be achieved by 2030 if the long-term goal of decarbonization by the end of the century is to become a reality:
- Renewables should make up at least 30% of the world’s electricity supply;
- cities and states should be …

Helen Keller once said, “The only thing worse than being blind is having sight but no vision.” The sustainability revolution is unstoppable. Signs are everywhere; policy makers and the private sector are veering towards a decarbonized …

Last minute guide to the total solar eclipse: https://blog.oup.com/2017/08/last-minute-guide-total-solar-eclipse/
http://feeds.feedblitz.com/~/433748842/_/oupblog/#respondMon, 14 Aug 2017 10:30:08 +0000https://blog.oup.com/?p=132937

]]>
In exactly a week, a total eclipse will be visible along a narrow path through the United States. In readiness for the event on 21 August 2017, we asked physicist and eclipse-chaser Professor Frank Close eight questions about eclipses, and how to watch them.

What is a solar eclipse?

The moon is 400 times smaller than the sun, but it’s also 400 times closer to earth, which means that remarkably, the two bodies appear to us as exactly the same size. For 14 days a month, the orbiting moon is on the ‘sunny’ side of the spinning earth, and the sunlight casts a shadow.

Almost all the time, that shadow is projected way off into space; but on very particular occasions, the shadow falls onto the earth – the moon is obscuring our view of the sun.

From the human observer’s point of view, your attention is of course on the sun, as you watch the moon slowly move in front of it.

Relative to the earth, the moon and sun are moving at slightly different speeds, which means the shadow sweeps across the earth’s surface; it passes from West to East at about 2,000 miles an hour.
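
The “400 times” coincidence is easy to check with the small-angle approximation (apparent size ≈ diameter / distance). A minimal sketch, using standard approximate mean figures for the diameters and distances; these numbers are textbook values, not taken from the interview:

```python
import math

# Approximate mean values (assumed, not from the article)
MOON_DIAMETER_KM = 3_474
MOON_DISTANCE_KM = 384_400
SUN_DIAMETER_KM = 1_391_400
SUN_DISTANCE_KM = 149_600_000

def angular_size_deg(diameter_km: float, distance_km: float) -> float:
    """Apparent size in degrees, via the small-angle approximation."""
    return math.degrees(diameter_km / distance_km)

moon = angular_size_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM)
sun = angular_size_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM)

# Both work out to roughly half a degree, which is why the moon's
# disk can just cover the sun's during a total eclipse.
print(f"moon ≈ {moon:.2f}°, sun ≈ {sun:.2f}°")
```

Because the moon’s orbit is elliptical, its actual distance varies around this mean, which is what makes some eclipses total and others annular, as discussed below.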

Are all eclipses the same size?

No. Because the moon’s orbit around the earth is elliptical, sometimes it’s closer, sometimes it’s further away. So for each eclipse, the size of the shadow will be different.

On an occasion when the moon is very far from the earth, it will appear fractionally smaller than the sun. Rather than total, the eclipse is ‘annular’ – with a thin ring of sun visible around the moon.

How wide is the path of the eclipse?

Typically, the whole shadow will be several thousand miles across; the central area which gives ‘totality’ will usually be between ten and 100 miles across.

How long does totality last?

That depends on the size of the shadow. The longest totality has been around seven minutes; most are around three, and some are shorter than that.

It also depends very much on where you are standing. The closer you are to the centre-line of the shadow’s path, the longer you will experience totality. If you are say 30 miles off-centre, you will only get a few seconds.

How often does totality occur?

Twice a year (or, in fact, twice in 355 days) some sort of solar eclipse happens somewhere on the earth’s surface. But the spectacular total eclipses happen about six times a decade – and most often in locations not easily accessible to most of us.

How long will I have to wait to see the next eclipse?

It depends how far you want to travel! Since 1999 I have seen six: Cornwall in 1999, Zambia in 2001, the Sahara Desert in 2006, two in the South Pacific, and one off the Cape Verde Islands. The August 2017 eclipse in Wyoming will be my seventh.

Which cities will experience totality in 2017?

The shadow makes landfall south of Portland, Oregon, and it leaves the continent in South Carolina. Cities on the path include Corvallis, Albany, and Lebanon, Oregon; Idaho Falls, Idaho; Casper, Wyoming; Lincoln, Nebraska; St Joseph, Missouri; Kansas City, Kansas; St Louis, Missouri; Nashville, Tennessee; and Greenville, South Carolina.

What safety tips would you recommend?

First, don’t walk about. It’s surprisingly easy to trip over in the strange darkness, especially when there are lots of other people around you.

Second, be absolutely certain to look away as totality finishes. During totality, you will be looking at the blackened disk that is (or is not) the sun. The sunlight returns literally in a flash, and your pupils will be wide open.

Third, watch out for wild animals! When totality falls, birds and animals behave as if it’s night-time. Birds roost, crickets chirp, night-time predators set about their business. In 2001, I was watching from the banks of the Zambezi river; fortunately, our guards were ready for the hippopotami.

Fourth, save up. Experiencing a total eclipse is the most extraordinary and wonderful sensation. You are sure to want to finance the journey to another one, another year.

Image credit: Watching the Solar Eclipse by Karel Fort. CC BY 2.0 via Flickr.

]]>

The Iliad and The Trojan War [excerpt]: https://blog.oup.com/2017/08/the-iliad-excerpt/
http://feeds.feedblitz.com/~/433701688/_/oupblog/#respondMon, 14 Aug 2017 09:30:09 +0000https://blog.oup.com/?p=132973

]]>
The Iliad tells the story of Achilles’ anger, but also encompasses, within its narrow focus, the whole of the Trojan War. The title promises “a poem about Ilium” (i.e. Troy), and the poem lives up to that description. The first books recapitulate the origins and early stages of the Trojan War. The quarrel over Briseïs mirrors the original cause of the war, for it too is a fight between two men over one woman. The Catalogue of Ships in book 2 acts as a reminder of the expedition; book 3 introduces Helen and her two husbands; book 4 dramatizes how a private quarrel over a woman can become a war; in book 5 the fighting escalates; and book 6 takes us into the city of Troy. The narrative now looks forward to the time when the Achaeans will capture the city: it anticipates the end of the poem, and of the war itself. The bulk of the Iliad is devoted to the fighting on the battlefield. It describes only a few days of war, but the sheer scale of the narrative, and its relentless succession of deaths, come to represent the whole war.

The poet is specific about the horrors of the battlefield: wounds, for example, are described in precise and painful detail. At 13.567–9 Meriones pursues Adamas and stabs him “between the genitals and navel, in the place where battle-death comes most painfully to wretched mortals.” At 14.489–500 Peneleos thrusts his spear through Ilioneus’ eye-socket, then cuts off his head and brandishes it aloft. At 20.469–71 Tros tries to touch Achilles’ knees in supplication, but Achilles stabs him

. . . in the liver with his sword,
and the liver slid out of his body, and the dark blood from it
filled his lap . . .

No Hollywood version of the Iliad is as graphic as the poem itself. Descriptions of the physical impact of war are matched by an unflinching psychological account of those who fight in it. Homer shows exactly what it takes to step forward in the first line of battle, towards the spear of the enemy. He describes the adrenaline, the social conditioning, the self-delusion required. And the shame of failure, which is worse than death.

The truth and vividness of the Iliad have struck many readers. In her towering exploration of violence, Simone Weil, for example, calls the Iliad “the most flawless of mirrors,” because it shows how war “makes the human being a thing quite literally, that is, a dead body. The Iliad never tires of showing that tableau.” Weil was writing in 1939: her L’Iliade ou le poème de la force did not just describe the Trojan War; it anticipated the Second World War, and prophesied how it would again turn people into things. Just like Weil, women inside the Iliad make powerful statements against violence—and even against the courage of their own men. Hector’s wife Andromache, for example, tells him that his own prowess will kill him, and that he will make her a widow (6.431–2). When confronted with his wife’s words, Hector claims he would rather die on the battlefield than witness her suffering (6.464–5). He then tries to console her in the only way he knows: by imagining more wars. He picks up his baby son and prays that he may be stronger than him and, one day, bring home the spoils of the enemy, so that his mother may rejoice (6.476–81). This is how the poet Michael Longley, in the context of the Troubles in Northern Ireland, paraphrases Hector’s prayer: he “kissed the babbie and dandled him in his arms and | prayed that his son might grow up bloodier than him.”

The Trojan War, the Second World War, the Troubles: the Iliad is intertwined with all stories about all wars. Already in antiquity it was part of a wider tradition of poetry, which found its inspiration in the ruins of a Bronze Age city, clearly visible on the coast of Asia Minor. The Iliad often refers to that wider tradition. For example, when Hector picks up his baby and dandles him in his arms, his gesture recalls that of an enemy soldier who will soon pick up the little boy—and throw him off the walls of Troy. Other early poems described the death of Astyanax in a manner that clearly recalled his last meeting with his father. Some stories about the fall of Troy were known to the poet of the Iliad and his earliest audiences; others were inspired by it. As a result, the Iliad became more allusive and complex in the course of time. This is how Zachary Mason describes the situation in a recent novel inspired by Homer:

It is not widely understood that the epics attributed to Homer were in fact written by the gods before the Trojan war—these divine books are the archetypes of that war rather than its history. In fact, there have been innumerable Trojan wars, each played out according to an evolving aesthetic, each representing a fresh attempt at bringing the terror of battle into line with the lucidity of the authorial intent. Inevitably, each particular war is a distortion of its antecedent, an image in a warped hall of mirrors.

Mirrors and distorted mirrors: what readers ask of the Iliad is whether things can be different. Whether we must imagine wars and more wars, like Hector when he prays for his son, or whether there can perhaps be peace—and even a poetics of peace. This is, for example, the insistent question of the poet Peter Handke in his explorations of the cold war. The Iliad offers no clear answer, only fleeting images of peace in the form of distant memories, startling comparisons, and doomed aspirations. Hector runs past the place where the Trojan women used to wash their clothes before the war (22.153–9). Andromache wishes Hector had died in his own bed (24.743–5). Athena deflects an arrow like a mother brushing away a fly from her sleeping baby (4.129–33). On the shield of Achilles—which is a representation of the whole world—there is a city at war, but there is also a city at peace. There is a wedding, and the vintage, and a row of boys and girls dancing to music (18.478–608). These images are precious, because they are so very rare.

Featured image credit: “Greece” by GregMontani. CC0 Public Domain via Pixabay.

]]>

Barth, the Menardian: https://blog.oup.com/2017/08/barth-mordernism-literature-menard/
http://feeds.feedblitz.com/~/433639706/_/oupblog/#respondMon, 14 Aug 2017 08:30:35 +0000https://blog.oup.com/?p=132968

]]>
For the better part of half a century, John Barth was synonymous with what was the last self-conscious attempt at constructing a universal aesthetic movement speaking for all of humanity but recognizing only its bourgeois, white constituent. Much like Virginia Woolf once could claim that “on or about December, 1910, human character changed,” Barth would argue (with about the same dose of provincialism) that literary modernism was over.

If Woolf’s epochal declaration of modernism was a program for an exploration of human consciousness beyond European realism, the modernism of the day was something else. Now, in the late sixties, things had to change.

Barth’s exemplar of the devolution of modernism was Samuel Beckett, who, Barth claimed, had “progressed from marvelously constructed English sentences through terser and terser French ones to the unsyntactical, unpunctuated prose of Comment C’est and ‘ultimately’ to wordless mimes.” Beckett’s modernist ultimacy, Barth argued, engendered only silence and exhaustion. Barth’s project would therefore be one of sound … and lots of it. Instead of removing character and plot from literature, the new novelist needed to approach the question of originality in a different way. The novel ought to embrace its condition of belatedness, Barth argued, take all its “felt ultimacies” and turn them into a new criterion.

Here Jorge Luis Borges serves as Barth’s exemplar. In the story, “Pierre Menard, Author of the Quixote,” Borges’ invention turns around “a Symbolist from Nîmes,” who by following a “philological fragment by Novalis—which outlines the theme of a total identification with a given author” as a preparation for translation, ends up composing several chapters of Cervantes’s novel. The result of this re-composition, Borges’ narrator tells us, is “astounding”: what “is a mere rhetorical praise of history” when coming from the seventeenth century “lay genius” Cervantes, is an argument about history as the origin of reality when coming from Menard. The quixotic has become ontological, Barth suggests, in a literary universe replete with dizzying mirrors and labyrinths, self-proclaimed precursors, and unacknowledged antecedents.

Just as the concept of modernism wasn’t quite available to Woolf when she published Mr. Bennett and Mrs. Brown in 1924, postmodernism hadn’t quite established itself at the time of Barth’s reflections on The Literature of Exhaustion in 1967. But once it did, Barth became one of its poster boys, along with Joseph Heller, Donald Barthelme, Thomas Berger, Thomas Pynchon, John Hawkes, Kurt Vonnegut Jr., William Gass and Robert Coover—all male, white and provincially middle-class—just like, presumably, Borges’ Symbolist from Nîmes.

In what remained of the 60s, and throughout the 70s, 80s, 90s, and the first decade of the new century, Barth continued to publish novels and short stories in this Borgesian-Menardian vein. The satirical re-invention of the eighteenth-century novel that had earned him a name in the early 60s now gave way to more deeply philosophical explorations. At the heart of Barth’s narratives would always be a writer who, in writing in a semi-autobiographical, realistic style, would find himself literally swept away by the fictions of the past: Homer’s Odysseus, Poe’s Pym, but above all, Scheherazade’s One Thousand and One Nights. Barth’s string of Mr. Bennetts—Fenwick Turner from Sabbatical (1982), Peter Sagamore from The Tidewater Tales (1987), and Simon William Behler, of The Last Voyage of Somebody the Sailor (1991)—are in this sense antithetical to Woolf’s Mrs. Brown, but in a different way from her Mr. Bennett.

Whereas Woolf argues that conventional realism isn’t able to penetrate deeply enough into human sense-making, and thus leaves vast parts of reality unaccounted for, Barth’s narrators, although committed to novelistic realism, lose themselves in the mirror worlds and labyrinths of the previously written and told, whether they be the mythic fictions of culture or the alternative facts and realities of the CIA.

If the novel, as it has been argued time and again, is a genre of fiction that depends on philosophical realism and what Ian Watt has called “the individual apprehension of reality,” then Barth’s novels might perhaps best be read as tours-de-force of storytelling and world building, which, to boot, are philosophical arguments about how words, works, and worlds interlink and interact in the construction of our, by now, manifold and thoroughly provincialized realities. In this way, Barth’s art goes a long way in acknowledging that human consciousness doesn’t begin in the sensorium of a perceiving subject, and that it therefore cannot be the Archimedean point from which a sense of the real develops. Barth’s novels seem to suggest that fiction and fact are as related as their etymologies, and that our capacity for storytelling, therefore, cannot be exhausted or grow old. On the contrary, the ability to narrativize, this mythic capacity to imagine a center of consciousness (a hero, a subject) in the process of traversing a plot space, might be where both consciousness and history begin. Barth knows this, just as Pierre Menard knew it and maybe Borges too. Cervantes certainly did.

Barth should be celebrated as a master storyteller, passionate virtuoso, teacher, and sage, and perhaps also—hopefully—prophet and truth-teller.

Image credit: Jorge Luis Borges by Grete Stern, 1951. Public Domain via Wikimedia Commons.

]]>

Ecosystem-based mitigation and adaptation: https://blog.oup.com/2017/08/ecosystem-based-mitigation-adaptation/
http://feeds.feedblitz.com/~/433587954/_/oupblog/#respondMon, 14 Aug 2017 07:30:53 +0000https://blog.oup.com/?p=132928

]]>
Payments for ecosystem services (PES), also known as payments for environmental services (or benefits), are incentives offered to farmers or landowners in compensation for proper land-management that provides ecological services. Among these benefits we can mention conserving animal and plant species, protecting hydric resources, conserving natural scenery, and storing carbon.

Costa Rica is a pioneer in PES schemes. Since 1997 approximately one million hectares of forest have been part of these ‘payments for ecosystem services’ (PES) schemes at one time or another. Forest cover has returned to more than 50% of the country’s total land area (51,000 square kilometres) – a huge increase from an all-time low of 21% in 1987. This small Central American country was not only able to stop one of the highest deforestation rates worldwide, but also to reverse it and in this way preserve its fantastic biodiversity. According to a recent study by the Food and Agriculture Organization of the United Nations (FAO), in 2016 forest cover amounted to 54% of the total territory.

This tremendous success was possible with the help of an innovative scheme elaborated, financed, and implemented by the Costa Rican government with the support of the Fundación para el Desarrollo de la Cordillera Volcánica Central (Fundecor). Today the foundation still offers a range of services related to forest conservation, such as capacity building for forest ecosystem management, advice on sustainable forest management (both of natural forest and forest plantations), and advice on mitigation and adaptation to climate change in the forest sector.

Brazil, Ecuador, and Guatemala have also created PES schemes, financed by their governments. In Brazil, 2,700 indigenous families have benefited from the Bolsa Floresta project, which pays them in exchange for the preservation of primary forest. In Ecuador, the Socio-Bosque project has been able to conserve more than half a million hectares of forest, with more than 60,000 beneficiaries. Guatemala has reforested more than 95,000 hectares of land, in addition to conserving 155,000 hectares of natural forest.

More recently, PES schemes have been implemented for the conservation of agricultural and livestock landscapes. A pilot program, again in Costa Rica, has achieved a significant reduction in degraded pasture land (more than 40% recovery), an increase of more than 75% in the number of pastures with tree coverage, a 3.5-fold increase in the length of living fences, 22% more carbon storage, habitat creation, improved water security, and a reduction in surface water runoff. In addition, a program financed by the Global Environmental Fund (GEF) and carried out in Colombia, Nicaragua, and Costa Rica achieved a 60% reduction in degraded pastureland in all three countries, with a significant increase in forest coverage mixed with pastureland. The project has also contributed to a 71% increase in carbon storage, higher milk production, and a 115% increase in farm owners' income.

A well-planned and executed PES scheme provides a holistic solution to many interrelated problems. First, it preserves existing forests and rebuilds formerly deforested areas. This has several positive co-benefits: the forests store carbon and thus mitigate climate change, water resources are secured, beautiful countryside scenery is restored, air quality is improved, and plant and animal species are better protected. Second, it provides additional resources to land-owners in rural areas, potentially relieving social and economic problems in those regions. Third, PES can also be used as a strategy for adaptation to the impacts of climate change.

Ecosystem-based adaptation (EBA) is the use of biodiversity and ecosystem services as part of an adaptation strategy. Beyond the benefits mentioned above, EBA improves risk reduction by restoring coastal habitats, establishes agricultural systems, and helps prevent fires. A good EBA project should follow several basic principles: a) involve local communities, taking into consideration their way of life and specific needs; b) focus on the reduction of pressures that have degraded the ecosystem; c) develop alliances and strategies with different partners, public and private; d) take advantage of existing good practices in the management of natural resources; e) follow an adaptive approach; f) integrate the project into greater adaptation strategies; and g) communicate and educate.

One example of an EBA project is the Parque Andino de la Papa (Andean Potato Park) in Cusco, Perú. The people of the region have been able to increase the number of potato varieties they grow from 200 to 650. Planting different varieties in different microclimates reduces the risk of crop destruction. As a co-benefit, the project improves and secures genetic biodiversity. Women and local communities have also been empowered in the process.

Another interesting example is the CASCADA project, which takes place in Guatemala, Honduras, and Costa Rica. Some of its objectives are to contribute to the adaptation of small coffee producers, build capacity in the communities, and involve civil society in decision-making. CASCADA also reduces greenhouse gas (GHG) emissions, such as the methane and nitrous oxide released during the coffee production process. This is done by planting trees for shade and as green barriers, as well as by reducing chemical fertilisers and organic waste. The project also aims to empower the local communities and other relevant actors, forming a virtuous cycle.

The successful experiences with ecosystem-based mitigation and adaptation schemes in Latin America show that these are helpful tools in the fight against climate change. If well implemented, ecosystem-based programs can have many other environmental, social, and economic co-benefits. They offer holistic solutions to interconnected problems and have the potential to become virtuous cycles, enabling countries and regions to achieve much with a good cost-benefit ratio. Moreover, by sharing the knowledge gained from these pioneering schemes, Latin American countries could become a leading force in the fight against climate change. Providing advisory services to other countries could also bring them additional income, further enriching a win-win process.

George Romero, Game of Thrones, and the zombie apocalypse
Sun, 13 Aug 2017

When George Romero, director of Night of the Living Dead, died on 16 July, the world was gearing up for the season opener of Game of Thrones. Game of Thrones owes its central storyline—the conflict between the Night's Watch and the White Walkers—and a great measure of its success to Romero, as do other popular and critically acclaimed versions of the story, whether television (The Walking Dead, iZombie), film (28 Days Later, Shaun of the Dead, and Zombieland), fiction (Pride and Prejudice and Zombies, The Making of Zombie Wars), comics (Marvel Zombies, Afterlife with Archie, Blackest Night), or any of the other media or products shaped by the zombie narrative. When we consider how important the Zombie Apocalypse story has become in our culture, it is hard to know whether to call George Romero a popular filmmaker, a social critic, or a prophet. Maybe he's all three.

When Romero directed Night of the Living Dead in 1968 on a shoestring budget and with no-name actors, he and co-writer John Russo were simply trying to make a cheap, stylish, and entertaining horror film. In the process, they also shaped a myth appropriate for an age full of tensions and troubles. While the word "zombie" is never used in the film, Night of the Living Dead represents the Ground Zero of the modern zombie story. 1968, of course, was a year marked by assassinations, political unrest, the Vietnam War, changes in social and sexual mores, racial violence, and other unsettling changes. Night of the Living Dead took on those fears and worries metaphorically by transmuting them into ghouls outside a farmhouse, trying to break in and attack the living.

Did Romero and Russo consciously set out to offer cultural critique in their story? Yes and no. Romero talked in interviews about how the model of Richard Matheson's science-fiction novel I Am Legend offered him a narrative of revolution, of the world being turned upside down, that he very much liked and wanted to explore. But the assassination of Dr. Martin Luther King, Jr., to which some viewers think the film's ending refers, took place after the completion of principal photography, and the film's topicality is largely a result of the shape of the narrative itself. The Zombie Apocalypse has proven to be a particularly appropriate tale for an unsettled world, as Night of the Living Dead (and the reaction to it) amply demonstrates.

As with many artists, in this film Romero was anticipating as much as he was responding. A story about zombies who attack in waves, about humans who resist and quarrel about how to resist, and about the fear that we will lose our identity turned out to be the right story at the right time. A similar congruence comes just after 9/11, when the British horror film 28 Days Later set off the zombie craze in which we still reside.

In reshaping the zombie story from its origins as a narrative about slavery and dominance in the Caribbean, to a story about supernatural ghouls who threaten all human life, Night of the Living Dead offered a contemporary example of a trope familiar in the West for at least 600 years in which Death or the dead confront the living in art and literature in times of crisis. Romero’s modern zombie story is an updated version of the Danse Macabre in which embodied Death reaches out to every member of the society and yanks them into the next life. It is also a cathartic tale of ultimate horror that feels much like our current experience, but which we can turn off or put down, grateful that however bad the terrors of the present moment might be, at least the dead are not trying to knock down our doors.

Game of Thrones and The Walking Dead are two of our culture's most widely consumed versions of the Zombie Apocalypse story Romero inaugurated. In Game of Thrones, the often-foregrounded storylines of characters vying to sit on the Iron Throne are finally beginning to be overshadowed by the conflict between members of the Night's Watch and the White Walkers ("The Others") who command the walking dead. While the plots concerning royal succession are filled with human interest, intrigue, sexuality, and violence, some characters (and many critics) have suggested that all these are no more than the arguing of attractive children over toys. When you see the dead walk, as Melisandre (Carice van Houten) and Jon Snow (Kit Harington) have, it becomes clear that nothing else matters. In this way, the Zombie Apocalypse will determine Game of Thrones' final outcome.

Like Night of the Living Dead in 1968, Game of Thrones uses the Zombie Apocalypse to wrestle with the threats of our own age, not just the obvious ones, but also perhaps the ones to which we’re simply not paying enough attention. So, as in Romero’s films, Game of Thrones helps its viewers to grapple with such menaces as international terrorism, economic unrest, refugees, pandemics, and natural disasters. But by showing how easily we can be distracted from greater menaces and how often people prefer not to believe that “Winter Is Coming,” it also encourages us to pay attention to undervalued or often-dismissed threats. In what ways, for example, might human beings be contributing to climate change, the decline of bees, the extinction of animal species, or any number of potentially apocalyptic crises that we ignore?

Romero's films were recognized in his lifetime as entertaining, important, and culturally relevant. His great final achievement may be that he bequeathed to us a master narrative with which we can confront our fears. In the Zombie Apocalypse, the people of 2017, too, can find meaning, comfort, and insight into our own lives, and for that, we can thank George Romero.

DNA testing for immigration and family reunification?
Sun, 13 Aug 2017

During the past decade, immigrants accounted for 47% of the increase in the US workforce and 70% in Europe. Family reunification is one of the main forms of immigration in many countries. However, in recent times, immigration has become increasingly regulated, with many countries encouraging stricter vetting measures. In this climate, countries' laws and policies applicable to family reunification seek a balance between an individual's right to a family life and a country's right to control the influx of immigrants. The use of DNA testing (using blood samples or buccal swabs collected from the sponsor and each of the applicants) has been included in family reunification processes to help confirm a biological link between the sponsor and the applicants in at least 21 countries, including Austria, Canada, Finland, France, Germany, the United Kingdom, and the USA. As numerous jurisdictions use DNA testing in their family reunification processes, can it help achieve a better balance between promoting family reunification and enabling better control of immigration demands?

On the one hand, given that the test results are considered very accurate, reliable, and scientifically valid, the use of this test has been deemed to have a number of benefits. It is viewed as helpful for immigrants whose birth or baptismal certificates are unavailable, non-existent, or unreliable. It is deemed to add neutrality to the migratory process, as the decision becomes less discretionary than if it solely depended on the immigration officer’s interpretation of the supportive documentation. It is also considered to make the process more efficient, faster, and cheaper, because the results can be self-explanatory. Therefore, it is not necessary for immigrants to hire lawyers or for the government to train its immigration officers to be able to properly interpret the supporting evidence or to interview the potential immigrants or for either of them to wait long periods of time for all of the above to be concluded. Lastly, it is considered helpful to prevent fraud, human trafficking, and misuse of the process, as potential immigrants who know they do not have a true familial link may be discouraged from initiating the process.

On the other hand, there are criticisms based on legal, social, and ethical concerns raised not by the test itself, but by the way it is implemented. The test is usually "suggested" in cases where documents are unavailable or unreliable (mainly for potential immigrants from a specific list of countries in Africa, Asia, or Latin America), done in accredited laboratories, and paid for by the immigrant (although in some cases, the government will directly cover or reimburse the costs of the test). Some countries assign such enormous evidential weight to the test that a negative result or a refusal to undergo the test would very likely lead to the rejection of the application.

Sociologically, a definition of "family" based solely on a biological link (the result of a DNA test) disregards any other physical, psychological, social, intellectual, or spiritual factor or element of a relationship between two family members. On these terms, the requirement of DNA testing can disrupt immigrants' family lives and, consequently, parents' care of their children; their emotional well-being; personality; identity; social and affective skills; integration into the host country; and even their work or school performance. The latter could have a negative impact on the host country's economy, as immigrants constitute an important part of the world's workforce.

There are various ethical concerns with the use of DNA testing in family reunification processes. Firstly, it is problematic that the majority of the countries using DNA testing in their family reunification processes neglect to provide genetic counseling services prior to or after the immigrants undergo the test. Additionally, signatories to the Prüm Convention store and share the information collected from the migratory process with the other signatories to combat terrorism, cross-border crime, and illegal migration without the immigrants’ consent. Moreover, their state of vulnerability while applying for family reunification diminishes their autonomous consent to undergo the test. Finally, their informed consent can be violated because they lack the power to prevent secondary uses of their genetic information.

Legally, there are concerns about discrimination based on country of origin, religion (some religions do not allow forms of this test), socio-economic class (the cost per applicant is between $230 and $1,250), non-traditional models of families (e.g. LGBT, blended, extended, or reproductively assisted families, and those that include orphans), and unwed parents (it is more frequent for unwed parents to be asked to undergo the test). Furthermore, nationals' privacy and dignity are better protected, and their familial relationships are less scrutinized, than those of foreigners.

Countries have a sovereign right to control their immigration regulations and policies, and as discussed, DNA testing can be useful in protecting this right. The consistent evidence that families are the optimal foundation for physical and emotional well-being has even led international agencies and instruments to uphold a human right to a family. However, families are formed and shaped by many factors, complexities, and dynamics that DNA is incapable of fully capturing. Immigration laws, regulations, policies, and practices have to be reasonable, justifiable, and proportional to principles of equality, inclusiveness, efficiency, human dignity, and respect for more pluralistic concepts of family. Specifically, it would be beneficial if countries truly maintained the use of DNA testing as a "last resort" for cases where it is appropriate or necessary to suggest it, preserved an inclusive concept of family, and provided immigrants with as much information as possible regarding the test and its potential outcomes.

]]>http://feeds.feedblitz.com/~/432711052/_/oupblog/feed/0*Featured,Science & Medicine,Journals,family reunification,immigration laws,Palmira Granados Moreno,DNA,family structure,Medical Ethics,oxford journals,DNA testing,immigrants,JLB,Law,child immigration,genetics,immigration ethics,jlbios,Journal of Law and the Biosciences,immigration,Yann Joly,Prüm ConventionDuring the past decade, immigrants accounted for 47% of the increase in the US workforce and 70% in Europe. Family reunification is one of the main forms of immigration in many countries. However, in recent times, immigration has become increasingly regulated with many countries encouraging stricter vetting measures. In this climate, countries’ laws and policies applicable to family reunification seek a balance between an individual’s right to a family life and a country’s right to control the influx of immigrants. The use of DNA testing (using blood samples or buccal swabs collected from the sponsor and each of the applicants) has been included in the family reunifications processes to help confirm a biological link between the sponsor and the applicants in at least 21 countries including Austria, Canada, Finland, France, Germany, the United Kingdom, and the USA. As numerous jurisdictions use DNA testing in their family reunification processes, can the use of DNA testing help achieve a better balance between promoting family reunification and enabling better control of the immigration demands?
On the one hand, given that the test results are considered very accurate, reliable, and scientifically valid, the use of this test has been deemed to have a number of benefits. It is viewed as helpful for immigrants whose birth or baptismal certificates are unavailable, non-existent, or unreliable. It is deemed to add neutrality to the migratory process, as the decision becomes less discretionary than if it solely depended on the immigration officer’s interpretation of the supportive documentation. It is also considered to make the process more efficient, faster, and cheaper, because the results can be self-explanatory. Therefore, it is not necessary for immigrants to hire lawyers or for the government to train its immigration officers to be able to properly interpret the supporting evidence or to interview the potential immigrants or for either of them to wait long periods of time for all of the above to be concluded. Lastly, it is considered helpful to prevent fraud, human trafficking, and misuse of the process, as potential immigrants who know they do not have a true familial link may be discouraged from initiating the process.
On the other hand, there are criticisms based on legal, social, and ethical concerns raised not by the test itself, but by the way it is implemented. The test is usually “suggested” in cases where documents are unavailable or unreliable (mainly in cases of potential immigrants from a specific list of countries of Africa, Asia, or Latin America), done in accredited laboratories, and paid by the immigrant (although in some cases, the government will directly cover or reimburse the costs of the test). Some countries assign such an enormous evidential weight to the test that a negative result or a refusal to undergo the test would very likely lead to the rejection of the application. Can the use of DNA testing help achieve a better balance between promoting family reunification and enabling better control of the immigration demands?
Sociologically, the definition of “family” based solely on a biological link (the result of a DNA test), disregards any other physical, psychological, social, intellectual, or spiritual factor or element of a relationship between two family members. In this sense, the requirement of DNA testing in these terms can disrupt immigrants’ family lives and consequently, parents’ care of their children; their emotional well-being; personality; identity; social and affective skills; integration to the host country; and even their work/school performance. The latter could have a negative impact on the host country’s economy, as immigrants constitute an important part of the world’s workforce.
There are various ethical concerns with the use of DNA testing in family reunification processes. Firstly, ...

It’s education, stupid: how globalisation has made education the new political cleavage in Europe
Sun, 13 Aug 2017 10:30:45 +0000
The Brexit referendum and the Dutch and French elections have shown that the traditional distinction between right and left is becoming obsolete. Commentators argue that a globalisation cleavage is appearing in western Europe, with the issues of migration and European integration core bones of contention. We argue that at a deeper level it is not just globalisation or the EU that drives this contestation. The new political divide is also rooted in demographic changes; it is a manifestation of the rise of a more structural, educational cleavage. It’s no longer just the economy, it’s education.

Tell us what your highest diploma is, and we will tell you who you are and what you do. If you are a university graduate, you will watch public television, such as the BBC or its equivalent in other European countries (such as Canvas in Belgium) and read ‘quality’ papers, such as The Guardian, Die Zeit, or Libération. You will do your utmost to get your children into public school in the UK, a Gymnasium in Germany and the Netherlands, or one of the Grandes écoles in France. You will live in a university town, a green pre-war suburb, or in the gentrified, 19th-century parts of the inner cities, such as Prenzlauer Berg in Berlin, De Pijp in Amsterdam, or Notting Hill in London. You will be moderately in favour of the EU, worry about climate change, the state of higher education, and xenophobia, and vote for a Green or social liberal party.

On the other hand, if your education ended after junior high school or primary vocational training, the chances are you will watch commercial television, such as SBS, VTM or ITV, and read tabloid papers – if you read any newspaper at all – such as The Sun in England, Bild in Germany, or BT in Denmark. Your children will attend a local state school in the UK, a large “ROC” in the Netherlands, or a “lycée professionnel” in France. You will live in former industrial areas and manufacturing towns, in post-war satellite cities, such as Marzahn in Berlin, Lelystad in the Netherlands, or Slough in England, or in the 20th century outskirts of the major cities. You will be highly sceptical about the EU, worry about crime and immigration, and vote for a nationalist party, or perhaps not at all.

Our research shows how the contours of this new divide have crystallised in western and northern Europe. Using a broad notion of cleavage, we find the rise of an educational cleavage reflected along three lines: a new socio-demographic division, differences in terms of political preferences, and the appearance of a new divide in the political landscape.

The rise of the well-educated as a new social segment

Cleavages are rooted in demography. For a large part of the twentieth century it made little sense to speak of distinct educational groups, because the group of well-educated citizens was so small. This changed as a consequence of rising educational attainment levels. In 2015, according to the OECD, 27% of the EU workforce (across a group of 22 EU countries) were well educated – more than double the 11% classified as well educated in 1992, and more than twenty-five times the meagre 1% recorded in the 1960s.

This emerging educational segmentation comes with an increased stratification and segregation along educational lines, with unequal access to housing, health, and job opportunities. Moreover, education is an important driver of new patterns of homogamy. The well educated and the less well educated live in different social worlds and do not mingle. They differ in health, in life expectancies, in wealth, and in income.

Cosmopolitans versus nationalists

Traditionally, most voters in western Europe can be positioned along a social-economic, left–right dimension and along a religious–secular dimension. In addition to these traditional conflict dimensions, which reach back to the late nineteenth and early twentieth century, a new cultural conflict dimension has manifested itself in the past three decades. This new division between what could be called ‘cosmopolitans’ and ‘nationalists’ has emerged gradually, fuelled by the waves of non-western immigration and the process of European unification.

This division between cosmopolitan and nationalist attitudes coincides in western European countries with the education divide. Ranged on one side of this new line of conflict are the citizens who accept social and cultural heterogeneity and who favour, or at least condone, multiculturalism. These are the more highly educated. On the other side are citizens who are highly critical of multiculturalism and who prefer a more homogeneous national culture.

In the 2016 Brexit referendum, strong educational differences could be observed. With the exception of Scotland, the Leave vote was much higher in those regions of Britain populated by citizens with few educational qualifications, and much lower in those regions with larger numbers of university graduates. According to Matthew Goodwin and Oliver Heath: ‘fifteen of the 20 “least educated” areas voted to leave the EU while every single one of the 20 “most educated” areas voted to remain.’

The rise of social-liberal versus nationalist parties

Cleavages manifest themselves also in the support for specific political parties. New parties with new types of preferences have recently entered the political stage across Europe. On the one side of the new cultural dimension of conflict, we see the emergence of Green and social liberal parties, such as Groen! and Ecolo in Belgium, Les Verts in France, The Greens in Germany, D66 and GroenLinks in the Netherlands, and the Liberal Democrats in the United Kingdom, to name but a few. Since the late seventies, they have become established political actors throughout western Europe. In all countries, the Green and social liberal parties predominantly attract voters from the high end of the education spectrum, as shown in Figure 1.

Figure 1: Link between education and support for Green and Left-liberal parties (2014) Note: The chart shows the percentage of people who had voted for selected Green and Left-liberal parties in the previous election from different educational groups. Figures are compiled using the 2014 European Social Survey.

On the other side of this cultural conflict, we see the emergence of nationalist populist parties such as the FPÖ in Austria, Vlaams Belang and N-VA in Belgium, the Danish People’s Party, the Finns Party, France’s Front National, the AfD in Germany, Lega Nord in Italy, the Party for Freedom in the Netherlands, the Sweden Democrats, and UKIP. These nationalist parties tend to draw large proportions of the low- and medium-educated voters, and relatively few well-educated voters, as shown in Figure 2 below.

Figure 2: Link between education and support for nationalist parties (2014). Note: The chart shows the percentage of people who had voted for selected nationalist parties in the previous election from different educational groups. Figures are compiled using the 2014 European Social Survey.

Political contestation regarding globalisation issues should thus be understood as only part of the puzzle. Underneath it, the contours of an educational cleavage have become visible. Educational differences matter most in societies that are meritocratic – in which access to higher education, the labour market, and social stratification are based on merit instead of class or patronage.

This is particularly the case in western and northern European countries, such as Belgium, Denmark, and to a somewhat lesser extent the Netherlands, the UK, Finland, Austria, and Switzerland. In these countries, the contours of something resembling a full educational cleavage are visible. The nationalist populist parties on the one hand, and the Greens and social-liberals on the other hand, embody the institutionalisation of this new political conflict line. They have gained a lasting place in the political arena because they represent groups of voters who not only share a particular set of issue attitudes, but also specific social characteristics – their educational background.

Richard Susskind on the future of law
Sun, 13 Aug 2017 08:30:52 +0000
In the latest episode of the Oxford Law Vox podcast, Richard Susskind talks to George Miller about the growing momentum of technology and AI in the legal profession. They discuss just how vital it is that lawyers learn to reinvent themselves and work alongside technology.

He also addresses the opportunity young lawyers have to bring about, and be a major part of, social change in the legal profession.

Below are selected excerpts from their extensive conversation; the full episode is available on Oxford Law Vox, along with a bonus episode on his longstanding interest in technological change.

George began by asking about Richard’s motivation for writing the second edition of the book.

“When I read through the book I realised it really wasn’t serving its purpose anymore … I don’t think the tone or the message has changed. The messages are the same – that if we want to improve access to justice in our society then technology provides a great route. I think our law schools are still out of step. I think if you are a conventional lawyer and you’re not prepared to adapt to the [2020’s] you’ll struggle to survive, but if you are entrepreneurial and enthusiastic, forward looking, open minded, then there’s probably never been a more exciting time to be in the law.”

Richard addressed the issue that the people who are most supportive of innovation in the profession often do not yet have the power to implement change.

“If you don’t have a group of leaders within your business that are supportive of technology and you are sitting there as a junior lawyer who has all sorts of ideas for rethinking legal services … you’re probably not in the right business, I’m afraid. It’s very hard to manage upwards and to help fundamentally rejig, reorganise your firm if you’re a very junior partner or not a partner at all … Now is the time to think less about safety and more about legacy. What is it as a business that we’re leaving to the generation that’s coming through? … More often we call a firm innovative by referring to the group of people who happen to be running it at one point in time, who are forward-looking, and there’s not a lot you can do about that if you haven’t got that group of leaders. As I say, you may have to find a firm that actually does support new ideas and innovation and change.”

They discussed if Richard felt that there are new career options opening up for lawyers which are as stimulating and rewarding as conventional roles.

“I do but others don’t, by which I mean I think there’s certainly a whole new range of legal jobs emerging … I go back to this general phenomenon that we are seeing across society that so many different sectors have had to reinvent themselves, people have had to retool and retrain … The question about whether or not there is a viable legal profession out there really is a question of whether or not there will be a sufficiency of new tasks for lawyers that will emerge for lawyers to do …

I say to young students – you are wanting to make the world a better place, you’re studying law, you want to help people understand their entitlements. So one thing is you can go out and learn a lot as a lawyer and you can advise maybe 10,000 clients in your career. How about actually instead developing an online system that can help a few million people, why don’t you use your legal knowledge in different ways?”

George ended by addressing Richard’s optimism that there are positive developments happening in the profession to encourage legal access for all.

“We are expected all of us under the law to know of our rights and obligations and yet it has become a perilous system to which very few of us have ready and reasonable access, and I think that’s tragic and we need to do something about it … A system that can help us avoid having disputes in the first place, whether it be by public access to legal materials, public legal education, online legal services, better advisory services and so forth. It’s important I think that people have ready access to the law that’s applicable to them … We should want our systems and our technologies to help us, alert us, to occasions not just where there is a legal threat or risk, but also where there is a legal opportunity.”

Are electrons conscious?
Sun, 13 Aug 2017 07:30:41 +0000
For most of the twentieth century a “brain-first” approach dominated the philosophy of consciousness. The idea was that the brain is the thing we really understand, through neuroscience, and the task of the philosopher is to try to understand how that thing “gives rise” to subjective experience: to the inner world of colours, smells, and sounds that each of us knows in our own case. This philosophical project has not gone all that well – nobody has provided even the beginnings of a satisfying solution to what David Chalmers called “the hard problem” of consciousness. More recently a quiet revolution has been occurring in philosophy of mind which aims to turn the brain-first approach on its head. According to the view that has come to be known as “Russellian monism,” physical science tells us surprisingly little about the nature of the brain (more on this below). It is the nature of consciousness that we really understand – through being conscious – and hence the philosophical task is to build our picture of the brain around our understanding of consciousness. We might call this a “consciousness-first” approach to the mind-body problem. The general approach has given birth to a broad family of specific theories outlined in numerous recent publications. Suddenly progress on consciousness looks possible.

The essence of Russellian monism

The conscious mind and the physical brain seem on the face of it to be wildly different things. For one thing, conscious experiences involve a wide variety of what philosophers call “phenomenal qualities.” This is just a technical term for the qualities we find in our experience: the redness of a red experience, the itchiness of an itch, the sensation of spiciness. A neuroscientific description of the brain seems to leave out these qualities. How on earth can quality-rich experience be accommodated within soggy grey brain matter? The Russellian monist solution, inspired by certain writings of Bertrand Russell from the 1920s, is to point out that physical science is in fact silent on the intrinsic nature of matter, restricting itself to telling us what matter does. Neuroscience characterises a region of the brain in terms of (A) its causal relationships with other brain regions/sensory inputs/behavioural outputs and (B) its chemical constituents. Chemistry in turn characterises those chemical constituents in terms of (A) their causal relationships with other chemical entities and (B) their physical constituents. Finally, physics characterises basic physical properties in terms of their causal relationships with other basic physical properties. Throughout the whole hierarchy of the physical sciences we learn only about causal relationships. And yet there must be more to the nature of a physical entity, such as the cerebellum, than its causal relationships. There must be some intrinsic nature to the cerebellum, some way it is in and of itself independently of what it does. About this intrinsic nature physical science remains silent. Accepting this casts the problem of consciousness in a completely different light, and points the way to a solution.
Our initial question was, “Where in the physical processes of the brain are the phenomenal qualities?” Our discussion has led to another question, “What is the intrinsic nature of physical brain processes?” The Russellian monist proposes answering both questions at once, by identifying phenomenal properties with the intrinsic nature of (at least some) physical brain processes. Whilst neuroscience characterises brain processes extrinsically, in terms of what they do, in their intrinsic nature they are forms of quality-rich consciousness.

Two Arguments for Panpsychism

Russellian monism is a general framework for unifying matter and mind and thereby avoiding dualism: the view of Descartes that mind and body are radically different kinds of thing. But how to fill in the details is much debated. Many have found it natural to extend Russellian monism into a form of panpsychism, the view that all matter involves experience of some form, bringing a new respectability to this much-maligned view. There are essentially two arguments for this extension, one of which I don’t accept and one of which I do. The first is the “intelligible emergence argument,” an ancient argument for panpsychism championed in modern times by Galen Strawson. The idea is that it is only by supposing that there is consciousness “all the way down” to electrons and quarks that we can render the emergence of human and animal consciousness intelligible. Experience can’t possibly emerge from the utterly non-experiential, according to Strawson, so it must be there all along. One difficulty for this argument is that even if we do attribute basic consciousness to the smallest bits of the brain, it’s still not clear how to intelligibly account for the consciousness of the brain as a whole. How do the interactions of trillions of tiny minds produce a big mind? This is the so-called “combination problem” for panpsychism, and until it is solved it’s not obvious that the panpsychist Russellian monist has an advantage over the non-panpsychist Russellian monist when it comes to explaining the emergence of human and animal consciousness. I favour instead what I call “the simplicity argument” for panpsychism. Whilst in the mind-set that physical science is giving us a complete picture of the universe, panpsychism is implausible, as physical science doesn’t seem to be telling us that electrons are conscious. But once we accept the basic tenets of Russellian monism, things look quite different.
Physical science tells us nothing about the intrinsic nature of matter; indeed, arguably the only thing we know about the intrinsic nature of matter is that some of it, i.e. human brains, has a consciousness-involving nature. From this epistemic starting point, the simplest, most parsimonious speculation is that the nature of matter outside of brains is continuous with the nature of matter inside of brains, in also being consciousness-involving. This may seem like an insubstantial consideration, but science is strongly motivated by considerations of simplicity. Special relativity, for example, is empirically equivalent to its Lorentzian rival but favoured as a much simpler interpretation of the data.

Against neuro-fundamentalism

Some philosophers–I call them “neuro-fundamentalists”–think the only way to make progress on consciousness is to do more neuroscience. These philosophers have an exceedingly limited view of how science operates, as though it’s simply a matter of doing the experiments and recording the data. In fact, many significant developments in science have arisen not from experimental findings in the lab but from a radical reconceptualization of our picture of the universe formulated from the comfort of an armchair. Think of the move in the Minkowski interpretation of special relativity from thinking of space and time as distinct things to the postulation of the single unified entity of spacetime, or Galileo’s separation of the primary and the secondary qualities which paved the way for mathematical physics. My hunch is that progress on consciousness, as well of course as involving neuroscience, will involve this kind of radical reconceptualization of the mind, the brain, and the relationship between them. Russellian monism looks to be a promising framework in which to do this.

]]>http://feeds.feedblitz.com/~/432542068/_/oupblog/feed/11*Featured,consciousness,electron,philip goff,Philosophy,consciousness and fundamental reality,Arts & Humanities,philosophy of the mind,conscious,monism,panpsychism,neuro-fundamentalismFor most of the twentieth century a “brain-first” approach dominated the philosophy of consciousness. The idea was that the brain is the thing we really understand, through neuroscience, and the task of the philosopher is try to understand how that thing “gives rise” to subjective experience: to the inner world of colours, smells and sounds that each of us knows in our own case. This philosophical project has not gone all that well–nobody has provided even the beginnings of a satisfying solution to what David Chalmers called “the hard problem” of consciousness. More recently a quiet revolution has been occurring in philosophy of mind which aims to turn the brain-first approach on its head. According to the view that has come to be known as “Russellian monism,” physical science tell us surprisingly little about nature of the brain (more on this below). It is the nature of consciousness that we really understand–through being conscious–and hence the philosophical task is to build our picture of the brain around our understanding of consciousness. We might call this a “consciousness-first” approach to the mind-body problem. The general approach has given birth to a broad family of specific theories outlined in numerous recent publications. Suddenly progress on consciousness looks possible.
The essence of Russellian monism
The conscious mind and the physical brain seem on the face of it to be wildly different things. For one thing, conscious experiences involve a wide variety of what philosophers call “phenomenal qualities.” This is just a technical term for the qualities we find in our experience: the redness of a red experience, the itchiness of an itch, the sensation of spiciness. A neuroscientific description of the brain seems to leave out these qualities. How on earth can quality-rich experience be accommodated within soggy grey brain matter? The Russellian monist solution, inspired by certain writings of Bertrand Russell from the 1920s, is to point out that physical science is in fact silent on the intrinsic nature of matter, restricting itself to telling us what matter does. Neuroscience characterises a region of the brain in terms of (A) its causal relationships with other brain regions/sensory inputs/behavioural outputs and (B) its chemical constituents. Chemistry in turn characterises those chemical constituents in terms of (A) their causal relationships with other chemical entities and (B) their physical constituents. Finally, physics characterises basic physical properties in terms of their causal relationships with other basic physical properties. Throughout the whole hierarchy of the physical sciences we learn only about causal relationships. And yet there must be more to the nature of a physical entity, such as the cerebellum, than its causal relationships. There must be some intrinsic nature to the cerebellum, some way it is in and of itself independently of what it does. About this intrinsic nature physical science remains silent. Accepting this casts the problem of consciousness in a completely different light, and points the way to a solution.
Our initial question was, “Where in the physical processes of the brain are the phenomenal qualities?” Our discussion has led to another question, “What is the intrinsic nature of physical brain processes?” The Russellian monist proposes answering both questions at once, by identifying phenomenal properties with the intrinsic nature of (at least some) physical brain processes. Whilst neuroscience characterises brain processes extrinsically, in terms of what they do, in their intrinsic nature they are forms of quality-rich consciousness.
Two Arguments for Panpsychism
Russellian monism is a general framework for unifying matter and mind and thereby avoiding dualism: the view of Descartes that mind and body are radically ...

Is science being taken out of environmental protection?
Sat, 12 Aug 2017
In 1963, dying of breast cancer and wearing a wig to cover the effects of radiation treatments, Rachel Carson appeared before a congressional committee to defend her indictment of pesticides. She had rattled the chemical industry with Silent Spring, which urged caution at a time when Americans were buying dangerous products that the scientific community had itself made possible. Industry representatives mounted a vigorous campaign to discredit the scientific perspective she offered, some calling her work a hoax.

Attacks on science by industry—including attacks by scientists paid by industry, as was the case with Silent Spring—are common in the world of environmental regulation. Just as Silent Spring ushered in the modern environmental movement, ironically so did it also provoke this tactic. Fortunately, however, our government has until recently recognized the need for strong science as a cornerstone of modern environmental policy and has incorporated scientific perspectives into the heart of its decision-making. In the words of the Executive Order that established the EPA in 1970, one of the key functions of the new agency was “the conduct of research on the adverse effects of pollution and on methods and equipment for controlling it.”

A good example of this recognition is the EPA’s 47-member Scientific Advisory Board (SAB), established in 1978 at the direction of Congress. The SAB reviews the quality and relevance of scientific research and data when the EPA is creating environmental regulations. EPA ethics officials screen its members—mostly leading academics—for conflicts of interest to ensure they are independent and not tied to special interests. EPA’s current SAB website notes that “a key priority for EPA is to base Agency actions on sound scientific data.”

The EPA often requests the SAB to independently peer review major reports that will undergird future regulations. In 2016, for example, the SAB reviewed a major report concerning the impact on the nation’s drinking water of hydraulic fracturing, a lucrative and controversial method of extracting natural gas. The review resulted in changes to the final report that were met with criticism from the oil and gas industry.

The headquarters of the United States Environmental Protection Agency in Washington, D.C. Photographed on 12 August 2006 by user Coolcaesar. CC BY-SA 3.0 via Wikimedia Commons.

Things are different today. The President’s current budget proposal would cut SAB funding by 84 percent. The EPA Administrator, Scott Pruitt, has dragged his feet in appointing a new SAB Chair to fill a term that ends in September. The selection process, which in the past has been transparent, with public input, and a multi-month effort, has just begun. Will it be a rush job? Will it be transparent? What is the relationship of this foot-dragging to Pruitt’s connection with oil and gas interests in Oklahoma, his home state? And why is Pruitt not encouraging renewed terms for members of the Board of Scientific Counselors (BOSC) who have distinguished themselves by their service? According to Pruitt’s spokesman, the objective is to broaden the Board to include people who understand the impact of environmental regulations on the regulated community. This is not how Ken Kimmel, the President of the Union of Concerned Scientists, sees it. To him, Pruitt’s signal to the BOSC is “part of a multifaceted effort to get science out of the way of the deregulation agenda.”

It appears that congressional House Republicans are proving Kimmel right. They passed the EPA Science Advisory Board Review Act in March 2017 on a straight party line vote: 229 to 193. The bill ensures that industry representatives will have strong voices, strong enough perhaps to drown out the independent scientists currently on the Board. The bill eases conflict of interest rules, blocks academic researchers, and burdens the SAB with requirements that deter scientists lacking industrial financial support. As Eddie Bernice Johnson, ranking member of the House Committee on Science, Space, and Technology, put it, “This bill is a transparent attempt to slow down the regulatory process and stack science review boards with industry representatives. The result would be…worse science at EPA and less public health protection for American citizens.” Which brings us back to hydrofracking. How would a SAB reimagined in a new Science Advisory Board Review Act, and reconstituted by industry-friendly Scott Pruitt, have come out in its review of the hydrofracking study?

Voices from industry are important. They are rightly given ample platforms throughout the regulatory process. Indeed, some of the most robust and voluminous comments come from representatives from industries such as the American Petroleum Institute, the Chemical Manufacturers Association, or Exxon Mobil. Such institutions also have outsized lobbying voices, some of which reach into the legislative process and influence the drafting of laws under consideration by government officials.

Science needs voices too. The Science Advisory Board and the Board of Scientific Counselors have been among them for years but now are under attack. Our highest levels of government need to protect the role of strong science as EPA staff labor to protect public health and the natural world.

What makes a good manager? [excerpt]
Sat, 12 Aug 2017
Is modern work culture pushing otherwise good people to adopt poor management styles? From creating “growth opportunities” to taking on mentors, managers often find themselves falling into progressive traps that seem like the right thing to do, but ultimately lead employees astray. In the following excerpt from Good People, Bad Managers, Samuel A. Culbert examines the effectiveness of modern management approaches.

When it’s called to their attention, most managers quickly recognize the value of new management approaches and the importance of upgrading their current style. But the work culture has them fearful of taking a new direction and possibly making an image-discrediting mistake. Wading into the waters of change, but risking only one toe at a time, they seldom make sufficient headway to realize the momentum required to continue progressing. In the end, they talk enthusiastically about progressive practices but don’t commit to system change. And do they ever talk. Knowing change is needed, wanting to think themselves avant-garde and progressive, they speak enlightenment words. But it’s old wine in new bottles.

Passing timelines and meeting benchmarks, managers “progress-up.” They no longer make assignments, they provide “growth opportunities” and “challenge a person’s potential.” They don’t ask for help, they “reach out.” Their direct reports are no longer their employees, they are “business partners.” They don’t persuade and convince, they “discuss to get buy-in.” They no longer assign people to project groups, they assign them to “tiger teams” and invite them to “take journeys together.” Instead of working the data, it’s a “deep dive” and “taking analysis to the next level.” Finding someone proposing a course of action they like, they don’t just concur, now they’re “in violent agreement”—which always scares the hell out of me. When a manager wants someone’s focus, they say “let me be honest with you” and “time to open the kimono” without realizing they might be admitting that past declarations lacked truthfulness and, should I say it, a certain degree of “intimacy.” Do these words really change anything? Does anyone work “inside the box” anymore?!

The field of management has gone too many years implementing new and progressive management practices without much fundamental progress made. Employees want managers to perform their mandated other-directed duties. They need managers to help them accomplish what they’re unable to do for themselves. But this entails managers acknowledging past insufficiencies, admitting error, and taking other-directed approaches to managing. Is it that managers need different guidance from their higher-level managers and leaders, or is it that they’re too insecure and system-suspicious to vary from what they know? Probably it’s both.

The field of management has gone too many years implementing new and progressive management practices without much fundamental progress made.

Unfortunately, when managers find the approach they’re using not working, too often their backup approach is going longer and stronger with the same approach that didn’t work. And when that doesn’t overpower employee resistance, you know what they do. It’s a route we’ve already traveled. They blame their employee for not being responsive. Responsive to what? Responsive to the self-serving, pretentious practices the work culture claims are good managerial behavior.

When asking top-level managers how they got to the top, a large percentage cite someone “big” who, recognizing their potential, took them under their wing, so to speak, and set about opening doors. In today’s lexicon that’s called “being mentored.” But I don’t find mentoring a company plus. A mentor is someone playing favorites, short-circuiting the system, and ensuring preferential treatment at the expense of system-wide fair play.

Needless to say, mentors don’t think what they’re doing is negative. In their minds they’re practicing good management, doing what’s objectively right for the company. They’re cultivating talent, keeping people challenged, and fast-tracking “high performers” to ensure the company doesn’t lose them. It’s as if what they are doing is part of some grand retention strategy. Well, as much as I hate to rain on anyone’s happiness parade, playing favorites by mentoring is not the good management development system companies should be shooting for. Good management is not about mentoring one individual at a time. It’s about fixing the system so that everyone can be their best and help the company, and creating the circumstances for individuals to realize their ambitions and dreams.

Managing in today’s work culture is fraught with too many demands for managers to assume a more other-directed focus and mentality. To do so, managers would have to put self-pursuits on hold, drop the pretense of objectivity, and work collaboratively with cohorts to identify and remove obstacles to employee effectiveness companywide. Other-directed management requires providing employees the assurances needed to feel safe speaking their minds. Give employees a voice and managers will finally have the data they now lack for facing up to their mistakes and revising the erroneous reasoning that led to their making them. The way things are going now, what I’m talking about appears light-years away.

Under what circumstances can employees and cohorts expect to get the type of managerial good behavior they need? It’s obvious; the culture has to change.

From singer to choir director: A Q&A with Ben Parry
Fri, 11 Aug 2017
Ben Parry studied at Cambridge University, where he was a member of The Choir of King’s College, Cambridge, before he became the musical director of, and singer with, the Swingle Singers. Today, Ben has a busy career as a conductor, arranger, singer and producer in both classical and light music fields, directing choirs as varied as the National Youth Choirs of Great Britain, the professional choir London Voices, the King’s College Cambridge mixed choir King’s Voices, and Aldeburgh Voices. We caught up with Ben to ask him about his progression from singer to director, his conducting experiences, and his advice for directors wishing to set up their own choirs.

With the amount of conducting work you do, you must have to move around a lot! Could you tell us what a typical week in your life looks like?

Any week can be crazy but very rewarding! A typical week might include a choral evensong in King’s (as director of King’s Voices), a film session at Abbey Road or Air Studios with my professional choir – London Voices, a meeting for National Youth Choirs of Great Britain, a rehearsal at Snape Maltings with Aldeburgh Voices, and, sometimes, composing or arranging choral music at home.

Although you started out as a singer, much of your work now revolves around directing. How did you make the progression from singer to director?

I got really interested in choral conducting just before I moved to Edinburgh with my family in 1995. In Edinburgh, I quickly became Chorus Director of the Scottish Chamber Chorus and I also started my own group, Dunedin Consort, which is now one of Scotland’s leading ensembles.

How did you learn to conduct?

My approach with choirs has been informed by observing the many conductors I have worked for – for example, Andrew Parrott’s natural and empathetic approach to music-making has been particularly influential. I’ve never had a conducting lesson, but have learnt my craft through retaining the good things and jettisoning the bad things from other conductors.

Ben Parry conducting in action. Used with permission.

The groups that you conduct range from amateur to professional choirs. Do your rehearsal and performance expectations differ between these?

One of the most satisfying things about conducting a range of choirs with differing abilities is how each approach informs the other. I expect the same enthusiasm, application, discipline, and attention to detail from whatever choir I’m working with. You’d expect that London Voices would deliver the goods almost without thinking about it, and that the commitment and enthusiasm of a choir like the National Youth Choir would be greater. However, sometimes it’s the other way round! It’s true to say that the more I put into a rehearsal or performance, the more I and the singers get out of it.

What do you look for when you’re recruiting new singers for the choirs you direct?

For younger choirs (National Youth Choir and Eton Choral Courses) you’re always looking for passion and potential, whereas London Voices relies on professional experience and reputation. With King’s Voices and Aldeburgh Voices, a shared love of singing is very important. But all singers should really share all of these qualities. I’d hate it if any of the singers I worked with weren’t happy singing. What I love is that sometimes the learning curve of a young singer on an Eton Choral Course is so much greater than the satisfaction of a job well done by one of the professionals.

What advice would you give to someone preparing for an audition to become a choir director?

It’s important to build up a good reputation through your work so people respect you and are confident in your abilities. I was offered co-direction of London Voices because Terry Edwards (founder and co-director) had heard I was good at running choirs. Stepping into someone else’s shoes can be hard, but it’s important to remain genuine.

Do you have any advice to offer to directors who are trying to set up their own choir?

The choral market is niche and definitely saturated with myriad choirs, so you need to be able to offer something unique and innovative – don’t just set up a choir because you want to. Ask yourself what the need is; it may be that you’d like to explore some undiscovered repertoire, or that the region you’re in doesn’t offer enough opportunities for keen singers. The journey through a music career can throw up many challenges along the way, and isn’t always easy. But if you do a good job and are enthusiastic, efficient, and polite, you’ll go far and have a rewarding time!

Featured image credit: music notes sheet music bookeh and folder by David Beale. Public domain via Unsplash.

]]>http://feeds.feedblitz.com/~/430165838/_/oupblog/feed/0Ben Parry,*Featured,Scottish Chamber Chorus,amateur choir,choir director,Arts & Humanities,choral,chorus director,Music,q&a,professional choir,London Voices,choral musicBen Parry studied at Cambridge University, where he was a member of The Choir of King’s College, Cambridge, before he became the musical director of, and singer with, the Swingle Singers. Today, Ben has a busy career as a conductor, arranger, singer and producer in both classical and light music fields; directing choirs as varied as the National Youth Choirs of Great Britain and the professional choir London Voices, to the King’s College Cambridge mixed choir – King’s voices, and Aldeburgh voices. We caught up with Ben to ask him about his progression from singer to director, his conducting experiences, and his advice for directors wishing to set up their own choirs.
With the amount of conducting work you do, you must have to move around a lot! Could you tell us what a typical week in your life looks like?
Any week can be crazy but very rewarding! A typical week might include a choral evensong in King’s (as director of King’s Voices), a film session at Abbey Road or Air Studios with my professional choir – London Voices, a meeting for National Youth Choirs of Great Britain, a rehearsal at Snape Maltings with Aldeburgh Voices, and, sometimes, composing or arranging choral music at home.
Although you started out as a singer, much of your work now revolves around directing. How did you make the progression from singer to director?
I got really interested in choral conducting just before I moved to Edinburgh with my family in 1995. In Edinburgh, I quickly became Chorus Director of the Scottish Chamber Chorus and I also started my own group, Dunedin Consort, which is now one of Scotland’s leading ensembles.
How did you learn to conduct?
My approach with choirs has been informed by observing the many conductors I have worked for – for example, Andrew Parrott’s natural and empathetic approach to music-making has been particularly influential. I’ve never had a conducting lesson, but have learnt my craft through retaining the good things and jettisoning the bad things from other conductors. Ben Parry conducting in action. Used with permission.
The groups that you conduct range from amateur to professional choirs. Do your rehearsal and performance expectations differ between these?
One of the most satisfying things about conducting a range of choirs with differing abilities is how each approach informs the other. I expect the same enthusiasm, application, discipline, and attention to detail from whatever choir I’m working with. You’d expect that London Voices would deliver the goods almost without thinking about it, and that the commitment and enthusiasm of a choir like the National Youth Choir would be greater. However, sometimes it’s the other way round! It’s true to say that the more I put into a rehearsal or performance, the more I and the singers get out of it.
What do you look for when you’re recruiting new singers for the choirs you direct?
For younger choirs (National Youth Choir and Eton Choral Courses) you’re always looking for passion and potential, whereas London Voices relies on professional experience and reputation. With King’s Voices and Aldeburgh Voices, a shared love of singing is very important. But all singers should really share all of these qualities. I’d hate it if any of the singers I worked with weren’t happy singing. What I love is that sometimes the learning curve of a young singer on an Eton Choral Course is so much greater than the satisfaction of a job well done by one of the professionals.
What advice would you give to someone preparing for an audition to become a choir director?
It’s important to build up a good reputation through your work so people respect you and are confident in your abilities. I was offered co-direction of London Voices because Terry Edwards (founder and co-director) had heard I ...

Ben Parry studied at Cambridge University, where he was a member of The Choir of King’s College, Cambridge, before he became the musical director of, and singer with, the Swingle Singers. Today, Ben has a busy career as a conductor, arranger, ...

“My latest brain child”
10 August 2017
In his 1954 essay ‘Metapsychological and Clinical Aspects of Regression within the Psycho-Analytical Set’, Donald Winnicott states:

“The idea of psycho-analysis as an art must gradually give way to a study of environmental adaptation relative to patients’ regressions. […] I know from experience that some will say: all this leads to a theory of development which ignores the early stages of the development of the individual, which ascribes early development to environmental factors. This is quite untrue. In the early development of the human being the environment that behaves well enough (that makes good-enough active adaptation) enables personal growth to take place.”

In a 1965 essay written for the British Psychoanalytical Society, ‘The Psychology of Madness: A Contribution from Psycho-Analysis’, he wrote:

“The practice of psycho-analysis for thirty-five years cannot but leave its mark. For me there have come about changes in my theoretical formulation, and these I have tried to state as they consolidated themselves in my mind. Often what I have discovered had been already discovered and even better stated […] This does not deter me from continuing to write down what is my latest brain-child.”

These words convey Winnicott’s most significant legacy to psychoanalysis: a theory of the mind of the Subject in constant evolution through its contact with the Other, along the pathway of life. This intuition has found important confirmation in research on the baby’s earliest stages of life, in explorations of the psychoanalytic treatment of borderline states and psychoses, and in approaches to traumatic situations.

“Often what I have discovered had been already discovered and even better stated…”

Winnicott’s view of psychoanalysis aims to identify the contexts – in terms of environment and treatment – in which the subject’s psychic life recovers and organizes itself according to new ways of functioning. He shows its plasticity, its potential for constant reorganization, and the presence of a way of functioning that is developmentally non-linear but experientially transformative at any stage of life.

Winnicott was aware that the development of psychoanalytic theory would greatly benefit from his clinical insights into the early life of the mother-infant couple and from the analysis of borderline patients. He was not indifferent to theory; on the contrary, he was curious and constantly researching. This biological and relational foundation of human development constitutes the fertilizing, founding core of his thought, which was received in Italy particularly by Eugenio and Renata Gaddini, who deserve extraordinary credit for grasping it and making it known.

Precisely because of this intrinsically biological and relational foundation, during the well-known events of the Controversial Discussions in the British Psychoanalytical Society, Winnicott tried not to become stuck in a static and abstract ideological position, and he abstained from giving his theoretical-clinical findings a coherent and complete theoretical structure. This is what others called “Winnicott’s illness,” as he himself recalled, with a certain ironic awareness, in a 1952 letter to Melanie Klein in which he declined her offer to write a paper for a book she was editing.

Winnicott presenting ‘A Psychotherapeutic Interview in Child Psychiatry’ at a Pre-Congress Clinical Seminar of the twenty-third Congress of the International Psychoanalytical Association, held in Stockholm on 24 July 1963.
Courtesy of Barbara Young, held in Donald Woods Winnicott Archive, in the care of the Wellcome Library.

However, in 1971, the very year of his death, Winnicott decided to publish a book, Playing and Reality, which gathers in an organic – albeit not systematized – manner his papers on his most significant psychoanalytic discovery: the identification of the transitional area in the psychic functioning of the human subject.

The conceptual developments of Winnicott’s thought were never organized by him into a coherent and complete scheme; they remained scattered as discrete elements that fertilized several fields and different theories. As happens with every particularly creative thinker, an inevitable reductionism later took shape. The founding formula – “there is no such thing as a baby, there is a baby and someone” – was impoverished and reduced to a scheme that gave mechanical, excessive importance to the environment, obscuring the primary creativity of the infant.

However, the absence of a coherent and complete framework in Winnicott’s thought may also be read in another way: as the expression of his determined will not to renounce the polarity between biology and narration. This polarity is intrinsic to Freud’s way of thinking; Freud had marked its origin without reaching a unitary resolution, and Winnicott confirmed and amplified it, driving the tension forward.

In his Project for a Scientific Psychology, Freud’s intention was to develop psychology as a natural science which might express itself in terms of forces and structures, according to the language of the sciences of his time. But in Studies on Hysteria, written in the same period, the exposition of his clinical cases takes the form of a necessary narration, intrinsic to the theory from which they originated. Such polarity travels through the entire history of psychoanalysis.

Similarly, Winnicott did not intend to detach his theory of psychic functioning from what he called the psyche-soma, the sensory experience from which the mind takes its shape. At the same time, by giving theoretical consistency to the area of transitional phenomena, and to the production of that intermediate world to which the analytic process also belongs, Winnicott confers on this third reality the status of a psychic structure that is fundamental for mental functioning.

Psychoanalytical productions, as well as artistic ones, utilize concrete, sensory elements which have a life of their own outside the subject, but which the subject recreates by conferring a personal meaning upon them. They do not belong to the field of hermeneutics, of descriptions and explanations as external captions for the subject’s psychic life.

Winnicott thus maintains the implicit tension in Freudian thought between biological dynamics and narrative construction; he does not resign himself to formulating an organic theory that, for the sake of completeness, would have to renounce either experience or dreams as they emerge from the psyche-soma. Winnicott stays with this tension, convinced that research, both in the biological field and in the relational field of the analytic situation, will yield further interesting results.

Culture, inequalities, and social inclusion across the globe: an ASA 2017 reading list
10 August 2017
This year, the 2017 American Sociological Association Annual Meeting takes place in Montreal, and our Sociology team is gearing up. The 112th Annual Meeting will take place from 12–15 August, bringing together over 5,000 sociologists for four days of lectures, sessions, and networking with some of the top figures in the field.

This year’s theme is “Culture, Inequalities, and Social Inclusion across the Globe.” The annual meeting is focusing on improving our understanding of the nexus of culture, inequalities, and group boundaries, in order to promote greater social inclusion and resilience, collective well-being, and solidarity throughout the world.

The conference schedule offers a variety of meetings, workshops, and sessions to check out. Ahead of the conference, we have created a reading list of authors attending ASA17, to highlight important topics and issues within the field, raise awareness of current inequalities, and promote social inclusion across the world.

Citizen Protectors: The Everyday Politics of Guns in an Age of Decline by Jennifer Carlson

A Gallup poll conducted just a month after the Newtown school shootings found that 74% of Americans oppose a ban on handguns, and at least 11 million people now have licenses to carry concealed weapons as part of their everyday lives. Why do so many Americans not only own guns but also carry them? In Citizen-Protectors, Jennifer Carlson offers a compelling portrait of gun carriers, shedding light on Americans’ complex relationship with guns.

Denial of Violence: Ottoman Past, Turkish Present, and Collective Violence against the Armenians, 1789-2009 by Fatma Muge Gocek

Winner of the 2015 Mary Douglas Prize for Best Book, Sociology of Culture Section, American Sociological Association, and 2016 Barrington Moore Book Award, Honorable Mention, Comparative Historical Sociology Section of the American Sociological Association.

Denial of Violence develops a novel theoretical, historical, and methodological framework for understanding what happened and why the denial of collective violence against the Armenians still persists within the Turkish state and society.

National Colors: Racial Classification and the State in Latin America by Mara Loveman

Winner of the 2015 Best Scholarly Book Award from the American Sociological Association Section on Global and Transnational Sociology, and 2015 Oliver Cromwell Cox Book Award from the American Sociological Association Section on Racial and Ethnic Minorities.

Mara Loveman explains why most Latin American states classified their citizens by race on early national censuses, why they stopped the practice of official racial classification around the mid-twentieth century, and why they reintroduced ethnoracial classification on national censuses at the dawn of the twenty-first century. The way census officials described populations in official statistics, in turn, shaped how policymakers viewed national populations and informed their prescriptions for national development—with consequences that still reverberate in contemporary political struggles for recognition, rights, and redress for ethnoracially marginalized populations in today’s Latin America.

Despite the Best Intentions: How Racial Inequality Thrives in Good Schools by Amanda E. Lewis and John B. Diamond

Through five years’ worth of interviews and data-gathering at Riverview high school, John Diamond and Amanda Lewis have created a rich and disturbing portrait of the achievement gap that persists more than fifty years after the formal dismantling of segregation. As students progress from elementary school to middle school to high school, their level of academic achievement increasingly tracks along racial lines, with white and Asian students maintaining higher GPAs and standardized testing scores, taking more advanced classes, and attaining better college admission results than their black and Latino counterparts. Most research to date has focused on the role of poverty, family stability, and other external influences in explaining poor performance at school, especially in urban contexts.

The Tumbleweed Society: Working and Caring in an Age of Insecurity by Allison Pugh

Today we live in a society in which relationships, social ties, and jobs seem to change constantly. People roll this way and that, like tumbleweeds blown across an arid plain. How do we raise children, put down roots in our communities, and live up to our promises at a time when flexibility and job insecurity reign? Allison Pugh offers a moving exploration of sacrifice, betrayal, defiance, and resignation, as people adapt to insecurity with their own negotiations of commitment on the job and in intimate life.

Wounded City: Violent Turf Wars in a Chicago Barrio by Robert Vargas

Winner of the 2017 Distinguished Book Award from the Political Sociology Section of the American Sociological Association and 2016 Outstanding Book Award, Section on Peace, War and Social Conflict, American Sociological Association.

Wounded City dispels the popular belief that a lack of community is the primary source of violence, arguing that competition for political power and state resources often undermines efforts to reduce gang violence. Robert Vargas argues that the state, through the way it governs, can contribute to distrust and division among community members, thereby undermining social cohesion.

Soybeans and Power: Genetically Modified Crops, Environmental Politics, and Social Movements in Argentina by Pablo Lapegna

Winner of the 2017 Best Book Award, Sociology of Development Section of the American Sociological Association.

Although Argentina’s use of genetically modified (GM) soybean seeds has spurred a major agricultural boom, it has also had a negative impact on many communities: it has driven the deforestation of native forests, prompted the eviction of indigenous and peasant families, and spurred episodes of contamination. Soybeans and Power investigates the ways in which rural populations have coped with GM soybean expansion in Argentina. It also gives voice to the communities most adversely affected by GM technology and describes the strategies they have enacted in order to survive.

Have questions for us? We’ll be at booths 915-919 in the exhibit hall with our latest books, journals, and online resources in sociology. The ASA Exhibit Hall will be located in Hall 220C in the Palais des Congrès de Montréal.

Introducing Hannah, OUP’s Music Hire Librarian
10 August 2017
We are delighted to introduce Hannah Boron, who joined OUP’s Music Hire Library team in March 2017 and is based in the Oxford offices. We asked her to tell us what her job involves and chatted more generally about fantasy novels and how she would like to be Lara Croft.

What does a sheet music hire librarian actually do?

Some people are surprised, when I tell them what I do, that music publishers hire music to customers as well as sell it. Not all OUP music is available to buy. For large orchestral works like symphonies and concertos, for example, it often wouldn’t be cost-effective to print sets of parts to sell via music shops–but the music still needs to be available for people to access and perform.

That’s where the Hire Library comes in. We keep sets of orchestral parts (and sometimes vocal scores too) for a huge variety of works from composers as far back as Purcell and Monteverdi to those who are active now, like Bob Chilcott and Zhou Long. When a performing group wants to play an OUP work, they get in touch with us and arrange to borrow the parts they need for however long it will take them to rehearse and perform the piece, in exchange for a hire fee.

Some of the repertoire we hire out is also available for sale, which gives customers options, but if a performing group is only expecting to use the music once, they often feel it is better to hire than buy. Our customers range from small amateur ensembles to some of the UK’s top orchestras!

When did you start working at OUP?

Hannah Boron

At the end of March 2017, so very recently in relative terms! Somehow it feels as though it has been both shorter and longer than that, possibly because it’s such a friendly environment.

What is your typical day like at OUP?

I walk to work, which takes about forty minutes, and usually sets me up well for the day. The first thing I do, once I’m settled at my desk, is look in the email inbox and sort out the quote requests from the orders that can be booked immediately–and see if there’s anything urgent!

Then it’s time to assemble the orders that are due to be dispatched. The first batch of the day’s post is collected at 11am, after which I’m usually ready for a cup of tea! Depending on how many emails there are to reply to, it’s always good to spend some time booking returned music back into stock. There are often missing pieces of material to be chased up with customers too.

What are you reading right now?

I am a huge fan of Terry Pratchett’s Discworld books, and am currently reading The Shepherd’s Crown. The last thing I read was Angela Carter’s Nights at the Circus, which is an opulently fantastical book: her use of language is truly virtuosic.

If you didn’t work in publishing, what would you be doing?

I would want to be either a writer or a singer. I have completed and published one fantasy adventure novel, involving a bounty hunt for thief gangs set in a pseudo-Middle Eastern landscape, and am working on another one.

I have sung in various different choirs over the years but am currently concentrating on my solo repertoire, mainly in the musical theatre genre. My recent performance of “Whatever Happened to my Part?” (from Spamalot) at the Abingdon Music Festival went down so well that I made it into the final concert! I’m quite quiet a lot of the time, so people are often surprised when they hear me sing. Apparently I have an inner diva…

If you could trade places with any one person for a week, who would it be and why?

I don’t think of myself as a particularly hardy person, so I would like to have the experience of being someone who is physically adventurous. Being Lara Croft would be awesome!

If you were stranded on a desert island, which three items would you take with you?

An endless supply of pens and paper, my keyboard (assuming there was a power source somewhere)–and a complete set of Discworld novels.
