Our brains seem better at predictions than we are. A part of our brain becomes active when it knows something will be successfully crowdfunded, even if we consciously decide otherwise. If this finding stands up and works in other areas of life, neuroforecasting may lead to better voting polls or even predict changes in financial markets.

To see if one can predict market behaviour by sampling a small number of people, Brian Knutson at Stanford University in California and his team scanned the brains of 30 people while they decided whether to fund 36 projects from the crowdfunding website Kickstarter.

The projects were all recently posted proposals for documentary films. Each participant had their brain scanned while viewing the pictures and descriptions of each campaign, and was then asked whether they would want to fund the project.

When the real Kickstarter campaigns ended a few weeks later, 18 of the projects had gained enough funding to go forward. Examining the participants’ brain scans, the team discovered that activity in a region called the nucleus accumbens had been different when they considered projects that later went on to be successful.

It is not always good to have the opportunity to make a choice. When we must decide to take one action rather than another, we also, ordinarily, become at least partly responsible for what we choose to do. Usually this is appropriate; it’s what makes us the kinds of creatures who can be expected to abide by moral norms.

Yet sometimes, having a choice means deciding to commit one bad act or another. Imagine being a doctor or nurse caught in the following fictionalised version of real events at a hospital in New Orleans in the aftermath of Hurricane Katrina in 2005. Due to a tremendous level of flooding after the hurricane, the hospital must be evacuated. The medical staff have been ordered to get everyone out by the end of the day, but not all patients can be removed. As time runs out, it becomes clear that you have a choice, but it’s a choice between two horrifying options: euthanise the remaining patients without consent (because many of them are in a condition that renders them unable to give it) or abandon them to suffer a slow, painful and terrifying death alone. Even if you’re anguished at the thought of making either choice, you might be confident that one action – let’s say administering a lethal dose of drugs – is better than the other. Nevertheless, you might have the sense that no matter which action you perform, you’ll be violating a moral requirement.

Are there situations, perhaps including this one, in which all the things that you could do are things that would be morally wrong for you to do? If the answer is yes, then there are some situations in which moral failure is unavoidable. In the case of the flooded hospital, what you morally should do is something impossible: you should both avoid killing patients without consent and avoid leaving them to suffer a painful death. You’re required to do the impossible.

The so-called heartbeat bill, which Kasich rejected, was considered more vulnerable to legal challenge. Provisions of the measure would have essentially limited the period during which women could get an abortion to about six weeks, when many women don’t even realize they’re pregnant, reports the Associated Press.

Similar measures have faced legal challenges in other states, the news service goes on to say, a fact weighing heavily in Kasich’s veto defense. Kasich, himself an abortion-rights opponent, noted bans in two other states had been declared unconstitutional.

“The State of Ohio will be the losing party in that lawsuit and, as the losing party, the State of Ohio will be forced to pay hundreds of thousands of taxpayer dollars to cover the legal fees for the pro-choice activists’ lawyers. Furthermore, such a defeat invites additional challenges to Ohio’s strong legal protections for unborn life,” the Republican governor said in a statement.

There’s an exceedingly simple way to get better health care: Choose a better hospital. A recent study shows that many patients have already done so, driving up the market shares of higher-quality hospitals.

A great deal of the decrease in deaths from heart attacks over the past two decades can be attributed to specific medical technologies like stents and drugs that break up arterial blood clots. But a study by health economists at Harvard, M.I.T., Columbia and the University of Chicago showed that heart attack survival gains from patients selecting better hospitals were significant, about half as large as those from breakthrough technologies.

That’s a big improvement for nothing more than driving a bit farther to a higher-quality hospital.

Because more Medicare patients went to higher-quality hospitals for heart attacks between 1996 and 2008, overall chances of survival increased by one percentage point, according to the study. To receive care at a hospital with a one-percentage-point gain in survival rate or a one-percentage-point decrease in readmission rate, a heart attack patient traveled 1.8 or 1.1 miles farther, respectively.

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Chatbot Tay, designed to speak like a teenage girl, sounded like a Nazi-loving racist after less than 24 hours on Twitter. Of course, Tay wasn’t designed to be explicitly moral. But plenty of other machines are involved in work that has clear ethical implications.

Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics and author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control,” points out that in hospitals, APACHE medical systems help determine the best treatments for patients in intensive care units—often those who are at the edge of death. Though the doctor may seem to have autonomy, Wallach says, it could be very difficult in certain situations to go against the machine—particularly in a litigious society. “Is the doctor really free to make an independent decision?” he says. “You might have a situation where the machine is the de facto decision-maker.”

As robots become more advanced, the ethical decisions delegated to them will only become more complex. That raises the question of how we should program ethics into machines, and whether we can trust them with moral decisions at all.

WILMINGTON, DE — Early in the morning on March 17, staff from a nonprofit called Upstream USA arrived at a Delaware health clinic. They showed up with some typical supplies: three Dunkin’ Donuts coffee jugs, two dozen doughnuts, countless paper handouts, and one mechanical vagina.

The mechanical vagina — which, much like its human counterpart, is attached to a (mechanical) cervix and uterus — was certainly the most unusual cargo. But it was important: The 40-pound replica of the female reproductive system allows nurses and doctors to practice new procedures. On that Thursday morning, it was where two nurses learned how to insert an intrauterine device (IUD) into a patient.

Driverless or autonomous cars will almost certainly be commonplace quite soon. Imagine you are sitting in such a car, approaching a tunnel on a single-lane mountain road. A child wanders into the middle of the road, blocking the entrance to the tunnel. How should such cars be programmed to react? Keep going and kill the child, or swerve into the tunnel wall and kill the driver?

The tunnel problem was invented by the philosopher Jason Millar. The question, of course, is not what the ‘user’ of the car should do. Nor is it any good suggesting an override function: there may be cases where there isn’t time to react. Millar’s own suggestion is based on an analogy with medical ethics. Those who purchase driverless cars should be permitted to choose their own ‘ethics package’. That suggestion itself rests on his view that there is no ‘right answer’ about what to do in the tunnel case, and that imposing a particular programme on the car would ‘alienate’ users from their moral convictions.

Now Millar is quite clear that he doesn’t mean that anything goes here: he says it would be absurd to allow someone to use a programme that swerves only to avoid males. But this raises the question for him of why people’s own moral commitments are relevant only within a certain range. A more parsimonious and elegant, and I suspect popular, view would be that there is a right answer in the tunnel case, but we don’t know what it is.

On that view, then, perhaps we can agree with Millar’s solution, but for a different reason. There is a right answer, and we know it lies within some range (so, as he says, sexist positions are out). But as we don’t know which view within that range is correct, we should leave it up to individuals to choose for themselves.