Monday, November 27, 2017

Is Support of Censoring Controversial Media Content for the Good of Others? Sexual Strategies and Support of Censoring Pro-Alcohol Advertising. Jinguang Zhang. Evolutionary Psychology, 15(4). https://doi.org/10.1177/1474704917742808

Abstract: At least in the United States, there are widespread concerns with advertising that encourages alcohol consumption, and previous research explains those concerns as aiming to protect others from the harm of excessive alcohol use. Drawing on sexual strategies theory, we hypothesized that support of censoring pro-alcohol advertising is ultimately self-benefiting regardless of its altruistic effect at a proximate level. Excessive drinking positively correlates with having casual sex, and casual sex threatens monogamy, one of the major means with which people adopting a long-term sexual strategy increase their inclusive fitness. Thus, one way for long-term strategists to protect monogamy, and with it their reproductive interest, is to support censoring pro-alcohol advertising, thereby preventing others from becoming excessive drinkers (and consequently having casual sex) under media influence. Supporting this hypothesis, three studies consistently showed that restricted sociosexuality positively correlated with support of censoring pro-alcohol advertising both before and after various value-, ideological-, and moral-foundation variables were controlled for. Also as predicted, Study 3 revealed a significant indirect effect of sociosexuality on censorship support through perceived media influence on others but not through perceived media influence on self. These findings further supported a self-interest analysis of issue opinions, extended third-person-effect research on support of censoring pro-alcohol advertising, and suggested a novel approach to analyzing media censorship support.

Abstract: Women engage in a variety of beauty practices, or “beauty work,” to enhance their physical appearance, such as applying cosmetics, tanning, or exercising. Although the rewards of physical attractiveness are well documented, perceptions of both the women who engage in efforts to enhance their appearance and the high-effort beauty products marketed to them are not well understood. Across seven studies, we demonstrate that consumers judge women who engage in certain types of extensive beauty work as possessing poorer moral character. These judgments occur only for effortful beauty work perceived as transformative (significantly altering appearance) and transient (lasting a relatively short time), such that they emerge within cosmetics and tanning, yet not skincare or exercise. This effect is mediated by the perception that putting high effort into one’s appearance signals a willingness to misrepresent one’s true self, and translates into lower purchase intentions for higher-effort cosmetics. We identify several boundary conditions, including the attractiveness of the woman performing the beauty work and whether the effort is attributed to external norms or causes. In examining how beauty work elicits moral judgments, we also shed light on why effortful cosmetic use is viewed negatively, yet effortful products continue to be commercially successful.

Abstract: Nonviable “zombie” firms have become a key concern in China. Using novel firm-level industrial survey data, this paper illustrates the central role of zombies and their strong linkages with state-owned enterprises (SOEs) in contributing to debt vulnerabilities and low productivity. As a group, zombie firms and SOEs account for an outsized share of corporate debt, contribute to much of the rise in debt, and face weak fundamentals. Empirical results also show that resolving these weak firms can generate significant gains of 0.7–1.2 percentage points in long-term growth per year. These results also shed light on the ongoing government strategy to tackle these issues by evaluating the effects of different restructuring options. In particular, deleveraging, reducing government subsidies, as well as operational restructuring through divestment and reducing redundancy have significant benefits in restoring corporate performance for zombie firms.

Is It Time for a New Medical Specialty? The Medical Virtualist. Michael Nochomovitz; Rahul Sharma. Journal of the American Medical Association, published online November 27, 2017. DOI: 10.1001/jama.2017.17094

[...]

Drivers of Specialty Expansion

Specialty development has been driven by advances in technology and expansion of knowledge in care delivery. Physician-led teams leverage technology and new knowledge into a structured approach for a medical discipline, which gains a momentum of its own with adoption. For instance, critical care was not a unique specialty until 30 years ago. The refinement in ventilator techniques, cardiac monitoring and intervention, anesthesia, and surgical advancements drove the development of the specialty and certification, with subsequent subspecialization (eg, neurological intensive care). The development of laparoscopic and robotic surgical equipment, with advanced imaging, spawned new specialty and subspecialty categories including colon and rectal surgery, general surgical oncology, interventional radiology, and electrophysiology.

In nonprocedural areas, unique certification was established for geriatrics and palliative care. Additional new specialties include hospitalists, laborists, and extensivists, to name a few. These clinical areas do not yet have formal training programs or certification but are specific disciplines with core competencies and measures of performance that are likely to be recognized in the future.

Telemedicine and Medical Care

Telemedicine is the delivery of health care services remotely by the use of various telecommunications modalities. The expansion of web-based services, use of videoconferencing in daily communication, and social media coupled with the demand for convenience by consumers of health care are all factors driving exponential growth in telehealth.2

According to one estimate, the global telehealth market is projected to increase at a compound annual rate of 30% between 2017 and 2022, achieving an estimated value of $12.1 billion.2 Some recent market surveys show that more than 70% of consumers would consider a virtual health care service.3 A preponderance of higher-income and privately insured consumers indicates a preference for telehealth, particularly when reassured of the quality of the care and the appropriate scope of the virtual visit.3 Telemedicine is being used to provide health care to some traditionally underserved and rural areas across the United States, and increased shortages of primary care and specialty physicians are anticipated in those areas.4
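The projection above implies a 2017 base of roughly $3.3 billion, a figure derived here from the two stated numbers rather than given in the source. A quick sketch of the compound-growth arithmetic:

```python
# Compound annual growth: value_2022 = value_2017 * (1 + rate) ** years
rate, years = 0.30, 5     # 30% per year over 2017-2022
value_2022 = 12.1         # projected market size, $ billions (per the estimate cited)

implied_2017 = value_2022 / (1 + rate) ** years
print(round(implied_2017, 2))  # -> 3.26 ($ billions, derived, not stated in the source)
```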

A New Specialty

Digital advances within health care and patients acting more like consumers have resulted in more physicians and other clinicians delivering virtual care in almost every medical discipline. Second-opinion services, emergency department express care, virtual intensive care units (ICUs), telestroke with mobile stroke units, telepsychiatry, and remote services for postacute care are some examples.

In the traditional physician office, answering services and web-based portals focused on telephone and email communication. The advent of telehealth has resulted in incremental growth of video face-to-face communication with patients by mobile phone, tablet, or other computer devices.2,3,5 In larger enterprises or commercial ventures, the scale is sufficient to “make or buy” centralized telehealth command centers to service functions across broad geographic areas, including international ones.

Early telehealth focused on minor ailments such as coughs, colds, and rashes, but now telehealth is being used in broader applications, such as communicating imaging and laboratory results, changing medication, and, most significantly, managing more complex chronic disease.

The coordination of virtual care with home visits, remote monitoring, and simultaneous family engagement is changing the perception and reality of virtual health care. Commercialization is well under way with numerous start-ups and more established companies. These services are provided by the companies alone or in collaboration with physician groups.

The Medical Virtualist

We propose the concept of a new specialty representing the medical virtualist. This term could be used to describe physicians who will spend the majority or all of their time caring for patients using a virtual medium. A professional consensus will be needed on a set of core competencies to be further developed over time.

Physicians now spend variable amounts of time delivering care through a virtual medium without formal training. Training should include techniques for achieving good webside manner.5 Some components of a physical examination can be conducted virtually via the patient or a caregiver. Some commercial insurance carriers and institutional groups have developed training courses.5 However, these courses are not associated with medical specialty board or society consensus or oversight, nor with any certification.

Contemporary care is multidisciplinary, including nurses, medical students, nurse practitioners, physician assistants, pharmacists, social workers, nutritionists, counselors, and educators. All require formal training in virtual encounters to ensure quality outcomes similar to those expected for in-person care.

It is possible that there could be a need for physicians across multiple disciplines to become full-time medical virtualists with subspecialty differentiation. Examples could be urgent care virtualists, intensive care virtualists, neurological virtualists, and psychiatric or behavioral virtualists. This shift would not preclude virtual visits from becoming a totally integrated component of all practices to varying extents.

Based on early experience in primary care, one estimate suggests that 30% to 50% of visits could be eligible for a virtual encounter.4 This proportion could be amplified when coupled with home care and remote monitoring devices. Data on the influence of telehealth on total health care services utilization vary, and that influence will become clearer with greater adoption. In addition, as the number of emergency department visits continues to increase nationally, health care systems must develop innovative ways to maximize efficiency and maintain high-quality standards.6

However, complete replacement of the traditional clinical encounter will not occur. “Bricks and clicks” will prevail for patients’ convenience and value. Physicians will lead teams with both in-office and remote monitoring resources at their disposal to deliver care. This model could be enhanced in the future with digital assistants or avatars.

In the surgical specialties, remote surgery has been more focused on telementoring and guiding surgeons in remote locations. There have been examples of true virtual surgeons who have operated robotically on patients hundreds of miles away.7 This approach can be expected to develop further in the coming years.

Critical Success Factors

The success of technology-based services is not determined by hardware and software alone but by ease of use, perceived value, and workflow optimization.

Medical virtualists will need specific core competencies and curricula, which are beginning to be developed at some institutions. In addition to the medical training for a specific discipline, the curriculum for certification should include knowledge of the legal and clinical limitations of virtual care, competencies in virtual examination conducted with the assistance of the patient or family members, “virtual visit presence training,” inclusion of on-site clinical measurements, as well as continuing education.

It will be necessary for early adopters, thought leaders, medical specialty societies, and medical trade associations to work with the certifying organizations to formalize curriculum, training, and certification for medical virtualists. If advances in technology continue and if rigorous evidence demonstrates that this technology improves care and outcomes and reduces cost, medical virtualists could be involved in a substantial proportion of health care delivery for the next generation.

Additional Contributions, references, etc., in the full article at the link above

Abstract: We present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time. For instance, if the language is Peano arithmetic, it assigns probabilities to all arithmetical statements, including claims about the twin prime conjecture, the outputs of long-running computations, and its own probabilities. We show that our algorithm, an instance of what we call a logical inductor, satisfies a number of intuitive desiderata, including: (1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference. For example, if a given computer program only ever produces outputs in a certain range, a logical inductor learns this fact in a timely manner; and if late digits in the decimal expansion of π are difficult to predict, then a logical inductor learns to assign ≈10% probability to “the nth digit of π is a 7” for large n. Logical inductors also learn to trust their future beliefs more than their current beliefs, and their beliefs are coherent in the limit (whenever φ → ψ, P∞(φ) ≤ P∞(ψ), and so on); and logical inductors strictly dominate the universal semimeasure in the limit.

These properties and many others all follow from a single logical induction criterion, which is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence φ is associated with a stock that is worth $1 per share if φ is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where Pn (φ) = 50% means that on day n, shares of φ may be bought or sold from the reasoner for 50¢. The logical induction criterion says (very roughly) that there should not be any polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time. This criterion bears strong resemblance to the “no Dutch book” criteria that support both expected utility theory (von Neumann and Morgenstern 1944) and Bayesian probability theory (Ramsey 1931; de Finetti 1937).
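The market interpretation can be made concrete with a toy sketch. This is my illustration of the abstract's $1-per-share setup, not the paper's algorithm; `trade_profit`, `buy_low`, and the price sequence are hypothetical:

```python
def trade_profit(prices, truth_value, strategy):
    """Settle a trading strategy against a sequence of daily prices.

    A share of a sentence phi costs the day's price P_n(phi) and is
    worth truth_value at settlement (1 if phi is true, 0 if false).
    strategy(day, price) returns the number of shares bought that day
    (negative = short sale).
    """
    return sum(strategy(day, p) * (truth_value - p)
               for day, p in enumerate(prices))

# A reasoner whose credences oscillate is exploitable either way:
prices = [0.2, 0.8, 0.2, 0.8]                   # hypothetical credence sequence
buy_low = lambda day, p: 1 if p < 0.5 else -1   # buy cheap, short dear

print(trade_profit(prices, 1.0, buy_low))  # profit if phi turns out true
print(trade_profit(prices, 0.0, buy_low))  # same positive profit if phi is false
```

The criterion (roughly) forbids any polynomial-time strategy with finite risk tolerance from earning unbounded profits this way, so a logical inductor's prices cannot remain exploitable like the oscillating sequence above.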

Summary: Do psychotherapies work primarily through the specific factors described in treatment manuals, or do they work through common factors? In attempting to unpack this ongoing debate between specific and common factors, we highlight limitations in the existing evidence base and the power battles and competing paradigms that influence the literature. The dichotomy is much less than it might first appear. Most specific factor theorists now concede that common factors have importance, whereas the common factor theorists produce increasingly tight definitions of bona fide therapy. Although specific factors might have been overplayed in psychotherapy research, some are effective for particular conditions. We argue that continuing to espouse common factors with little evidence or endless head-to-head comparative studies of different psychotherapies will not move the field forward. Rather than continuing the debate, research needs to encompass new psychotherapies such as e-therapies, transdiagnostic treatments, psychotherapy component studies, and findings from neurobiology to elucidate the effective process components of psychotherapy.

---
Additionally, the dissemination of findings leads to further bias. Negative trials are less likely to be reported, thereby inflating effect sizes. Low-quality studies often result in larger effect sizes. Trial registration is poor, so we cannot know whether outcomes are selectively reported, particularly by groups with a strong allegiance to the treatments. Findings from a 2017 systematic review showed that only 12% of psychotherapy trials were prospectively registered with clearly defined primary outcome measures.

One obvious approach to the dodo bird problem is to test whether different therapies do lead to different outcomes. Head-to-head comparisons generally suggest small differential effects, which are smaller and non-significant after researcher allegiance is controlled for. However, this literature has substantial limitations. Most studies have investigated cognitive therapy or CBT as one of the treatment groups, so specific strengths of other approaches are poorly understood. Only a narrow range of treatment outcome measures have been systematically examined, most typically acute symptom reduction; longer-term effects, including relapse prevention measures for common chronic conditions, might differentiate some therapies for some problems. Differences might be revealed if a wider range of treatment outcome measures were used, including functioning, quality of life, and individualised measures of treatment outcome. However, such trials are expensive and rarely undertaken. Differences might also be larger if moderating factors such as individual differences between patients were accounted for in outcome modelling.

Another way to test the specific factor model is through therapist adherence. Improved adherence to theory-specified factors in evidence-supported therapies should improve patient outcomes, if these specific factors are important to the success of the therapy. However, the evidence has not generally supported this hypothesis, with findings from a meta-analysis showing that neither variability in competence nor adherence was related to patient outcome, suggesting that these variables are relatively inert therapeutic agents. The broader literature is split on this question, with some investigators finding no effect of treatment integrity on outcomes, some a positive effect, and some a negative effect (potentially due to an overly rigid application of technique, which could be detrimental to the therapeutic alliance for some clients). Extent of training might also not be relevant to outcome, as suggested by the work of Stanley and colleagues. Indeed, therapeutic alliance, a common factor, might be a more important variable to instigate change than therapeutic adherence, although even these effect sizes are modest (mean alliance–outcome correlation 0·26).

Regardless, common factor researchers argue that outcome studies do not answer the most important outstanding question in psychotherapy—namely, what are the mechanisms of change? Although the importance of specific factors has been estimated from effect sizes of targeted therapies compared with plausible controls, the importance of common factors has been estimated correlationally through the association between therapy outcomes and patient reports of rapport and engagement. Although the effect sizes of targeted therapies compared with controls permit causal conclusions, correlation between therapy outcomes and patient engagement does not, and will be confounded by an overlap between the success of therapy and the client’s satisfaction with the therapist. Therapeutic alliance is fundamentally dyadic (ie, a reciprocal working relationship), which sits uncomfortably with the more medical notion of patient as recipient of the therapist’s activities.

Finally, psychotherapy research is difficult and expensive to conduct, and—without the commercial investment that occurs in pharmacotherapy research—deficits of the existing evidence base are attributable simply to the low power and small number of studies. For example, although the effectiveness of behavioural therapy for obsessive compulsive disorder is similar to that of pharmacological treatment, investigators of a meta-analysis of psychotherapy and pharmacotherapy for obsessive compulsive disorder found 15 psychotherapy trials with a total of 705 patients, by contrast with 32 pharmacotherapy trials with a total of 3588 patients.

Abstract: Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
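The abort mechanism described above can be sketched in a toy 1-D environment. This is my illustration, not the paper's method: the chain, `CLIFF`, and the hard-coded `reset_value` stand in for the value function that the reset policy would actually learn by RL:

```python
CLIFF = 5   # states beyond this point are irreversible (cannot be reset)
START = 0

def reset_value(state):
    """Stand-in for the learned reset policy's value function: the
    chance the reset policy can return `state` to START. In the paper
    this is learned; here it is hard-coded for illustration."""
    return 1.0 if state <= CLIFF else 0.0

def run_episode(actions, abort_threshold=0.5):
    """Execute the forward policy's actions, but trigger an
    uncertainty-aware safety abort whenever the proposed next state
    looks irreversible to the reset policy."""
    state, aborts = START, 0
    for a in actions:
        proposed = state + a                      # forward policy's move
        if reset_value(proposed) < abort_threshold:
            aborts += 1                           # abort: skip the unsafe action
            continue
        state = proposed
    return state, aborts

print(run_episode([1] * 7))  # -> (5, 2): halts at the cliff edge after 2 aborts
```

The key design point mirrors the abstract: the agent consults the reset policy's value estimate *before* acting, so non-reversible states are avoided rather than recovered from, and manual resets are never needed in this toy run.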

Abstract: A number of indecent photos depicting models whose genital areas have been censored or covered have been widely disseminated on the Internet and have proved to be extremely popular. The question is whether these “incomplete nudes” on the Internet can induce sexual cognition. To answer this question, this study presented 25 male college students with 4 types of images. Results showed that pictures of females induced larger positive potential (P2) amplitudes and shorter latencies than did pictures of males, and that pictures of nude females induced larger negative potential (N2) amplitudes than did pictures of nude males. Moreover, pictures of covered or nude females evoked larger P300 waves than did pictures of fully dressed or underwear-wearing females. Pictures of nude models also evoked larger positive slow waves (PSW) than did other types of pictures. These results suggest that P2 and N2 reflect early gender processing and early sexual cognition, respectively, while P300 reflects inferential sexual cognition, meaning that covered models were indeed perceived as nude models. This study revealed that censored (covered) sexual information disseminated through the Internet can still evoke inferential sexual cognition.

Abstract: Using research into learning from sequences of examples, we generate predictions about what cultural products become widely distributed in the social marketplace of ideas. We investigate what we term the Repetition-Break plot structure: the use of repetition among obviously similar items to establish a pattern, and then a final contrasting item that breaks with the pattern to generate surprise. Two corpus studies show that this structure arises in about a third of folktales and story jokes. An experiment shows that jokes with this structure are more interesting than those without the initial repetition. Thus, we document evidence for how a cognitive factor influences the cultural products that are selected in the marketplace of ideas.