Interest in many-core architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of many-core devices when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. As a benchmark we use a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction, the SVT algorithm, as a realistic test case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. We measure the performance of different architectures (Intel Xeon Phi and AMD GPUs, in addition to NVIDIA GPUs) and different software environments (OpenCL, in addition to NVIDIA CUDA). Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the many-core devices.

Experiments in Physical Chemistry, Second Edition provides a compilation of experiments concerning physical chemistry. This book illustrates the link between the theory and practice of physical chemistry. Organized into three parts, this edition begins with an overview of those experiments that generally have a simple theoretical background. Part II contains experiments that are associated with more advanced theory or more developed techniques, or which require a greater degree of experimental skill. Part III consists of experiments that are in the nature of investigations.

The current status of flavour physics and the prospects for present and future experiments will be reviewed. Measurements in B-physics, in which sensitive probes of new physics are the CKM angle γ, the Bs mixing phase ϕs, and the branching ratios of the rare decays B⁰(s)→μ⁺μ⁻, will be highlighted. Topics in charm and kaon physics, in which the measurements of A_CP and the branching ratios of the rare decays K→πνν̄ are key measurements, will be discussed. Finally, the complementarity of the future heavy-flavour experiments, the LHCb upgrade and Belle II, will be summarised. PMID:26877543

The Earth's inner core plays a vital role in the dynamics of our planet and is itself strongly exposed to dynamic processes as evidenced by a complex pattern of elastic structure. To gain deeper insight into the nature of these processes we rely on a characterization of the physical properties of the inner core, which are governed by the material physics of its main constituent, iron. Here we review recent research on structure and dynamics of the inner core, focusing on advances in mineral physics. We will discuss results on core composition, crystalline structure, temperature, and various aspects of elasticity. Based on recent computational results, we will show that aggregate seismic properties of the inner core can be explained by temperature and compression effects on the elasticity of pure iron, and use single crystal anisotropy to develop a speculative textural model of the inner core that can explain major aspects of inner core anisotropy.

This dissertation describes a series of laboratory experiments motivated by planetary cores and the dynamo effect, the mechanism by which the flow of an electrically conductive fluid can give rise to a spontaneous magnetic field. Our experimental apparatus, meant to be a laboratory model of Earth's core, contains liquid sodium between an inner, solid sphere and an outer, spherical shell. The fluid is driven by the differential rotation of these two boundaries, each of which is connected to a motor. Applying an axial, DC magnetic field, we use a collection of Hall probes to measure the magnetic induction that results from interactions between the applied field and the flowing, conductive fluid. We have observed and identified inertial modes, which are bulk oscillations of the fluid restored by the Coriolis force. Over-reflection at a shear layer is one mechanism capable of exciting such modes, and we have developed predictions of both onset boundaries and mode selection from over-reflection theory which are consistent with our observations. Also, motivated by previous experimental devices that used ferromagnetic boundaries to achieve dynamo action, we have studied the effects of a soft iron (ferromagnetic) inner sphere on our apparatus, again finding inertial waves. We also find that all behaviors are more broadband and generally more nonlinear in the presence of a ferromagnetic boundary. Our results with a soft iron inner sphere have implications for other hydromagnetic experiments with ferromagnetic boundaries, and are appropriate for comparison to numerical simulations as well. From our observations we conclude that inertial modes almost certainly occur in planetary cores and will occur in future rotating experiments. In fact, the predominance of inertial modes in our experiments and in other recent work leads to a new paradigm for rotating turbulence, starkly different from turbulence theories based on assumptions of isotropy and homogeneity.

A consensus has not been reached among strength and conditioning specialists regarding what physical fitness exercises are most effective to stimulate activity of the core muscles. Thus, the purpose of this article was to systematically review the literature on the electromyographic (EMG) activity of 3 core muscles (lumbar multifidus, transverse abdominis, quadratus lumborum) during physical fitness exercises in healthy adults. CINAHL, Cochrane Central Register of Controlled Trials, EMBASE, PubMed, SPORTdiscus, and Web of Science databases were searched for relevant articles using a search strategy designed by the investigators. Seventeen studies enrolling 252 participants met the review's inclusion/exclusion criteria. Physical fitness exercises were partitioned into 5 major types: traditional core, core stability, ball/device, free weight, and noncore free weight. Strength of evidence was assessed and summarized for comparisons among exercise types. The major findings of this review with moderate levels of evidence indicate that lumbar multifidus EMG activity is greater during free weight exercises compared with ball/device exercises and is similar during core stability and ball/device exercises. Transverse abdominis EMG activity is similar during core stability and ball/device exercises. No studies were uncovered for quadratus lumborum EMG activity during physical fitness exercises. The available evidence suggests that strength and conditioning specialists should focus on implementing multijoint free weight exercises, rather than core-specific exercises, to adequately train the core muscles in their athletes and clients.

We present high-resolution observations of the starless dense molecular core L1512 performed with the Medicina 32 m radio telescope. The resolved hyperfine structure (hfs) components of HC3N and NH3 show no kinematic sub-structure and consist of an apparently symmetric peak profile without broadened line wings or self-absorption features, suggesting that they sample the same material. The velocity dispersion is 101(±1) m s⁻¹ for NH3 and 85(±2) m s⁻¹ for HC3N. The kinetic temperature of the cloud is estimated at 9.2(±1.2) K and the turbulence is 76 m s⁻¹, in a subsonic regime. This places L1512 among the most quiescent dark cores and makes it an ideal laboratory to study variations of the electron-to-proton mass ratio, μ = mₑ/mₚ, by means of observations of inversion lines of NH3 combined with rotational lines of other molecular species.
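As a rough cross-check of the subsonic claim above, the isothermal sound speed at the quoted kinetic temperature can be compared with the turbulent velocity dispersion. A minimal sketch, assuming a mean molecular weight of 2.33 (the conventional value for molecular gas with helium, not stated in the abstract):

```python
import math

# Physical constants (SI units)
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735575e-27  # mass of a hydrogen atom, kg

def isothermal_sound_speed(t_kin, mu_mean=2.33):
    """Isothermal sound speed [m/s] for gas at kinetic temperature t_kin [K].
    mu_mean = 2.33 (mean molecular weight) is an assumption, not a value
    given in the abstract."""
    return math.sqrt(K_B * t_kin / (mu_mean * M_H))

t_kin = 9.2        # K, from the abstract
sigma_turb = 76.0  # m/s, turbulent velocity dispersion from the abstract

c_s = isothermal_sound_speed(t_kin)
mach = sigma_turb / c_s
print(f"sound speed ~ {c_s:.0f} m/s, turbulent Mach number ~ {mach:.2f}")
```

With these numbers the sound speed comes out near 180 m/s, so a 76 m s⁻¹ turbulent dispersion indeed corresponds to a Mach number well below unity, consistent with the quoted subsonic regime.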

The Sustained Spheromak Physics Experiment is proposed for experimental studies of spheromak confinement issues in a controlled way: in steady state relative to the confinement timescale and at low collisionality. Experiments in a flux conserver will provide data on transport in the presence of resistive modes in shear-stabilized systems and establish operating regimes which pave the way for true steady-state experiments with the equilibrium field supplied by external coils. The proposal is based on analysis of past experiments, including the achievement of T_e = 400 eV in a decaying spheromak in CTX. Electrostatic helicity injection from a coaxial "gun" into a shaped flux conserver will form and sustain the plasma for several milliseconds. The flux conserver minimizes flux-line intersection with the walls and provides MHD stability. Improvements from previous experiments include modern wall conditioning (especially boronization), a divertor for density and impurity control, and a bias magnetic flux for configurational flexibility. The bias flux will provide innovative experimental opportunities, including testing helicity drive on the large-radius plasma boundary. Diagnostics include Thomson scattering for T_e measurements and ultra-short-pulse reflectometry to measure density and magnetic field profiles and turbulence. We expect to operate at T_e of several hundred eV, allowing improved understanding of energy and current transport due to resistive MHD turbulence during sustained operation. This will provide an exciting advance in spheromak physics and a firm basis for future experiments in the fusion regime.

I have developed an equation of state (EOS) for hot, dense matter that is intended specifically for use in radiation hydrodynamic simulations of supernovae, proto-neutron star cooling, and neutron stars. This EOS makes use of an adjustable nucleon-nucleon interaction that allows for the input of various nuclear force parameters that are not well determined by laboratory measurements. Properties of the EOS as a function of these input parameters were studied and comparisons were made to another EOS that is currently used in stellar collapse simulations. Using this EOS I have conducted simulations of core collapse supernovae with several ideas in mind. First, I have attempted to delineate the role of the incompressibility of dense matter in supernovae. I have conducted a parameter study in which the compression modulus of bulk nuclear matter was varied and have found some new and surprising results. When the EOS is constrained by the observed mass of 1.44 solar masses for one of the components of the binary pulsar system PSR 1913+16, the 'stiffness' of the EOS no longer plays a role in the shock dynamics of the supernova. Secondly, I varied the symmetry energy coefficients in the EOS to determine the role of these coefficients in supernovae. I have found that the symmetry energy behavior of the EOS has potentially observable effects and may play an important role in determining the efficacy of the late-time heating mechanism for the explosion and the stability of the post-bounce core against convection. Finally, I have developed an implicit, general relativistic, radiation hydrodynamics algorithm for the numerical simulation of supernovae. By allowing simulation timesteps to exceed the Courant timescale, this algorithm makes practical high-resolution simulations of supernovae to late times. I discuss this algorithm and the associated computer code along with code verification tests and an example of a late-time calculation.

Core concepts can be integrated throughout lower-division science and engineering courses by using a series of related, cross-referenced laboratory experiments. Starting with butane combustion in chemistry, the authors expanded the underlying core concepts of energy transfer into laboratories designed for biology, physics, and engineering.

We aim to determine the physical and chemical properties of dense cores in Orion B9. We observed the NH3(1,1) and (2,2) and the N2H+(3-2) lines towards the submm peak positions. These data are used in conjunction with our LABOCA 870 micron dust continuum data. The gas kinetic temperature in the cores is between ~9.4 and 13.9 K. The non-thermal velocity dispersion is subsonic in most of the cores. The non-thermal linewidth in protostellar cores appears to increase with increasing bolometric luminosity. The core masses are very likely drawn from the same parent distribution as the core masses in Orion B North. Starless cores in the region are likely to be gravitationally bound, and thus prestellar. Some of the cores have a lower radial velocity than the systemic velocity of the region, suggesting that they are members of the "low-velocity part" of Orion B. The observed core-separation distances deviate from the corresponding random-like model distributions. The distances between nearest neighbours are comparable…
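The non-thermal dispersion quoted above is conventionally obtained by subtracting, in quadrature, the thermal broadening of the tracer molecule from the observed velocity dispersion. A minimal sketch; the observed linewidth is invented for illustration, and the temperature is merely chosen inside the quoted ~9.4-13.9 K range, so neither is a value from the study:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66053907e-27 # atomic mass constant, kg

def sigma_nonthermal(sigma_obs, t_kin, mol_mass_amu):
    """Remove (in quadrature) the thermal broadening of a tracer molecule
    of mass mol_mass_amu [amu] from the observed dispersion sigma_obs [m/s]
    at kinetic temperature t_kin [K]."""
    sigma_th_sq = K_B * t_kin / (mol_mass_amu * AMU)
    return math.sqrt(max(sigma_obs**2 - sigma_th_sq, 0.0))

# Illustrative only: an NH3 (17 amu) line with a 130 m/s observed
# dispersion in an 11 K core.
s_nt = sigma_nonthermal(130.0, 11.0, 17.0)
print(f"non-thermal dispersion ~ {s_nt:.0f} m/s")
```

For these illustrative inputs the thermal part (~73 m/s for NH3 at 11 K) is removed, leaving a non-thermal dispersion of roughly 107 m/s, which would then be compared against the local sound speed to decide whether the core is subsonic.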

Highly efficient, compact nuclear reactors would provide high-specific-impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor which uses magnetohydrodynamic effects to generate electric power to be used for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code; however, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model, and improvement of the first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report, "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.

Radiative shocks play a dominant role in star formation. The accretion shocks on the first and second Larson's cores involve radiative processes and are thus characteristic of radiative shocks. In this study, we explore the formation of the first Larson's core and characterize the radiative and dynamical properties of the accretion shock, using both analytical and numerical approaches. We develop both numerical RHD calculations and a semi-analytical model that characterizes radiative shocks in various physical conditions, for radiating or barotropic fluids. We then perform 1D spherical collapse calculations of the first Larson's core, using a grey approximation for the opacity of the material. We consider three different models for radiative transfer, namely the barotropic approximation, the FLD (flux-limited diffusion) approximation, and the more complete M1 model. We investigate the characteristic properties of the collapse and of the first core formation. Comparison between the numerical results and our semi-analytical model shows…

The purpose of this guide is to provide a consistent, standardized approach to performing neutronics/physics analysis for experiments inserted into the Advanced Test Reactor (ATR). This document provides neutronics/physics analysis guidance to support experiment design and analysis needs for experiments irradiated in the ATR. This guide addresses neutronics/physics analysis in support of experiment design, experiment safety, and experiment program objectives and goals. The intent of this guide is to provide a standardized approach for performing typical neutronics/physics analyses. Deviation from this guide is allowed provided that neutronics/physics analysis details are properly documented in an analysis report.

Although no samples yet have been returned from a comet, extensive experience from sampling another solar system body, the Moon, does exist. While, in overall structure, composition, and physical properties the Moon bears little resemblance to what is expected for a comet, sampling the Moon has provided some basic lessons in how to do things which may be equally applicable to cometary samples. In particular, an extensive series of core samples has been taken on the Moon, and coring is the best way to sample a comet in three dimensions. Data from cores taken at 24 Apollo collection stations and 3 Luna sites have been used to provide insight into the evolution of the lunar regolith. It is now well understood that this regolith is very complex and reflects gardening (stirring of grains by micrometeorites), erosion (from impacts and solar wind sputtering), maturation (exposure on the bare lunar surface to solar wind ions and micrometeorite impacts), comminution of coarse grains into finer grains, blanket deposition of coarse-grained layers, and other processes. All of these processes have been documented in cores. While a cometary regolith should not be expected to parallel in detail the lunar regolith, it is possible that the upper part of a cometary regolith may include textural, mineralogical, and chemical features which reflect the original accretion of the comet, including a form of gardening. Differences in relative velocities and gravitational attraction no doubt made this accretionary gardening qualitatively much different than the lunar version. Furthermore, at least some comets, depending on their orbits, have been subjected to impacts of the uppermost surface by small projectiles at some time in their history. Consequently, a more recent post-accretional gardening may have occurred. Finally, for comets which approach the sun, large-scale erosion may have occurred, driven by gas loss. The uppermost material of these comets may reflect some of the features…

The objective of this study was to identify core journals in physical therapy by identifying those that publish the most randomized controlled trials of physical therapy interventions, provide the highest-quality reports of randomized controlled trials, and have the highest journal impact factors. This study was an audit of a bibliographic database. All trials indexed in the Physiotherapy Evidence Database (PEDro) were analyzed. Journals that had published at least 80 trials were selected. The journals were ranked in 4 ways: number of trials published; mean total PEDro score of the trials published in the journal, regardless of publication year; mean total PEDro score of the trials published in the journal from 2000 to 2009; and 2008 journal impact factor. The top 5 core journals in physical therapy, ranked by the total number of trials published, were Archives of Physical Medicine and Rehabilitation, Clinical Rehabilitation, Spine, British Medical Journal (BMJ), and Chest. When the mean total PEDro score was used as the ranking criterion, the top 5 journals were Journal of Physiotherapy, Journal of the American Medical Association (JAMA), Stroke, Spine, and Clinical Rehabilitation. When the mean total PEDro score of the trials published from 2000 to 2009 was used as the ranking criterion, the top 5 journals were Journal of Physiotherapy, JAMA, Lancet, BMJ, and Pain. The most highly ranked physical therapy-specific journals were Physical Therapy (ranked eighth on the basis of the number of trials published) and Journal of Physiotherapy (ranked first on the basis of the quality of trials). Finally, when the 2008 impact factor was used for ranking, the top 5 journals were JAMA, Lancet, BMJ, American Journal of Respiratory and Critical Care Medicine, and Thorax. There were no significant relationships among the rankings on the basis of trial quality, number of trials, or journal impact factor. Physical therapists who are trying to keep up to date by reading the best…
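The ranking procedure described above (group trials by journal, average their PEDro scores, optionally restrict the publication window) can be sketched as follows. Journal names and scores here are invented for illustration and are not data from the review:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records: (journal, pedro_score, year).
trials = [
    ("Journal A", 8, 2004), ("Journal A", 7, 2008),
    ("Journal B", 6, 1998), ("Journal B", 9, 2005),
    ("Journal C", 5, 2001),
]

def rank_journals(records, since=None):
    """Rank journals by mean PEDro score, optionally restricted to trials
    published in or after `since` (mirroring the review's 2000-2009 cut)."""
    by_journal = defaultdict(list)
    for journal, score, year in records:
        if since is None or year >= since:
            by_journal[journal].append(score)
    return sorted(((mean(s), j) for j, s in by_journal.items()), reverse=True)

print(rank_journals(trials))              # ranking over all years
print(rank_journals(trials, since=2000))  # ranking over 2000 onward
```

Restricting the window changes the ordering when a journal's older trials scored differently from its recent ones, which is exactly why the review reports both the all-years and the 2000-2009 rankings separately.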

Contains more than 1,800 experiments in elementary particle physics from the Experience database. Search and browse by author; title; experiment number or prefix; institution; date approved, started or completed; accelerator or detector; polarization, reaction, final state or particle; or by papers produced. Maintained at SLAC for the Particle Data Group. Supplies the information for Current Experiments in Particle Physics (LBL-91). Print version updated every second year.

The six experiments included in this monograph are titled Blackbody Radiation, Collision of Electrons with Atoms, The Photoelectric Effect, Magnetic Properties of Atoms, The Scattering of X-Rays, and Diffraction of Electrons by a Crystal Lattice. The discussion provides historical background by giving descriptions of the original experiments and…

Short-lived isotope systematics, mantle siderophile abundances and the power requirements of the geodynamo favour an early and high-temperature core-formation process, in which metals concentrate and partially equilibrate with silicates in a deep magma ocean before descending to the core. We report results of laboratory experiments on liquid metal dynamics in a two-layer stratified viscous fluid, using sucrose solutions to represent the magma ocean and the crystalline, more primitive mantle and liquid gallium to represent the core-forming metals. Single gallium drop experiments and experiments on Rayleigh-Taylor instabilities with gallium layers and gallium mixtures produce metal diapirs that entrain the less viscous upper layer fluid and produce trailing plume conduits in the high-viscosity lower layer. Calculations indicate that viscous dissipation in metal-silicate plumes in the early Earth would result in a large initial core superheat. Our experiments suggest that metal-silicate mantle plumes facilitate high-pressure metal-silicate interaction and may later evolve into buoyant thermal plumes, connecting core formation to ancient hotspot activity on the Earth and possibly on other terrestrial planets.

This report contains summaries of 720 recent and current experiments in elementary particle physics (experiments that finished taking data before 1980 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Moscow Institute of Theoretical and Experimental Physics, Tokyo Institute of Nuclear Studies, KEK, LAMPF, Leningrad Nuclear Physics Institute, Saclay, Serpukhov, SIN, SLAC, and TRIUMF, and also experiments on proton decay. Instructions are given for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.

An unlined disposal pond in the 300 Area of the Hanford Site received uranium-bearing liquid effluents associated with nuclear reactor fuel rod processing from 1943 to 1975. Contaminated sediments from the base and sides of the former pond were excavated and removed from the site in the early 1990s, but a uranium plume has persisted in the groundwater at concentrations exceeding the drinking water standard. The former process pond is located adjacent to the Columbia River and seasonal fluctuations in the river stage and water table provide a mechanism for resupplying residual uranium from the vadose zone to the groundwater when the lower vadose zone is periodically rewetted. Intact cores were collected from the site for measurements of physical, hydraulic, and geochemical properties. Multistep outflow experiments were also performed on the intact cores to determine permeability-saturation-capillary pressure relations. Pore water displaced during these experiments for two of the vadose zone cores was also analyzed for uranium. For a core containing finer-textured sediment classified as muddy sandy gravel, and a core containing coarser-textured sediment classified as gravel, the relative aqueous uranium concentrations increased by factors of 8.3 and 1.5, respectively, as the cores were desaturated and progressively smaller pore-size classes were drained. Aqueous concentrations of uranium in the extracted pore waters were up to 115 times higher than the current drinking water standard of 30 ppb. These results confirm that there is a continuing source of uranium in the vadose zone at the site, and are consistent with a hypothesis that the persistence of the groundwater uranium plume is also associated, in part, with rate-limited mass transfer from finer-textured sediments. The data from these and several other intact cores from the site are evaluated to explore relationships between physical and hydraulic properties and uranium desorption characteristics.
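A quick arithmetic check of the figures quoted above: if pore-water uranium reached up to 115 times the 30 ppb drinking water standard, the implied peak concentration is about 3450 ppb. A minimal sketch using only numbers from the abstract:

```python
# Values quoted in the abstract; the peak pore-water concentration is
# derived here from the stated exceedance factor, not measured directly.
DRINKING_WATER_STANDARD_PPB = 30.0
max_exceedance_factor = 115.0  # "up to 115 times higher"

max_pore_water_ppb = max_exceedance_factor * DRINKING_WATER_STANDARD_PPB
print(f"implied peak pore-water uranium: {max_pore_water_ppb:.0f} ppb")

# Relative increase in aqueous uranium upon desaturation for the two cores:
increase_factors = {"muddy sandy gravel": 8.3, "gravel": 1.5}
```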

The purpose of this study is to evaluate the performance of a passive core cooling system (PCCS) with passive injection during cold-leg small-break loss-of-coolant accident (SBLOCA) experiments conducted at the Institute of Nuclear Energy Research (INER) Integral System Test (IIST) facility. Four tests were performed, simulating break sizes of 0.2-2% (approximately corresponding to 1.25-4 inch breaks for a reference nuclear power plant) at the cold leg, to assess the PCCS capability in accident management. The key thermal-hydraulic phenomena governing core heat removal by the PCCS are observed and discussed. The experimental results show that the PCCS successfully provided continuous removal of core heat and that long-term core cooling was achieved for all SBLOCA cases.

The Telescope Array Low Energy Extension (TALE) experiment consists of three detectors which will extend the energy sensitivity of the Telescope Array (TA) experiment by two orders of magnitude, from 10^18.5 eV down to 10^16.5 eV; it will also enlarge the aperture of the TA experiment at all energies, and double it at the highest energies. The aim of the experiment is to study the second knee, the ankle, and the galactic/extragalactic transition. The three detectors start with a set of fluorescence detectors deployed in such a way that they are paired with TA fluorescence detectors at a separation of 6 km. These stereo pairs are designed to study the ankle of the cosmic-ray spectrum in an optimal way. The second of the three is a "tower" detector, a fluorescence detector designed to have increased coverage in elevation angle, up to 71 degrees. This detector is designed to study the second knee of the spectrum. The third detector is an infill array to be added to TA within the aperture of the tower detector. This will make possible hybrid observation with the tower detector, and provide greatly improved reconstruction of lower-energy events in purely surface detector mode.

A new physical chemistry laboratory experience has been designed for upper-level undergraduate chemistry majors. Students customize the first 10 weeks of their laboratory experience by choosing their own set of experiments (from a manual of choices) and setting their own laboratory schedule. There are several topics presented in the accompanying…

Research in experimental nuclear physics was done from 1979 to 2002, primarily at intermediate-energy facilities that provide pion, proton, and kaon beams. Particularly successful has been the work at the Los Alamos Meson Physics Facility (LAMPF) on unraveling the neutron and proton contributions to nuclear ground-state and transition densities. This work was done on a wide variety of nuclei and in great detail on the carbon, oxygen, and helium isotopes. Some of the investigations involved the use of polarized targets, which allowed the extraction of information on the spin-dependent part of the Δ-nucleon interaction. At the Indiana University Cyclotron Facility (IUCF) we studied proton-induced charge exchange reactions with results of importance to astrophysics and the nuclear few-body problem. During the first few years, the analysis of heavy-ion nucleus scattering data that had been taken prior to 1979 was completed. During the last few years we created hypernuclei by use of a kaon beam at Brookhaven National Laboratory (BNL) and an electron beam at Jefferson Laboratory (JLab). The data taken at BNL for a study of the non-mesonic weak decay of the Λ particle in a nucleus are still under analysis by our collaborators. The work at JLab resulted in the best-resolution hypernuclear spectra measured thus far with magnetic spectrometers.

Physics is very much an experimental science, but too often, students at the undergraduate level are not exposed to the reality of experimental physics ― i.e., what was done in a given experiment, why it was done, the background of physics against which the experiment was carried out, and the changes in theory and knowledge that resulted. In this book, the author helps to remedy the situation by presenting a variety of "landmark" experiments that have brought about significant alterations in our ideas about some aspect of nature. Among these scientific milestones are discoveries about the…

We explore with self-consistent 2D Fornax simulations the dependence of the outcome of collapse on many-body corrections to neutrino-nucleon cross sections, pre-collapse seed perturbations, and inelastic neutrino-electron and neutrino-nucleon scattering. We show here for the first time that modest many-body corrections to neutrino-nucleon scattering, well motivated by physics, make explosions easier in models of core-collapse supernovae. In this sense, realistic many-body corrections could be important missing pieces of physics needed to ensure robust supernova explosions. In addition, we find that imposed seed perturbations, while not necessarily determinative of explosion, can facilitate it and shorten its post-bounce emergence time. We now find that all our multi-D models with realistic physics explode by the neutrino heating mechanism. Proximity to criticality amplifies the role of even small changes in the neutrino-matter couplings, and such changes can together add to produce dramatic effects.

The benchmark evaluation of the start-up core reactor physics measurements performed with Japan's High Temperature Engineering Test Reactor, in support of the Next Generation Nuclear Plant Project and Very High Temperature Reactor Program activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include an updated evaluation of the initial six critical core configurations (five annular and one fully loaded). The calculated keff eigenvalues agree within 1σ of the benchmark values. Reactor physics measurements that were evaluated include reactivity effects measurements such as excess reactivity during the core loading process and shutdown margins for the fully loaded core, four isothermal temperature reactivity coefficient measurements for the fully loaded core, and axial reaction rate measurements in the instrumentation columns of three core configurations. The calculated values agree well with the benchmark experiment measurements. Fully subcritical and warm critical configurations of the fully loaded core were also assessed; the calculated keff eigenvalues for these two configurations also agree within 1σ of the benchmark values. The reactor physics measurement data can be used in the validation and design development of future High Temperature Gas-cooled Reactor systems.

This poster summarizes forward physics at the ATLAS experiment. It focuses on the AFP project, which proposes to install forward detectors at 220 m (AFP220) and 420 m (AFP420) around ATLAS for measurements at high luminosity.

This report contains summaries of 584 current and recent experiments in elementary particle physics. Experiments that finished taking data before 1986 are excluded. Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, KEK, LAMPF, Novosibirsk, Paul Scherrer Institut (PSI), Saclay, Serpukhov, SLAC, SSCL, and TRIUMF, and also several underground and underwater experiments. Instructions are given for remote searching of the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.

This report contains summaries of 736 current and recent experiments in elementary particle physics (experiments that finished taking data before 1982 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, Joint Institute for Nuclear Research (Dubna), KEK, LAMPF, Novosibirsk, PSI/SIN, Saclay, Serpukhov, SLAC, and TRIUMF, and also several underground experiments. Also given are instructions for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.

With the ability to run above 4~GeV, the BESIII experiment at the Beijing Electron Positron Collider (BEPCII) has become a pioneer in searching for and studying charmoniumlike states ($XYZ$ particles). In 2013, the BESIII Collaboration discovered a charged charmoniumlike state, the $Z_c(3900)$, which was immediately confirmed by other experiments and provides the best candidate for a four-quark state to date. Continued studies by the BESIII Collaboration have revealed new decay modes of the $Z_c(3900)$ and point to a possible partner state, the $Z_c(4020)/Z_c(4025)$. By scanning energies above 4~GeV, BESIII has also revealed a potential connection between the $Y(4260)$ and the $X(3872)$ for the first time, which may help us understand the $XYZ$ particles from a new perspective.

In this paper, we present a new experimental facility, Little Earth Experiment, designed to study the hydrodynamics of liquid planetary cores. The main novelty of this apparatus is that a transparent electrically conducting electrolyte is subject to extremely high magnetic fields (up to 10 T) to produce electromagnetic effects comparable to those produced by moderate magnetic fields in planetary cores. This technique makes it possible to visualise for the first time the coupling between the principal forces in a convection-driven dynamo by means of Particle Image Velocimetry (PIV) in a geometry relevant to planets. We first present the technology that enables us to generate these forces and implement PIV in a high magnetic field environment. We then show that the magnetic field drastically changes the structure of convective plumes in a configuration relevant to the tangent cylinder region of the Earth's core.

The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installations. To date, wireless nodes proposed for sensing and actuation in cyberphysical systems have been designed using microcontrollers with one computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of code execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications such as feedback controls demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independently of one another. In this study, a new wireless platform named Martlet is introduced with a dual-core microcontroller adopted in its design. The dual-core microcontroller design allows Martlet to dedicate one core to standard wireless sensor operations while the other core is reserved for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be interfaced to the Martlet baseboard. This extensibility opens the opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments showing the Martlet's ability to monitor and control physical systems such as wind turbines and buildings.

Presents an experiment designed to give students some experience with photochemistry, electrochemistry, and basic theories about semiconductors. Uses a liquid-junction solar cell and illustrates some fundamental physical and chemical principles related to light and electricity interconversion as well as the properties of semiconductors. (JRH)

This is the fourth edition of our compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about April 1981, and (2) had not finished taking data by 1 January 1977. We emphasize that only approved experiments are included.

This report contains summaries of current and recent experiments in Particle Physics. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, Frascati, ITEP (Moscow), JINR (Dubna), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several proton decay and solar neutrino experiments. Excluded are experiments that finished taking data before 1991. Instructions are given for the World Wide Web (WWW) searching of the computer database (maintained under the SLAC-SPIRES system) that contains the summaries. This report contains full summaries of 180 approved current and recent experiments in elementary particle physics. The focus of the report is on selected experiments which directly contribute to our better understanding of elementary particles and their properties such as masses, widths or lifetimes, and branching fractions.

In 2004, the American Society for Therapeutic Radiology and Oncology (ASTRO) published a curriculum for physics education. The document described a 54-hour course. In 2006, the committee reconvened to update the curriculum. The committee is composed of physicists and physicians from various residency program teaching institutions. Simultaneously, members have associations with the American Association of Physicists in Medicine, ASTRO, Association of Residents in Radiation Oncology, American Board of Radiology, and American College of Radiology. Representatives from the latter two organizations are key to provide feedback between the examining organizations and ASTRO. Subjects are based on Accreditation Council for Graduate Medical Education requirements (particles and hyperthermia), whereas the majority of subjects and appropriated hours/subject were developed by consensus. The new curriculum is 55 hours, containing new subjects, redistribution of subjects with updates, and reorganization of core topics. For each subject, learning objectives are provided, and for each lecture hour, a detailed outline of material to be covered is provided. Some changes include a decrease in basic radiologic physics, addition of informatics as a subject, increase in intensity-modulated radiotherapy, and migration of some brachytherapy hours to radiopharmaceuticals. The new curriculum was approved by the ASTRO board in late 2006. It is hoped that physicists will adopt the curriculum for structuring their didactic teaching program, and simultaneously, the American Board of Radiology, for its written examination. The American College of Radiology uses the ASTRO curriculum for their training examination topics. In addition to the curriculum, the committee added suggested references, a glossary, and a condensed version of lectures for a Postgraduate Year 2 resident physics orientation. To ensure continued commitment to a current and relevant curriculum, subject matter will be updated

In 2012, the Wright State University physics curriculum introduced a new year-long seminar course required for all new physics majors. The goal of this course is to improve student retention and success by building a community of physics majors and providing them with the skills, mindset, and advising necessary to successfully complete a degree and transition to the next part of their careers. This new course sequence assembles a new cohort of majors annually. To prepare each cohort, students engage in a variety of activities that span from student success skills to more specific physics content while building an entrepreneurial mindset. Students participate in activities including study skills, career night, course planning, campus services, and a department social function. More importantly, students gain exposure to programming, literature searches, data analysis, technical writing, elevator pitches, and experimental design via hands-on projects. This includes the students proposing, designing, and conducting their own experiments. Preliminary evidence indicates increased retention, student success, and an enhanced sense of community among physics undergraduate students. The overall number of majors and students eventually completing their physics degrees has nearly tripled.

This report contains summaries of current and recent experiments in Particle Physics. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, Frascati, ITEP (Moscow), JINR (Dubna), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several proton decay and solar neutrino experiments. Excluded are experiments that finished taking data before 1991. Instructions are given for the World Wide Web (WWW) searching of the computer database (maintained under the SLAC-SPIRES system) that contains the summaries.

Interest in parallel architectures applied to real time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of Graphic Processing Units (GPUs) and Intel Many Integrated Core architecture (MIC) when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. We use as benchmark a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction – the SVT algorithm – as a realistic test-case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the parallel devices.

Seventy whole rounds from conventional cores obtained during drilling to 300 mbsf at Atwater Valley and Keathley Canyon in the Gulf of Mexico in April and May 2005 were tested to determine geophysical and geomechanical parameters (liquid and plastic limit, porosity, specific surface, pH, sediment electrical conductivity, P- and S-wave velocities and undrained shear strength). Available data from a pressure core are included as well. Results show that the sediments are high specific surface plastic clays, and exhibit pronounced time-dependent stiffness recovery. Strains during coring disturb specimens, yet the water content retains the effective stress history and permits gaining stiffness and strength information from conventional cores. Remolding is exacerbated when gas expands upon decompression; the limited pressure core data available show the advantages of preserving the pore fluid pressure during core recovery and testing. Valuable parameters for sediment characterization and engineering analysis are extracted from the data using pre-existing soil models. (author)

The paper describes a set of physics demonstration experiments where thermal sensitive foils are used for the detection of the two dimensional distribution of temperature. The method is used for the demonstration of thermal conductivity, temperature change in adiabatic processes, distribution of electromagnetic radiation in a microwave oven and…

Digital Rock Physics (DRP) is a novel technology that could be used to generate accurate, fast and cost-effective special core analysis (SCAL) properties to support reservoir characterization and simulation tools. For this work, micro-CT images at different resolutions have been used to run simulations to determine elastic properties like the bulk, shear, and Young's moduli and Poisson's ratio of a dry carbonate core plug from Abu Dhabi reservoirs. Pre-processing and segmentation of the raw images is performed in Avizo, the FEI 3D visualization and analysis tool. Carbonates are characterized by a very complex pore-space structure and hence a high degree of heterogeneity. Abaqus, which is based on the Finite Element Method, is used to run 2D and 3D elastic simulations. Results will be compared by simulating the same core plug in an alternative segmentation and FEM modeling environment used previously by Jouini & Vega et al. 2012 [1]. Acoustic wave propagation experiments at different confining pressures are performed in a laboratory triaxial machine to determine the dynamic Young's modulus and Poisson's ratio for the same core plug. Experimental results are compared with numerical results. [1] Jouini, M.S. and Vega, S. 2012. Simulation of carbonate rocks elastic properties using 3D X-Ray computed tomography images based on Discrete Element Method and Finite Element Method. 46th US Rock Mechanics / Geomechanics Symposium, Chicago, IL, USA, 24-27 June 2012.
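
For an isotropic material, the moduli listed above are related by the standard elasticity identities, so any two of them determine the rest. A minimal sketch of the conversion, with hypothetical modulus values not taken from the study:

```python
def young_poisson(K, G):
    """Isotropic elasticity: Young's modulus E and Poisson's ratio nu
    from bulk modulus K and shear modulus G (same units, e.g. GPa)."""
    E = 9.0 * K * G / (3.0 * K + G)
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))
    return E, nu

# Illustrative inputs (not the measured values for the Abu Dhabi plug):
E, nu = young_poisson(K=65.0, G=30.0)
print(round(E, 1), round(nu, 3))  # → 78.0 0.3
```

The same identities are applied in reverse when the dynamic moduli are derived from measured P- and S-wave velocities for comparison with the simulated static values.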

Introduction: Compressive and tensile stresses of core materials are important properties because cores usually replace a large bulk of tooth structure and must resist multidirectional masticatory forces for many years. Material and Methods: The present study was undertaken to find the best core build-up material among resin-based composites with respect to physical properties. Individual compressive, tensile, and flexural strength of fiber-reinforced dual cure resin core build...

One of the difficulties in modern physics teaching is the limited availability of experimental activities. This is particularly true for teaching nuclear physics in high school or college. The activities suggested in the literature generally symbolise real phenomena using simulations. This is because the experimental practices mostly include some kind of expensive radiation detector and an ionising radiation source that requires special care for handling and storage, being subject to a highly bureaucratic regulation in some countries. This study overcomes these difficulties and proposes three nuclear physics experiments using a low-cost ion chamber whose construction is explained: the measurement of 222Rn progeny collected from the indoor air; the measurement of the range of alpha particles emitted by the 232Th progeny, present in lantern mantles and in thoriated welding rods, and by the air filter containing 222Rn progeny; and the measurement of the 220Rn half-life collected from the emanation of the lantern mantles. This paper presents the experimental procedures and the expected results, indicating that the experiments may provide support for nuclear physics classes. These practices may broaden access for both college and high-school didactic laboratories, and the apparatus has the potential for the development of new teaching activities for nuclear physics.

To maintain the economic viability of nuclear power the industry has begun to emphasize maximizing the efficiency and output of existing nuclear power plants by using longer fuel cycles, stretch power uprates, shorter outage lengths, mixed-oxide (MOX) fuel and more aggressive operating strategies. In order to accommodate these changes, while still satisfying the peaking factor and power envelope requirements necessary to maintain safe operation, more complex commercial core designs have been implemented, such as an increase in the number of sub-batches and an increase in the use of both discrete and integral burnable poisons. A consequence of the increased complexity of core designs, as well as the use of MOX fuel, is an increase in the neutronic heterogeneity of the core. Such heterogeneous cores introduce challenges for the current methods that are used for reactor analysis. New methods must be developed to address these deficiencies while still maintaining the computational efficiency of existing reactor analysis methods. In this thesis, advanced core design methodologies are developed to be able to adequately analyze the highly heterogeneous core designs which are currently in use in commercial power reactors. These methodological improvements are being pursued with the goal of not sacrificing the computational efficiency which core designers require. More specifically, the PSU nodal code NEM is being updated to include an SP3 solution option, an advanced transverse leakage option, and a semi-analytical NEM solution option.

The Planck experiment will soon provide a very accurate measurement of cosmic microwave background anisotropies. This will let cosmologists determine most of the cosmological parameters with unprecedented accuracy. Future experiments will improve and complement the Planck data with better angular resolution and better polarization sensitivity. This unexplored region of the CMB power spectrum contains information on many parameters of interest, including neutrino mass, the number of relativistic particles at recombination, the primordial helium abundance, and the injection of additional ionizing photons by dark matter self-annihilation. We review the imprint of each parameter on the CMB and forecast the constraints achievable by future experiments by performing a Monte Carlo analysis on synthetic realizations of simulated data. We find that next generation satellite missions such as CMBPol could provide valuable constraints with a precision close to that expected in current and near future laboratory experiments. Finally, we discuss the implications of this intersection between cosmology and fundamental physics.
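
The forecasting procedure described here, generating synthetic realizations of the data and fitting parameters to each, can be illustrated with a deliberately simplified one-parameter toy model. The template, noise level, and multipole range below are invented for illustration and are not Planck or CMBPol specifications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: band powers d_l = A * template(l) + noise, with assumed
# per-band Gaussian noise sigma_l (10% of the signal in each band).
ells = np.arange(2, 1002)
template = 1.0 / (ells * (ells + 1.0))
A_true = 1.0
sigma = 0.1 * template

# Monte Carlo over synthetic realizations: in each one, fit the amplitude A
# by inverse-variance weighted least squares, then take the scatter of fits.
fits = []
for _ in range(500):
    data = A_true * template + rng.normal(0.0, sigma)
    w = 1.0 / sigma**2
    A_hat = np.sum(w * template * data) / np.sum(w * template**2)
    fits.append(A_hat)
forecast_error = float(np.std(fits))

# Analytic (Fisher) forecast for comparison: 1 / sqrt(sum template^2/sigma^2).
fisher_error = float(1.0 / np.sqrt(np.sum(template**2 / sigma**2)))
print(f"MC scatter: {forecast_error:.5f}, Fisher forecast: {fisher_error:.5f}")
```

With many realizations the Monte Carlo scatter converges to the Fisher forecast; the Monte Carlo route generalizes directly to many correlated parameters, which is why it is used for realistic CMB forecasts.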

Stellar collapse and the subsequent development of a core-collapse supernova explosion emit bursts of gravitational waves (GWs) that might be detected by the advanced generation of laser interferometer gravitational-wave observatories such as Advanced LIGO, Advanced Virgo, and LCGT. GW bursts from core-collapse supernovae encode information on the intricate multi-dimensional dynamics at work at the core of a dying massive star and may provide direct evidence for the yet uncertain mechanism driving supernovae in massive stars. Recent multi-dimensional simulations of core-collapse supernovae exploding via the neutrino, magnetorotational, and acoustic explosion mechanisms have predicted GW signals which have distinct structure in both the time and frequency domains. Motivated by this, we describe a promising method for determining the most likely explosion mechanism underlying a hypothetical GW signal, based on Principal Component Analysis and Bayesian model selection. Using simulated Advanced LIGO noise and ass...
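
The first stage of the method, extracting a compact basis from a catalog of simulated signals via Principal Component Analysis, can be sketched as follows. The toy "waveform catalog" below is synthetic and stands in for actual supernova simulation output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalog: 20 signals of 256 samples, each a random combination
# of 3 underlying shapes (illustrative stand-ins for simulated waveforms).
t = np.linspace(0.0, 1.0, 256)
shapes = np.stack([
    np.sin(2.0 * np.pi * 5.0 * t),
    np.sin(2.0 * np.pi * 9.0 * t),
    np.exp(-((t - 0.5) / 0.1) ** 2),
])
catalog = rng.normal(size=(20, 3)) @ shapes

# PCA via SVD of the mean-subtracted catalog; squared singular values give
# the variance captured by each principal component (rows of Vt).
mean = catalog.mean(axis=0)
U, S, Vt = np.linalg.svd(catalog - mean, full_matrices=False)
explained = S**2 / np.sum(S**2)

# A rank-3 catalog is fully captured by its first three components.
print(round(float(explained[:3].sum()), 6))
```

A candidate signal is then projected onto the leading components of each mechanism's catalog, and Bayesian model selection compares how well the competing bases reconstruct it.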

With the first dedicated B-factory experiments BaBar (USA) and BELLE (Japan) Flavour Physics has entered the phase of precision physics. LHCb (CERN) and the high luminosity extension of KEK-B together with the state of the art BELLE II detector will further push this precision frontier. Progress in this field always relied on close cooperation between experiment and theory, as extraction of fundamental parameters often is very indirect. To extract the full physics information from existing and future data, this cooperation must be further intensified. This MIAPP programme aims in particular to prepare for this task by joining experimentalists and theorists in the various relevant fields, with the goal to build the necessary tools in face of the challenge of new large data sets. The programme will begin with a focus on physics with non-leptonic final states, continued by semileptonic B meson decays and Tau decays, and on various aspects of CP symmetry violation closer to the end. In addition, in the final ...

Recent research suggests that today's children are less physically active and more overweight/obese than those of previous generations. A superior physical education program hires college-educated specialists, requires daily physical activities, stresses improvement-oriented fitness education and skill development, includes all children, and…

Multiple lines of geochemical and geophysical evidence suggest the Moon has a small metallic core, yet the composition of the core is poorly constrained. The physical state of the core (now or in the past) depends on detailed knowledge of its composition, and unfortunately, there is little available data on relevant multicomponent systems (i.e., Fe-Ni-S-C) at lunar interior conditions. In particular, there is a dearth of phase equilibrium data to elucidate whether a specific core composition could help to explain an early lunar geodynamo and magnetic field intensities, or current solid inner core/liquid outer core states. We utilize geochemical information to estimate the Ni, S and C contents of the lunar core, and then carry out phase equilibria experiments on several possible core compositions at the pressure and temperature conditions relevant to the lunar interior. The first composition is 0.5 wt% S and 0.375 wt% C, based on S and C contents of Apollo glasses. A second composition contains 1 wt% each of S and C, and assumes that the lunar mantle experienced degassing of up to 50% of its S and C. Finally a third composition contains C as the dominant light element. Phase equilibrium experiments were completed at 1, 3 and 5 GPa, using piston cylinder and multi-anvil techniques. The first composition has a liquidus near 1550 °C and solidus near 1250 °C. The second composition has a narrower liquidus and solidus temperatures of 1400 and 1270 °C, respectively, while the third composition is molten down to 1150 °C. As the composition crystallizes, the residual liquid becomes enriched in S and C, but S enrichment is greater due to the incorporation of C (but not S) into solid metallic FeNi. Comparison of these results to thermal models for the Moon allow an evaluation of which composition is consistent with the geophysical data of an early dynamo and a currently solid inner and liquid outer core. Composition 1 has a high enough liquidus to start crystallizing

Lasers are employed throughout science and technology, in fundamental research, the remote sensing of atmospheric gases or pollutants, communications, medical diagnostics and therapies, and the manufacturing of microelectronic devices. Understanding the principles of their operation, which underlie all of these areas, is essential for a modern scientific education. This text introduces the characteristics and operation of lasers through laboratory experiments designed for the undergraduate curricula in chemistry and physics. Introductory chapters describe the properties of light, the history of laser invention, the atomic, molecular, and optical principles behind how lasers work, and the kinds of lasers available today. Other chapters include the basic theory of spectroscopy and computational chemistry used to interpret laser experiments. Experiments range from simple in-class demonstrations to more elaborate configurations for advanced students. Each chapter has historical and theoretical background, as well...

We have made self-consistent models of the density and temperature profiles of the gas and dust surrounding embedded luminous objects using a detailed radiative transfer model together with observations of the spectral energy distribution of hot molecular cores. Using these profiles we have investigated the hot core chemistry which results when grain mantles are evaporated, taking into account the different binding energies of the mantle molecules, as well a model in which we assume that all molecules are embedded in water ice and have a common binding energy. We find that most of the resulting column densities are consistent with those observed toward the hot core G34.3+0.15 at a time around 10$^4$ years after central luminous star formation. We have also investigated the dependence of the chemical structure on the density profile which suggests an observational possibility of constraining density profiles from determination of the source sizes of line emission from desorbed molecules.

Compared to other areas of physics research, Statistical Physics is heavily dominated by theory, with comparatively little experiment. One reason for the lack of experiments is the impracticality of tracking individual atoms and molecules within a substance. Thus, there is a need for a different kind of experimental system, one where individual particles not only move stochastically as they collide with one another, but also are large enough to allow tracking. A dusty plasma can meet this need. A dusty plasma is a partially ionized gas containing small particles of solid matter. These micron-size particles gain thousands of electronic charges by collecting more electrons than ions. Their motions are dominated by Coulomb collisions with neighboring particles. In this so-called strongly coupled plasma, the dust particles self-organize in much the same way as atoms in a liquid or solid. Unlike atoms, however, these particles are large and slow, so that they can be tracked easily by video microscopy. Advantages of dusty plasma for experimental statistical physics research include particle tracking, lack of frictional contact with solid surfaces, and avoidance of overdamped motion. Moreover, the motion of a collection of dust particles can mimic an equilibrium system with a Maxwellian velocity distribution, even though the dust particles themselves are not truly in thermal equilibrium. Nonequilibrium statistical physics can be studied by applying gradients, for example by imposing a shear flow. In this talk I will review some of our recent experiments with shear flow. First, we performed the first experimental test to verify the Fluctuation Theorem for a shear flow, showing that brief violations of the Second Law of Thermodynamics occur with the predicted probabilities, for a small system. Second, we discovered a skewness of a shear-stress distribution in a shear flow. This skewness is a phenomenon that likely has wide applicability in nonequilibrium steady states
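
The Fluctuation Theorem test mentioned above compares the probabilities of positive and negative entropy production. A minimal numerical sketch, using a Gaussian toy model for the entropy production rather than dusty plasma data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the measured entropy production s in a sheared steady
# state: a Gaussian with variance = 2 * mean satisfies the detailed
# fluctuation theorem P(s)/P(-s) = exp(s) exactly.
mean = 1.0
s = rng.normal(mean, np.sqrt(2.0 * mean), size=2_000_000)

edges = np.linspace(-3.0, 3.0, 61)        # bins symmetric about s = 0
hist, _ = np.histogram(s, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Reversing the histogram pairs each bin at +s with its mirror at -s.
log_ratio = np.log(hist / hist[::-1])     # estimate of log[P(s)/P(-s)]
sel = (centers > 0.0) & (centers <= 2.0)  # well-sampled positive bins
max_dev = float(np.max(np.abs(log_ratio[sel] - centers[sel])))
print(f"max |log-ratio - s| over tested bins: {max_dev:.3f}")
```

Negative-s bins have nonzero probability, so "Second Law violations" do occur, but their frequency is exponentially suppressed exactly as the theorem predicts.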

The CMS experiment is expected to start data taking during 2008, and large data samples, at the petabyte scale, will be produced each year. The CMS Physics Tools package provides the CMS physicist with a powerful and flexible software layer for analysis of these huge datasets that is well integrated in the CMS experiment software. A core part of this package is the Candidate Model providing a coherent interface to different types of data. Standard tasks such as combinatorial analyses, generic cuts, MC truth matching and constrained fitting are supported. Advanced template techniques enable the user to add missing features easily. We explain the underlying model, certain details of the implementation and present some use cases showing how the tools are currently used in generator and full simulation studies as preparation for analysis of real data.

Physical education teachers are expected to implement the English language arts (ELA) Common Core State Standards (CCSS) in their instruction. This has proved to be challenging for many physical educators. The purpose of this article is to provide developmentally appropriate examples of how to incorporate the ELA CCSS into physical education,…

The discovery of neutrino mixing and oscillations over the past decade provides firm evidence for new physics beyond the Standard Model. Recently, θ13 has been determined to be moderately large, quite close to its previous upper bound. This represents a significant milestone in establishing the three-flavor oscillation picture of neutrinos. It has opened up exciting prospects for current and future long-baseline neutrino oscillation experiments towards addressing the remaining fundamental questions, in particular the type of the neutrino mass hierarchy and the possible presence of a CP-violating phase. Another recent and crucial development is the indication of non-maximal 2-3 mixing angle, causing the octant ambiguity of θ23. In this paper, I will review the phenomenology of long-baseline neutrino oscillations with a special emphasis on sub-leading three-flavor effects, which will play a crucial role in resolving these unknowns. First, I will give a brief description of neutrino oscillation phenomenon. Then, I will discuss our present global understanding of the neutrino mass-mixing parameters and will identify the major unknowns in this sector. After that, I will present the physics reach of current generation long-baseline experiments. Finally, I will conclude with a discussion on the physics capabilities of accelerator-driven possible future long-baseline precision oscillation facilities.

The China Jinping Underground Laboratory (CJPL), which has the lowest cosmic-ray muon flux and the lowest reactor neutrino flux of any laboratory, is ideal to carry out low-energy neutrino experiments. With two detectors and a total fiducial mass of 2000 tons for solar neutrino physics (equivalently, 3000 tons for geo-neutrino and supernova neutrino physics), the Jinping neutrino experiment will have the potential to identify the neutrinos from the CNO fusion cycles of the Sun, to cover the transition phase for the solar neutrino oscillation from vacuum to matter mixing, and to measure the geo-neutrino flux, including the Th/U ratio. These goals can be fulfilled with mature existing techniques. Efforts on increasing the target mass with multi-modular neutrino detectors and on developing the slow liquid scintillator will increase the Jinping discovery potential in the study of solar neutrinos, geo-neutrinos, supernova neutrinos, and dark matter. Supported by the National Natural Science Foundation of China (11235006, 11475093, 11135009, 11375065, 11505301, and 11620101004), the Tsinghua University Initiative Scientific Research Program (20121088035, 20131089288, and 20151080432), the Key Laboratory of Particle & Radiation Imaging (Tsinghua University), the CAS Center for Excellence in Particle Physics (CCEPP), U.S. National Science Foundation Grant PHY-1404311 (Beacom), and U.S. Department of Energy under contract DE-AC02-98CH10886 (Yeh).

To more accurately predict the temperature distribution inside the reactor core of pebble bed type high temperature reactors, in this thesis we investigated the stochastic properties of randomly stacked beds and the effects of the non-homogeneity of these beds on the neutronics and thermal-hydraulic

Thanks to the excellent tracking and muon identification performance, combined with a flexible trigger system, the CMS experiment at the Large Hadron Collider is conducting a rich and competitive program of measurements in the field of heavy flavor physics. We review the status of b-quark production cross section measurements in inclusive and exclusive final states, the measurement of B hadron angular correlations, the search for rare Bs0 and B0 decays to dimuons, and the observation of the X(3872) resonance.

For evaluating the effect of body physique, somatotype, and physical constitution on individual variability in the core interthreshold zone (CIZ), data from 22 healthy young Japanese male subjects were examined. The experiment was carried out in a climatic chamber in which air temperature was maintained at 20-24 degrees C. The subjects' body physique and the maximum work load were measured. Somatotype was predicted from the Heath-Carter Somatotype method. In addition, factors reflecting physical constitution, for example, susceptibility to heat and cold, and quality of sleep were obtained by questionnaire. The subjects wore a water-perfused suit which was perfused with water at a temperature of 25 degrees C and at a rate of 600 cc/min, and exercised on an ergometer at 50% of their maximum work rate for 10-15 min until their sweating rate increased. They then remained continuously seated without exercise until shivering increased. Rectal temperature (T(re)) and skin temperatures at four sites were monitored by thermistors, and sweating rate was measured at the forehead with a sweat rate monitor. Oxygen uptake was monitored with a gas analyzer. The results showed individual variability in the CIZ. According to the reciprocal cross-inhibition (RCI) theory, thermoafferent information from peripheral and core sensors is activated by T(re), mean skin temperature (T(sk)), and their changes. Since T(sk) was relatively unchanged, the data were selected to eliminate the influence of the core cooling rate on the sensor-to-effector pathway before RCI, and the relationship between the CIZ and the various factors was then analyzed. The results revealed that susceptibility to heat showed a good correlation with the CIZ, indicating that individual awareness of heat may change the CIZ due to thermoregulatory behavior.

This study assessed the nature of psychology and its consensus regarding core content. We hypothesized that psychology possesses little agreement regarding its core content areas and thus may "envy" more canonical sciences, such as physics. Using a global sample, we compared psychologists' and physicists' perceptions regarding…

The first hydrostatic core, also called the first Larson core, is one of the first steps in low-mass star formation as predicted by theory. With recent and future high-performance telescopes, the details of these first phases are becoming accessible, and observations may confirm theory and even present new challenges for theoreticians. In this context, from a theoretical point of view, we study the chemical and physical evolution of the collapse of prestellar cores until the formation of the first Larson core, in order to better characterize this early phase in the star formation process. We couple a state-of-the-art hydrodynamical model with full gas-grain chemistry, using different assumptions for the magnetic field strength and orientation. We extract the different components of each collapsing core (i.e., the central core, the outflow, the disk, the pseudodisk, and the envelope) to highlight their specific physical and chemical characteristics. Each component often presents a specific physical history, as well as a specific chemical evolution. From some species, the components can clearly be differentiated. The different core models can also be chemically differentiated. Our simulation suggests that some chemical species act as tracers of the different components of a collapsing prestellar dense core, and as tracers of the magnetic field characteristics of the core. From this result, we pinpoint promising key chemical species to be observed.

The Low Temperature Microgravity Physics Facility (LTMPF) is being developed by NASA to provide a long-duration low temperature and microgravity environment on the International Space Station (ISS) for performing fundamental physics investigations. Currently, six experiments have been selected for flight definition studies. More will be selected in a two-year cycle, through NASA Research Announcements. This program is managed under the Low Temperature Microgravity Physics Experiments Project Office at the Jet Propulsion Laboratory. The facility is being designed to be launched and returned to Earth on a variety of vehicles, including the H-IIA and the space shuttle. On orbit, the facility will be connected to the Exposed Facility on the Japanese Experiment Module, Kibo. Features of the facility include a cryostat capable of maintaining superfluid helium at a temperature of 1.4 K for 5 months, resistance thermometer bridges, a multi-stage thermal isolation system, thermometers capable of pico-Kelvin resolution, DC SQUID magnetometers, passive vibration isolation, and magnetic shields with a shielding factor of 80 dB. The electronics and software architecture incorporates two VME buses run using the VxWorks operating system. Technically challenging areas in the design effort include the following: 1) A long cryogen life that survives several launch and test cycles without the need to replace support straps for the helium tank. 2) The minimization of heat generation in the sample stage caused by launch vibration. 3) The design of compact and lightweight DC SQUID electronics. 4) The minimization of RF interference for the measurement of heat at the pico-Watt level. 5) Light-weighting of the magnetic shields. 6) Implementation of a modular and flexible electronics and software architecture. The first launch is scheduled for mid-2003, on an H-IIA Rocket Transfer Vehicle, out of the Tanegashima Space Center of Japan. Two identical facilities will be built. While one facility is onboard

New results derived for application to the earth's outer core using the modern theory of liquids and the hard-sphere model of liquid structure are presented. An expression derived in terms of the incompressibility and pressure is valid for a high-pressure liquid near its melting point, provided that the pressure is derived from a strongly repulsive pair potential; a relation derived between the melting point and density leads to a melting curve law of essentially the same form as Lindemann's law. Finally, it is shown that the 'core paradox' of Higgins and Kennedy (1971) can occur only if the Grüneisen parameter is smaller than 2/3, whereas this parameter is larger than 2/3 in any liquid for which the pair potential is strongly repulsive.

This presentation contains examples of recent atomic physics experiments with stored and cooled ion beams from the CRYRING facility in Stockholm. One of these experiments uses the high luminosity of a cooled MeV proton beam in a He COLTRIMS apparatus (COLd supersonic He gas-jet Target for Recoil Ion Momentum Spectroscopy) for measuring correlation effects in transfer ionization. Another class of experiments exploits the cold electron beam available in the CRYRING electron cooler and cooled heavy-ion beams for recombination experiments. A section concerns the still rather open question of the puzzling recombination enhancement over the radiative recombination theory. Dielectronic resonances at meV-eV energy are measured with a resolution on the order of 10⁻³-10⁻² eV with highly charged ions stored at several hundreds of MeV kinetic energy in the ring. These resonances provide a serious challenge to theories for describing correlation, relativistic, and QED effects, and isotope shifts in highly ionized ions. Applications of recombination rates with complex highly charged ions for fusion and astrophysical plasmas are shown.

In the framework of the INTERREG Project "SedAlp", physical scale model experiments are carried out in the hydraulic laboratory of the Institute of Mountain Risk Engineering at the University of Life Sciences in Vienna in order to optimize torrent protection structures. Two different types of check dams are investigated: a screen-dam with inclined vertical beams is compared with a beam-dam with horizontal beams. The experiments evaluate the variation of sediment transport at these structures, including the influence of coarse woody debris. To this end, the distance between the steel elements can be adjusted to show their ability to filter sediment. The physical scale of the experiments is 1:30. All experimental runs are Froude scaled. Both dams are tested in elongated and pear-shaped sediment retention basins in order to investigate the shape effect of the deposition area. For a systematic comparison of the two check dams, experiments with fluvial bedload transport are made. First, a typical hydrograph for an extreme flood with unlimited sediment supply is modelled. A typical torrential sediment mixture with a wide grain-size distribution is fed by a conveyor belt according to the transport capacity of the upstream reach. The deposition is then scanned with a laser-scan device in order to analyse the deposition pattern and the deposited volume. Afterwards, a flood with a lower recurrence period without sediment transport from upstream is modelled to investigate the ability of the protection structure for self-emptying. To investigate the influence of driftwood on the deposition behaviour, experiments with logs are made. Different log diameters and lengths are added upstream of the basin. The results show that the deposition during the experiments was not controlled by sorting effects at the location of the dam. The deposition always started from upstream, where the transport capacity was reduced due to the milder slope and the widening of the basin. No grain sorting effects
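The Froude scaling mentioned above fixes how measured model quantities translate to prototype values. A minimal sketch of the standard similarity factors for the stated 1:30 geometric scale (the function and dictionary keys are my own naming):

```python
import math

def froude_scale_factors(n):
    """Prototype-to-model ratios for a geometric scale 1:n under Froude similarity.

    With gravity the same in model and prototype, keeping the Froude number
    U / sqrt(g * L) equal in both implies:
      length ~ n, velocity ~ n^0.5, time ~ n^0.5, discharge ~ n^2.5
    """
    return {
        "length": n,
        "velocity": math.sqrt(n),
        "time": math.sqrt(n),
        "discharge": n ** 2.5,  # area (n^2) times velocity (n^0.5)
    }

factors = froude_scale_factors(30)
# e.g. 1 m/s measured in the 1:30 model corresponds to sqrt(30) ~ 5.5 m/s in nature
print(factors)
```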

International research work for young people is common in physics. However, the work experience and career plans of female workers in physics are little studied. We explore these by interviewing three international female workers in physics.

High energy particle physics experiments investigate the nature of matter through the identification of subatomic particles produced in collisions of protons, electrons, or heavy ions which have been accelerated to very high energies. Future experiments will have hundreds of millions of detector channels to observe the interaction region where collisions take place at a 40 MHz rate. This paper gives an overview of the electronics requirements for such experiments and explains how data reduction, timing distribution, and radiation tolerance in commercial CMOS circuits are achieved for these big systems. As a detailed example, the electronics for the innermost layers of the future tracking detector, the pixel vertex detector, is discussed with special attention to system aspects. A small-scale prototype (130 channels) implemented in standard 0.25 μm CMOS remains fully functional after a 30 Mrad(SiO₂) irradiation. A full-scale pixel readout chip containing 8000 readout channels in a 14 by 16 mm² ar...

Aim: the article surveys the effect of core strength training on the physical indicators of college girls and discusses its internal mechanism, in order to verify the effectiveness of core strength training as an exercise method. Method: 100 college girls were randomly selected from the "Movement of Melody" fitness guidance team as subjects and divided into an experimental group and a control group; the changes in each indicator before and after the experiment were analyzed with paired t-tests. Result: core strength training can improve women's cardio-pulmonary function, raise overall physical fitness, and relieve mental stress and psychological pressure to varying degrees. Conclusion: core strength training contributes to the improvement of college girls' body shape and the optimization of body composition, and clearly improves physical fitness and cardio-pulmonary function.

The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

As an experiment in constructive transdisciplinary relationality, a theology of nonseparable difference here engages a physics of quantum entanglement. The metaphoric potential of "spooky action at a distance" to intensify a cosmology resistant to the dominant individualism and conducive to ethical ecologies of interdependence has only begun to develop across multiple discourses. This essay contemplates the specific unfolding of a theory of nonlocal superpositions by physicists such as Stapp, Bohm and Barad. It does not literalize any God-trope, but rather entangles theology in the mysterious uncertainty of our widest interdependencies. This essay, first presented as a lecture at the American Academy of Religion "Science, Technology and Religion" Group, San Francisco, November 2011, forms the core of a chapter in a book I am currently completing, The Cloud of the Impossible: Theological Entanglements.

The High Luminosity run of the Large Hadron Collider (LHC) will start in 2026 and aims to collect $3000\;\mathrm{fb}^{-1}$ of proton-proton collisions by 2037. This enormous dataset will increase the discovery potential of the LHC and allow precision measurements of Standard Model processes. However, the very high instantaneous luminosity of $5{-}7 \times 10^{34}\;\mathrm{cm^{-2}\,s^{-1}}$ poses serious challenges in terms of the high "pile-up" of 140 or 200 overlapping proton-proton collisions per bunch crossing inside the ATLAS detector. In this talk, I will summarise the planned ATLAS detector upgrades and the analysis techniques, including pile-up mitigation, for High Luminosity-LHC running. I will also present the physics prospects for the ATLAS experiment, including results for precision measurements of the $125\;\mathrm{GeV}$ Higgs boson and the top quark, for vector boson scattering, and the physics reach for supersymmetric and other beyond-the-Standard-Model scenarios.
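The pile-up figures quoted above follow from the instantaneous luminosity, the inelastic proton-proton cross-section, and the bunch-crossing rate. A back-of-the-envelope sketch; the ~80 mb cross-section and the effective crossing rate (2760 colliding bunches at the 11.245 kHz LHC revolution frequency) are assumptions of mine, not from the text:

```python
def mean_pileup(lumi_cm2_s, sigma_inel_mb, crossing_rate_hz):
    """Mean number of pp interactions per bunch crossing: mu = L * sigma / f."""
    sigma_cm2 = sigma_inel_mb * 1e-27  # 1 mb = 1e-27 cm^2
    return lumi_cm2_s * sigma_cm2 / crossing_rate_hz

EFFECTIVE_RATE_HZ = 2760 * 11245.0  # colliding bunches x revolution frequency (assumed)

# The HL-LHC luminosity range from the text, 5-7 x 10^34 cm^-2 s^-1:
for lumi in (5e34, 7e34):
    mu = mean_pileup(lumi, sigma_inel_mb=80.0, crossing_rate_hz=EFFECTIVE_RATE_HZ)
    print(f"L = {lumi:.0e} cm^-2 s^-1  ->  mu ~ {mu:.0f}")
```

With these assumed inputs the estimate lands in the same regime as the 140-200 pile-up events quoted in the abstract.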

With increasing physical event rates and numbers of electronic channels, traditional readout schemes meet the challenge of improving readout speed, caused by the limited bandwidth of the crate backplane. In this paper, a high-speed data readout method based on Ethernet is presented to make each readout module capable of transmitting data to the DAQ. Features of explicitly parallel data transmission and a distributed network architecture give the readout system the advantage of adapting to the varying requirements of particle physics experiments. Furthermore, to guarantee readout performance and flexibility, a standalone embedded CPU system is utilized for network protocol stack processing. To receive the customized data format and protocol from the front-end electronics, a field programmable gate array (FPGA) is used for logic reconfiguration. To optimize the interface and improve the data throughput between the CPU and the FPGA, a sophisticated method based on SRAM is presented in this paper. For the purpose of evaluating this high-speed readout method, a simplified readout module was designed and implemented. Test results show that this module can support up to 70 Mbps data throughput from the readout module to the DAQ.

Understanding the acoustic and infrasound source generation mechanisms from underground explosions is of great importance for the use of this unique data type in non-proliferation activities. One of the purposes of the Source Physics Experiments (SPE), a series of underground explosive shots at the Nevada National Security Site (NNSS), is to gain an improved understanding of the generation and propagation of physical signals, such as seismic and infrasound, from the near to the far field. Two of the SPE shots (SPE-1 and SPE-4') were designed to be small "Green's Function" sources with minimal spall or permanent surface deformation. We analyze infrasound data collected from these two shots at distances from ~300 m to ~1 km and frequencies up to 20 Hz. Using weather models based upon actual observations at the times of these sources, including 3-D variations in topography, temperatures, pressures, and winds, we synthesized full waveforms using Sandia's moving media acoustic propagation simulation suite. Several source mechanisms were simulated and compared and contrasted with observed waveforms using full waveform source inversion. We will discuss the results of these source inversions, including the relative role of spall in these small explosions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

The LHCf experiment has been designed to precisely measure very forward neutral particle spectra produced in high energy hadron-hadron collisions at the LHC, up to an energy of 14 TeV in the center-of-mass system. These measurements are of fundamental importance to calibrate the Monte Carlo models widely used in the high energy cosmic ray (HECR) field, up to an equivalent laboratory energy of the order of 10^17 eV. The experiment has taken data in p-p collisions at √s = 0.9 TeV, √s = 2.76 TeV and √s = 7 TeV as well as in p-Pb collisions at √s = 5 TeV. In this paper the most up-to-date results on the inclusive photon, π0 and neutron spectra measured by LHCf are reported. Comparison of these spectra with the model expectations and the impact on HECR physics are discussed. In addition, perspectives for future analyses as well as the program for the next data taking period will be discussed.

The Chain Experiment is an annual competition which originated in Slovenia in 2005 and later expanded to Poland in 2013. For the purpose of the event, each participating team designs and builds a contraption that transports a small steel ball from one end to the other. At the same time, the constructed machine needs to exploit a number of interesting phenomena and physics laws. In the competition's finale, all contraptions are connected to each other to form a long chain transporting steel balls. In brief, they are all evaluated for qualities such as creativity and depth of theoretical background, as well as the reliability of the constructed machine to work without human help. In this article, we present the contraptions developed by students taking part in the competition in order to demonstrate the depth of theoretical understanding together with creativity in design and the outstanding engineering skills of its participants. Furthermore, we situate the Chain Experiment in the context of other group competitions, at the same time demonstrating that, besides activating numerous group-work skills, it also improves the ability to think critically and present one's knowledge to a broader audience. We discuss it in the context of problem-based learning, gamification, and collaborative testing.

A Physical Chemistry Laboratory experiment was created that examines the photocatalytic decomposition of organic compounds. Photocatalytic decomposition is a technique in which a solution containing a semiconducting material is irradiated with UV light, and the compounds in the solution are decomposed. This technique is commonly used for the destruction of environmentally detrimental compounds. In this experiment, the students study the photocatalytic reduction of 1,4-benzoquinone and the photocatalytic oxidation of 2-chlorophenol. The students examine the effect of different catalysts, the rate of the reaction, and the formation of intermediates and products. Each catalyst has a different effect on the rate of decomposition, depending on the oxidation and reduction potential of the compound and the band gap of the catalyst. The UV/Vis spectrometer will be used to study the effect of different catalysts on the initial rate of decomposition of 1,4-benzoquinone and 2-chlorophenol. The products and intermediates of each reaction are examined by High Performance Liquid Chromatography.

Part A. Dynamic measurements have been performed at the Ågesta reactor at power levels from 0.3 to 65 MW(th). The purposes of the experiments have been both to develop experimental methods and equipment for the dynamic studies and to measure the dynamic characteristics of the reactor in order to check the dynamic model. The experiments have been performed with four different perturbation functions: trapezoidal and step functions and two types of periodic multifrequency signals. Perturbations were introduced in the reactivity and in the load. Recordings were made of the responses of nuclear power, coolant inlet and outlet temperature, and control rod position. The results are presented as step responses and transfer functions (Bode diagrams). In most cases the relative accuracy is ±0.5 dB in amplitude and ±5 deg in phase. The results from the experiments in general show rather good agreement with the results obtained from a dynamic model, which has successively been improved. Experience with reactor noise analysis based on measurements in the Ågesta power reactor is discussed. It is shown that the noise measurements have given complementary dynamic information about the reactor. Part B. Static measurements of the physics parameters in the Ågesta reactor are carried out to confirm theoretical methods for reactor calculations and to form a good basis for safe operation of the reactor. The reactivity worths of groups of control rods are determined with different methods and compared with calculations with the three-dimensional code HETERO. The excess reactivity as a function of burn-up is obtained from the control rod positions. The temperature coefficient of the moderator is measured by lowering the moderator temperature at constant power and observing the change in control rod insertion. As burn-up increases the experiments are repeated in order to follow the changes in the coefficient. The xenon poisoning effects are measured by changing the power level and

This report contains a diagram of the experimental setup for each experiment as well as giving a brief discussion of its purpose and list of collaborators for the experiment. Thirty-one experiments in the areas of nuclear physics and particle physics are covered. It concludes with a list of publications of the AGS experiments.

In this paper, we calculate the elastic modulus of 3D digital cores using the finite element method, systematically study the equivalence between the digital core model and various rock physics models, and carefully analyze the conditions of the equivalence relationships. The influences of the pore aspect ratio and consolidation coefficient on the equivalence relationships are also further refined. Theoretical analysis indicates that the finite element simulation based on the digital core is equivalent to the boundary theory and Gassmann model. For pure sandstones, effective medium theory models (SCA and DEM) and the digital core models are equivalent in cases when the pore aspect ratio is within a certain range, and dry frame models (Nur and Pride model) and the digital core model are equivalent in cases when the consolidation coefficient is a specific value. According to the equivalence relationships, the comparison of the elastic modulus results of the effective medium theory and digital rock physics is an effective approach for predicting the pore aspect ratio. Furthermore, the traditional digital core models with two components (pores and matrix) are extended to multiple minerals to more precisely characterize the features and mineral compositions of rocks in underground reservoirs. This paper studies the effects of shale content on the elastic modulus in shaly sandstones. When structural shale is present in the sandstone, the elastic modulus of the digital cores are in a reasonable agreement with the DEM model. However, when dispersed shale is present in the sandstone, the Hill model cannot describe the changes in the stiffness of the pore space precisely. Digital rock physics describes the rock features such as pore aspect ratio, consolidation coefficient and rock stiffness. Therefore, digital core technology can, to some extent, replace the theoretical rock physics models because the results are more accurate than those of the theoretical models.
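Of the rock physics models named above, the Gassmann relation is simple enough to state directly. Below is a minimal sketch of fluid substitution with it; the numerical inputs are illustrative values for a quartz sandstone, not taken from the paper:

```python
def gassmann_k_sat(k_dry, k_mineral, k_fluid, porosity):
    """Saturated-rock bulk modulus from the Gassmann equation.

    K_sat = K_dry + (1 - K_dry/K_min)^2
                    / (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2)

    In Gassmann's theory the shear modulus is unaffected by the pore fluid.
    All moduli share the same units (here GPa).
    """
    b = 1.0 - k_dry / k_mineral
    denom = (porosity / k_fluid
             + (1.0 - porosity) / k_mineral
             - k_dry / k_mineral ** 2)
    return k_dry + b * b / denom

# Illustrative inputs: quartz matrix (~37 GPa), water (~2.25 GPa), 20% porosity
k_sat = gassmann_k_sat(k_dry=12.0, k_mineral=37.0, k_fluid=2.25, porosity=0.20)
print(f"K_sat ~ {k_sat:.2f} GPa")  # saturation stiffens the rock: K_sat > K_dry
```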

Quantum Chromodynamics (QCD), the theory of strong interactions, in principle describes the interaction of quark and gluon fields. However, due to the self-coupling of the gluons, quarks and gluons are confined into hadrons and cannot exist as free particles. The quantitative understanding of this confinement phenomenon, which is responsible for about 98% of the mass of the visible universe, is one of the major open questions in particle physics. The measurement of the excitation spectrum of hadrons and of their properties gives valuable input to theory and phenomenology. In the Constituent Quark Model (CQM) two types of hadrons exist: mesons, made out of a quark and an antiquark, and baryons, which consist of three quarks. But more advanced QCD-inspired models and Lattice QCD calculations predict the existence of hadrons with exotic properties interpreted as excited glue (hybrids) or even pure gluonic bound states (glueballs). The Compass experiment at the CERN Super Proton Synchrotron has acquired large data sets, which allow the study of light-quark meson and baryon spectra in unprecedented detail. The presented overview of the first results from this data set focuses in particular on the light meson sector and presents a detailed analysis of three-pion final states. A new JPC = 1++ state, the a1(1420), is observed with a mass and width in the ranges m = 1412 - 1422 MeV/c2 and Γ = 130 - 150 MeV/c2.

Solvent extraction is gaining much attention as an in-situ recovery method for difficult to produce heavy oil and tar sand deposits. Vapour extraction (VAPEX) is similar to the steam assisted gravity drainage (SAGD) process used in heavy oil production. In VAPEX, vaporized solvents are used instead of high temperature steam and the viscosity of the oil is reduced in situ. VAPEX is well suited for formations that are thin and where heat losses are unavoidable. It can be applied in the presence of overlying gas caps; bottom water aquifers; low thermal conductivity; high water saturation; clay swelling; and, formation damage. Modelling studies that use rectangular shaped models are limited at high reservoir pressures. This study presents a new design of physical models that overcomes this limitation. The annular space between two cylindrical pipes is used for developing slice-type and sand-filled models. This newly developed model is more compatible with high pressure. This paper compares results of VAPEX experiments using the cylindrical models and the rectangular models. The stabilized drainage rates from the newly developed cylindrical models are in very good agreement with those from the rectangular models. 16 refs., 3 tabs., 11 figs.

The first hydrostatic core, also called the first Larson core, is one of the first steps in low-mass star formation, as predicted by theory. With recent and future high performance telescopes, details of these first phases become accessible, and observations may confirm theory and even bring new challenges for theoreticians. In this context, we study from a theoretical point of view the chemical and physical evolution of the collapse of prestellar cores until the formation of the first Larson core, in order to better characterize this early phase in the star formation process. We couple a state-of-the-art hydrodynamical model with full gas-grain chemistry, using different assumptions on the magnetic field strength and orientation. We extract the different components of each collapsing core (i.e., the central core, the outflow, the disk, the pseudodisk, and the envelope) to highlight their specific physical and chemical characteristics. Each component often presents a specific physical history, as well as a sp...

Purpose: The new ABR core exam integrates physics into clinical teaching, with an emphasis on understanding image quality, image artifacts, radiation dose, and patient safety for each modality and/or sub-specialty. Accordingly, physics training of radiology residents faces a challenge. Traditional teaching of physics through didactic lectures may not fully fulfill this goal. It is also difficult to incorporate physics teaching into clinical practice due to time constraints. A dedicated physics rotation may be a solution. This study evaluates a full-week physics workshop developed for first-year radiology residents. Methods: The physics rotation took a full week. It included three major parts: introductory lectures, hands-on experience, and observation of technologist operation. An introduction to basic concepts was given for each modality at the beginning. Hands-on experiments were emphasized and took most of the time. During the hands-on experiments, residents performed radiation measurements, studied the relationship between patient dose and practice (i.e., fluoroscopy), investigated the influence of acquisition parameters (e.g., kV, mAs) on image quality, and evaluated image quality using phantoms. A physics test before and after the workshop was also given, but not for comparison purposes. Results: The evaluation shows that a physics rotation during the first week of residency in radiology is preferred by all residents. The length of a full week for the physics workshop is appropriate. All residents think that the intensive workshop can significantly benefit their coming clinical rotations. Residents become more comfortable regarding the use of radiation and counseling on relevant questions, such as the risk to a pregnant patient from a CE PE examination. Conclusion: A dedicated physics rotation, assisted by didactic lectures, may fulfill the physics requirements of the new ABR core exam. It helps radiologists deeply understand the physics concepts and more efficiently use

Introduction: Compressive and tensile stresses of core materials are important properties because cores usually replace a large bulk of tooth structure and must resist multidirectional masticatory forces for many years. Material and Methods: The present study was undertaken to find the best core build-up material, with respect to physical properties, among resin-based composites. The compressive, tensile, and flexural strengths of a fiber-reinforced dual-cure resin core build-up material, a silorane-based composite resin, and a dual-curing composite for core build-up, with a silver amalgam core as control, were evaluated and compared using a universal testing machine. Data were statistically analysed using the Kruskal-Wallis test to determine whether statistically significant differences (P < 0.05) existed among core materials. Both dual-cure composite materials with nanofillers were found superior to the amalgam core. The silorane-based material showed the highest flexural strength, but its other mechanical properties were inferior to those of the dual-cure composite materials with nanofillers.

Describes physics experiments (including speed, acceleration, and acceleration due to gravity) in which students write programs to obtain and manipulate experimental data using the Atari microcomputer game port. The approach emphasizes the essential physics of the experiments while affording students useful experience of automatic data collection.…

coordination in timing and distribution of work would probably help improve applicability and avoid duplication of work. CONCLUSIONS: The HTA Core Model can be developed into a platform that enables and encourages true HTA collaboration in terms of distribution of work and maximum utilization of a common pool...

ALICE is the experiment at the LHC collider at CERN dedicated to heavy ion physics. In this report, the ALICE detector will be presented, together with its expected performance as far as some selected physics topics are concerned.

We have developed the design of Thor: a pulsed-power accelerator that delivers a precisely shaped current pulse with a peak value as high as 7 MA to a strip-line load. The peak magnetic pressure achieved within a 1-cm-wide load is as high as 100 GPa. Thor is powered by as many as 288 decoupled and transit-time-isolated bricks. Each brick consists of a single switch and two capacitors connected electrically in series. The bricks can be individually triggered to achieve a high degree of current-pulse tailoring. Because the accelerator is impedance matched throughout, capacitor energy is delivered to the strip-line load with an efficiency as high as 50%. We used iterative finite element method (FEM), circuit, and magnetohydrodynamic simulations to develop an optimized accelerator design. When powered by 96 bricks, Thor delivers as much as 4.1 MA to a load and achieves peak magnetic pressures as high as 65 GPa. When powered by 288 bricks, Thor delivers as much as 6.9 MA to a load and achieves magnetic pressures as high as 170 GPa. We have developed an algebraic calculational procedure that uses the single-brick basis function to determine the brick-triggering sequence necessary to generate a highly tailored current-pulse time history for shockless loading of samples. Thor will drive a wide variety of magnetically driven shockless ramp compression, shockless flyer plate, shock-ramp, equation-of-state, material-strength, phase-transition, and other advanced material physics experiments.
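
The tailoring principle can be sketched numerically: in an impedance-matched adder the load current is the superposition of time-shifted copies of a single-brick basis pulse, so staggering trigger times shapes the total waveform. The basis-pulse shape, amplitude, and times below are invented for illustration and do not reproduce the paper's algebraic procedure.

```python
# Hedged sketch: superposition of shifted single-brick basis pulses.
# The 25 kA amplitude and 100 ns rise time are illustrative assumptions.
import math

def brick_basis(t, rise=100e-9):
    """Toy single-brick current pulse (A): smooth exponential ramp to 25 kA."""
    return 25e3 * (1 - math.exp(-t / rise)) if t >= 0 else 0.0

def load_current(t, triggers):
    """Total load current = sum of basis pulses shifted by each trigger time."""
    return sum(brick_basis(t - t0) for t0 in triggers)

# Four bricks fired simultaneously vs. staggered every 50 ns: staggering
# trades peak current at a given time for a slower, tailored rise.
t = 400e-9
print(load_current(t, [0.0] * 4) > load_current(t, [0.0, 50e-9, 100e-9, 150e-9]))
```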

The pebble-bed reactor HTR-PM is being built in China and is planned to reach criticality in one or two years. At present, one emphasis of engineering design is to determine the fuel management scheme of the initial core and running-in phase. There are many possible schemes, and many factors need to be considered in the process of scheme evaluation and analysis. Based on experience from constructed or previously designed pebble-bed reactors, the fuel enrichment and the ratio of fuel spheres to graphite spheres are important. In this paper, some relevant physical considerations for the initial core and running-in phase of HTR-PM are given. A typical scheme for the initial core and running-in phase is then proposed and simulated with the VSOP code, and some key physical parameters, such as the maximum power per fuel sphere, the maximum fuel temperature, the refueling rate, and the discharge burnup, are calculated. The physical parameters all satisfy the relevant design requirements, which means the proposed scheme is safe and reliable and can provide support for the fuel management of HTR-PM in the future.

As new scientific challenges demand more comprehensive and multidisciplinary investigations, laboratory experiments are not expected to become simpler and/or faster. Experimental investigation is an indispensable element of scientific inquiry and must play a central role in the way current and future generations of scientists make decisions. To turn the complexity of laboratory work (and that of rocks!) into dexterity, engagement, and expanded learning opportunities, we are building an interactive, virtual laboratory reproducing in form and function the Stanford Rock Physics Laboratory at Stanford University. The objective is to combine lectures on laboratory techniques with an online repository of visualized experiments consisting of interactive, 3-D renderings of equipment used to measure properties central to the study of rock physics (e.g., how to saturate rocks, how to measure porosity, permeability, and elastic wave velocity). We use a game creation system together with 3-D computer graphics and a narrative voice to guide the user through the different phases of the experimental protocol. The main advantage gained in employing computer graphics over video footage is that students can virtually open the instrument, single out its components, and assemble it. Most importantly, it helps describe the processes occurring within the rock. The latter cannot be tracked while simply recording the physical experiment, but computer animation can efficiently illustrate what happens inside rock samples (e.g., describing acoustic waves and/or fluid flow through a porous rock under pressure within an opaque core-holder - Figure 1). The repository of visualized experiments will complement lectures on laboratory techniques and constitute an online course offered through the EdX platform at Stanford. This will provide a virtual laboratory for anyone, anywhere, to facilitate teaching/learning of introductory laboratory classes in Geophysics and expand the number of courses

Chemically bonded mixtures have had an evolutionary effect upon the economic and quality aspects of foundry operations since their presentation on the market. The higher output and significantly increased production efficiency of moulds and cores has led to a material increase in the quality and profit of foundries. It can be seen that in the last several years the knowledge of binders based on organic resins has made enormous advances. The higher strength, improved properties under e...

Intel recently released the first commercial boards of its Many Integrated Core (MIC) Architecture. MIC is Intel's solution for the domain of throughput computing, currently dominated by general purpose programming on graphics processors (GPGPU). MIC allows the use of the more familiar x86 programming model and supports standard technologies such as OpenMP, MPI, and Intel's Threading Building Blocks. This should make it possible to develop for both throughput and latency devices using a single code base.

As a result of the 1984 Regional Workshop for the Development of Packages of Adequate Learning Requirements in Population Education, the participants tackled the problem of non-institutionalization of population education into the formal and non-formal educational curricula in their countries. Based on their deliberations, several sets of guidelines and core messages were formulated to provide countries with a more definite direction that will hopefully ensure the functional and effective integration of population education in their respective national school and out-of-school curriculum system. Useful packages of learning materials in population education should help realize the country's population policy and goals within the broader framework of socioeconomic development, and the content of the package should comprehensively cover the core messages of the country's Population Information, Education and Communication (IEC) Program. The population knowledge base of the package should be accurate and relevant; the package should provide for graphic and visual presentation and for assessment of effects on the target groups. Proposed core messages in population education discuss the advantages of small family size and delayed marriage, and aspects of responsible parenthood. Other messages discuss population resource development and population-related beliefs and values.

In this study, the relationship between prospective science and technology teachers' experiences in conducting hands-on physics experiments and their Physics Lab I achievement was investigated. A survey model was utilized, and the study was carried out in the 2012 spring semester. Seven hands-on physics experiments were conducted with 28 prospective…

This article describes an experiment that measures the forces acting on a flying bird during takeoff. The experiment uses a minimum of equipment and only an elementary knowledge of kinematics and Newton's second law. The experiment involves first digitally videotaping a bird during takeoff, analyzing the video to determine the bird's position as a…
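
The analysis the article describes can be sketched with a short calculation: differentiate the digitized positions twice to get acceleration, then apply Newton's second law. The mass, frame rate, and position samples below are invented for illustration, not taken from the article.

```python
# Sketch (illustrative values): net upward force on a bird during takeoff,
# estimated from digitized vertical positions y(t) via central differences
# and Newton's second law, F = m*(a + g).

def takeoff_force(y, dt, mass, g=9.81):
    """Per-frame net upward force from a list of vertical positions y (m)."""
    forces = []
    for i in range(1, len(y) - 1):
        a = (y[i + 1] - 2 * y[i] + y[i - 1]) / dt**2  # central second difference
        forces.append(mass * (a + g))                 # legs+wings must also beat gravity
    return forces

# Hypothetical 120 fps video of a 0.5 kg bird accelerating upward at 12 m/s^2
dt = 1.0 / 120.0
y = [0.5 * 12.0 * (i * dt) ** 2 for i in range(6)]
print([round(f, 2) for f in takeoff_force(y, dt, mass=0.5)])
```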

An experiment is conducted for obtaining quantum dots for physical or materials chemistry. This experiment serves both to reinforce the basic concept of quantum confinement and to provide a useful bridge between the molecular and solid-state worlds.

This contribution describes forward physics measurements that can be made with the current ATLAS forward detectors, including the upgrade project AFP. The aim of AFP is to tag very-forward-going protons at high luminosities.

Contains more than 1,800 experiments in elementary particle physics from the Experience database. Search and browse by author; title; experiment number or prefix; institution; date approved, started or completed; accelerator or detector; polarization, reaction, final state or particle; or by papers produced. Maintained at SLAC for the Particle Data Group. Supplies the information for Current Experiments in Particle Physics (LBL-91). Print version updated every second year.

Dose-related radiobiological research results can only be compared meaningfully when radiation dosimetry is standardized. To this end, the National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Medical Countermeasures Against Radiological Threats (MCART) consortium recently created a Radiation Physics Core (RPC) as the entity responsible for standardizing radiation dosimetry practices among its member laboratories. The animal research activities in these laboratories use a variety of ionizing photon beams from several irradiators, such as 250-320 kVp x-ray generators, 137Cs irradiators, 60Co teletherapy machines, and medical linear accelerators (LINACs). In addition to this variety of sources, these centers use a range of irradiation techniques and different dose calculation schemes to conduct their experiments. An extremely important objective in these research activities is to obtain a Dose Response Relationship (DRR) appropriate to their respective organ-specific models of acute and delayed radiation effects. A clear and unambiguous definition of the DRR is essential for the development of medical countermeasures, and it is imperative that these DRRs be transparent between centers. The MCART RPC has initiated the establishment of standard dosimetry practices among member centers and is introducing a Remote Dosimetry Monitoring Service (RDMS) to ascertain ongoing quality assurance. This paper describes the initial activities of the MCART RPC toward implementing these standardization goals, with the intent of reporting the full implementation at a later date.

The International Continental Scientific Drilling Program (ICDP) and the U.S. Geological Survey (USGS) drilled three core holes to a composite depth of 1766 m within the moat of the Chesapeake Bay impact structure. Core recovery rates from the drilling were high (~90%), but problems with core hole collapse limited the geophysical downhole logging to natural-gamma and temperature logs. To supplement the downhole logs, ~5% of the Chesapeake Bay impact structure cores was processed through the USGS GeoTek multisensor core logger (MSCL) located in Menlo Park, California. The measured physical properties included core thickness (cm), density (g cm-3), P-wave velocity (m s-1), P-wave amplitude (%), magnetic susceptibility (cgs), and resistivity (ohm-m). Fractional porosity was a secondary calculated property. The MSCL data-sampling interval for all core sections was 1 cm longitudinally. Photos of each MSCL sampled core section were imbedded with the physical property data for direct comparison. These data have been used in seismic, geologic, thermal history, magnetic, and gravity models of the Chesapeake Bay impact structure. Each physical property curve has a unique signature when viewed over the full depth of the Chesapeake Bay impact structure core holes. Variations in the measured properties reflect differences in pre-impact target-rock lithologies and spatial variations in impact-related deformation during late-stage crater collapse and ocean resurge. © 2009 The Geological Society of America.
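
The "secondary calculated property" mentioned above can be illustrated with the standard density-porosity relation. The grain density (quartz) and pore-fluid density (water) below are assumptions for illustration; the report does not state which values the USGS used.

```python
# Illustrative back-calculation of fractional porosity from MSCL bulk density,
# assuming a quartz grain density of 2.65 g/cm^3 and water-filled pores.

def fractional_porosity(rho_bulk, rho_grain=2.65, rho_fluid=1.00):
    """phi = (rho_grain - rho_bulk) / (rho_grain - rho_fluid); densities in g/cm^3."""
    return (rho_grain - rho_bulk) / (rho_grain - rho_fluid)

phi = fractional_porosity(2.10)  # a plausible sediment bulk density
print(round(phi, 3))
```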

A laser fusion experiment was performed at the Shenguang II facility. An image of the thermonuclear burning region was obtained with a Fresnel zone-plate coded-imaging technique, in which the laser-driven target served as an α-particle source; the coded image obtained in the experiment was reconstructed numerically.

different flows. Instead of maintaining these approaches separate, we propose a protocol (CORE) that brings together these coding mechanisms. Our protocol uses random linear network coding (RLNC) for intra-session coding but allows nodes in the network to set up inter-session coding regions where flows intersect. Routes for unicast sessions are agnostic to other sessions and set up beforehand; CORE will then discover and exploit intersecting routes. Our approach allows the inter-session regions to leverage RLNC to compensate for losses or failures in the overhearing or transmitting process. Thus, we increase the benefits of XORing by exploiting the underlying RLNC structure of individual flows. This goes beyond providing additional reliability to each individual session and beyond exploiting coding opportunistically. Our numerical results show that CORE outperforms both forwarding and COPE...
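
The intra-session RLNC building block can be sketched in a few lines. This toy uses GF(2), so coded packets are random XOR combinations and decoding is Gaussian elimination; CORE's actual field size, packetization, and inter-session XOR regions are all simplified away for illustration.

```python
# Toy sketch of intra-session random linear network coding (RLNC) over GF(2).
import random

def encode(packets, k, rng):
    """Produce one coded packet: (random nonzero coefficient vector, XOR payload)."""
    while True:
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if any(coeffs):
            break
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the k originals, or None if rank < k."""
    pivots = {}
    for c, p in coded:
        c = list(c)
        for col in range(k):
            if c[col]:
                if col in pivots:                       # reduce against existing pivot
                    pc, pp = pivots[col]
                    c = [a ^ b for a, b in zip(c, pc)]
                    p ^= pp
                else:
                    pivots[col] = (c, p)
                    break
    if len(pivots) < k:
        return None
    out = [0] * k
    for col in sorted(pivots, reverse=True):            # back-substitution
        c, p = pivots[col]
        for j in range(col + 1, k):
            if c[j]:
                p ^= out[j]
        out[col] = p
    return out

rng = random.Random(1)
originals = [0b1011, 0b0110, 0b1110]
coded, result = [], None
while result is None:                # collect coded packets until the matrix has full rank
    coded.append(encode(originals, 3, rng))
    result = decode(coded, 3)
print(result == originals)
```

Redundant coded packets beyond the first three are what let a receiver tolerate losses: any full-rank subset decodes.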

Two hundred eighty-nine 2010 business management graduates of the Rizal Technological University, a state university in the Philippines, were traced and their employment experiences solicited using the social networking site Facebook (FB). Using a descriptive research design, the data were used to portray the traced graduates' employment profiles and to capture their reflections, exploring a wider array of dimensions to shed light on the graduate employment situation and contribute to solving the government's education and economic problems. One hundred eighty-six, or 64%, of the 289 traced graduates were found to have verifiable employment data. Of the 186, 43 (23%) were working in the banking and finance-related sector and 39 (21%) were in trading and merchandising. The study showed that most of the graduates were able to start a career path by gaining entry to prestigious companies. Five themes emerged from the 16 reflections: self-confidence in performing assigned tasks, positive attitude, value of experience, traits developed in school, and financial considerations. The themes reflected three core essences or factors that made the graduates stay on the job or contributed to their being retained by their employers: financial (being able to buy what they want and help the family with daily expenses); attitude, self-confidence, and skills developed in school, as contributory to work performance and retention; and experience, the main reason for staying on the job despite physical and emotional difficulties.

Activity-based collisional analysis is developed for introductory physics and astronomy laboratory experiments. Crushable floral foam is used to investigate the physics of projectiles undergoing completely inelastic collisions with a low-density solid forming impact craters. Simple drop experiments enable determination of the average acceleration,…
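
The drop-experiment analysis mentioned above reduces to a one-line energy balance. The drop height and crater depth below are invented for illustration, not the lab's values.

```python
# Sketch: for a projectile dropped from height h that stops in a crater of
# depth d, the work-energy theorem m*g*(h + d) = F_avg*d gives the average
# deceleration inside the foam, a = g*(h + d)/d.

def average_deceleration(h, d, g=9.81):
    """Average deceleration (m/s^2) during crater formation."""
    return g * (h + d) / d

a = average_deceleration(h=1.0, d=0.05)  # hypothetical 1 m drop, 5 cm crater
print(round(a / 9.81, 1), "g's of average deceleration")
```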

Introductory physics laboratory curricula usually include experiments on the moment of inertia, the centre of gravity, the harmonic motion of a physical pendulum, and Steiner's theorem. We present a simple experiment using very low cost equipment for investigating these subjects in the general case of an asymmetrical test body. (Contains 3 figures…
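
The combination of Steiner's theorem and the physical-pendulum period can be checked with a short calculation. The rod dimensions below are illustrative; the uniform rod pivoted at one end is a standard textbook case, not the asymmetrical body of the article.

```python
# Sketch: period of a physical pendulum, T = 2*pi*sqrt(I / (m*g*d)), with the
# moment of inertia about the pivot obtained from Steiner's (parallel-axis)
# theorem, I = I_cm + m*d**2, where d is the pivot-to-centre-of-gravity distance.
import math

def pendulum_period(I_cm, m, d, g=9.81):
    I = I_cm + m * d**2                      # Steiner's theorem
    return 2 * math.pi * math.sqrt(I / (m * g * d))

# Check: uniform rod of length L pivoted at one end (I_cm = m*L^2/12, d = L/2)
# reduces to the known result T = 2*pi*sqrt(2*L/(3*g)).
m, L = 0.3, 0.5                              # illustrative mass and length
T = pendulum_period(m * L**2 / 12, m, L / 2)
print(round(T, 4))
```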

Makes use of distinctions between experiment and demonstrations to resolve a paradox for the sociology of scientific knowledge. Describes two public tests which illustrate these themes. Discusses types of core-set distortion and suggests a partial solution. (YP)

Chemically bonded mixtures have had an evolutionary effect upon the economic and quality aspects of foundry operations since their presentation on the market. The higher output and significantly increased production efficiency of moulds and cores has led to a material increase in the quality and profit of foundries. It can be seen that in the last several years the knowledge of binders based on organic resins has made enormous advances. The higher strength, improved properties at elevated temperatures, the reduced environmental impact of organic binders, and at the same time their highly improved regenerability ensure that these systems will become still more significant binding systems. Organic binding systems have predominantly been developed recently. The AlpHaset technology is ranked among the alkali binding systems. This technology has certain disadvantages (lower strength and slower hardening), which have been gradually eliminated.

This report contains summaries of 551 approved experiments in elementary particle physics (experiments that finished taking data before 1 January 1980 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Moscow Institute of Theoretical and Experimental Physics, Tokyo Institute of Nuclear Studies, KEK, LAMPF, Leningrad Nuclear Physics Institute, Saclay, Serpukhov, SIN, SLAC, and TRIUMF, and also experiments on proton decay. Properties of the fixed-target beams at most of the laboratories are summarized. Instructions are given for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.

A whole-core Monte Carlo n-particle (MCNP) model of a simplified CANDU reactor was developed and used to study core configurations and reactor physics phenomena of interest in CANDU safety analysis. The resulting reactivity data were compared with values derived from corresponding WIMS-AECL/RFSP two-neutron-energy-group diffusion theory core simulations, thereby extending the range of CANDU-related code-to-code benchmark comparisons to include whole-core representations. These comparisons show a systematic discrepancy of about 6 mk between the respective absolute keff values, but very good agreement, to within about -0.15 ± 0.06 mk, for the reactivity perturbation induced by G-core checkerboard coolant voiding. These findings are generally consistent with the results of much simpler uniform-lattice comparisons involving only WIMS-AECL and MCNP. In addition, MCNP fission-energy tallies were used to evaluate other core-wide properties, such as fuel bundle and total-channel power distributions, as well as intra-bundle details, such as outer-fuel-ring relative power densities and outer-ring fuel element azimuthal power variations, which cannot be determined directly from WIMS-AECL/RFSP core calculations. The average MCNP values for the ratio of outer fuel element to average fuel element power density agreed well with corresponding values derived from WIMS-AECL lattice-cell cases, showing a small systematic discrepancy of about 0.5%, independent of fuel burn-up. For fuel bundles containing the highest-power fuel elements, the maximum peak-to-average outer-element azimuthal power variation was about 2.5% for cases where a statistically significant trend was observed, while much larger peak-to-average outer-element azimuthal power variations of up to around 42% were observed in low-power fuel bundles at the core/radial-neutron-reflector interface. (author)

In the last two decades a number of nuclear structure and astrophysics experiments were performed at heavy-ion storage rings employing unique experimental conditions offered by such machines. Furthermore, building on the experience gained at the two facilities presently in operation, several new storage ring projects were launched worldwide. This contribution is intended to provide a brief review of the fast growing field of nuclear structure and astrophysics research at storage rings.

This paper covers core-shell nanomaterials, mainly polymer core-polymer shell, polymer core-metal shell, and polymer core-nonmetal shell structures. Various synthesis techniques, properties, and applications of these materials are discussed, including a detailed discussion of the properties together with the relevant experimental parameters. The various characterization techniques for core-shell nanostructures are also discussed, and their physical and chemical properties are addressed. The future prospects of such core-shell nanostructures for biomedical and various other applications are discussed with a special emphasis on their properties.

We investigated the effects of student-generated problems on exams. The process was gradual, with some training throughout the semester. Initial results were highly positive, with the students involved performing significantly better and showing a statistically significant improvement (t = 5.04) compared with the rest of the class on average. Overall, performance improved when students generated problems. Motivation was a limiting factor. There is significant potential for improving student learning of physics and other problem-based topics.
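
A comparison of this kind can be sketched with a two-sample t statistic. The exam scores below are invented for illustration (the abstract does not publish its data), and Welch's unequal-variance form is used here; the study's exact test variant is not stated.

```python
# Illustrative sketch: Welch's two-sample t statistic comparing exam scores of
# students who generated problems against the rest of the class.
from statistics import mean, variance

def welch_t(a, b):
    """t = (mean_a - mean_b) / sqrt(s_a^2/n_a + s_b^2/n_b), sample variances."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((variance(a) / na + variance(b) / nb) ** 0.5)

generators = [82, 88, 79, 91, 85, 87]   # hypothetical scores, problem generators
others     = [70, 74, 68, 77, 72, 69]   # hypothetical scores, rest of class
print(round(welch_t(generators, others), 2))
```

A large positive t (compared against the appropriate t distribution) indicates the difference in means is unlikely to be due to chance.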

A major problem of concern for the success of physical modelling resides in the availability of experimental data for model validation, particularly in the hot hypersonic regime. In the past, validation data have been obtained as a secondary product of expensive space-transportation programs. Since in the last ten years there has been almost no successful program, due to a lack of investment, no new experimental data are available. Accordingly, a new trend is emerging toward low-cost technology vali...

While it has been agreed by the members of the European Community (except the UK) that all secondary students should study two EC languages in addition to their own, in Australia the recent emphasis has been on teaching languages for external trade, particularly in the Asian region. This policy over-looks the 13 per cent of the Australian population who already speak a language other than English at home (and a greater number who are second generation immigrants), and ignores the view that it is necessary to foster domestic multiculturalism in order to have fruitful links with other cultures abroad. During the 1980s there have been moves to reinforce the cultural identity of Australians of non-English speaking background, but these have sometimes been half-hearted and do not fully recognise that cultural core values, including language, have to achieve a certain critical mass in order to be sustainable. Without this recognition, semi-assimilation will continue to waste the potential cultural and economic contributions of many citizens, and to lead to frustration and eventual violence. The recent National Agenda for a Multicultural Australia addresses this concern.

Presents two experiments: the first measures the heat of an exothermic reaction, the reduction of permanganate by the ferrous ion; the second measures the heat of an endothermic process, the mixing of ethanol and cyclohexane. Lists tables to aid in the use of the solution calorimeter. (MVL)

In Shifting Standards, Allan Franklin provides an overview of notable experiments in particle physics. Using papers published in Physical Review, the journal of the American Physical Society, as his basis, Franklin details the experiments themselves, their data collection, the events witnessed, and the interpretation of results. From these papers, he distills the dramatic changes in particle physics experimentation from 1894 through 2009. Franklin develops a framework for his analysis, viewing each example according to exclusion and selection of data; possible experimenter bias; details of the experimental apparatus; size of the data set, apparatus, and number of authors; rates of data taking along with analysis and reduction; distinction between ideal and actual experiments; historical accounts of previous experiments; and personal comments and style. From Millikan's tabletop oil-drop experiment to the Compact Muon Solenoid apparatus measuring approximately 4,000 cubic meters (not including accelerators) and...

ATLAS is a general-purpose detector due to start operation next year at the Large Hadron Collider (LHC). The LHC will collide pairs of protons at a centre-of-mass energy of 14 TeV, with a bunch-crossing frequency of 40 MHz, and luminosities up to L = 10^34 cm^-2s^-1. The identification of photons is crucial for the study of a number of physics channels, including the search for a Higgs boson decaying to photon pairs, and measurements of direct production of single photons and photon pairs. Events containing true high-p_T photons must be selected with high efficiency, while rejecting the bulk of high-p_T jet events produced with an enormously larger rate through QCD processes. The photon-photon and photon-jet channels are interesting in their own right, allowing the study of QCD at high energy. It is also essential to understand these processes as the dominant background in the search for certain new physics processes, notably the production and decay of Higgs bosons to photon pairs. There are large uncertaintin...

As a rule, students enjoy conducting experiments in which the practical aspects are straightforward and well-defined. This also applies even when there is no anticipated result for students to "prove." A laboratory exercise with such properties was created for students to undertake in a completely blind manner, and they happily proceeded without any knowledge at all of what they might expect to find. The philosophy developed for the research in this paper expands the pioneering approach formulated some half century ago and successfully employed more recently. In the present era of differentiated instruction (DI) being implemented in a diversity of educational settings, the design of the subject experiment is especially significant for its inclusive nature and for the positive outcomes it produces for less academically capable students. All students benefit from such an environment because it preempts the wasted effort of undue manipulation and it removes the need to contrive agreement with a textbook via irregular attempts at reverse engineering.

The discovery of the Higgs boson made headlines around the world. Two scientists, Peter Higgs and Francois Englert, whose theories predicted its existence, shared a Nobel Prize. The discovery was the culmination of the largest experiment ever run, the ATLAS and CMS experiments at CERN's Large Hadron Collider. But what really is a Higgs boson and what does it do? How was it found? And how has its discovery changed our understanding of the fundamental laws of nature? And what did it feel like to be part of it? Jon Butterworth is one of the leading physicists at CERN and this book is the first popular inside account of the hunt for the Higgs. It is a story of incredible scientific collaboration, inspiring technological innovation and ground-breaking science. It is also the story of what happens when the world's most expensive experiment blows up, of neutrinos that may or may not travel faster than light, and the reality of life in an underground bunker in Switzerland. This book will also leave you with a working...

An overview of recent top quark measurements in proton-proton collisions at √s = 7 and 8 TeV, recorded with the CMS experiment at the LHC during 2011 and 2012, is presented. Measurements of top quark pair production cross sections in several top quark final states are reported, as well as electroweak production of single top quarks in both the t- and tW-channels. The mass of the top quark is estimated by different methods.

The Compass experiment at the CERN Super Proton Synchrotron has acquired large data sets, which allow the study of light-quark meson and baryon spectra in unprecedented detail. The presented overview of the first results from this data set focuses in particular on the light meson sector and presents a detailed analysis of three-pion final states. A new JPC = 1++ state, the a1(1420), is observed with a mass and width in the ranges m = 1412−1422 MeV/c2 and Γ = 130−150 MeV/c2.

Development of the modern school physics experiment relies on the extensive use of ICT, and not only for data processing and visualization. Interactive computer simulations of processes and phenomena, developed by the scientists and methodologists behind the PhET site, help to improve the physics demonstration experiment with the support of modern pedagogical technologies that change the traditional procedure for forming students' understanding of processes and phenomena and that encourage active cognitive activity. Teachers from the international community, as well as Ukrainian scientists and physics teachers, were involved in studying how integrating interactive computer simulations improves students' understanding of physical processes, phenomena, and laws. The aim of the article is to present the research results on the development and testing of individual components of educational technology for performing a physics experiment in secondary school.

There is recognition that the provision of excellence in education and training results in a skilled and competent workforce. However, the educational experiences of dental core trainees (DCTs) working in the hospital oral and maxillofacial surgery (OMFS) setting have not previously been investigated. In this paper, we examine DCTs' learning experiences, both 'formal' and 'non-formal', within the hospital setting of ward- and clinic-based teaching. Are hospital dental core trainees receiving a meaningful educational experience? To conclude this paper, the authors recommend methods, based upon sound educational principles, to maximise the value of clinical sessions for teaching.

We discuss weak kaon decays in a scenario in which the Standard Model is extended by massive sterile fermions. After revisiting the analytical expressions for leptonic and semileptonic decays we derive the expressions for decay rates with two neutrinos in the final state. By using a simple effective model with only one sterile neutrino, compatible with all current experimental bounds and general theoretical constraints, we conduct a thorough numerical analysis which reveals that the impact of the presence of massive sterile neutrinos on kaon weak decays is very small, less than 1% on decay rates. The only exception is B(KL→νν̄), which can go up to O(10^-10), thus possibly within the reach of the KOTO, NA62 and SHiP experiments. Plans have also been proposed to search for this decay at the NA64 experiment. In other words, if all the future measurements of weak kaon decays turn out to be compatible with the Standard Model predictions, this will not rule out the existence of massive light sterile neutrinos with non-negligible active-sterile mixing. Instead, for a sterile neutrino of mass below mK, one might obtain a huge enhancement of B(KL→νν̄), otherwise negligibly small in the Standard Model.

The Polymers and Cross-Linking experiment is presented via a new three phase learning cycle: CORE (Chemical Observations, Representations, Experimentation), which is designed to model productive chemical inquiry and to promote a deeper understanding about the chemistry operating at the submicroscopic level. The experiment is built on two familiar…

In support of experiments in the ATR, a new methodology was devised for loading the ATR Driver Core. This methodology will replace the existing methodology used by the INL Neutronic Analysis group to analyze experiments. This paper presents the as-run analysis for ATR Cycle 152B, specifically comparing measured lobe powers and eigenvalue calculations.

This book presents experiments which will teach physics relevant to astronomy. The astronomer, as instructor, frequently faces this need when his college or university has no astronomy department and any astronomy course is taught in the physics department. The physicist, as instructor, will find this intellectually appealing when faced with teaching an introductory astronomy course. From these experiments, the student will acquire important analytical tools, learn physics appropriate to astronomy, and experience instrument calibration and the direct gathering and analysis of data. Experiments that can be performed in one laboratory session as well as semester-long observation projects are included. This textbook is aimed at undergraduate astronomy students.

Microfiche are included which contain summaries of 479 experiments in elementary particle physics. Experiments are included at the following laboratories: Brookhaven (BNL); CERN; CESR; DESY; Fermilab (FNAL); Institute for Nuclear Studies (INS); KEK; LAMPF; Serpukhov (SERP); SIN; SLAC; and TRIUMF. Summaries of proton decay experiments are also included. A list of experiments and titles, a beam-target-momentum index, and a spokesperson index are given. Properties of beams at the facilities are tabulated.

Describes a noncalculus "medical physics" course with a basic element of direct hospital field experience. The course is intended primarily for premedical students but may be taken by nonscience majors.

Many simple experiments can be performed in the classroom to explore the physics of vision. Students can learn of the two types of receptive cells (rods and cones), their distribution on the retina and the existence of the blind spot.

Effective communication of basic research to non-experts is necessary to inspire the public and to justify support for science by the taxpayers. The creative power of art is particularly important to engage an adult audience, who otherwise might not be receptive to standard didactic material. Interdisciplinarity defines new trends in research, and works at the intersection of art and science are growing in popularity, even though they are often isolated experiments. We present a public-facing collaboration between physicists/engineers performing research in fluid dynamics, and audiovisual artists working in cutting-edge media installation and performance. The result of this collaboration is a curated exhibition, with supporting public programming. We present the artworks, the lessons learned from the interactions between artists and scientists, the potential outreach impact, and future developments. This project is supported by the APS Public Outreach Mini Grant.

INTRODUCTION: Volatile Organic Compounds (VOCs) and health. VOCs, present in the indoor air and adsorbed on/desorbed from solid surfaces, are suspected to contribute significantly to a number of health problems through respiration of air and polluted dust and through direct contact with the skin. VOC sources. ADSORPTION/DESORPTION IN BUILDING MATERIALS: Short description of our research project, which deals with lab-size and full-scale experiments, mathematical modelling, and development of a standard test method for characterization of the sorption properties of indoor materials. STUDIES OF ADSORPTION/DESORPTION IN DUST: Collection and description of dust, experimental setup, and procedure for measuring equilibria and kinetics. Experimental results for adsorption/desorption of a gaseous mixture of synthetic air, 2-butoxyethanol and water on different dust samples.

The CUORE (Cryogenic Underground Observatory for Rare Events) experiment plans to construct and operate an array of 1000 cryogenic thermal detectors, each with a mass of 760 g, to investigate rare-events physics, in particular double beta decay and non-baryonic particle dark matter. A first step towards CUORE is CUORICINO, an array of 56 such bolometers, currently being installed at Gran Sasso. In this paper we report the physics potential of both stages of the experiment regarding neu...

The growing complexity of physical experiments and increasing volumes of experimental data necessitate the application of supercomputer and distributed computing systems for data processing. The design and development of such systems, their mathematical modeling, and the investigation of their characteristics and functional capabilities constitute an urgent scientific and practical problem. In the present work, the operating characteristics of such a distributed system for processing data from physical experiments are investigated using the apparatus of queuing network theory.

Comprehension of the alloying effects of major candidate light elements on the phase diagram and elasticity of iron addresses pressing questions about the composition, thermal structure, and seismic features of the Earth's core. Integrating this mineral physics research with the educational objectives of the CAREER award was facilitated by collaboration with the University of Texas at Austin's premier teaching program, UTeach. The UTeach summer outreach program hosts three one-week summer camps every year, exposing K-12 students to university-level academia and emphasizing math and science initiatives and research. Each week of the camp focuses on math, chemistry, or geology. Many of the students were underrepresented minorities and some required simultaneous translation; this reflects the demographics of the region and posed some language-barrier challenges. The students' opportunity to see first-hand what it is like to be on a university campus, and to be in a research environment such as the mineral physics lab, helps them to visualize themselves in academia in the future. A collection of displayable materials with information about deep-Earth research was made available to participating students and teachers to disseminate accurate scientific knowledge and enthusiasm. These items included a diamond anvil cell and diagrams of the diamond crystal structure, the layers of the Earth, and the phases of carbon, showing that one element can have very different physical properties purely based on differences in structure. The students learned how advanced X-ray and optical laser spectroscopies are used to study properties of planetary materials in the diamond anvil cell. Great emphasis was placed on the basic mathematical relationship between force, area, and pressure, the fundamental principle involved in diamond anvil cell research. Undergraduate researchers from the lab participated in the presentations and hands-on experiments, and answered any

This paper presents an overview of fast reactor core physics results obtained in the context of the CAPRA-CADRA European collaborative programme, whose aim is to investigate a broad range of possible options for plutonium and radioactive waste management. Different types of fast reactors have been studied to evaluate their potential capabilities with respect to the long-term management of plutonium, minor actinides (MAs) and long-lived fission products (LLFPs). Among the several options aiming at reducing waste and consequently radiotoxicity are: homogeneous recycling of minor actinides, heterogeneous recycling of minor actinides either without or with moderation, dedicated critical cores (fuelled mainly with minor actinides), and Accelerator Driven System (ADS) variants. In order to achieve a detailed understanding of the potential of the various options, advanced core physics methods have been implemented, tested and applied, for example, to improving control rod modeling and to studying safety aspects. There has also been code development and experimental work carried out to improve the understanding of fuel performance behaviors.

This is the first part of a study to develop a modern theory of the physical libration of the Moon caused by a liquid core. We use a special approach to studying the Moon's rotation relying on Poincaré's planetary model and special forms of the equations of motion in Andoyer and Poincaré variables. We construct expansions of the force function of the problem (the second harmonic of the selenopotential) in Andoyer and Poincaré variables for a high-precision description of the disturbed orbital motion of the Moon. We investigate the main regularities in lunar rotational motion, treating the Moon as a body with a solid nonspherical mantle and an ellipsoidal liquid core. Following Poincaré, the motion of the ideal liquid of the core is taken to be simple. The Cassini laws can be dynamically interpreted for the motion of a synchronous satellite with a liquid core. The Cassini angle (the inclination of the rotation axis relative to the normal to the ecliptic plane) determined by us is in close agreement with its determinations from laser observations.

The calculated predictions of reactor physics parameters in a metallic-fueled LMFBR were tested using benchmark experiments performed at FCA. Reactivity feedback parameters such as the sodium void worth, the Doppler reactivity worth, and the 238U-capture-to-239Pu-fission ratio were measured. The fuel expansion reactivity was also measured. A direct comparison with the results from a similar oxide-fuel assembly was made. The analysis was done with the JENDL-2 cross section library and JENDL-3.2. Predictions of reactor physics parameters with JENDL-3.2 in the metallic-fueled core agreed reasonably well with the measured values and showed trends similar to the results in the oxide-fuel core.

The role of autonomy in the student experience in a large-enrollment undergraduate introductory physics course was studied from a Self-Determination Theory perspective with two studies. Study I, a correlational study, investigated whether certain aspects of the student experience correlated with how autonomy supportive (vs. controlling) students…

To familiarize first-year students with the important ingredients of a physics experiment, we offer them a project close to their daily life: measuring the effect of air resistance on a bicycle. Experiments are done with a bicycle freewheeling on a downhill slope. The data are compared with equations of motion corresponding to different models…

This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss-Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton-Krylov (JFNK). The performance of the methods are evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
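The coupling schemes compared in this abstract can be illustrated on a generic fixed-point problem. The sketch below is illustrative only: a toy contractive linear map stands in for the coupled neutronics/thermal-hydraulics solve, and all names are hypothetical; it contrasts plain Picard iteration with depth-m (type II) Anderson acceleration.

```python
import numpy as np

def picard(G, x0, tol=1e-10, maxit=500):
    """Plain fixed-point (Picard) iteration: x <- G(x) until convergence."""
    x = x0.copy()
    for k in range(1, maxit + 1):
        x_new = G(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, maxit

def anderson(G, x0, m=3, tol=1e-10, maxit=500):
    """Anderson acceleration (type II, depth m) of the same fixed-point map."""
    x = x0.copy()
    Gs, Fs = [], []                      # histories of G(x) values and residuals
    for k in range(1, maxit + 1):
        g = G(x)
        f = g - x                        # fixed-point residual
        if np.linalg.norm(f) < tol:
            return g, k
        Gs.append(g); Fs.append(f)
        if len(Fs) > m + 1:              # keep at most m+1 history entries
            Gs.pop(0); Fs.pop(0)
        n = len(Fs)
        if n == 1:
            x = g                        # first step is a plain Picard step
        else:
            # Minimize || sum_i alpha_i * Fs[i] || subject to sum_i alpha_i = 1
            dF = np.column_stack([Fs[i] - Fs[-1] for i in range(n - 1)])
            beta, *_ = np.linalg.lstsq(dF, -Fs[-1], rcond=None)
            alpha = np.append(beta, 1.0 - beta.sum())
            x = sum(a * gi for a, gi in zip(alpha, Gs))

# Toy contractive linear map G(x) = A x + b standing in for the coupled solve
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 1.0])
G = lambda x: A @ x + b

x_pic, it_pic = picard(G, np.zeros(2))
x_and, it_and = anderson(G, np.zeros(2))
```

On this linear toy problem Anderson acceleration converges in a handful of steps, while Picard converges linearly at the spectral radius of A; the JFNK variants discussed in the abstract instead apply Newton's method to the coupled residual, with the Jacobian-vector products approximated rather than formed.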

To evaluate the impact of a physical-assessment learning experience implemented in the problem-based learning (PBL) format of the third year of a doctor of pharmacy (PharmD) program, students enrolled in a PBL course completed survey instruments to measure knowledge and confidence before and after participating in the learning experience. A simulation stethoscope was used to teach students abnormal pulmonary and cardiovascular sounds in 1-hour sessions for each of 12 PBL groups. The 92 students enrolled in the PBL course completed pre- and post-experience survey instruments. Students' scores on knowledge questions increased significantly from the pre- to the post-experience survey, and on questions measuring confidence students scored a median of 3 or 4 on a 5-point Likert scale after the learning experience. Use of a simulation stethoscope in a physical-assessment learning experience increased pharmacy students' knowledge in performing pulmonary and cardiovascular assessment techniques.

The Interactive NASA Space Physics Ionosphere Radio Experiment (INSPIRE) designed to assist in a Space Experiments with Particle Accelerators (SEPAC) project is discussed. INSPIRE is aimed at recording data from a large number of receivers on the ground to determine the exact propagation paths and absorption of radio waves at frequencies between 50 Hz and 7 kHz. It is indicated how to participate in the experiment that will involve high school classes, colleges, and amateur radio operators.

[This paper is part of the Focused Collection on Gender in Physics.] There is growing evidence of persistent gender achievement gaps in university physics instruction, not only for learning physics content, but also for developing productive attitudes and beliefs about learning physics. These gaps occur in both traditional and interactive-engagement (IE) styles of physics instruction. We investigated one gender gap in the area of attitudes and beliefs. This was men's and women's physics self-efficacy, which comprises students' thoughts and feelings about their capabilities to succeed as learners in physics. According to extant research using pre- and post-course surveys, the self-efficacy of both men and women tends to be reduced after taking traditional and IE physics courses. Moreover, self-efficacy is reduced further for women than for men. However, it remains unclear from these studies whether this gender difference is caused by physics instruction. It may be, for instance, that the greater reduction of women's self-efficacy in physics merely reflects a broader trend in university education that has little to do with physics per se. We investigated this and other alternative causes, using an in-the-moment measurement technique called the Experience Sampling Method (ESM). We used ESM to collect multiple samples of university students' feelings of self-efficacy during four types of activity for two one-week periods: (i) an introductory IE physics course, (ii) students' other introductory STEM courses, (iii) their non-STEM courses, and (iv) their activities outside of school. We found that women experienced the IE physics course with lower self-efficacy than men, but for the other three activity types, women's self-efficacy was not reliably different from men's. We therefore concluded that the experience of physics instruction in the IE physics course depressed women's self-efficacy. Using complementary measures showing the IE physics course to be similar to

The microscopic physics of the thermonuclear runaway in highly degenerate carbon-oxygen cores is investigated to determine if and how a detonation wave is generated. An expression for the electron-ion relaxation time is derived under the assumption of large degeneracy and extreme relativity of the electrons in a two-temperature plasma. Since the nuclear burning time proves to be several orders of magnitude shorter than the relaxation time, it is concluded that in studying the structure of the detonation wave the electrons and ions must be treated as separate fluids.

This brochure has been prepared by NASA on behalf of the European Space Agency (ESA), the Institute of Space and Astronautical Science (Japan) (ISAS), and the U.S. National Aeronautics and Space Administration (NASA) to describe the scope of the science problems to be investigated and the mission plan for the core International Solar-Terrestrial Physics (ISTP) Program. This information is intended to stimulate discussions and plans for the comprehensive worldwide ISTP Program. The plan for the study of the solar-terrestrial system is included. The Sun, geospace, and Sun-Earth interactions are discussed, as are solar dynamics and the origin of the solar wind.

In future Light Water Reactors, special devices (core catchers) might be required to prevent containment failure by basement erosion after reactor pressure vessel melt-through during a core meltdown accident. Quick freezing of the molten core masses is desirable to reduce the release of radioactivity. Several core catcher concepts have been proposed based on spreading the corium melt onto flat surfaces with subsequent cooling by flooding with water. Therefore, a series of experiments to investigate high-temperature melt spreading on flat surfaces has been carried out using alumina-iron thermite melts as a simulant. The oxidic thermite melt is conditioned by adding other oxides to simulate a realistic corium melt as closely as possible. Spreading of oxidic and metallic melts has been performed in one- and two-dimensional geometry. Substrates were chemically inert ceramic layers, dry concrete, and concrete with a shallow water layer on top.

The Thermionic Reactor Critical Experiments (TRCE) consisted of fast spectrum highly enriched U-235 cores reflected by different thicknesses of beryllium or beryllium oxide with a transition zone of stainless steel between the core and reflector. The mixed fast-thermal spectrum at the core reflector interface region poses a difficult neutron transport calculation. Calculations of TRCE using ENDF/B fast spectrum data and GATHER library thermal spectrum data agreed within about 1 percent for the multiplication factor and within 6 to 8 percent for the power peaks. Use of GAM library fast spectrum data yielded larger deviations. The results were obtained from DOT R Theta calculations with leakage cross sections, by region and by group, extracted from DOT RZ calculations. Delineation of the power peaks required extraordinarily fine mesh size at the core reflector interface.

In October through November 2006, scientists from the U.S. Geological Survey (USGS) Eastern Region Earth Surface Processes Team (EESPT) and the Raleigh (N.C.) Water Science Center (WSC), in cooperation with the North Carolina Geological Survey (NCGS) and the Onslow County Water and Sewer Authority (ONWASA), drilled a stratigraphic test hole and well in Onslow County, N.C. The Dixon corehole was cored on ONWASA water utility property north of the town of Dixon, N.C., in the Sneads Ferry 7.5-minute quadrangle at latitude 34°33′35″ N, longitude 77°26′54″ W (decimal degrees 34.559722 and -77.448333). The site elevation is 66.0 feet (ft) above mean sea level as determined using a Paulin precision altimeter. The corehole attained a total depth of 1,010 ft and was continuously cored by the USGS EESPT drilling crew. A groundwater monitoring well was installed in the screened interval between 234 and 254 ft below land surface. The section cored at this site includes Upper Cretaceous, Paleogene, and Neogene sediments. The Dixon core is stored at the NCGS Coastal Plain core storage facility in Raleigh. The Dixon corehole is the fourth and last in a series of planned North Carolina benchmark coreholes drilled by the USGS Coastal Carolina Project. These coreholes explore the physical stratigraphy, facies, and thickness of Cretaceous, Paleogene, and Neogene Coastal Plain sediments in North Carolina. Correlations of lithologies, facies, and sequence stratigraphy can be made with the Hope Plantation corehole, N.C., near Windsor in Bertie County (Weems and others, 2007); the Elizabethtown corehole, near Elizabethtown, N.C., in Bladen County (Self-Trail and others, 2004b); the Smith Elementary School corehole, near Cove City, N.C., in Craven County (Harris and Self-Trail, 2006; Crocetti, 2007); the Kure Beach corehole, near Wilmington, N.C., in New Hanover County (Self-Trail and others, 2004a); and the Esso #1, Esso #2, Mobil #1, and Mobil #2 cores in Albemarle and Pamlico Sounds

The dissertation brings together approaches across the fields of physics, critical theory, literary studies, philosophy of physics, sociology of science, and history of science to synthesize a hybrid approach for instigating more rigorous and intense cross-disciplinary interrogations between the sciences and the humanities. There are two levels of conversation going on in the dissertation: at the first level, the discussion is centered on a critical historiography and the philosophical implications of the discovery of the Higgs boson, in relation to its position at the intersection of old (current) quantum physics and the potential for new possibilities; I then position my findings on the Higgs boson in connection to the double-slit experiment, which represents foundational inquiries into quantum physics, to demonstrate the bridge between fundamental physics and high energy particle physics. The conceptualization of the variants of the double-slit experiment informs the aforementioned critical comparisons. At the secon...

[This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] In this study, we analyze the experience of students in the Physics Learning Assistant (LA) program at Texas State University in terms of the existing theoretical frameworks of community of practice and physics identity, and explore the implications suggested by these theories for LA program adoption and adaptation. Regression models from physics identity studies show that the physics identity construct strongly predicts intended choice of a career in physics. The goal of our current project is to understand the details of the impacts of participation in the LA experience on participants' practice and self-concept, in order to identify critical elements of LA program structure that positively influence physics identity and physics career intentions for students. Our analysis suggests that participation in the LA program impacts LAs in ways that support both stronger "physics student" identity and stronger "physics instructor" identity, and that these identities are reconciled into a coherent integrated physics identity. Increased comfort in interactions with peers, near peers, and faculty seems to be an important component of this identity development and reconciliation, suggesting that a focus on supporting community membership is useful for effective program design.

ZPR-6 Assembly 7 (ZPR-6/7) encompasses a series of experiments performed at the ZPR-6 facility at Argonne National Laboratory in 1970 and 1971 as part of the Demonstration Reactor Benchmark Program (Reference 1). Assembly 7 simulated a large sodium-cooled LMFBR with mixed oxide fuel, depleted uranium radial and axial blankets, and a core H/D near unity. ZPR-6/7 was designed to test fast reactor physics data and methods, so configurations in the Assembly 7 program were as simple as possible in terms of geometry and composition. ZPR-6/7 had a very uniform core assembled from small plates of depleted uranium, sodium, iron oxide, U{sub 3}O{sub 8} and Pu-U-Mo alloy loaded into stainless steel drawers. The steel drawers were placed in square stainless steel tubes in the two halves of a split table machine. ZPR-6/7 had a simple, symmetric core unit cell whose neutronic characteristics were dominated by plutonium and {sup 238}U. The core was surrounded by thick radial and axial regions of depleted uranium to simulate radial and axial blankets and to isolate the core from the surrounding room. The ZPR-6/7 program encompassed 139 separate core loadings which include the initial approach to critical and all subsequent core loading changes required to perform specific experiments and measurements. In this context a loading refers to a particular configuration of fueled drawers, radial blanket drawers and experimental equipment (if present) in the matrix of steel tubes. Two principal core configurations were established. The uniform core (Loadings 1-84) had a relatively uniform core composition. The high {sup 240}Pu core (Loadings 85-139) was a variant on the uniform core. The plutonium in the Pu-U-Mo fuel plates in the uniform core contains 11% {sup 240}Pu. In the high {sup 240}Pu core, all Pu-U-Mo plates in the inner core region (central 61 matrix locations per half of the split table machine) were replaced by Pu-U-Mo plates containing 27% {sup 240}Pu in the plutonium

Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser-driven inertial confinement fusion due to the high cost of each shot. However, available codes can design and evaluate only a limited set of experiments with simple structures or shapes on a few laser facilities, and targets are usually defined by programming, which makes it difficult to design and optimize complex-shape targets on arbitrary laser facilities. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas are: (1) any laser facility can be flexibly defined and included with two scripts; (2) complex-shape targets and laser beams can be parametrically modeled based on features; (3) an automatic scheme for mapping laser beam energy onto discrete mesh elements of targets enables targets and laser beams to be optimized without any additional interactive modeling or programming; and (4) efficient computation algorithms are presented to evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of this unified modeling approach for physical experiment design and optimization in laser-driven inertial confinement fusion.

After the major success of B-factories to establish the CKM mechanism and its proven potential to search for new physics, the Belle II experiment will continue exploring the physics at the flavor frontier over the next years. Belle II will collect 50 times more data than its predecessor, Belle, and allow for various precision measurements and searches of rare decays and particles. This paper introduces the B-factory concept and the flavor frontier approach to search for new physics. It then describes the SuperKEKB accelerator and the Belle II detector, as well as some of the physics that will be analyzed in Belle II, concluding with the experiment status and schedule.

Since 2004, observations of Saturn's F ring have revealed that the ring's core is surrounded by structures with radial scales of hundreds of kilometers, called "spirals" and "jets". Gravitational scattering by nearby moons was suggested as a potential production mechanism; however, it remained doubtful because a population of Prometheus-mass moons is needed and, obviously, such a population does not exist in the F ring region. We investigate here another mechanism: dissipative physical collisions of kilometer-size moonlets (or clumps) with the F-ring core. We show that it is a viable and efficient mechanism for producing spirals and jets, provided that massive moonlets are embedded in the F-ring core and that they are impacted by loose clumps orbiting in the F ring region, which could be consistent with recent data from ISS, VIMS and UVIS. We show also that coefficients of restitution as low as ~0.1 are needed to reproduce the radial extent of spirals and jets, suggesting that collisions are very dissipative ...

This talk will review the current status of and plans for high energy density physics experiments to be conducted on the National Ignition Facility (NIF). The NIF, a multi-laboratory effort presently under construction at the Lawrence Livermore National Laboratory, is a 192-beam solid state glass laser system designed to deliver 1.8 MJ (at 351 nm) in temporally shaped pulses. This review will begin by introducing the NIF in the context of its role in the overall United States Stockpile Stewardship Program. The major focus of this talk will be to describe the physics experiments planned for the NIF. By way of introduction to the experiments, a short review of the NIF facility design and projected capabilities will be presented. In addition, the current plans and timeline for the activation of the laser and experimental facilities will also be reviewed. The majority of this talk will focus on describing the national inertial confinement fusion integrated theory and experimental target ignition plan. This national plan details the theory and experimental program required for achieving ignition and modest thermonuclear gain on the NIF. This section of the presentation will include the status of the current physics basis, ignition target designs, and target fabrication issues associated with the indirect-drive and direct-drive approaches to ignition. The NIF design provides the capabilities to support experiments for both approaches to ignition. Other uses of the NIF, including non-ignition physics relevant to the national security mission, studies relevant to Inertial Fusion Energy, and basic science applications, will also be described. The NIF offers the potential to generate new basic scientific understanding of matter under extreme conditions by making available a unique facility for research into: astrophysics and space physics, hydrodynamics, condensed matter physics, material properties, plasma physics and radiation sources, and radiative properties. Examples of

This report contains summaries of current and recent experiments in Particle Physics. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, Frascati, ITEP (Moscow), JINR (Dubna), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several proton decay and solar neutrino experiments. Excluded are experiments that finished taking data before 1991. Instructions are given for the World Wide Web (WWW) searching of the computer database (maintained under the SLAC-SPIRES system) that contains the summaries.

This research project was designed to investigate experimentally the transport properties of the 2D electrons in Si and GaAs, two prototype semiconductors, in several new physical regimes that were previously inaccessible to experiments. The research focused on the strongly correlated electron physics in the dilute density limit, where the electron potential energy to kinetic energy ratio rs>>1, and on the fractional quantum Hall effect related physics in nuclear demagnetization refrigerator temperature range on samples with new levels of purity and controlled random disorder.

This is the third edition of a compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and ten participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about January 1980, and (2) had not completed taking data by 1 January 1976.

From Galileo's famous experiments in accelerated motion to Einstein's revolutionary theory of relativity, the experiments recorded here trace the evolution of modern physics from its beginnings to the mid-20th century. Brought together for the first time in one volume are important source readings on 25 epochal discoveries that changed man's understanding of the physical world. The accounts, written by the physicists who made them, include: Isaac Newton, the laws of motion; Henry Cavendish, the law of gravitation; Augustin Fresnel, the diffraction of light; Hans Christian Oersted, electromagnetism; …

The CUORE experiment plans to construct and operate an array of 1000 cryogenic thermal detectors of TeO2, each with a mass of 760 g, to investigate rare events physics, in particular double beta decay and non-baryonic particle dark matter. A first step towards CUORE is CUORICINO, an array of 62 bolometers currently being installed in the Gran Sasso Laboratory. In this paper we report the physics potential of both stages of the experiment regarding neutrinoless double beta decay of 130Te, W...

The demand for computing resources used for detector simulations and data analysis in High Energy Physics (HEP) experiments is constantly increasing due to the development of studies of rare physics processes in particle interactions. The latest generation of experiments at the newly built LHC accelerator at CERN in Geneva is planning to use computing networks for their data processing needs. A Worldwide LHC Computing Grid (WLCG) organization has been created to develop a Grid with properties matching the needs of these experiments. In this paper we present the use of Grid computing by HEP experiments and describe activities at the participating computing centers with the case of the Academic Computing Center, ACK Cyfronet AGH, Kraków, Poland.

The influence of core sand properties on flow dynamics was investigated synchronously with various core sands, a transparent core-box and a high-speed camera. To confirm whether the core shooting process has significant turbulence, the flow pattern of sand particles in the shooting head and core box was reproduced with colored core sands. By incorporating the kinetic theory of granular flow (KTGF), a kinetic-frictional constitutive correlation and a turbulence model, a two-fluid model (TFM) was established to study the flow dynamics of the core shooting process. Two-fluid model (TFM) simulations were then performed and a reasonable agreement was achieved between the simulation and experimental results. Based on the experimental and simulation results, the effects of turbulence, sand density, sand diameter and binder ratio were analyzed in terms of the filling process, sand volume fraction (αs) and sand velocity (Vs).

Physical activity experiences of 12 age-matched boys with and without attention-deficit hyperactivity disorder (ADHD) were explored by converging information from Test of Gross Motor Development-2 assessments and semistructured interviews. The knowledge-based approach and the inhibitory model of executive functions, a combined theoretical lens,…

In our "Physics of Music" class for non-science majors, we have developed a laboratory exercise in which students experiment with Chladni sand patterns on drumheads. Chladni patterns provide a kinesthetic, visual, and entertaining way to illustrate standing waves on flat surfaces and are very helpful when making the transition from one-dimensional…

Dance is a form of physical activity that can be enjoyed for a lifetime. Students at the elementary level benefit greatly from successful experiences in dance that lead to competency in various dance forms as well as an appreciation of personal expression through dance. Teaching dance, however, may not be comfortable or easy for beginning…

Up-to-date knowledge about Skylab experiments is presented for the purpose of informing high school teachers about scientific research performed in orbit and enabling them to broaden their scope of material selection. The first volume is concerned with the solar astronomy program. The related fields are physics, electronics, biology, chemistry,…

A laboratory experiment currently used in an undergraduate physical chemistry lab to investigate the rates of crystallization of a polymer is described. Specifically, the radial growth rates of typical disc-shaped crystals, called spherulites, growing between microscope glass slides are measured and the data are treated according to polymer…

The reform of the upper secondary school in Italy has recently introduced physics into the curricula of professional schools, in settings where it was previously absent. Many teachers, often with a temporary position, are obliged to teach physics in schools where the absence of a laboratory is added to the lack of interest of students, who feel this subject is very far from their personal interests and from the preparation for work they expect from a professional school. We report a learning path for introducing students to the measurement of simple physical quantities, which continued with the study of some properties of matter (volume, mass, density) and ended with some elements of thermodynamics. Educational materials designed to involve students in active learning, actions performed to improve the quality of the laboratory experience, and difficulties encountered are presented. Finally, we compare the active engagement of these students with a similar experience performed in a very ...

From the very first days of human spaceflight, NASA has been conducting experiments in space to understand the effect of weightlessness on physical and chemically reacting systems. NASA Glenn Research Center (GRC) in Cleveland, Ohio has been at the forefront of this research, looking at both fundamental studies in microgravity as well as experiments targeted at reducing the risks to long duration human missions to the moon, Mars, and beyond. In the current International Space Station (ISS) era, we now have an orbiting laboratory that provides the highly desired condition of long-duration microgravity. This allows continuous and interactive research similar to Earth-based laboratories. Because of these capabilities, the ISS is an indispensable laboratory for low gravity research. NASA GRC has been actively involved in developing and operating facilities and experiments on the ISS since the beginning of a permanent human presence on November 2, 2000. As the lead Center for Combustion, Fluid Physics, and Acceleration Measurement, GRC has led the successful implementation of acceleration measurement systems, the Combustion Integrated Rack (CIR), and the Fluids Integrated Rack (FIR), as well as the continued use of other facilities on the ISS. These facilities have supported combustion experiments in fundamental droplet combustion, fire detection, fire extinguishment, soot phenomena, flame liftoff and stability, and material flammability. The fluids experiments have studied capillary flow, magneto-rheological fluids, colloidal systems, extensional rheology, and pool and nucleate boiling phenomena. In this paper, we provide an overview of the experiments conducted on the ISS over the past 12 years. We also provide a look at future developments. Experiments presented in combustion include areas such as droplet combustion, gaseous diffusion flames, solid fuels, premixed flame studies, fire safety, and supercritical oxidation processes. In fluid physics, experiments are discussed in

This paper analyses the eddy current loss produced by high-speed rotation in homopolar magnetic bearings with laminated rotor cores, in order to reduce the power loss for aerospace applications. An analytical model of the rotational power loss in homopolar magnetic bearings with laminated rotor cores is proposed, considering the magnetic circuit difference between homopolar and heteropolar magnetic bearings. The eddy current power loss can therefore be calculated accurately using the analytical model from magnetic field solutions, according to the distribution of magnetic fields around the pole surface and the boundary conditions at the surface of the rotor cores. A measurement method for the rotational power loss in a homopolar magnetic bearing is proposed, and the results of the theoretical analysis are verified by experiments on the prototype MSCMG. The experimental results confirm the correctness of the calculated results.

As a siderophile and a volatile element, sulfur's partitioning behavior allows constraints to be placed on processes in the primitive Earth. Sulfur's core-mantle distribution during Earth's accretion has consequences for core content and implications for volatile accretion. In this study, metal-silicate partitioning experiments of sulfur were conducted in a diamond anvil cell at pressures from 46 to 91 GPa and temperatures between 3100 and 4100 K, conditions that are directly relevant to core segregation in a deep magma ocean. The sulfur partition coefficients measured from these experiments are an order of magnitude less than those obtained from extrapolation of previous results to core formation conditions (e.g., Rose-Weston et al., 2009; Boujibar et al., 2014). These measurements challenge the idea that sulfur becomes a highly siderophile element at high pressures and temperatures. A relationship was derived that describes sulfur's partitioning behavior at the pressure-temperature range of core formation. This relationship combined with an accretion model was used to explore the effects of varying impactor sizes and volatile compositions on the sulfur contents of the Earth's core and mantle. The results show that homogeneous delivery of sulfur throughout accretion would overenrich the mantle in sulfur relative to the present day observations of 200 ± 80 ppm (Lorand et al., 2013) unless the bulk Earth sulfur content is lower than its cosmochemical estimate of ∼6400 ppm (e.g., McDonough, 2003). On the other hand, the mantle's sulfur content is matched if sulfur is delivered with large bodies (3 to 10% Earth mass) during the last 20% of Earth's accretion, combined with a chondritic late veneer of 0.5% Earth mass. These results are conditional on the lowered equilibration efficiency of large impactor cores in a terrestrial magma ocean. In each accretion scenario, the core sulfur content remains below ∼2 wt.% in close agreement with cosmochemical estimates and
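The mantle/core sulfur figures quoted above can be cross-checked with a simple two-reservoir mass balance. The sketch below is an illustrative simplification, not the paper's accretion model: the core mass fraction of ~0.323 is a commonly quoted value assumed here, while the ~6400 ppm bulk and 200 ppm mantle abundances are those stated in the abstract.

```python
# Two-reservoir sulfur mass balance (illustrative, not the paper's model):
#   S_bulk = f_core * S_core + (1 - f_core) * S_mantle
# Core mass fraction ~0.323 is a commonly used value, assumed here.
F_CORE = 0.323

def core_sulfur_ppm(bulk_ppm, mantle_ppm, f_core=F_CORE):
    """Core S concentration (ppm) implied by bulk and mantle abundances."""
    return (bulk_ppm - (1.0 - f_core) * mantle_ppm) / f_core

# cosmochemical bulk estimate ~6400 ppm and mantle ~200 ppm (from the abstract)
core_ppm = core_sulfur_ppm(6400.0, 200.0)
core_wt_pct = core_ppm / 1e4  # 1 wt.% = 10^4 ppm
# comes out just under 2 wt.%, consistent with the ~2 wt.% ceiling quoted above
```

This only checks internal consistency of the quoted numbers; the paper's actual conclusion rests on the measured partition coefficients and the staged-accretion model.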

This report of a collaborative self-study describes and interprets our pedagogical approach at the beginning of a preservice physics methods course and outlines the strategy that we used to create a context for productive learning. We focus on our attempt to engage teacher candidates in dialogue about learning physics and learning to teach physics by engaging them in brief teaching experiences in the first month of a preservice teacher education program, before the first practicum placement. Self-study methodologies are used to frame and reframe our perceptions of teaching and learning as we enacted a pedagogy of teacher education that was unfamiliar both to us and to our teacher candidates. Keywords: self-study of teacher education practices, lesson study, teacher education, physics, curriculum methods

The High Temperature Engineering Test Reactor (HTTR) of the Japan Atomic Energy Agency (JAEA) is a 30 MWth, graphite-moderated, helium-cooled reactor that was constructed with the objectives to establish and upgrade the technological basis for advanced high-temperature gas-cooled reactors (HTGRs) as well as to conduct various irradiation tests for innovative high-temperature research. The core size of the HTTR represents about one-half of that of future HTGRs, and the high excess reactivity of the HTTR, necessary for compensation of temperature, xenon, and burnup effects during power operations, is similar to that of future HTGRs. During the start-up core physics tests of the HTTR, various annular cores were formed to provide experimental data for verification of design codes for future HTGRs. The Japanese government approved construction of the HTTR in the 1989 fiscal year budget; construction began at the Oarai Research and Development Center in March 1991 and was completed in May 1996. Fuel loading began July 1, 1998, from the core periphery. The first criticality was attained with an annular core on November 10, 1998 at 14:18, followed by a series of start-up core physics tests until a fully loaded core was developed on December 16, 1998. Criticality tests were carried out into January 1999. The first full power operation with an average core outlet temperature of 850 °C was completed on December 7, 2001, and operational licensing of the HTTR was approved on March 6, 2002. The HTTR attained high temperature operation at 950 °C on April 19, 2004. After a series of safety demonstration tests, it will be used as the heat source in a hydrogen production system by 2015. Hot zero-power critical, rise-to-power, irradiation, and safety demonstration testing have also been performed with the HTTR, representing additional means for computational validation efforts. Power tests were performed in steps from 0 to 30 MW, with various tests performed at each step to confirm

These original contributions by philosophers and historians of science discuss a range of issues pertaining to the testing of hypotheses in modern physics by observation and experiment. Chapters by Lawrence Sklar, Dudley Shapere, Richard Boyd, R. C. Jeffrey, Peter Achinstein, and Ronald Laymon explore general philosophical themes with applications to modern physics and astrophysics. The themes include the nature of the hypothetico-deductive method, the concept of observation and the validity of the theoretical-observation distinction, the probabilistic basis of confirmation, and the testing of idealizations and approximations.The remaining four chapters focus on the history of particular twentieth-century experiments, the instruments and techniques utilized, and the hypotheses they were designed to test. Peter Galison reviews the development of the bubble chamber; Roger Stuewer recounts a sharp dispute between physicists in Cambridge and Vienna over the interpretation of artificial disintegration experiments;...

The Care of the Elderly (COE) Diploma Program is a six-to-twelve-month enhanced skills program taken after two years of core residency training in Family Medicine. In 2010, we developed and implemented a core-competency-based COE Diploma program (CC), in lieu of one based on learning objectives (LO). This study assessed the effectiveness of the core-competency-based program on residents' learning and their training experience as compared to residents trained using learning objectives. The data from the 2007-2013 COE residents were used in the study, with nine and eight residents trained in the LO and CC programs, respectively. Residents' learning was measured using preceptors' evaluations of residents' skills/abilities throughout the program (118 evaluations in total). Residents' rating of training experience was measured using the Graduate's Questionnaire which residents completed after graduation. For residents' learning, overall, there was no significant difference between the two programs. However, when examined as a function of the four CanMEDS roles, there were significant increases in the CC residents' scores for two of the CanMEDS roles: Communicator/Collaborator/Manager and Scholar compared to residents in the LO program. With respect to residents' training experience, seven out of ten program components were rated by the CC residents higher than by the LO residents. The implementation of a COE CC program appears to facilitate resident learning and training experience.

The behavior of atmospheric models is sensitive to the algorithms that are used to represent the equations of motion. Typically, comprehensive models are conceived in terms of the resolved fluid dynamics (i.e., the dynamical core) and subgrid, unresolved physics represented by parameterizations. Deterministic weather predictions are often validated with feature-by-feature comparison. Probabilistic weather forecasts and climate projections are evaluated with statistical methods. We seek to develop model evaluation strategies that identify like “objects” - coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes in models without needing to reproduce the time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. As a concrete example, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska; this brings attention to the interaction of the resolved and the parameterized components of the model. Two dynamical cores are considered within the Community Atmosphere Model: the Spectral (Eulerian) core, which relies on global basis functions, and the Finite Volume (FV) core, which uses only local information. We introduce the concept of "meteorological realism": that is, do local representations of large-scale phenomena, for example fronts and orographic precipitation, look like the observations? A follow-on question is, does the representation of these phenomena improve with resolution? Our approach to quantifying meteorological realism starts with methods of geospatial statistics. Specifically, we employ variography, a geostatistical method used to measure the spatial continuity of a regionalized variable, and principal component
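The variography mentioned in the abstract above can be illustrated with a minimal empirical semivariogram, γ(h) = ½ · mean[(z_i − z_j)²] over point pairs whose separation is approximately the lag h. The coordinates, field values, and lag bins below are synthetic stand-ins for illustration, not data from the study.

```python
import numpy as np

def empirical_variogram(coords, values, lags, tol):
    """Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2]
    over point pairs whose separation lies within tol of lag h."""
    # pairwise distances and squared value differences via broadcasting
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for h in lags:
        mask = np.abs(d - h) <= tol
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))                       # synthetic station locations
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.normal(size=200)   # smooth field + noise
lags = np.array([5.0, 15.0, 30.0, 60.0])
gamma = empirical_variogram(coords, values, lags, tol=2.5)
# for a spatially continuous field, gamma typically grows with lag
```

For a spatially coherent field like precipitation, γ(h) rising slowly with lag indicates strong spatial continuity, which is the property the authors use to compare dynamical cores.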

An overstudy committee was formed to study and recommend fundamental experiments in fluid physics, thermodynamics, and heat transfer for experimentation in orbit, using the space shuttle system and a space laboratory. The space environment, particularly the low-gravity condition, is an indispensable requirement for all the recommended experiments. The experiments fell broadly into five groups: critical-point thermophysical phenomena, fluid surface dynamics and capillarity, convection at reduced gravity, non-heated multiphase mixtures, and multiphase heat transfer. The Committee attempted to assess the effects of g-jitter and other perturbations of the gravitational field on the conduct of the experiments. A series of ground-based experiments are recommended to define some of the phenomena and to develop reliable instrumentation.

When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively.

Patient experience reflects quality of care from the patients' perspective; therefore, patients' experiences are important data in the evaluation of the quality of health services. The development of an abbreviated, reliable and valid instrument for measuring inpatients' experience would reflect the key aspects of inpatient care from the patients' perspective, as well as facilitate quality improvement by cultivating patient engagement and allowing the trends in patient satisfaction and experience to be measured regularly. The study developed a short-form inpatient instrument and tested its ability to capture a core set of inpatients' experiences. The Hong Kong Inpatient Experience Questionnaire (HKIEQ) was established in 2010; it is an adaptation of the General Inpatient Questionnaire of the Care Quality Commission created by the Picker Institute in the United Kingdom. This study used a consensus conference and a cross-sectional validation survey to create and validate a short form of the Hong Kong Inpatient Experience Questionnaire (SF-HKIEQ). The short form, the SF-HKIEQ, consisted of 18 items derived from the HKIEQ. The 18 items mainly covered relational aspects of care under four dimensions of the patient's journey: hospital staff, patient care and treatment, information on leaving the hospital, and overall impression. The SF-HKIEQ had a high degree of face validity, construct validity and internal reliability. The validated SF-HKIEQ reflects the relevant core aspects of inpatients' experience in a hospital setting. It provides a quick reference tool for quality improvement purposes and a platform that allows both healthcare staff and patients to monitor the quality of hospital care over time.

The Fort St. Vrain (FSV) power plant was the most recent operating graphite-moderated, helium-cooled nuclear power plant in the United States. Many similarities exist between the FSV design and the current design of the GT-MHR. Both designs use graphite as the basic building block of the core, as structural material, in the reflectors, and as a neutron moderator. Both designs use hexagonal fuel elements containing cylindrical fuel rods with coated fuel particles. Helium is the coolant and the power densities vary by less than 5%. Since material and geometric properties of the GT-MHR core are very similar to the FSV core, it is logical to draw upon the FSV experience in support of the GT-MHR design. In the Physics area, testing at FSV during the first three cycles of operation has confirmed that the calculational models used for the core design were very successful in predicting the core nuclear performance from initial cold criticality through power operation and refueling. There was excellent agreement between predicted and measured initial core criticality and control rod positions during startup. Measured axial flux distributions were within 5% of the predicted value at the peak. The isothermal temperature coefficient at zero power was in agreement within 3%, and even the calculated temperature defect over the whole operating range for cycle 3 was within 8% of the measured defect. In the Fuel Performance area, fuel particle coating performance and fission gas release predictions and an overall plateout analysis were performed for decommissioning purposes. A comparison between predicted and measured fission gas release histories of Kr-85m and Xe-138 and a similar comparison with specific circulator plateout data indicated good agreement between prediction and measured data. Only I-131 plateout data was overpredicted, while Cs-137 data was underpredicted.

The LHC physics program at CERN addresses some of the fundamental issues in particle physics and CMS experiment would concentrate on them. The CMS detector is designed for the search of Standard Model Higgs boson in the whole possible mass range. Also it will be sensitive to Higgs bosons in the minimal supersymmetric model and well adapted to searches for SUSY particles, new massive vector bosons, CP-violation in the B-system, search for substructure of quarks and leptons, etc. In the LHC heavy ion collisions the energy density would be well above the threshold for the possible formation of quark-gluon plasma. (15 refs).

The LHCf experiment, optimized for the study of forward physics at the LHC, completes its main physics program in 2015 with proton-proton collisions at an energy of 13 TeV. LHCf provides important results on the study of neutral particles at extreme pseudo-rapidity, both for proton-proton and for proton-ion interactions. These results are an important reference for tuning the models of hadronic interactions currently used for the simulation of atmospheric showers induced by very-high-energy cosmic rays. The results of this analysis and the future perspectives are presented in this paper.

In this progress report we summarize the activities of the University of Massachusetts- Amherst group for the three years of this research project. We are fully engaged in research at the energy frontier with the ATLAS experiment at the CERN Large Hadron Collider. We have made leading contributions in software development and performance studies for the ATLAS Muon Spectrometer, as well as on physics analysis with an emphasis on Standard Model measurements and searches for physics beyond the Standard Model. In addition, we have increased our contributions to the Muon Spectrometer New Small Wheel upgrade project.

In view of the rapid development of experimental facilities and their costs, the systematic design and preparation of particle physics experiments have become crucial. A software system is proposed as an aid for the experimental designer, mainly for experimental geometry analysis and experiment simulation. The following model is adopted: the description of an experiment is formulated in a language (here called XL) and put by its processor in a data base. The language is based on the entity-relationship-attribute approach. The information contained in the data base can be reported and analysed by an analyser (called XA), and modifications can be made at any time. In particular, Monte Carlo methods can be used in experiment simulation, both for the physical phenomena in the experimental set-up and for detection analysis. The general idea of the system is based on the design concept of the ISDOS project information systems. The characteristics of the simulation module are similar to those of the CERN Geant system, but some extensions are proposed. The system could be treated as a component of a greater, integrated software environment for the design of particle physics experiments, their monitoring and data processing.

In a recent editorial in Physics Today (July, 2006, p. 10) the ability of physicists to "imagine new realities" was correlated with what have been traditionally considered non-scientific qualities of imagination and creativity, which are usually associated with fine arts. In view of the current developments in physics of the 21st Century, including the searches for cosmic dark energy and evidence from the Large Hadron Collider which, it is hoped, will verify or refute the proposals of String Theory, the importance of developing creativity and imagination through education is gaining recognition. Two questions are addressed by this study: First, How can we bring the sense of aesthetics and creativity, which are important in the practice of physics, into the teaching and learning of physics at the introductory college level, without sacrificing the mathematical rigor which is necessary for proper understanding of physics? Second, How can we provide access to physics for a diverse population of students which includes physics majors, arts majors, and future teachers? An interdisciplinary curriculum which begins with teaching math as a language of nature, and utilizes arts to help visualize the connections between mathematics and the physical universe, may provide answers to these questions. In this dissertation I describe in detail the case study of the eleven students - seven physics majors and four arts majors - who participated in an experimental course, Symmetry and Aesthetics in Introductory Physics, in Winter Quarter, 2007, at UCSB's College of Creative Studies. The very positive results of this experiment suggest that this model deserves further testing, and could provide an entry into the study of physics for physics majors, liberal arts majors, future teachers, and as a foundation for media arts and technology programs.

The behavior of atmospheric models is sensitive to the algorithms that are used to represent the equations of motion. Typically, comprehensive models are conceived in terms of the resolved fluid dynamics (i.e. the dynamical core) and subgrid, unresolved physics represented by parameterizations. Deterministic weather predictions are often validated with feature-by-feature comparison. Probabilistic weather forecasts and climate projections are evaluated with statistical methods. We seek to develop model evaluation strategies that identify like "objects" - coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes in models without needing to reproduce the time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. As a concrete example, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska; this brings attention to the interaction of the resolved and the parameterized components of the model. Two dynamical cores are considered within the Community Atmosphere Model. These are the Spectral (Eulerian), which relies on global basis functions, and the Finite Volume (FV), which uses only local information. We introduce the concept of "meteorological realism": that is, do local representations of large-scale phenomena, for example fronts and orographic precipitation, look like the observations? A follow-on question is: does the representation of these phenomena improve with resolution? Our approach to quantify meteorological realism starts with identification and isolation of key features of orographic precipitation that are represented differently by the Spectral and FV models, using objective pattern recognition methods. Then we aim to quantitatively compare

The behavior of atmospheric models is sensitive to the algorithms that are used to represent the equations of motion. Typically, comprehensive models are conceived in terms of the resolved fluid dynamics (i.e. the dynamical core) and subgrid, unresolved physics represented by parameterizations. There are model uncertainties inherent to both components. In this study, we investigate the role of the dynamical core as the source of uncertainty in the simulation of orographic precipitation by different models. As a concrete example, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska; this brings attention to the interaction of the resolved and the parameterized components of the model. Two dynamical cores are considered within the Community Atmosphere Model. These are the Spectral (Eulerian), which relies on global basis functions, and the Finite Volume (FV), which uses only local information. We aim to identify and quantify the relationship between the model uncertainty and the numerical scheme, as well as other model parameters such as the treatment of topography, SST, etc. We also focus on the evolution of the uncertainty as a function of model resolution. In order to evaluate model uncertainty through validation against observations, we introduce the concept of "meteorological realism": that is, do local representations of large-scale phenomena, for example fronts and orographic precipitation, look like the observations? Our approach to quantify meteorological realism employs objective pattern recognition methods using semantic lists for isolated features to define their characteristics. We seek to develop model evaluation strategies that identify like "objects" - coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes and assess the sources of uncertainty in models without needing to reproduce the
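The object-based strategy described above - isolating coherent systems above some threshold and attaching measurable parameters to them - can be illustrated with a minimal connected-component sketch. The threshold, 4-connectivity, and summary statistics below are illustrative assumptions, not the actual pattern-recognition pipeline used in the study:

```python
from collections import deque

def find_objects(field, threshold):
    """Label connected regions ('objects') where a 2-D field exceeds a
    threshold. Uses a 4-connected flood fill; returns a list of objects,
    each a dict holding its cell coordinates and mean intensity."""
    rows, cols = len(field), len(field[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for i in range(rows):
        for j in range(cols):
            if field[i][j] >= threshold and not seen[i][j]:
                cells, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    r, c = queue.popleft()
                    cells.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and field[nr][nc] >= threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                mean = sum(field[r][c] for r, c in cells) / len(cells)
                objects.append({"cells": cells, "mean": mean})
    return objects
```

Applied to a gridded precipitation field, each returned object could then be compared between dynamical cores by size, mean intensity, or location rather than cell-by-cell, which is the essence of the process-based evaluation described above.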

WaveCore is a scalable many-core processor technology. This technology is specifically developed and optimized for real-time acoustical modeling applications. The programmable WaveCore soft-core processor is silicon-technology independent and hence can be targeted to ASIC or FPGA technologies. The W

Intimate partner violence directed towards females by male partners is a common and significant global public health problem. Most victims of physical aggression, such as women and children, are subjected to multiple acts of violence over extended periods of time, suffering more than one type of abuse, for example physical abuse, which is more symbolic and is evidenced by scars. The purpose of this study is to increase understanding of the symbols of physical violence as experienced by women who live with intimate partners in the Vhembe district of the Limpopo Province. The research design of this study was qualitative, exploratory and descriptive in nature. The accessible population was those participants who used trauma unit A in a particular hospital. Seven women comprised the sample of the study. In-depth individual interviews were conducted exploring the women's experiences in the context of physical violence. All seven participants had experienced some form of physical violence which resulted in permanent deformity. They experienced some form of battering such as kicking, stabbing, burning, fracturing, strangling and choking. It is recommended that health care providers implement screening for physical violence, provide appropriate interventions if assault is identified, and provide appropriate education regarding employment opportunities, legal literacy, and rights to inheritance. Human rights education and information regarding domestic violence should be provided to these women because this is their absolute right (UNICEF, 2000:14).

Aiming at the observation of cosmic-ray chemical composition in the "knee" energy region, we have been developing a new type of air-shower core detector (YAC, Yangbajing Air shower Core detector array) to be set up at Yangbajing (90.522° E, 30.102° N, 4300 m above sea level, atmospheric depth: 606 g/cm²) in Tibet, China. YAC works together with the Tibet air-shower array (Tibet-III) and an underground water-Cherenkov muon detector array (MD) as a hybrid experiment. Each YAC detector unit consists of lead plates 3.5 cm thick and a scintillation counter which detects the burst size induced by high-energy particles in the air-shower cores. The burst size can be measured from 1 MIP (Minimum Ionization Particle) to 10⁶ MIPs. The first phase of this experiment, named "YAC-I", consists of 16 YAC detectors, each having a size of 40 cm × 50 cm and distributed in a grid with an effective area of 10 m². YAC-I is used to check hadronic interaction models. The second phase of the experiment,...

This study investigates how a work-site health promotion intervention involving group-based physical coordination training may increase participants' social awareness of new ways to use the body. Purpose: We investigated cleaners' experiences with a one-year health promotion intervention … involving group-based physical coordination training (PCT) during working hours. Design: We conducted a qualitative evaluation using method triangulation; continuous unfocused participant observation during the whole intervention, a semi-structured focus group interview, and individual written evaluations one … for implementation seem to be important for sustained effects of health-promotion interventions in the workplace. Originality: The social character of the physical training facilitated a community of practice, which potentially supported the learning of new competencies, and how to improve the organization …

With contributions by many of today's leading quantum physicists, philosophers and historians, including three Nobel laureates, this comprehensive A to Z of quantum physics provides a lucid understanding of the key concepts of quantum theory and experiment. It covers technical and interpretational aspects alike, and includes both traditional topics and newer areas such as quantum information and its relatives. The central concepts that have shaped contemporary understanding of the quantum world are clearly defined, with illustrations where helpful, and discussed at a level suitable for undergraduate and graduate students of physics, history of science, and philosophy of physics. All articles share three main aims: (1) to provide a clear definition and understanding of the term concerned; (2) where possible, to trace the historical origins of the concept; and (3) to provide a small but optimal selection of references to the most relevant literature, including pertinent historical studies. Also discussed are th...

Forward physics measurements with the Compact Muon Solenoid (CMS) experiment, one of the two large multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, cover a wide range of physics subjects. The forward calorimeters of CMS, HF and CASTOR, are used to collect data up to a pseudo-rapidity of 6.6. These detectors provide sensitivity to a large part of the total inelastic cross section, including diffractive events that produce particles only at forward rapidity, with the exception of very low mass diffraction. The results obtained with a centre-of-mass energy of 13 TeV are presented. The measurements are compared to model predictions and provide valuable input for tuning of Monte Carlo models used to describe high-energy hadronic interactions.

This compilation of current high-energy physics experiments is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and the nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. Nominally, the compilation includes summaries of all high-energy physics experiments at the above laboratories that were approved (and not subsequently withdrawn) before about June 1978, and had not completed data taking by 1 January 1975. The experimental summaries are supplemented with three indexes to the compilation, several vocabulary lists giving names or abbreviations used, and a short summary of the beams at each of the laboratories (except Rutherford). The summaries themselves are included on microfiche. (RWR)

Alpha particle physics experiments were done on the Tokamak Fusion Test Reactor (TFTR) during its deuterium-tritium (DT) run from 1993-1997. These experiments utilized several new alpha particle diagnostics and hundreds of DT discharges to characterize the alpha particle confinement and wave-particle interactions. In general, the results from the alpha particle diagnostics agreed with the classical single-particle confinement model in magnetohydrodynamic (MHD) quiescent discharges. Also, the observed alpha particle interactions with sawteeth, toroidal Alfvén eigenmodes (TAE), and ion cyclotron resonant frequency (ICRF) waves were roughly consistent with theoretical modeling. This paper reviews what was learned and identifies what remains to be understood.

Before the Tevatron Collider Run II ended in September of 2011, a number of specialized beam study periods were dedicated to experiments on various accelerator physics concepts and effects during the last year of machine operation. The study topics included collimation with bent crystals and hollow electron beams, diffusion measurements, and various aspects of beam-beam interactions. In this report we concentrate on the subject of beam-beam interactions, summarizing the results of the beam experiments. The covered topics include offset collisions, coherent beam stability, the effect of the bunch-length-to-beta-function ratio, and operation of an AC dipole with colliding beams.

Current science educational practice is coming under heavy criticism based on the dismaying results of the Third International Mathematics and Science Study of 1998, the latest in a series of large scale surveys; and from research showing the appallingly low representation of females in science-related fields. These critical evaluations serve to draw attention to science literacy in general and lack of persistence among females in particular, two issues that relate closely to the "preparation for future study" goal held by many high school science teachers. In other words, these teachers often seek to promote future success and to prevent future failure in their students' academic careers. This thesis studies the connection between the teaching practices recommended by reformers and researchers for high school teachers, and their students' subsequent college physics performance. The teaching practices studied were: laboratory experiences, class discussion experiences, content coverage, and reliance on textbooks. This study analyzed a survey of 1500 students from 16 different lecture-format college physics courses at 14 different universities. Using hierarchical linear modeling, this study accounted for course-level variables (Calculus-based/Non-calculus course type, professor's gender, and university selectivity). This study controlled for the student's parents education, high school science/mathematics achievement, high school calculus background, and racial background. In addition, the interactions between gender and both pedagogical/curricular and course-level variables were analyzed. The results indicated that teaching fewer topics in greater depth in high school physics appeared to be helpful to college physics students. An interaction between college course type and content coverage showed that students in Calculus-based physics reaped even greater benefits from a depth-oriented curriculum. Also students with fewer labs per month in high school physics

The ARGO-YBJ experiment has been in stable data taking since November 2007 at the YangBaJing Cosmic Ray Laboratory (Tibet, P.R. China, 4300 m a.s.l.). In this paper we report a few selected results in Gamma-Ray Astronomy (Crab Nebula and Mrk421 observations, search for high energy tails of GRBs) and Cosmic Ray Physics (Moon and Sun shadow observations, proton-air cross section and antiproton/proton preliminary measurements).

Physicists investigating space, time and matter at the Planck scale will probably have to work with much less guidance from experimental input than has ever happened before in the history of Physics. This may imply that we should insist on much higher demands of logical and mathematical rigour than before. Working with long chains of arguments linking theories to experiment, we must be able to rely on logical precision when and where experimental checks cannot be provided.

We consider two scenarios of New Physics: the Large Extra Dimensions (LED), where sterile neutrinos can propagate in a (4+d)-dimensional space-time, and the Non-Standard Interactions (NSI), where the neutrino interactions with ordinary matter are parametrized at low energy in terms of effective flavour-dependent complex couplings ε_{αβ}. We study how these models impact the oscillation parameters measured in reactor and accelerator experiments.
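For orientation, new-physics effects such as the NSI couplings ε_{αβ} are usually quoted as deviations from the standard oscillation formula. The sketch below implements only the textbook two-flavour vacuum probability with the usual units convention, not the LED or NSI modifications discussed in the abstract:

```python
import math

def p_osc(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavour vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2
```

Reactor and accelerator experiments fit parameters such as sin²2θ and Δm² to measured event rates; the LED and NSI scenarios add extra terms that shift the values extracted with this baseline formula.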

A simple high-precision digital delay for nuclear physics experiments was developed using fast ECL electronics. The circuit uses an oscillator synchronized with the signal to be delayed and a presettable counter. It is capable of delaying a negative NIM signal by 2 µs with a precision better than 50 ps. The circuit was developed for use in slow-fast coincidence units for Perturbed Angular Correlation spectrometers, but it is not limited to this application.
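The principle is straightforward: because the oscillator is restarted in phase with the input pulse, the delay is an integer number N of clock periods, avoiding the ±1-count jitter of an asynchronous clock. A sketch of the preset arithmetic (the 100 MHz clock frequency is an illustrative value, not taken from the paper):

```python
def preset_count(delay_s, f_clk_hz):
    """Counter preset for a synchronized-oscillator digital delay.
    The oscillator starts in phase with the input pulse, so the delay
    is exactly n clock periods; precision is then limited by oscillator
    stability rather than asynchronous clock phase."""
    n = round(delay_s * f_clk_hz)           # presettable-counter value
    quant_error_s = n / f_clk_hz - delay_s  # residual quantization error
    return n, quant_error_s
```

For a 2 µs delay with this assumed clock, the counter is preset to 200 and the quantization error vanishes; for delays that are not a multiple of the clock period, the error is bounded by half a period.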

In the fast ignition realization experiment project phase-I (FIREX-I) [H. Azechi and the FIREX Project, Plasma Phys. Control. Fusion 48, B267 (2006)], core heating up to an ion temperature of 5 keV is expected for subignition-class carbon-deuterium (CD) and deuterium-tritium (DT) fuels. The dependence of the achieved ion temperature on the heating pulse parameters and core density is investigated using two-dimensional simulations. Since the core size in FIREX-I is insufficient for self-ignition, and the confinement time is comparable to the heating duration (~10 ps), the temperature relaxation between the bulk electrons and ions is important for efficient ion heating. High compression (a core density of ρ > 200 g/cm³) is required for pure DT fuel to shorten the relaxation time. In this case, a heating energy of E_h > 2 kJ and a heating duration of τ_h ~ 10 ps are required.

A two part research study has been completed on the topic of compression after impact (CAI) of thin facesheet honeycomb core sandwich panels. The research has focused on both experiments and analysis in an effort to establish and validate a new understanding of the damage tolerance of these materials. Part one, the subject of the current paper, is focused on the experimental testing. Of interest are sandwich panels, with aerospace applications, which consist of very thin, woven S2-fiberglass (with MTM45-1 epoxy) facesheets adhered to a Nomex honeycomb core. Two sets of specimens, which were identical with the exception of the density of the honeycomb core, were tested. Static indentation and low velocity impact using a drop tower are used to study damage formation in these materials. A series of highly instrumented CAI tests was then completed. New techniques used to observe CAI response and failure include high speed video photography, as well as digital image correlation (DIC) for full-field deformation measurement. Two CAI failure modes, indentation propagation, and crack propagation, were observed. From the results, it can be concluded that the CAI failure mode of these panels depends solely on the honeycomb core density.

To review pediatric physical therapy experiences described in the literature and to analyze the production of knowledge on physical therapy in the context of pediatric primary health care (PPHC). A systematic review was conducted according to the PRISMA criteria. The following databases were searched: MEDLINE, LILACS, SciELO, PubMed, Scopus and Cochrane; Brazilian Ministry of Health's CAPES doctoral dissertations database; and System for Information on Grey Literature in Europe (SIGLE). The following search terms were used: ["primary health care" and ("physical therapy" or "physiotherapy") and ("child" or "infant")] and equivalent terms in Portuguese and Spanish, with no restriction on publication year. Thirteen articles from six countries were analyzed and grouped into three main themes: professional dilemmas (three articles), specific competencies and skills required in a PPHC setting (seven articles), and practice reports (four articles). Professional dilemmas involved expanding the role of physical therapists to encompass community environments and sharing the decision-making process with the family, as well as collaborative work with other health services to identify the needs of children. The competencies and skills mentioned in the literature related to the identification of clinical and sociocultural symptoms that go beyond musculoskeletal conditions, the establishment of early physical therapy diagnoses, prevention of overmedication, and the ability to work as team players. Practice reports addressed stimulation in children with neurological diseases, respiratory treatment, and establishing groups with mothers of children with these conditions. The small number of studies identified in this review suggests that there is little knowledge regarding the roles of physical therapists in PPHC and possibly regarding the professional abilities required in this setting. Therefore, further studies are required to provide data on the field, along with a continuing

This talk, of a non-technical nature, describes the experience of the author in teaching the intensive course of thermal physics for undergraduate physics students at the Universidad Autónoma de Madrid, Spain. After a brief introduction to the program and a description of the web support for the course, I shall describe the practical classes (homework, visits to the laboratories, experimental demonstrations, typical problems, and typical topics for presentations on advanced thermodynamics, etc.). I shall further discuss different possible actions to wake up the interest of the students in thermal physics and ways to stimulate their active participation in the class discussions. I also describe different schemes employed in the last few years to evaluate effectively and clearly the students' work and knowledge. Finally, I will analyze the efficiency of our methods in improving the teaching of thermal physics at the university level.

The next generation of dark matter direct detection experiments will be sensitive to both coherent neutrino-nucleus and neutrino-electron scattering. This will enable them to explore aspects of solar physics, perform the lowest energy measurement of the weak angle to date, and probe contributions from new theories with light mediators. In this article, we compute the projected nuclear and electron recoil rates expected in several dark matter direct detection experiments due to solar neutrinos, and use these estimates to infer errors on future measurements of the neutrino fluxes, weak mixing angle and solar observables, as well as to constrain new physics in the neutrino sector. The combined rates of solar neutrino events in second generation experiments (SuperCDMS and LZ) can yield a measurement of the pp flux to 2.5% accuracy via electron recoil, and slightly improve the boron-8 flux determination. Assuming a low-mass argon phase, projected tonne-scale experiments like DARWIN can reduce the uncertainty on bo...

Two major cooperative experiments, code-named Shelf Edge Exchange Processes (SEEP) I and II, were carried out on the northeast U.S. continental shelf and slope by an interdisciplinary group of scientists in the past decade. The work, supported by the Department of Energy, Office of Health and Environmental Research, had the broad aim of determining whether or to what extent energy-related human activities interfere with the high biological productivity of coastal waters. Much of the SEEP I work was reported in a dedicated issue of Continental Shelf Research, including a summary article on the experiment as a whole [Walsh et al., 1988]. A parallel experiment, supported by the Minerals Management Service and code-named Mid Atlantic Slope and Rise (MASAR), had the objective of exploring physical processes over the continental slope and rise, including especially currents in the upper part of the water column. A good deal of MASAR work was also reported in the SEEP issue just mentioned, mainly in an article by Csanady and Hamilton (1988). There have been other papers and publications on these experiments, and more are forthcoming. While many questions remain, our horizons have broadened considerably after a decade of work on this problem, as if our aeroplane had just emerged from clouds to expose an interesting landscape. In this article I shall try to describe the physical-oceanographic features of that landscape, not in the chronological order in which we have espied them, but as the logic of the subject dictates.

PANDA is an experiment that will run at the future facility FAIR, Darmstadt, Germany. A high intensity and cooled antiproton beam will collide on a fixed hydrogen or nuclear target covering center-of-mass energies between 2.2 and 5.5 GeV. PANDA addresses various physics aspects from the low energy non-perturbative region towards the perturbative regime of QCD. With the impressive theoretical developments in this field, e.g. lattice QCD, the predictions are becoming more accurate in the course of time. The data harvest with PANDA will, therefore, be an ideal test bench with the aim to provide a deeper understanding of hadronic phenomena such as confinement and the generation of hadron masses. A variety of physics topics will be covered with PANDA, for example: the formation or production of exotic non-qqbar charm meson states connected to the recently observed XYZ spectrum; the study of gluon-rich matter, such as glueballs and hybrids; the spectroscopy of the excited states of strange and charm baryons, their production cross section and their spin correlations; the behaviour of hadrons in nuclear matter; the hypernuclear physics; the electromagnetic proton form factors in the timelike region. The PANDA experiment is designed to achieve the above mentioned physics goals with a setup with the following characteristics: an almost full solid angle acceptance; excellent tracking capabilities with high resolution (1-2 % at 1 GeV/c in the central region); secondary vertex detection with resolution ≈ 100 microns or better; electromagnetic calorimetry for detections of gammas and electrons up to 10 GeV; good particle identification of charge tracks (electrons, muons, pions, kaons, protons); a dedicated interchangeable central apparatus for the hypernuclear physics; detector and data acquisition system capable of working at 20 MHz interaction rate with an intelligent software trigger that can provide maximum flexibility.

The Tank-Type Critical Assembly (TCA) of Japan Atomic Energy Research Institute is research equipment for light water reactor physics. In the present report, the lectures given to the graduate students of Tokyo Institute of Technology who participated in the educational experiment course held on 26-30 August at TCA are rearranged to provide useful information for those who will implement educational basic experiments with TCA in the future. This report describes the principles, procedures, and data analyses for (1) Critical approach and Exponential experiment, (2) Measurement of neutron flux distribution, (3) Measurement of power distribution, (4) Measurement of fuel rod worth distribution, and (5) Measurement of safety plate worth by the rod drop method. (author)
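Item (1), the approach to critical, is conventionally analyzed with the inverse-multiplication (1/M) method: as fuel is loaded, the multiplication M = C/C0 of the source counts grows, and 1/M extrapolated linearly to zero predicts the critical loading. A minimal sketch with hypothetical data (the strictly linear model and the numbers are illustrative; real 1/M curves are only approximately linear near criticality):

```python
def critical_loading_estimate(loadings, counts, source_counts):
    """Inverse-multiplication (1/M) approach-to-critical estimate.
    M = C / C0 for each fuel loading; a least-squares line through the
    1/M points, extrapolated to zero, predicts the critical loading."""
    inv_m = [source_counts / c for c in counts]
    n = len(loadings)
    sx, sy = sum(loadings), sum(inv_m)
    sxx = sum(x * x for x in loadings)
    sxy = sum(x * y for x, y in zip(loadings, inv_m))
    # least-squares fit 1/M = a + b * loading
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return -a / b  # loading at which 1/M reaches zero
```

For the synthetic data set below (counts chosen so that 1/M falls linearly), the extrapolation predicts criticality at a loading of 100 fuel rods; in a real critical-approach experiment the extrapolation is repeated after every loading step as a safety measure.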

Software development in high energy physics experiments offers unique experience with a rapidly changing environment and a variety of different standards and frameworks that software must be adapted to. As such, regular methods of software development are hard to use, as they do not take into account how greatly some of these changes influence the whole structure. The following thesis summarizes the development of the TAUOLA C++ Interface, introducing tau decays to the new event record standard. Documentation of the program is already published; that is why it is not recalled here again. We focus on the development cycle and methodology used in the project, starting from the definition of the expectations through planning and designing the abstract model, and concluding with the implementation. In the last part of the paper we present the installation of the software within different experiments surrounding the Large Hadron Collider and the problems that emerged during this process.

The Wii, a video game console by Nintendo, utilizes several different controllers for game-playing, such as the Wii remote (Wiimote) and the balance board. The balance board, introduced in early 2008, contains four strain gauges and offers Bluetooth connectivity at a relatively low price. Thanks to available open-source code, such as GlovePie, any PC with Bluetooth capability can detect the information sent out by the balance board. Based on the ease with which the forces measured by each strain gauge can be obtained, we have designed several experiments for introductory physics courses that make use of this device. We present experiments to measure the forces generated when students lift their arms with and without added weights, the distribution of forces on an extended object when weights are repositioned, and other normal-force cases. The results of our experiments are compared with those predicted by Newtonian mechanics.
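The repositioned-weight measurements can be checked against simple statics. Below is a minimal sketch (the board length, masses, and weight position are made-up illustrative numbers, not data from the article) of the torque and force balance that predicts the readings of two sensors supporting a loaded board:

```python
# Static force balance for a uniform board resting on two end sensors,
# with a point weight placed at a variable position (illustrative values).

def support_forces(board_mass, weight_mass, weight_pos, length, g=9.81):
    """Return (F_left, F_right) in newtons for a uniform board on two end
    supports with a point weight at `weight_pos` metres from the left one."""
    w_board = board_mass * g
    w_weight = weight_mass * g
    # Torque balance about the left support:
    #   F_right * length = w_board * (length / 2) + w_weight * weight_pos
    f_right = (w_board * length / 2 + w_weight * weight_pos) / length
    # Vertical force balance gives the left support:
    f_left = w_board + w_weight - f_right
    return f_left, f_right

if __name__ == "__main__":
    f_l, f_r = support_forces(board_mass=4.0, weight_mass=2.0,
                              weight_pos=0.3, length=1.2)
    print(f"left: {f_l:.2f} N, right: {f_r:.2f} N")
```

Moving the weight toward one support shifts force to that sensor while the total always equals the combined weight, which is exactly the Newtonian prediction the measured strain-gauge readings are compared against.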

The Coso geothermal field, located along the Eastern California Shear Zone, is composed of fractured granitic rocks above a shallow heat source. Temperatures exceed 640 °F (~338 °C) at a depth of less than 10,000 feet (3 km). Permeability varies throughout the geothermal field due to the competing processes of alteration and mineral precipitation, acting to reduce the interconnectivity of faults and fractures, and the generation of new fractures through faulting and brecciation. Currently, several hot regions display very low permeability, not conducive to the efficient extraction of heat. Because high rates of seismicity in the field indicate that the area is highly stressed, enhanced permeability can be stimulated by increasing the fluid pressure at depth to induce faulting along the existing network of fractures. Such an Enhanced Geothermal System (EGS), planned for well 46A-19RD, would greatly facilitate the extraction of geothermal fluids from depth by increasing the extent and depth of the fracture network. In order to prepare for and interpret data from such a stimulation experiment, the physical properties and failure behavior of the target rocks must be fully understood. Various diorites and granodiorites are the predominant rock types in the target area of the well, which will be pressurized from 10,000 feet measured depth (MD) (3048 m MD) to the bottom of the well at 13,000 feet MD (3962 m MD). Because there are no core rocks currently available from well 46A-19RD, we report here on the results of compressive strength, frictional sliding behavior, and elastic measurements of a granodiorite and diorite from another well, 34-9RD2, at the Coso site. Rocks cored from well 34-9RD2 are the deepest samples to date available for testing, and are representative of rocks from the field in general.

40% Mg-content cored wire was used to desulphurize and spheroidize ductile iron melt in industrial experiments. The optimal feeding speed and suitable treatment temperature were determined, and the cored-wire and pouring methods were compared. It is concluded that, under the conditions of these experiments, the optimal feeding speed is 15 m/min; the treatment temperature should be as low as possible, generally 1400-1450 °C; and the cored-wire method desulphurizes and spheroidizes ductile iron melt more effectively than the pouring method.

An analysis of the scientific areas in High Energy Density (HED) physics that underpin the enduring LANL mission in Stockpile Stewardship (SS) has identified important research needs that are not being met. That analysis has included the work done as part of defining the mission need for the High Intensity Laser Laboratory (HILL) LANL proposal to NNSA, LDRD DR proposal evaluations, and consideration of the Predictive Capability Framework and LANL NNSA milestones. From that evaluation, we have identified several specific and scientifically exciting experimental concepts to address those needs. These experiments are particularly responsive to physics issues in Campaigns 1 and 10. They are best done initially at the LANL Trident facility, often relying on the unique capabilities available there, although there are typically meritorious extensions envisioned at future facilities such as HILL, or the NIF once the ARC short-pulse laser is available at sufficient laser intensity. As the focus of the LANL HEDP effort broadens from ICF ignition of the point design, at the conclusion of the National Ignition Campaign, into a more SS-centric effort, it is useful to consider these experiments, which address well-defined issues, with specific scientific hypotheses to test or models to validate or disprove, via unit-physics experiments. These experiments are in turn representative of a possible broad experimental portfolio to elucidate the physics of interest to these campaigns. These experiments, described below, include: (1) first direct measurement of the evolution of particulates in isochorically heated dense plasma; (2) temperature relaxation measurements in a strongly coupled plasma; (3) viscosity measurements in a dense plasma; and (4) ionic structure factors in a dense plasma. All these experiments address scientific topics of importance to our sponsors, involve excellent science at the boundaries of traditional fields, utilize unique capabilities at LANL

I report on an analysis of the alignment between the South African Grade 12 Physical Sciences core curriculum content and the exemplar papers of 2008, and the final examination papers of 2008 and 2009. A two-dimensional table was used for both the curriculum and the examination in order to calculate the Porter alignment index, which indicates the…

People's feelings toward physical activity are often influenced by memories of their childhood experiences in physical education and sport. Unfortunately, many adults remember negative experiences, which may affect their desire to maintain a physically active lifestyle. A survey that asked 293 students about recollections from their childhood…

PANDA (antiProton ANnihilation at DArmstadt) is an experiment that will run at the GSI laboratory, Darmstadt, Germany, in 2019. A high-intensity antiproton beam with momentum up to 15 GeV/c will collide with a fixed proton target (pellet target or jet target). A wide range of physics topics will be investigated: charmonium states and open-charm states above the $D\overline{D}$ threshold; exotic states such as glueballs, oddballs, hybrids, multiquarks and molecules; the spectroscopy of the excited states of strange and charm baryons; non-perturbative QCD dynamics in the $p\overline{p}$ production cross section of charm and strange baryons and their spin correlations; the behaviour of hadrons in nuclear matter; hypernuclear physics; electromagnetic proton form factors in the timelike region; and CP violation in the charm sector, with rare and forbidden decays of charm baryons and mesons.

The data acquisition in high energy physics experiments is typically started by a pulse from a fast coincidence-based trigger system. It is essential that such a system can identify an event in the shortest possible time and with as good selectivity as possible. In order to meet these requirements, several new techniques and developments in the domain of signal discrimination and rapid hit-topology analysis are presented. Two digital rise-time compensation methods were developed to improve the time resolution of the comparatively slow signals from inorganic scintillators. Both methods utilize double-threshold analog comparators and digital processing logic. A unique adaptive threshold discrimination method was developed to reject after-pulses. The method was found to give the best timing, the smallest dead time and a complete rejection of noise pulses without missing physically significant pulses. Algorithms for fast multiplicity calculations of clusters of hits in two-dimensional matrices, in strings and in p...
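Double-threshold rise-time compensation, the general technique named above, can be illustrated with a toy calculation. The sketch below (thresholds and the linear-ramp pulse model are invented for illustration, not taken from the thesis) extrapolates the low- and high-threshold crossing times back to the baseline, giving a start-time estimate that is independent of pulse amplitude for a linear leading edge:

```python
# Double-threshold timing: two comparators fire at a LOW and a HIGH
# threshold on the pulse's leading edge; extrapolating the line through
# both crossings back to amplitude zero removes the amplitude walk that
# plagues single-threshold discrimination of slow scintillator signals.

def compensated_time(t_low, t_high, thr_low, thr_high):
    """Extrapolate the two threshold crossings back to amplitude zero."""
    slope = (thr_high - thr_low) / (t_high - t_low)
    return t_low - thr_low / slope

def crossing(t0, amplitude, rise, threshold):
    """Time at which a linear ramp starting at t0 crosses `threshold`
    (the ramp reaches `amplitude` after `rise` time units)."""
    return t0 + threshold / amplitude * rise

if __name__ == "__main__":
    # Two pulses with the same start time (10.0) but different amplitudes:
    for amp in (1.0, 5.0):
        t_lo = crossing(10.0, amp, rise=4.0, threshold=0.2)
        t_hi = crossing(10.0, amp, rise=4.0, threshold=0.6)
        print(compensated_time(t_lo, t_hi, 0.2, 0.6))
```

A single fixed threshold would report the larger pulse earlier than the smaller one; the extrapolated start time is the same for both, which is the point of the compensation.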

An exciting new era in flavour physics has just begun with the start of the Large Hadron Collider (LHC). The LHCb (where b stands for beauty) experiment, designed specifically to search for new phenomena in quantum loop processes and to provide a deeper understanding of matter-antimatter asymmetries at the most fundamental level, is producing many new and exciting results. It gives me great pleasure to describe a selected few of the results here, in particular the search for rare B(s)0→μ+μ− decays and the measurement of the B(s)0 charge-conjugation-parity-violating phase, both of which offer high potential for the discovery of new physics at and beyond the LHC energy frontier in the very near future.

215 graduates (118 women and 97 men) of the University of Arizona's International Health Core Curriculum received a questionnaire after completion of their clinical practice in order to evaluate the experience of 10 years from 1982-91. The curriculum consisted of a 3- week orientation course given to 4th year medical students with core contents of population, nutrition, and infectious diseases followed up by student evaluation upon completion. 192 students were eligible for the survey of whom 154 completed it yielding an 80% response rate: 139 future physicians and 15 nurses, health educators, and nutritionists. 113 of 154 respondents completed an international health field experience after the course in 43 developing countries: 22% in Africa, 39% in Asia-Pacific, and 39% in Latin American-Caribbean. 79% were in rural and 34% in urban areas. A public health-community medicine program was incorporated in the clinical work at most sites. 95% of them participated in clinical care, 73% in community teaching, and 51% in research and evaluation. The duration of this field experience lasted 6-12 months for 69% of them. The median responses regarding the possibility of postcourse international field work and rating the worth of the course for clinical care, teaching others, and research were well or very well. They also rated the preparation of the course for subsequent work at 43 specific sites as good and dealing with limited resources and cross-culture communication as very good. All were willing to recommend the course to their peers.

The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale, where groundbreaking discoveries are expected. The focus is on the investigation of electroweak symmetry breaking and, linked to this, the search for the Higgs boson, as well as the search for physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of the physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. The revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. A greatly improved understanding of calibration and alignment techniques, and of their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system. In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of

We developed a reconfigurable nuclear instrument system (RNIS) that can satisfy the requirements of diverse nuclear and particle physics experiments and of inertial confinement fusion diagnostics. Benefiting from its reconfigurable hardware structure and digital pulse processing technology, RNIS shakes off the restrictions of cumbersome crates and miscellaneous modules. It retains all the advantages of conventional nuclear instruments while being more flexible and portable. RNIS is primarily composed of a field-programmable hardware board and the relevant PC software. Separate analog channels are designed to provide different functions, such as amplifiers, ADCs, fast discriminators and Schmitt discriminators, for diverse experimental purposes. The high-performance field programmable gate array can perform high-precision time-interval measurement, histogram accumulation, counting, and coincidence/anticoincidence measurement. To illustrate the prospects of RNIS, a series of applications to experiments are described in this paper. The first, for which RNIS was originally developed, involves nuclear energy spectrum measurement with a scintillation detector and photomultiplier. The second experiment applies RNIS to a G-M tube counting experiment, and in the third, it is applied to a quantum communication experiment through reconfiguration.

Momentum flux for imploding a target plasma in magnetized target fusion (MTF) may be delivered by an array of plasma guns launching plasma jets that merge to form an imploding plasma shell (liner). In this paper, we examine what would be a worthwhile experiment to do in order to explore the dynamics of merging plasma jets to form a plasma liner, as a first step in establishing an experimental database for plasma-jet driven magnetized target fusion (PJETS-MTF). Using past experience in fusion energy research as a model, we envisage a four-phase program to advance the art of PJETS-MTF to fusion breakeven (Q ≈ 1). The experiment, PLX (Plasma Liner Physics Exploratory Experiment), described in this paper serves as Phase I of this four-phase program. The logic underlying the selection of the experimental parameters is presented. The experiment consists of twelve plasma guns arranged in a circle, launching plasma jets towards the center of a vacuum chamber. The chosen velocity of the plasma jets is 200 km/s, and each jet is to carry a mass of 0.2-0.4 mg. A candidate plasma accelerator for launching these jets is a coaxial plasma gun of the Marshall type.
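The jet parameters quoted above (0.2-0.4 mg per jet at 200 km/s, twelve guns) imply a straightforward momentum and kinetic-energy budget. The script below is only a unit-conversion sketch of that arithmetic, not a liner-dynamics calculation:

```python
# Back-of-the-envelope kinematics for the PLX jets: per-jet momentum p = m*v
# and kinetic energy E = m*v^2 / 2, using the masses and velocity given in
# the abstract, summed over the twelve-gun array.

def jet_kinetics(mass_kg, v_m_s):
    """Return (momentum [kg*m/s], kinetic energy [J]) of one jet."""
    momentum = mass_kg * v_m_s
    energy = 0.5 * mass_kg * v_m_s**2
    return momentum, energy

if __name__ == "__main__":
    v = 200e3  # 200 km/s in m/s
    for m in (0.2e-6, 0.4e-6):  # 0.2 mg and 0.4 mg in kg
        p, e = jet_kinetics(m, v)
        print(f"per jet: p = {p:.3f} kg*m/s, E = {e / 1e3:.1f} kJ; "
              f"12 jets: E = {12 * e / 1e3:.0f} kJ")
```

Each jet therefore carries a few kilojoules of directed kinetic energy, so the full array delivers tens of kilojoules to the merging region, which sets the scale of the liner the experiment can form.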

Since the beginning of the nuclear power industry, numerous experiments concerned with nuclear energy and technology have been performed at different research laboratories worldwide. These experiments required a large investment in terms of infrastructure, expertise, and cost; however, many were performed without a high degree of attention to the archival of results for future use, and the degree and quality of documentation varies greatly. There is an urgent need to preserve integral reactor physics experimental data, including measurement methods, techniques, and separate or special effects data for nuclear energy and technology applications, along with the knowledge and competence contained therein. If the data are compromised, it is unlikely that any of these experiments will be repeated in the future. The International Reactor Physics Experiment Evaluation Project (IRPhEP) was initiated as a pilot activity in 1999 by the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC). The project was endorsed as an official activity of the NSC in June of 2003. The purpose of the IRPhEP is to provide an extensively peer-reviewed set of reactor physics related integral benchmark data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. A short history of the IRPhEP is presented and its purposes are discussed in this paper. Accomplishments of the IRPhEP, including the first publication of the IRPhEP Handbook, are highlighted and the future of the project is outlined.

A better understanding of the effect of impact damage on composite structures is necessary to give the engineer the ability to design safe, efficient structures. Current composite structures suffer severe strength reduction under compressive loading conditions due to even light damage, such as that from low-velocity impact. A review is undertaken to assess the current state of development in the areas of experimental testing and analysis methods. A set of experiments on honeycomb-core sandwich panels with thin woven fiberglass cloth facesheets is described, which includes detailed instrumentation and unique observation techniques.

We review the status of the Lake Baikal Neutrino Experiment. The Neutrino Telescope NT200 has been operating since 1998 and has been upgraded to the 10 Mton detector NT200+ in 2005. We present selected astroparticle physics results from long-term operation of NT200. Also discussed are activities towards acoustic detection of UHE-energy neutrinos, and results of associated science activities. Preparation towards a km3-scale (Gigaton volume) detector in Lake Baikal is currently a central activity. As an important milestone, a km3-prototype string, based on completely new technology, has been installed and is operating together with NT200+ since April, 2008.

In our "Physics of Music" class for non-science majors, we have developed a laboratory exercise in which students experiment with Chladni sand patterns on drumheads. Chladni patterns provide a kinesthetic, visual, and entertaining way to illustrate standing waves on flat surfaces and are very helpful when making the transition from one-dimensional systems, such as string and wind instruments, to the two-dimensional membranes and plates of the percussion family. Although the sand patterns attributed to Ernst Florens Friedrich Chladni (1756-1827) are often demonstrated for this purpose using metal plates [2-4], the use of drumheads offers several pedagogical and practical advantages in the lab.
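The drumhead modes behind these Chladni patterns are the standing waves of an ideal circular membrane, with frequencies f_mn = j_mn · c / (2πa), where j_mn is the n-th zero of the Bessel function J_m, c the transverse wave speed, and a the membrane radius. A minimal sketch (the wave speed and radius below are illustrative values, not measurements from the exercise):

```python
# Mode frequencies of an ideal circular membrane. The Bessel-zero values
# are standard tabulated constants; the (m, n) index gives m nodal
# diameters and n nodal circles, the node lines where the sand collects.
import math

# First zeros j_mn of the Bessel functions J_m (standard tabulated values):
BESSEL_ZEROS = {(0, 1): 2.405, (1, 1): 3.832, (2, 1): 5.136, (0, 2): 5.520}

def mode_frequency(m, n, wave_speed, radius):
    """f_mn = j_mn * c / (2 * pi * a) for an ideal circular membrane."""
    return BESSEL_ZEROS[(m, n)] * wave_speed / (2 * math.pi * radius)

if __name__ == "__main__":
    c, a = 100.0, 0.15  # wave speed [m/s] and radius [m], illustrative
    f01 = mode_frequency(0, 1, c, a)
    for mode in BESSEL_ZEROS:
        f = mode_frequency(*mode, c, a)
        print(f"mode {mode}: {f:6.1f} Hz (ratio {f / f01:.3f})")
```

Unlike a string, the mode frequencies are not integer multiples of the fundamental (the first overtone sits at j_11/j_01 ≈ 1.59 times the fundamental), which is one reason drums sound unpitched and why each Chladni pattern appears at its own distinct drive frequency.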

The project involved a study of the physical processes that create eroded channels and drainage networks. A particular focus was on how the shape of the channels and the network depends on the nature of the fluid flow. Our approach was to combine theoretical, experimental, and observational studies in close collaboration with Professor Daniel Rothman of the Massachusetts Institute of Technology. Laboratory-scale experiments were developed, and quantitative data on the shape of the pattern and on the erosion dynamics were obtained with a laser-aided topography technique and fluorescent optical imaging techniques.

We describe a laboratory plasma physics experiment at Los Alamos National Laboratory that uses two merging supersonic plasma jets formed and launched by pulsed-power-driven rail guns. The jets can be formed using any atomic species or mixture available in a compressed-gas bottle and have the following nominal initial parameters at the railgun nozzle exit: $n_e\approx n_i \sim 10^{16}$ cm$^{-3}$, $T_e \approx T_i \approx 1.4$ eV, $V_{\rm jet}\approx 30$-100 km/s, mean charge $\bar{Z}\approx 1$...

LHCb is a dedicated experiment designed to search for CP violation in both neutral and charged B meson decays at the LHC proton-proton collider. A general review of the main B decay channels is given, stressing the physical parameters that can be measured and interpreted in the framework of the standard CKM matrix. On the experimental side, emphasis is put on the different sub-detectors that make up the whole detector, the main role of the four-level trigger system and, finally, the global performance of the LHCb detector.

The observation of the recent electron neutrino appearance in a muon neutrino beam and the high-precision measurement of the mixing angle $\theta_{13}$ have led to a re-evaluation of the physics potential of the T2K long-baseline neutrino oscillation experiment. Sensitivities are explored for CP violation in neutrinos, non-maximal $\sin^22\theta_{23}$, the octant of $\theta_{23}$, and the mass hierarchy, in addition to the measurements of $\delta_{CP}$, $\sin^2\theta_{23}$, and $\Delta m^2_{32}$, for various combinations of...

Objective: The present study aims to describe the experience of an oncology center with computed tomography-guided core-needle biopsy of pulmonary lesions. Materials and Methods: Retrospective analysis of 97 computed tomography-guided core-needle biopsies of pulmonary lesions performed between 1996 and 2004 in a Brazilian reference oncology center (Hospital do Cancer - A.C. Camargo). Information regarding material appropriateness and the specific diagnoses was collected and analyzed. Results: Among the 97 lung biopsies, 94 (96.9%) supplied appropriate specimens for histological analysis, with 71 (73.2%) cases diagnosed as malignant lesions and 23 (23.7%) diagnosed as benign lesions. Specimens were inappropriate for analysis in three cases. A specific diagnosis was reached in 83 (85.6%) cases, with high rates for both malignant lesions, 63 (88.7%) cases, and benign lesions, 20 (86.7%) cases. As regards complications, a total of 12 cases were observed, as follows: 7 (7.2%) cases of hematoma, 3 (3.1%) cases of pneumothorax and 2 (2.1%) cases of hemoptysis. Conclusion: Computed tomography-guided core-needle biopsy of lung lesions demonstrated high rates of material appropriateness and diagnostic specificity, and low rates of complications in the present study. (author)

Some type of piercing into the subsurface formation is required in future planetary exploration to enhance the understanding of early planetary geological evolution and the origin of life. Compared with other technical methods, drilling and coring, using only the compound motion of rotation and penetration, can sample subsurface soil relatively efficiently and conveniently. However, given the uncertain mechanical properties of planetary soil, drilling-state signals should be monitored online to improve the robustness of the drilling system and avoid potential drilling faults. Since the flow characteristics of the interacting soil, such as removal volume, coring height, removal velocity and accumulation angle, directly reveal the drilling conditions, they are a rich resource for comprehending the sampling phenomenon and can be used to help control the drill tool. This paper proposes a novel soil flowing characteristics (SFC) monitoring method, applying an industrial camera to record the flow characteristics of the removed cuttings and an ultrasonic sensor inside the hollow auger to monitor the sampled core. Experiments in a typical lunar regolith simulant indicate that the monitored SFC accurately reflect the interaction between the drill tool and the soil.

Laboratory experiments document that liquid iron reacts chemically with silicates at high pressures (above 2.4 × 10^10 Pa) and temperatures. In particular, (Mg,Fe)SiO3 perovskite, the most abundant mineral of Earth's lower mantle, is expected to react with liquid iron to produce metallic alloys (FeO and FeSi) and nonmetallic silicates (SiO2 stishovite and MgSiO3 perovskite) at the pressure of the core-mantle boundary, 14 × 10^10 Pa. The experimental observations, in conjunction with seismological data, suggest that the lowermost 200 to 300 km of Earth's mantle, the D-double-prime (D″) layer, may be an extremely heterogeneous region as a result of chemical reactions between the silicate mantle and the liquid iron alloy of Earth's core. The combined thermal-chemical-electrical boundary layer resulting from such reactions offers a plausible explanation for the complex behavior of seismic waves near the core-mantle boundary and could influence Earth's magnetic field observed at the surface.

The present state of the Earth evolved from energetic events that occurred early in the history of the Solar System. A key process in reconciling this state, and the observable mantle composition, with models of the original formation relies on understanding the planetary processing that has taken place over the past 4.5 Ga. Planetary size plays a key role and ultimately determines the pressure and temperature conditions at which the materials of the early solar nebula segregated. We summarize recent developments with the laser-heated diamond anvil cell that have made possible the extension of the conventional pressure limit for partitioning experiments as well as the study of volatile trace elements. In particular, we discuss liquid-liquid, metal-silicate (M-Sil) partitioning results for several elements in a synthetic chondritic mixture, spanning a wide range of atomic number, from helium to iodine. We examine the role of the core as a possible host of both siderophile and trace elements and the implications that early segregation processes at deep magma ocean conditions have for current mantle signatures, both compositional and isotopic. The results provide some of the first experimental evidence that the core is the obvious replacement for the long-sought deep mantle reservoir. If so, they also indicate the need to understand the detailed nature and scale of core-mantle exchange processes, from atomic to macroscopic, throughout the age of the Earth to the present day.

The mantles of the Earth and Moon are similarly depleted in V, Cr, and Mn relative to the concentrations of these elements in chondritic meteorites [1,2]. The similar depletions have been suggested to be due to a common genesis of the Earth and Moon, with the Moon inheriting its mantle, complete with V, Cr, and Mn depletions, from the Earth during the impact-induced formation of the Moon. We have conducted multi-anvil experiments that systematically examined the effects of pressure, temperature, and silicate and metallic compositions on liquid metal-liquid silicate partitioning of V, Cr, and Mn. Increasing temperature is found to significantly increase the metal-silicate partition coefficients for all three elements. Increasing the S or C content of the metallic liquid also causes the partition coefficients to increase. Silicate composition has an effect consistent with Cr and Mn being divalent and V being trivalent. Over our experimental range of 3-14 GPa, the partitioning behavior of V, Cr, and Mn did not vary with pressure. With the effects of oxygen fugacity, metallic and silicate compositions, temperature and pressure understood, the partition coefficient for each element was expressed as a function of these thermodynamic variables and applied to different core formation scenarios. Our new metal-silicate experimental partitioning data can explain the mantle depletions of V, Cr, and Mn by core formation in a high temperature magma ocean under oxygen fugacity conditions two log units below the iron-wuestite buffer, conditions similar to those proposed by [3] from their metal-magnesiowuestite study. In contrast, more oxidizing conditions proposed in recent core formation models [4] cannot account for the V, Cr, and Mn depletions. Additionally, because we observe little or no pressure effect on V, Cr, and Mn partitioning in our experiments, we conclude that the mantle depletions of these elements during core formation are not dependent on planet size. Accordingly
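The kind of thermodynamic parameterization described above is commonly written as log₁₀D = a + b/T + c·P/T plus a valence-dependent oxygen-fugacity term. The sketch below uses placeholder coefficients (a, b, c are illustrative, not the study's fitted values) to show how the valence controls the response to reducing conditions, expressed as log units below the iron-wuestite buffer (dIW):

```python
# Hedged sketch of a metal-silicate partition-coefficient parameterization.
# A cation M^(v+) dissolves in the silicate melt as an oxide MO_(v/2), so
# log10 D shifts by -(v/2) per log10 unit of oxygen fugacity: divalent Cr
# and Mn respond differently from trivalent V. Coefficients are placeholders.

def log10_partition(a, b, c, valence, T_K, P_GPa, dIW):
    """log10 of the metal/silicate partition coefficient D as a function of
    temperature, pressure, and fO2 relative to the iron-wuestite buffer."""
    return a + b / T_K + c * P_GPa / T_K - (valence / 2.0) * dIW

if __name__ == "__main__":
    for valence in (2, 3):  # divalent Cr/Mn vs trivalent V
        d_iw = log10_partition(1.0, -2000.0, 0.0, valence, 2500.0, 10.0, 0.0)
        d_red = log10_partition(1.0, -2000.0, 0.0, valence, 2500.0, 10.0, -2.0)
        print(f"valence {valence}: log10 D rises by {d_red - d_iw:.1f} at IW-2")
```

The c·P/T term is retained for generality even though the abstract reports essentially no pressure dependence over 3-14 GPa; with c fitted near zero the expression reproduces that observation.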

When encountering a novel object, humans are able to infer a wide range of physical properties, such as mass, friction and deformability, by interacting with it in a goal-driven way. This process of active interaction is in the same spirit as a scientist performing an experiment to discover hidden facts. Recent advances in artificial intelligence have yielded machines that can achieve superhuman performance in Go, Atari, natural language processing, and complex control problems, but it is not clear that these systems can rival the scientific intuition of even a young child. In this work we introduce a basic set of tasks that require agents to estimate hidden properties such as mass and cohesion of objects in an interactive simulated environment where they can manipulate the objects and observe the consequences. We found that state-of-the-art deep reinforcement learning methods can learn to perform the experiments necessary to discover such hidden properties. By systematically manipulating the problem difficulty and...
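A toy version of "interact to infer a hidden property" takes only a few lines: push a simulated object with a known force, observe its displacement, and invert the kinematics to recover the hidden mass. This is merely an illustration of the task setup; the paper's agents learn such probing strategies with deep reinforcement learning rather than using a closed form:

```python
# Probe a hidden mass by interaction: apply a known constant force, watch
# the resulting displacement, and invert x = F * t^2 / (2 * m).

def simulate_push(hidden_mass, force, dt, steps):
    """Displacement after pushing from rest with a constant force
    (semi-implicit Euler integration)."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        v += (force / hidden_mass) * dt
        x += v * dt
    return x

def estimate_mass(force, dt, steps, observed_x):
    """Invert the constant-acceleration kinematics for the mass. The
    integrator adds an O(dt) bias, so the estimate sharpens as dt shrinks."""
    t = dt * steps
    return 0.5 * force * t**2 / observed_x

if __name__ == "__main__":
    x = simulate_push(hidden_mass=3.0, force=6.0, dt=0.001, steps=1000)
    print(estimate_mass(6.0, 0.001, 1000, x))  # close to the hidden 3.0
```

The interesting part of the paper is that the agents must also *choose* informative interactions (how hard and how long to poke) instead of being handed this fixed probing policy.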

Because of their performance characteristics, high-performance fabrics like InfiniBand or OmniPath are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems, a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message-queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable bac...
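The message-queue style of interface attributed to NetIO can be sketched as a minimal publish/subscribe pattern. The Python below illustrates the pattern only; it is not NetIO's actual C++ API, and the tag names are invented:

```python
# Minimal publish/subscribe message queue: subscribers register per-tag
# callbacks, and publish() dispatches a message buffer to every subscriber
# of that tag. Contrast with HPC APIs like MPI, whose two-sided send/recv
# model assumes matched ranks rather than loosely coupled DAQ components.
from collections import defaultdict

class MessageQueue:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, tag, callback):
        """Register `callback(payload)` for messages published under `tag`."""
        self._subscribers[tag].append(callback)

    def publish(self, tag, payload: bytes):
        """Deliver a message buffer to all subscribers of `tag`."""
        for callback in self._subscribers[tag]:
            callback(payload)

if __name__ == "__main__":
    mq = MessageQueue()
    received = []
    mq.subscribe("event-fragments", received.append)  # hypothetical tag name
    mq.publish("event-fragments", b"\x00\x01fragment")
    print(received)
```

The design point the abstract makes is that this decoupled, buffer-oriented model fits a DAQ system, where readout, event-building, and monitoring components come and go independently, better than rank-addressed HPC messaging.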

Overview of Experiments to Study the Physics of Fast Reactors Represented in the International Handbooks of Critical and Reactor Experiments. John D. Bess, Idaho National Laboratory; Jim Gulliford, Tatiana Ivanova, Nuclear Energy Agency of the Organisation for Economic Co-operation and Development; E.V. Rozhikhin, M.Yu. Semenov, A.M. Tsibulya, Institute of Physics and Power Engineering. Studies of the physics of fast reactors have traditionally used the experiments presented in the handbook of the Cross Section Evaluation Working Group (CSEWG) (ENDF-202), issued by Brookhaven National Laboratory in 1974. This handbook presents simplified homogeneous models of experiments with the relevant experimental data, as amended. The Nuclear Energy Agency of the Organisation for Economic Co-operation and Development coordinates the activities of two international projects on the collection, evaluation and documentation of experimental data: the International Criticality Safety Benchmark Evaluation Project (since 1994) and the International Reactor Physics Experiment Evaluation Project (since 2005). These projects produce international handbooks of critical (ICSBEP Handbook) and reactor (IRPhEP Handbook) experiments that are replenished every year. The handbooks present detailed models of experiments with minimal amendments. Such models are of particular interest for calculations with modern codes. The handbooks contain a large number of experiments suitable for the study of the physics of fast reactors. Many of these experiments were performed at specialized critical facilities, such as BFS (Russia), ZPR and ZPPR (USA), and ZEBRA (UK), and at the experimental reactors JOYO (Japan) and FFTF (USA). Other experiments, such as compact metal assemblies, are also of interest in terms of the physics of fast reactors; they were carried out on universal critical facilities at Russian institutes (VNIITF and VNIIEF) and in the US (LANL, LLNL, and others). Also worth mentioning

The three-neutrino model has 9 physical parameters: 3 neutrino masses, 3 mixing angles and 3 CP-violating phases. Among them, neutrino oscillation experiments can probe 6 parameters: 2 mass-squared differences, 3 mixing angles, and 1 CP phase. The experiments performed so far have determined the magnitudes of the two mass-squared differences, the sign of the smaller mass-squared difference, the magnitudes of two of the three mixing angles, and an upper bound on the third mixing angle. The sign of the larger mass-squared difference (the neutrino mass hierarchy pattern), the magnitude of the third mixing angle and of the CP-violating phase, and a two-fold ambiguity in the mixing angle that dictates atmospheric neutrino oscillation should be determined by future oscillation experiments. In this talk, I introduce a few ideas for future long-baseline neutrino oscillation experiments which make use of the super neutrino beams from J-PARC (Japan Proton Accelerator Research Complex) in Tokai village. We examine the poten...
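The mixing angles and mass-squared differences enumerated above enter such measurements through the standard two-flavour vacuum oscillation formula, P = sin²(2θ)·sin²(1.267·Δm²[eV²]·L[km]/E[GeV]). A minimal sketch with illustrative J-PARC-beam-like numbers (a generic baseline and energy, not the talk's specific configurations):

```python
# Two-flavour vacuum oscillation probability. The constant 1.267 collects
# the unit conversions for dm2 in eV^2, L in km, and E in GeV.
import math

def osc_probability(theta, dm2_eV2, L_km, E_GeV):
    """P(nu_a -> nu_b) in the two-flavour vacuum approximation."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

if __name__ == "__main__":
    # Near-maximal mixing, close to the first oscillation maximum:
    p = osc_probability(math.pi / 4, 2.5e-3, L_km=295.0, E_GeV=0.6)
    print(f"P = {p:.3f}")
```

Long-baseline experiments pick L/E so that the phase sits near π/2 (the first oscillation maximum), which is where the sensitivity to the mixing angle, and through sub-leading terms to the CP phase and mass hierarchy, is greatest.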

The Deep Underground Neutrino Experiment (DUNE) is a proposed next-generation superbeam experiment at Fermilab. Its aims include measuring the unknown neutrino oscillation parameters: the neutrino mass hierarchy, the octant of the mixing angle θ_23, and the CP-violating phase δ_CP. The current and upcoming experiments T2K, NOνA, and ICAL at INO will also be collecting data for the same measurements. In this paper, we explore the sensitivity reach of DUNE in combination with these other experiments. We evaluate the least exposure required by DUNE to determine the above three unknown parameters with reasonable confidence. We find that in each case the inclusion of data from T2K, NOνA, and ICAL at INO helps to achieve the same sensitivity with a reduced exposure from DUNE, thereby helping to economize the configuration. Further, we quantify the effect of the proposed near detector on systematic errors and study the consequent improvement in sensitivity. We also examine the role played by the second oscillation cycle in furthering the physics reach of DUNE. Finally, we present an optimization study of the neutrino-antineutrino running of DUNE. (orig.)

In the late 1990s, PPPL's Science Education Department developed an innovative online site called the Interactive Plasma Physics Educational Experience (IPPEX). It featured (among other modules) two Java-based applications which simulated tokamak physics: a steady-state tokamak (SST) and a time-dependent tokamak (TDT). The physics underlying the SST and the TDT is based on the ASPECT code, a global power balance code developed to evaluate the performance of fusion reactor designs. We have relaunched the IPPEX site with updated modules and functionality: the site itself is now dynamic on all platforms; the graphic design has been updated to current standards; the virtual tokamak has been reprogrammed in Javascript, taking advantage of the speed and compactness of the code; and the GUI of the tokamak has been completely redesigned, including more intuitive representations of changes in the plasma, e.g., particles moving along magnetic field lines. The use of GPU-accelerated computation provides accurate and smooth visual representations of the plasma. We will present the current version of IPPEX as well as near-term plans for incorporating real-time NSTX-U data into the simulation.
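The global power balance idea behind a steady-state tokamak simulator can be sketched in a few lines (this is a generic zero-dimensional balance, not the actual ASPECT code; all parameter values are illustrative):

```python
def stored_energy_mj(density_e20, temperature_kev, volume_m3):
    """Plasma stored energy W = 3 n k T V (electrons plus ions at
    equal temperature), with n in 1e20 m^-3, T in keV, V in m^3;
    returns MJ."""
    k_ev_to_j = 1.602e-19
    w_j = 3.0 * density_e20 * 1e20 * temperature_kev * 1e3 * k_ev_to_j * volume_m3
    return w_j / 1e6

def required_heating_mw(density_e20, temperature_kev, volume_m3, tau_e_s):
    """Steady state: heating power balances the energy loss,
    P = W / tau_E, with tau_E the energy confinement time."""
    return stored_energy_mj(density_e20, temperature_kev, volume_m3) / tau_e_s

# Illustrative numbers: n = 1e20 m^-3, T = 10 keV, V = 100 m^3,
# tau_E = 1 s -> roughly 48 MW of heating needed to hold steady state.
p_mw = required_heating_mw(1.0, 10.0, 100.0, 1.0)
```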

High school students who wish to continue on to college seek opportunities to be competitive in the college market. They participate in extra-curricular activities, which are seen to foster creativity and the skills necessary to do well in the college environment. For students with an interest in physics, participating in a small-scale research project while in high school gives them hands-on experience and ultimately better prepares them for college. SUNY Plattsburgh's Physics Department started a five-week summer program for high school students in 2012. This program has proved beneficial not only for students while in the program, but also as they continue their development as scientists and engineers. Independent research, such as that offered by SUNY Plattsburgh's five-week summer program, gives students a taste of the culture of doing research and of life as a scientist. It is a short-term, risk-free way to investigate whether a career in research, or in a particular scientific field, is a good fit.

The data acquisition in high energy physics experiments is typically started by a pulse from a fast coincidence-based trigger system. It is essential that such a system can identify an event in the shortest possible time and with the best possible selectivity. To meet these requirements, several new techniques and developments in the domain of signal discrimination and rapid hit-topology analysis are presented. Two digital rise-time compensation methods were developed to improve the time resolution of the comparatively slow signals from inorganic scintillators. Both methods utilize double-threshold analog comparators and digital processing logic. A unique adaptive threshold discrimination method was developed to reject after-pulses. The method was found to give the best timing, the smallest dead time and complete rejection of noise pulses without missing physically significant pulses. Algorithms for fast multiplicity calculations of clusters of hits in two-dimensional matrices, in strings and in planar detector configurations were evaluated. All techniques described in this thesis were implemented and verified in the trigger systems built for the WASA (Wide Angle Shower Apparatus) experiment at TSL, Uppsala, Sweden and for AMANDA (Antarctic Muon And Neutrino Detector Array) at the South Pole.
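The double-threshold idea can be illustrated with a short sketch: assuming a linear leading edge, extrapolating through the two threshold-crossing points back to the baseline yields a timestamp that does not depend on pulse amplitude. This is a generic illustration of rise-time compensation, not the thesis implementation:

```python
def compensated_time(t_low, t_high, v_low, v_high):
    """Extrapolate the leading edge, assumed linear between the two
    threshold crossings (t_low, v_low) and (t_high, v_high), back to
    the baseline v = 0.  For a linear edge the result is independent
    of pulse amplitude, which is the point of double-threshold
    rise-time compensation."""
    slope = (v_high - v_low) / (t_high - t_low)
    return t_low - v_low / slope

# A small pulse and a 5x larger pulse with the same start time t0
# cross 10 mV and 50 mV thresholds at different times, yet both
# extrapolate back to the same compensated timestamp.
t0 = 100.0
small = compensated_time(t0 + 10.0, t0 + 50.0, 10.0, 50.0)
large = compensated_time(t0 + 2.0, t0 + 10.0, 10.0, 50.0)
```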

The Division of Development and Technology sponsored a four-day US-Japan workshop, "Plasma-Wall Interaction Data Needs Critical to a Burning Core Experiment (BCX)", held at Sandia National Laboratories, Livermore, California on June 24-27, 1985. The workshop, which brought together fifty scientists and engineers from the United States, Japan, Germany, and Canada, considered the plasma-material interaction and high heat flux (PMI/HHF) issues for the next generation of magnetic fusion energy devices, the Burning Core Experiment (BCX). Materials options were ranked, and a strategy for future PMI/HHF research was formulated. The foundation for international collaboration and coordination of this research was also established. This volume contains the last three of the five technical sessions: the first of the three is on plasma-materials interaction issues, the second on research facilities, and the third on smaller working-group meetings on graphite, beryllium, advanced materials and future collaborations.

The study was conducted to evaluate the effect of the ratio of face to core particles on the mechanical and physical properties of oriented strand board produced from Ethiopian highland bamboo. Three-layered oriented particleboards were manufactured with four proportions of face to core particles at a 750 kg/m³ target density. Ten percent urea-formaldehyde resin was used as the binder. The strength and dimensional stability performance of all boards was assessed based on ISO standards. The results showed that modulus of rupture...

Background: This paper presents a case study from a physics course at a Norwegian university college, investigating key aspects of a group-work project, so-called learning labs, from the participating students' perspective. Purpose: In order to develop these learning labs further, the students' perspective is important. Which aspects are essential for how the students experience the learning labs, and how do these aspects relate to the emergence of occurrences termed joint workspace, i.e. the maintenance of content-related dialogues within the group? Programme description: First year mechanical engineering students attended the learning labs as a compulsory part of the physics course. The student groups were instructed to solve physics problems using the interactive whiteboard and then submit their work as whiteboard files. Sample: One group of five male students was followed during their work in these learning labs through one term. Design and methods: Data were collected as video recordings and fieldwork observation. In this paper, a focus group interview with the students was the main source of analysis. The interpretations of the interview data were compared with the video material and the fieldwork observations. Results: The results show that the students' overall experience with the learning labs was positive. They did, however, point to internal aspects of conflicting common and personal goals, which led to a group-work dynamics that seemed to inhibit elaborate discussions and collaboration. The students also pointed to external aspects, such as a close temporal proximity between lectures and exercises, which also seemed to inhibit occurrences termed joint workspace. Conclusions: In order to increase the likelihood of a joint workspace throughout the term in the learning labs, careful considerations have to be made with regard to timing between lectures and exercises, but also with regard to raising the students' awareness about shared and personal goals.

The purpose of this study was to determine the effect of core training program on speed, acceleration, vertical jump, and standing long jump in female soccer players. A total of 40 female soccer players volunteered to participate in this study. They were divided randomly into 1 of 2 groups: core training group (CTG; n = 20) and control group (CG;…

Due to its universality, flexibility and high performance, fast Ethernet is widely used in the readout system design of modern particle physics experiments. However, Ethernet is usually used together with the TCP/IP protocol stack, which makes it difficult to implement because designers have to use an operating system to process the protocol. Furthermore, TCP/IP degrades the transmission efficiency and real-time performance. To maximize the performance of Ethernet in physics experiment applications, a data readout method based on the physical layer (PHY) is proposed in this paper. In this method, TCP/IP is forsaken and replaced with a customized, simple protocol, which makes it easier to implement. On each readout module, data from the front-end electronics is first fed into an FPGA for protocol processing and then sent out to a PHY chip, controlled by this FPGA, for transmission. This data path is fully implemented in hardware. While from the side of the data acquisition system (D...
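As a hedged sketch of what such a customized protocol might look like, consider a fixed binary header (magic word, module ID, event counter, payload length) followed by the raw front-end payload. The field layout here is purely illustrative, not the protocol of the paper:

```python
import struct

# Hypothetical minimal readout header replacing TCP/IP: a 32-bit magic
# word, a 16-bit module ID, a 32-bit event counter and a 16-bit payload
# length, all big-endian, followed by the raw payload bytes.
HEADER = struct.Struct(">IHIH")
MAGIC = 0xCAFE0001

def pack_frame(module_id, event_id, payload: bytes) -> bytes:
    """Build one readout frame: header followed by payload."""
    return HEADER.pack(MAGIC, module_id, event_id, len(payload)) + payload

def unpack_frame(frame: bytes):
    """Parse one readout frame, checking the magic word and length."""
    magic, module_id, event_id, length = HEADER.unpack_from(frame)
    if magic != MAGIC:
        raise ValueError("bad magic word")
    payload = frame[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return module_id, event_id, payload
```

Because the header has fixed offsets and no retransmission or connection state, the same parsing is trivial to mirror in FPGA logic, which is the point of dropping TCP/IP.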

Objective: to examine, through an experimental study, the effects of core stability training on basketball players. Methods: male basketball players of the National Academy of Chinese Theatre Arts were taken as subjects, and literature review, an experiment and statistical analysis were used to investigate the effect of core stability training on core-region electromyography (EMG) and physical fitness. EMG and fitness indicators were tested before and after an eight-week programme of core stability training, performed twice a week for 60 minutes per session. Results: after the experiment, the indicators of core-region EMG and physical fitness in the experimental group showed highly significant improvement (P<0.01). Conclusion: core stability training can effectively improve core-region EMG and physical fitness, helping athletes develop sports skills and raise their technical level.

The Physics of Colloids in Space (PCS) experiment was accommodated within International Space Station (ISS) EXpedite the PRocessing of Experiments to Space Station (EXPRESS) Rack 2 and was remotely operated from early June 2001 until February 2002 from NASA Glenn Research Center's Telescience Support Center (TSC) in Cleveland, Ohio, and from the remote site at Harvard University in Cambridge, Massachusetts. PCS was launched on 4/19/2001 on Space Shuttle STS-100. The experiment was activated on 5/31/2001. The entire experimental setup performed remarkably well, and accomplished 2400 hours of science operations on-orbit. The sophisticated instrumentation in PCS is capable of dynamic and static light scattering from 11 to 169 degrees, Bragg scattering over the range from 10 to 60 degrees, dynamic and static light scattering at low angles from 0.3 to 6.0 degrees, and color imaging. The long duration microgravity environment on the ISS facilitated extended studies on the growth and coarsening characteristics of binary crystals. The de-mixing of the colloid-polymer critical-point sample was also studied as it phase-separated into two phases. Further, aging studies on a col-pol gel, gelation rate studies in extremely low concentration fractal gels over several days, and studies on a glass sample, all provided valuable information. Several exciting and unique aspects of these results are discussed here.

Experiment, as a new form of knowledge, was a Baconian creation. It was in Bacon's project of the Great Instauration and in Bacon's reformed natural history that experiment and experimentation ceased to be illustrations of theories and became relatively autonomous devices for the production of knowledge and for setting the mind straight in its attempts to gain knowledge. This paper explores the way in which Bacon's Latin natural history transformed experiment and experimentation into such devices. More precisely, I investigate the way in which Bacon's Latin natural histories were put together from a limited number of significant experiments listed in the Novum Organum under the general title "instances of special power" or "instances of the lamp." Contrary to the received view, my claim is that Bacon's natural histories are based on a limited number of 'core experiments' and are generated through a specific methodological procedure known under the name of experientia literata. This paper is an attempt to reconstruct the procedure of putting such natural histories together and a more in-depth exploration of their epistemological and therapeutic character.

For those with true near-death experiences (NDEs), Greyson's (1983, 1990) NDE Scale satisfactorily fits the Rasch rating scale model, thus yielding a unidimensional measure with interval-level scaling properties. With increasing intensity, NDEs reflect peace, joy and harmony, followed by insight and mystical or religious experiences, while the most intense NDEs involve an awareness of things occurring in a different place or time. The semantics of this variable are invariant across True-NDErs' gender, current age, age at time of NDE, and latency and intensity of the NDE, thus identifying NDEs as 'core' experiences whose meaning is unaffected by external variables, regardless of variations in NDEs' intensity. Significant qualitative and quantitative differences were observed between True-NDErs and other respondent groups, mostly revolving around the differential emphasis on paranormal/mystical/religious experiences vs. standard reactions to threat. The findings further suggest that False-Positive respondents reinterpret other profound psychological states as NDEs. Accordingly, the Rasch validation of the typology proposed by Greyson (1983) also provides new insights into previous research, including the possibility of embellishment over time (as indicated by the finding of positive, as well as negative, latency effects) and the potential roles of religious affiliation and religiosity (as indicated by the qualitative differences surrounding paranormal/mystical/religious issues).

We have investigated the properties of the OMC-2 and OMC-3 cores in the Orion giant molecular cloud using high spatial and spectral resolution observations of several transitions of the ¹³CO, C¹⁸O, C³²S and C³⁴S molecules taken with the SEST telescope. The OMC-2 core consists of one clump (22 solar masses) with a radius of 0.11 pc surrounded by a cluster of 11 discrete infrared sources. The H₂ column density and volume density in the center of this clump are 2 × 10²² cm⁻² and 9 × 10⁵ cm⁻³ respectively. From a comparison between physical parameters derived from the C¹⁸O and C³²S observations we conclude that the molecular envelope around the core has been completely removed by these sources and that only the very dense gas is left. OMC-3 shows a more complex, elongated structure in C¹⁸O and CS than OMC-2. The C³²S and C³⁴S maps show that the denser region can be separated into sub-cores of roughly equal size (radius ≈ 0.13 pc), with n(H₂) = 6 × 10⁵ cm⁻³ and a mass of 10 solar masses (from C³²S). The very different masses obtained for the central core from C¹⁸O and C³²S (55 and 12 solar masses respectively) indicate that a massive envelope is still present around the very dense sub-cores. We report the first detection of several molecular outflows in OMC-3. The presence of an IRAS source and the first detection of these outflows confirm that star formation is going on in OMC-3. Based on the different physical properties of these regions compared with OMC-1, OMC-2 appears to be in an intermediate evolutionary stage between OMC-1 and OMC-3.

Ex-core neutron transport calculations are needed to evaluate the radiation loading parameters (neutron fluence, fluence rate and spectra) on the in-vessel equipment, reactor pressure vessel (RPV) and support structures of VVER-type reactors. Because these parameters are used for reactor equipment lifetime assessment, the neutron transport calculations must be carried out with precise and reliable methods. In the case of RPVs, especially of first-generation VVER-440s, the neutron fluence plays a key role in the prediction of RPV lifetime. Most VVER ex-core neutron transport calculations are performed with deterministic and Monte Carlo methods. This paper deals with precise calculations for the Russian first-generation VVER-440 with the MCNP-5 code. The purpose of this work was the application of this code to expert calculations, verification of the results by comparison with deterministic calculations, and validation against neutron activation measurements. The deterministic discrete-ordinates DORT code, widely used for RPV neutron dosimetry and extensively tested against experiments, was used for the comparison analyses. Ex-vessel neutron activation measurements at a VVER-440 NPP provided spatial (azimuthal and axial) and neutron-energy (different activation reactions) distribution data for experimental (E) validation of the calculated results. The calculational intercomparison (DORT vs. MCNP-5) and the comparison with measured values (MCNP-5 and DORT vs. E) show agreement within 10-15% for the various space points and reaction rates. The paper discusses these results and draws conclusions about the practical use of the MCNP-5 code for ex-core neutron transport calculations in expert analyses. (authors)
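Agreement between calculation and experiment is conventionally expressed as C/E ratios; the 10-15% figure above means every ratio lies within that band of unity. A minimal sketch (generic bookkeeping, not tied to the actual DORT or MCNP-5 outputs):

```python
def c_over_e(calculated, measured):
    """Calculation-to-experiment (C/E) ratios for a set of reaction rates."""
    return [c / e for c, e in zip(calculated, measured)]

def within_band(ratios, tolerance=0.15):
    """True if every C/E ratio agrees with unity within the tolerance
    (the paper quotes 10-15% agreement, i.e. tolerance 0.10-0.15)."""
    return all(abs(r - 1.0) <= tolerance for r in ratios)

# Illustrative reaction rates (arbitrary units), not measured data:
ok = within_band(c_over_e([1.05, 0.92, 1.12], [1.0, 1.0, 1.0]))
```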

In this paper the authors review a selection of recent results obtained, in the area of QCD physics, from the CDF-II experiment, which studies pp̄ collisions at √s = 1.96 TeV provided by the Fermilab Tevatron Collider. All results shown correspond to analyses performed using the Tevatron Run II data samples. In particular they illustrate the progress achieved and the status of the studies of the following QCD processes: inclusive jet production, using different jet clustering algorithms; W(→ eν_e) + jets and Z(→ e⁺e⁻) + jets production; γ + b-jet production; dijet production in double pomeron exchange; and finally exclusive e⁺e⁻ and γγ production. No deviations from the Standard Model have been observed so far.

We present a comprehensive review of physics effects generated by leptoquarks (LQs), i.e., hypothetical particles that can turn quarks into leptons and vice versa, of either scalar or vector nature. These considerations include discussion of possible completions of the Standard Model that contain LQ fields. The main focus of the review is on those LQ scenarios that are not problematic with regard to proton stability. We accordingly concentrate on the phenomenology of light leptoquarks that is relevant for precision experiments and particle colliders. Important constraints on LQ interactions with matter are derived from precision low energy observables such as electric dipole moments, (g-2) of charged leptons, atomic parity violation, neutral meson mixing, Kaon, B, and D meson decays, etc. We provide a general analysis of indirect constraints on LQ Yukawa interactions to make statements that are as model independent as possible. We address complementary constraints that originate from electroweak precision mea...

We perform a detailed combined fit to the ν̄e → ν̄e disappearance data of the Daya Bay experiment and to the νμ → νe appearance and νμ → νμ disappearance data of the Tokai to Kamioka (T2K) experiment, in the presence of two models of new physics affecting neutrino oscillations: a model in which sterile neutrinos can propagate in a large compactified extra dimension, and a model in which non-standard interactions (NSI) affect neutrino production and detection. We find that the Daya Bay ⊕ T2K data combination constrains the largest radius of the compactified extra dimension to R ≲ 0.17 μm at 2σ C.L. (for the inverted ordering of the neutrino mass spectrum) and the relevant NSI parameters to the range O(10⁻³)-O(10⁻²), for particular choices of the charge-parity violating phases.

The basic subject of this volume is the solar astronomy program conducted on Skylab. In addition to descriptions of the individual experiments and the principles involved in their performance, a brief description is included of the sun and the energy characteristics associated with each zone. Wherever possible, related classroom activities have been identified and discussed in some detail. It will be apparent that the relationships rest not only in the field of solar astronomy, but also in the following subjects: (1) physics - optics, electromagnetic spectrum, atomic structure, etc.; (2) chemistry - emission spectra, kinetic theory, X-ray absorption, etc.; (3) biology - radiation and dependence on the sun; (4) electronics - cathode ray tubes, detectors, photomultipliers, etc.; (5) photography; (6) astronomy; and (7) industrial arts.

The Balloon-borne Large-Aperture Submillimeter Telescope (BLAST) carried out a 250, 350 and 500 micron survey of the galactic plane encompassing the Vela Molecular Ridge, with the primary goal of identifying the coldest, dense cores possibly associated with the earliest stages of star formation. Here we present the results from observations of the Vela-D region, covering about 4 square degrees, in which we find 141 BLAST cores. We exploit existing data taken with the Spitzer MIPS, IRAC and SEST-SIMBA instruments to constrain their (single-temperature) spectral energy distributions, assuming a dust emissivity index beta = 2.0. This combination of data allows us to determine the temperature, luminosity and mass of each BLAST core, and also enables us to separate starless from proto-stellar sources. We also analyze the effects that the uncertainties on the derived physical parameters of the individual sources have on the overall physical properties of starless and proto-stellar cores, and we find that there appe...
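The single-temperature spectral energy distribution used for such cores is a modified blackbody, S_ν ∝ ν^β B_ν(T) with β = 2.0. A minimal sketch follows; the normalization and reference frequency are illustrative assumptions, not BLAST pipeline values:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(nu_hz, t_k):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    x = H * nu_hz / (KB * t_k)
    return 2.0 * H * nu_hz ** 3 / C ** 2 / math.expm1(x)

def greybody(nu_hz, t_k, beta=2.0, nu0_hz=1.2e12):
    """Optically thin single-temperature modified blackbody,
    S_nu proportional to (nu/nu0)^beta * B_nu(T), the form fitted to
    the BLAST cores with beta = 2.0.  nu0 and the overall scale are
    illustrative; a real fit also carries mass/distance factors."""
    return (nu_hz / nu0_hz) ** beta * planck(nu_hz, t_k)
```

Fitting this form to the 250, 350 and 500 micron fluxes (plus the ancillary bands) is what pins down the temperature, and hence the luminosity and mass, of each core.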

We present results from experiments on a biologically inspired cyber-physical system, composed of a two-dimensional heaving and pitching rigid airfoil attached to a six component load cell, mounted to a traverse that can move along a water channel. A feedback controller, influenced by the apparatus of Mackowski and Williamson, introduces the effects of a fictional drag force specified by a virtual body profile and drives the traverse accordingly. Free-swimming protocols using the force-feedback system are compared with similar motions on a motionless traverse. The propulsive efficiency of burst-and-coast kinematics is also considered. Of particular interest are (1) the implementation of the cyber-physical control system with respect to the accessible experimental parameter space, (2) the impact of force-based streamwise actuation on experimental data, and (3) the effects of burst-and-coast motions on propulsive efficiency. The work was supported by the Office of Naval Research (ONR) under MURI Grant N00014-14-1-0533.
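The force-feedback idea can be sketched as a simple loop: the measured thrust is combined with a fictional drag from the virtual body profile, and the traverse velocity is updated by integrating the resulting equation of motion. This is a generic illustration with made-up parameters, not the cited controller:

```python
def free_swimming_step(v, f_thrust, dt, mass=5.0, rho=1000.0, cd=0.3, area=0.01):
    """One step of a force-feedback free-swimming loop: combine the
    measured thrust with a fictional quadratic drag from a virtual body
    profile, then Euler-integrate m dv/dt = F_thrust - F_drag to get the
    commanded traverse velocity.  All parameter values are illustrative."""
    f_drag = 0.5 * rho * cd * area * v * abs(v)   # sign-correct quadratic drag
    return v + dt * (f_thrust - f_drag) / mass

# Iterated with a constant thrust, the speed settles where thrust
# balances the virtual drag (here 1.5 N against 1.5 * v^2 N, i.e. 1 m/s):
v = 0.0
for _ in range(20000):
    v = free_swimming_step(v, f_thrust=1.5, dt=0.001)
```

In the experiment the thrust term would come from the load cell at each control tick rather than being a constant.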

Cyclonic Gulf Stream rings are energetic eddies in the warm Sargasso Sea consisting of a ring of Gulf Stream water surrounding a core of cold Slope Water. Initially a ring core has the characteristics of the Slope Water; it is rich in plants, animals, and nutrients. As a ring decays the Slope Water properties of its core are gradually replaced by those of the Sargasso Sea, where standing crops of plants, animals, and nutrients generally are low. Although the decay rate suggests a rather long lifetime (2 to 4 years), the usual death of a ring comes when it rejoins the Gulf Stream after 6 to 12 months.

Archaeomagnetic field models cover longer timescales than historical models and may therefore resolve the motion of geomagnetic features on the core-mantle boundary (CMB) in a more meaningful statistical sense. Here we perform a detailed appraisal of archaeomagnetic field models to infer some aspects of the physics of the outer core. We characterize and compare the identification and tracking of reversed flux patches (RFPs) in order to assess their robustness. We find similar behaviour within a family of models but differences among different families, suggesting that modelling strategy is more influential than the data set. Similarities involve recurrent positions of RFPs, but no preferred direction of motion is found. The tracking of normal flux patches shows similar qualitative behaviour, confirming that RFP identification and tracking are not strongly biased by their relative weakness. We also compare the tracking of RFPs with that of the historical field model gufm1 and with seismic anomalies of the lowermost mantle to explore the possibility that RFPs have preferred locations prescribed by lower-mantle lateral heterogeneity. The archaeomagnetic field model that most resembles the historical field is interpreted in terms of core dynamics and core-mantle thermal interactions. This model exhibits a correlation between RFPs and low seismic shear velocity in co-latitude and a shift in longitude. These results shed light on core processes; in particular we infer toroidal field lines with azimuthal orientation below the CMB and large fluid upwelling structures with widths of about 80° (Africa) and 110° (Pacific) at the top of the core. Finally, similar preferred locations of RFPs in the past 9 and 3 kyr of the same archaeomagnetic field model suggest that a 3 kyr period is sufficiently long to reliably detect mantle control on core dynamics. This allows an upper bound of 220-310 km to be estimated for the magnetic boundary-layer thickness below the CMB.

Most high energy physics experiments require their detectors to be embedded in a high-intensity magnetic field. In particular the biggest of them, ATLAS, running at the CERN Large Hadron Collider (LHC) particle accelerator, generates a field of 2 T by means of a gigantic toroidal magnet working in open air. Its future phase-2 upgrade plans to move the DC/DC power supplies from their present positions on the external balconies directly onto the detectors, where the field is of the order of 1 T. This presentation describes the development of samples of a special magnetic material for inductor cores suitable for working in such an environment. Starting from iron-silicon powders, a plastic forming process based on powder extrusion, injection moulding and sintering was developed at the FN plant. To find the best compromise between the forming-process requirements (good coupling between the metallic powder and the organic components, to ensure the right mouldability) and the debinding and sintering conditions, several mixtures (with different percentages and kinds of organic additives) were tested. A proper mould was designed and realized to obtain torus-shaped prototypes. The preliminary results of the physical and microstructural characterization performed on the first prototypes will be shown.

This document is a design overview that describes the scoping studies and preconceptual design effort performed in FY 1983 on the Tokamak Fusion Core Experiment (TFCX) class of device. These studies focused on devices with all-superconducting toroidal field (TF) coils and on devices with superconducting TF coils supplemented with copper TF coil inserts located in the bore of the TF coils in the shield region. Each class of device is designed to satisfy the mission of ignition and long-pulse equilibrium burn. Typical design parameters are: major radius = 3.75 m, minor radius = 1.0 m, field on axis = 4.5 T, plasma current = 7.0 MA. These designs rely on lower hybrid (LHRH) current ramp-up and heating to ignition using the ion cyclotron range of frequencies (ICRF). A pumped limiter has been assumed for impurity control. The present document is a design overview; a more detailed design description is contained in a companion document.

Icy satellites and similar objects likely form from a mixture of hydrated rocky material, such as the CI chondrites, and various amounts of ices. Mass-balance estimates show that hydrous silicates such as serpentine, and brucite, the simple Mg-Fe hydroxide, dominate the fully hydrated mineralogy. The inferred iron content of these minerals is, however, very dependent on assumptions about the iron redox state, and on whether iron forms sulfides or segregates into a metal core. From the moment of inertia inferred from gravity measurements at Jupiter and Saturn by the Galileo and Cassini spacecraft, Ganymede and Europa would have a differentiated iron-rich core whereas Titan and Enceladus would not. Whatever the case, the iron content is generally significantly higher than that of the terrestrial ultrabasic rocks used as analogs in modeling of hydrated satellite cores. We therefore investigated the phase relations of iron-rich ultrabasic systems based on chondritic composition by combining thermodynamic modeling and preliminary high-pressure experiments. Our starting composition model is that of the CI carbonaceous chondrites. Stable mineral assemblages are calculated with the PerpleX package (Connolly, 1990), assuming excess water and various amounts of iron in the silicate phase, through varying the amount of iron sulfide (troilite) or iron oxide (magnetite). Results show that the stable hydrated minerals are serpentine, chlorite, brucite, Na-phlogopite and, in extreme cases, talc in the 1.5-5 GPa range relevant to bodies larger than about 1000 km in radius. Dehydration temperatures are extremely sensitive to the iron content, hence to the chosen amount of the iron-bearing phase (troilite or magnetite), and to a lesser extent to the average CI composition. An experimental approach was developed to simulate hydrous alteration of CI-like material: a mixture of synthetic silicates, troilite and organic compounds, to which excess water is added, is used. Mineralogy and composition is checked

The COMPASS experiment, at the CERN SPS, has for more than a decade been compiling successful and precise results on nucleon structure and hadron spectroscopy, with statistical errors much smaller than previously measured. The new COMPASS spin physics program, starting this year, aims at a rather complete description of nucleon structure; this new representation goes beyond the collinear approximation by including the quark intrinsic transverse momentum distributions. The theoretical framework for this new picture of the nucleon is given by the Transverse Momentum Dependent distributions (TMDs) and by the Generalised Parton Distributions (GPDs). The TMDs, in particular the Sivers, Boer-Mulders, pretzelosity and transversity functions, will be obtained through the polarised Drell-Yan process for the first time. The results will be complementary to those already obtained via polarised Semi-Inclusive Deep Inelastic Scattering (SIDIS). Unpolarised SIDIS will also be studied, improving knowledge of the strange quark PDF and giving access to the kaon fragmentation functions (FFs). Deeply Virtual Compton Scattering (DVCS) off an unpolarised hydrogen target will be used to study the GPDs in a kinematic region not yet covered by any existing experiment.

A system for optical tracking of frozen hydrogen microsphere targets (pellets) has been designed. It is intended for the upcoming hadron physics experiment PANDA at FAIR, Darmstadt, Germany. With such a tracking system one can reconstruct the positions of the individual pellets at the time of a hadronic interaction in the offline event analysis. This gives information on the position of the primary interaction vertex with an accuracy of a few hundred µm, which is very useful e.g. for reconstruction of charged particle tracks and secondary vertices and for background suppression. A study has been done at the WASA detector setup (Forschungszentrum Jülich, Germany) to check the possibility of classifying hadronic events as originating in pellets or in background. The study was based on the instantaneous rate measured with a Long Range TDC, which was used to determine if a pellet was present in the accelerator beam region. It was clearly shown that it is possible to distinguish the two event classes. Experience was also gained with the operation of two synchronized systems operating on different time scales, as will also be the case with the optical pellet tracking.

CALET (CALorimetric Electron Telescope) is a high energy astroparticle physics experiment planned for a long exposure mission aboard the International Space Station (ISS) by the Japanese Aerospace Exploration Agency, in collaboration with the Italian Space Agency (ASI) and NASA. The main science goal is high precision measurements of the inclusive electron (+positron) spectrum below 1 TeV and the exploration of the energy region above 1 TeV, where the shape of the high end of the spectrum might unveil the presence of nearby sources of acceleration. CALET has been designed to achieve a large proton rejection capability (>10$^5$) with a fine grained imaging calorimeter (IMC) followed by a total absorption calorimeter (TASC), for a total thickness of 30 X$_{0}$ and 1.3 proton interaction lengths. With an excellent energy resolution and a lower background contamination with respect to previous experiments, CALET will search for possible spectral signatures of dark matter with both electrons and gamma rays. CALET w...

The Department of Community Medicine and Family Medicine (CMFM) has been started as a new model for imparting the components of family medicine and delivering health-care services at primary and secondary levels in all six newly established All India Institutes of Medical Sciences (AIIMS), but there is no competency-based curriculum for it. This paper aims to share the experience of using the Delphi method in the process of developing consensus on the core competencies of the new model of CMFM at AIIMS for undergraduate medical students in India. The study adopted different approaches and methods, but Delphi was the most critical method used in this research. In the Delphi process, the experts were contacted by e-mail and their feedback was analyzed. Two rounds of Delphi were conducted: 150 participants were contacted in Delphi-I but only 46 responded; in Delphi-II, 26 participants responded, and their responses were finally considered for analysis. Three of the core competencies, namely clinician, primary-care physician, and professionalism, were agreed on by all the participants, and the least agreement was observed for the competencies of epidemiologist and medical teacher. The experts with more experience were less consistent, as responses changed from agree to disagree in more than 15% of participants and 6% changed from disagree to agree. Within the given constraints, the final list of competencies and skills for the discipline of CMFM compiled after the Delphi process will provide useful insight into the development of a competency-based curriculum for the subject.

Siderophile elements in the Earth's mantle are depleted relative to chondrites. This is most pronounced for the highly siderophile elements (HSEs), which are approximately 400× lower than chondrites. Also remarkable are the relative chondritic abundances of the HSEs. This signature has been interpreted as representing their sequestration into an iron-rich core during the separation of metal from silicate liquids early in the Earth's history, followed by a late addition of chondritic material. Alternative efforts to explain this trace element signature have centered on element partitioning experiments at varying pressures, temperatures, and compositions (P-T-X). However, first results from experiments conducted at 1 bar did not match the observed mantle abundances, which motivated the model described above, a "late veneer" of chondritic material deposited on the Earth and mixed into the upper mantle. Alternatively, the mantle trace element signature could be the result of equilibrium partitioning between metal and silicate in the deep mantle, under P-T-X conditions which are not yet completely identified. An earlier model determined that equilibrium between metal and silicate liquids could occur at a depth of approximately 700 km, 27 (±6) GPa and approximately 2000 (±200) °C, based on an extrapolation of partitioning data for a variety of moderately siderophile elements obtained at lower pressures and temperatures. Based on Ni-Co partitioning, the magma ocean may have been as deep as 1450 km. At present, only a small range of possible P-T-X trace element partitioning conditions has been explored, necessitating large extrapolations from experimental to mantle conditions for tests of equilibrium models. Our primary objective was to reduce or remove the additional uncertainty introduced by extrapolation by testing the equilibrium core-formation hypothesis at P-T-X conditions appropriate to the mantle.

The observation of the recent electron neutrino appearance in a muon neutrino beam and the high-precision measurement of the mixing angle θ₁₃ have led to a re-evaluation of the physics potential of the T2K long-baseline neutrino oscillation experiment. Sensitivities are explored for CP violation in neutrinos, non-maximal sin²2θ₂₃, the octant of θ₂₃, and the mass hierarchy, in addition to the measurements of δ_CP, sin²θ₂₃, and Δm²₃₂, for various combinations of ν-mode and ν̄-mode data-taking. With an exposure of 7.8×10²¹ protons-on-target, T2K can achieve 1σ resolution of 0.050 (0.054) on sin²θ₂₃ and 0.040 (0.045)×10⁻³ eV² on Δm²₃₂ for 100% (50%) neutrino beam mode running, assuming sin²θ₂₃ = 0.5 and Δm²₃₂ = 2.4×10⁻³ eV². T2K will have sensitivity to the CP-violating phase δ_CP at 90% C.L. or better over a significant range. For example, if sin²2θ₂₃ is maximal (i.e. θ₂₃ = 45°), the range is −115° < δ_CP < −60° for normal hierarchy and +50° < δ_CP < +130° for inverted hierarchy. When T2K data are combined with data from the NOνA experiment, the region of oscillation parameter space with sensitivity to observe a non-zero δ_CP is substantially increased compared to either experiment analyzed alone.
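
As a back-of-the-envelope illustration of the quantities quoted above, the two-flavor muon-neutrino survival probability can be evaluated directly. This is only a sketch, not the full three-flavor T2K fit with matter effects; the baseline (~295 km) and beam peak energy (~0.6 GeV) are the well-known T2K values, inserted here as assumptions:

```python
import math

def survival_probability(sin2_2theta23, dm2_32_eV2, L_km, E_GeV):
    """Two-flavor nu_mu survival probability (illustrative only):
    P(nu_mu -> nu_mu) = 1 - sin^2(2*theta23) * sin^2(1.267 * dm2 * L / E)."""
    phase = 1.267 * dm2_32_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta23 * math.sin(phase) ** 2

# Oscillation parameters quoted in the abstract, at the approximate
# T2K baseline and beam peak energy
p = survival_probability(1.0, 2.4e-3, 295.0, 0.6)
```

With these values the beam sits near the first oscillation maximum, so the survival probability comes out close to zero, which is why this baseline/energy combination was chosen.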

There is growing evidence of persistent gender achievement gaps in university physics instruction, not only for learning physics content, but also for developing productive attitudes and beliefs about learning physics. These gaps occur in both traditional and interactive-engagement (IE) styles of physics instruction. We investigated one gender gap…

Due to its advantages of universality, flexibility and high performance, fast Ethernet is widely used in readout system design for modern particle physics experiments. However, Ethernet is usually used together with the TCP/IP protocol stack, which makes it difficult to implement readout systems because designers have to use the operating system to process this protocol. Furthermore, TCP/IP degrades the transmission efficiency and real-time performance. To maximize the performance of Ethernet in physics experiment applications, a data readout method based on the physical layer (PHY) is proposed. In this method, TCP/IP is replaced with a customized and simple protocol, which makes it easier to implement. On each readout module, data from the front-end electronics is first fed into an FPGA for protocol processing and then sent out to a PHY chip controlled by this FPGA for transmission. This kind of data path is fully implemented in hardware. From the side of the data acquisition system (DAQ), however, the absence of a standard protocol causes problems for network-related applications. To solve this problem, in the operating system kernel space, data received by the network interface card is redirected from the traditional flow to a specified memory space by a customized program. This memory space can easily be accessed by applications in user space. For the purpose of verification, a prototype system has been designed and implemented. Preliminary test results show that this method can meet the requirements of data transmission from the readout module to the DAQ in an efficient and simple manner. Supported by National Natural Science Foundation of China (11005107) and Independent Projects of State Key Laboratory of Particle Detection and Electronics (201301)
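
The customized protocol itself is not specified in the abstract, but the general idea of replacing TCP/IP with a simple framed format can be sketched. The field layout below (a magic marker, a length field, and a CRC32 trailer) is entirely hypothetical, chosen only to show how little logic such a protocol needs compared to a full TCP/IP stack:

```python
import struct
import zlib

MAGIC = 0xCAFE  # hypothetical frame marker; the paper's real field layout is not given

def build_frame(payload: bytes) -> bytes:
    """Sketch of a minimal custom readout frame replacing TCP/IP:
    2-byte magic, 2-byte payload length, payload, 4-byte CRC32 trailer."""
    header = struct.pack(">HH", MAGIC, len(payload))
    crc = struct.pack(">I", zlib.crc32(header + payload) & 0xFFFFFFFF)
    return header + payload + crc

def parse_frame(frame: bytes) -> bytes:
    """Validate marker and CRC, then return the payload."""
    magic, length = struct.unpack(">HH", frame[:4])
    assert magic == MAGIC, "bad frame marker"
    payload = frame[4:4 + length]
    (crc,) = struct.unpack(">I", frame[4 + length:8 + length])
    assert crc == zlib.crc32(frame[:4 + length]) & 0xFFFFFFFF, "CRC mismatch"
    return payload

data = parse_frame(build_frame(b"\x01\x02\x03"))
```

Because framing and validation are this simple, the same logic maps naturally onto FPGA hardware on the transmit side and a small kernel-space redirector on the receive side.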

The accident tolerant fuels (ATF) considered in this work include metallic microcell UO{sub 2} pellets and an outer Cr-based alloy coating on the cladding, which are being developed at KAERI (Korea Atomic Energy Research Institute). Chromium metal has been used in many fields because of its hardness and corrosion resistance. The use of chromium metal in a nuclear fuel rod can enhance the conductivity of the pellets and the corrosion resistance of the cladding. The objective of this work is to study the neutronic performance and characteristics of a commercial PWR core loaded with the ATF-bearing assemblies. In this work, we studied PWR cores loaded with ATF assemblies to improve the safety of the reactor core. The ATF rod consists of the metallic microcell UO{sub 2} pellet, which includes chromium at 3.34 wt%, and an outer 0.05-mm-thick coating of Cr-based alloy with an atomic number ratio of 85:15. We performed a cycle-by-cycle reload core analysis from cycle 8, at which the ATF fuel assemblies start to be loaded into the core. The target plant is the Hanbit-3 nuclear power plant. From the analysis, it was found that 1) the uranium enrichment must be increased up to 5.20/4.70 wt% in order to satisfy the required cycle length of 480 EFPDs, and 2) the cycle length for the core using ATF fuel assemblies with the same uranium enrichments as those in the reference UO{sub 2}-fueled core decreases from 480 EFPDs to 430 EFPDs.

The accelerating cavities used in the rapid cycling synchrotron (RCS) of the Japan Proton Accelerator Research Complex (J-PARC) are loaded with magnetic alloy (MA) cores. Over lengthy periods of RCS operation, significant reductions in the impedance of the cavities resulting from the buckling of the cores were observed. A series of thermal structural simulations and compressive strength tests showed that the buckling can be attributed to the low-viscosity epoxy resin impregnation of the MA core, which causes the stiffening of the originally flexible MA–ribbon–wound core. Our results showed that thermal stress can be effectively reduced by using a core that is not epoxy-impregnated. -- Highlights: • A study to identify the origin of buckling in the MA cores is presented. • Thermal stress simulations and compressive strength tests were carried out. • Results show that thermal stress is the origin of core buckling. • Thermal stress can be reduced by using cores without epoxy impregnation.

The hydrodynamic operation of the "Forest Flyer" type of explosive launching system for shock physics projectiles was investigated in detail using one- and two-dimensional continuum dynamics simulations. The simulations were numerically converged and insensitive to uncertainties in the material properties; they reproduced the speed of the projectile and the shape of its rear surface. The most commonly used variant, with an Al alloy case, was predicted to produce a slightly curved projectile, subjected to some shock heating and likely exhibiting some porosity from tensile damage. The curvature is caused by a shock reflected from the case; tensile damage is caused by the interaction of the Taylor wave pressure profile from the detonation wave with the free surface of the projectile. The simulations gave only an indication of tensile damage in the projectile, as damage is not understood well enough for predictions in this loading regime. The flatness can be improved by using a case of lower shock impedance, such as polymethyl methacrylate. High-impedance cases, including Al alloys but also denser materials that improve the launching efficiency, can be used if designed according to the physics of oblique shock reflection, which indicates an appropriate case taper for any combination of explosive and case material. The tensile stress induced in the projectile depends on the relative thicknesses of the explosive, expansion gap, and projectile. The thinner the projectile with respect to the explosive, the smaller the tensile stress. Thus if the explosive is initiated with a plane wave lens, the tensile stress is lower than for initiation with multiple detonators over a plane. The previous plane wave lens designs did, however, induce a tensile stress close to the spall strength of the projectile. The tensile stress can be reduced by changes in the component thicknesses. Experiments verifying the operation of explosively launched projectiles should attempt to measure

Low cost graphics cards today use many, relatively simple, compute cores to deliver support for memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more 32-bit floating point cores executing concurrently, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers, each hosting one graphics card. The implementation takes into account considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes.
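
The kernel-plus-supervisor structure can be caricatured with NumPy arrays standing in for CUDA device memory: the state arrays stay "on the device" for the whole timestepping loop, and nothing is copied back per step. This is a minimal 1D Lax-Friedrichs shallow-water sketch with made-up grid and timestep values, not the authors' cluster code:

```python
import numpy as np

def flux(h, hu, g=9.81):
    """Physical fluxes of the 1D shallow-water equations."""
    u = hu / h
    return hu, hu * u + 0.5 * g * h * h

def step(h, hu, dt, dx):
    """One Lax-Friedrichs update; in the GPU version each such kernel runs
    as device code over arrays resident in graphics memory."""
    fh, fhu = flux(h, hu)

    def lf(q, f):
        # q_i^{n+1} = (q_{i-1}+q_{i+1})/2 - dt/(2 dx) * (f_{i+1}-f_{i-1})
        return 0.5 * (np.roll(q, 1) + np.roll(q, -1)) \
            - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1))

    return lf(h, fh), lf(hu, fhu)

# Supervisory loop: state stays in the "device" arrays across all steps;
# per-step host<->device copies (the bottleneck noted above) are avoided.
h = np.ones(200)
h[90:110] = 1.1          # small dam-break style bump
hu = np.zeros(200)
for _ in range(50):
    h, hu = step(h, hu, dt=0.001, dx=0.01)
```

The scheme is conservative on the periodic domain, so total water volume is preserved over the run; the same property is a convenient correctness check for a real GPU port.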

The Atlas facility, now under construction at Los Alamos National Laboratory (LANL), will provide a unique capability for performing high-energy-density experiments in support of weapon-physics and basic-research programs. It is intended to be an international user facility, providing opportunities for researchers from national laboratories and academic institutions around the world. Emphasizing hydrodynamic experiments, Atlas will provide the capability for achieving steady shock pressures exceeding 10 Mbar in a volume of several cubic centimeters. In addition, the kinetic energy associated with solid-liner implosion velocities exceeding 12 km/s is sufficient to drive dense hydrodynamic targets into the ionized regime, permitting the study of complex issues associated with strongly coupled plasmas. The primary element of Atlas is a 23-MJ capacitor bank, comprising 96 separate Marx generators housed in 12 separate oil-filled tanks surrounding a central target chamber. Each tank will house two independently removable maintenance units, with each maintenance unit consisting of four Marx modules. Each Marx module has four capacitors that can each be charged to a maximum of 60 kilovolts. When the railgap switches are triggered, the Marx modules erect to a maximum of 240 kV. The parallel discharge of these 96 Marx modules will deliver a 30-MA current pulse with a 4-5 µs risetime to a cylindrical imploding liner via 24 vertical, tri-plate, oil-insulated transmission lines. An experimental program for testing and certifying all Marx and transmission line components has been completed. A complete maintenance module and its associated transmission line (the First Article) are now under construction and testing. The current Atlas schedule calls for construction of the machine to be complete by August 2000. Acceptance testing is scheduled to begin in November 2000, leading to initial operations in January 2001.

We consider the set E of all the yes-no experiments that can be performed on a given physical system and the related posets (E, <=) of the 'effects' and (L, <=) of the 'propositions', illustrate the two order relations by means of examples, and give counterexamples for properties that one might suspect to hold in (E, <=); in particular, we show that Mackey's axiom V does not usually hold either in (E, <=) or in its greatest subposet (E_0, <=), which can be orthocomplemented with standard methods in quantum logic. Following on the suggestions arising from the examples, we associate with every observable T, by means of the concept of 'efficiency', a family E_T of yes-no experiments, hence a family E_T of effects parameterized by the Borel fuzzy sets on the real line, and show that the description of the effects by means of operators, which is usual in some axiomatic approaches, can be recovered in standard Hilbert-space quantum theory as an immediate consequence of simple, 'intuitive' assumptions on E. This description is used in order to explicitly display (possibly in the presence of superselection rules) some properties of the representations of (E, <=) and (L, <=), and the links between some different axiomatic approaches (in particular, Mackey and Piron). Finally, we point out some mathematical properties of the lattice of the operators that describe E_T.

Recent solar observations suggest that the Sun's corona is heated by Alfvén waves that dissipate at unexpectedly low heights in the corona. These observations raise a number of questions. Among them are the problems of accurately quantifying the energy flux of the waves and of describing the physical mechanism that leads to the wave damping. We are performing laboratory experiments to address both of these issues. The energy flux depends on the electron density, which can be measured spectroscopically. However, spectroscopic density diagnostics have large uncertainties, because they depend sensitively on atomic collisional excitation, de-excitation, and radiative transition rates for multiple atomic levels. Essentially all of these data come from theory and have not been experimentally validated. We are conducting laboratory experiments using the electron beam ion trap (EBIT) at Lawrence Livermore National Laboratory that will provide accurate empirical calibrations for spectroscopic density diagnostics and will also help to guide theoretical calculations. The observed rapid wave dissipation is likely due to inhomogeneities in the plasma that drive flows and currents at small length scales where energy can be more efficiently dissipated. This may take place through gradients in the Alfvén speed along the magnetic field, which cause wave reflection and generate turbulence. Alternatively, gradients in the Alfvén speed across the field can lead to dissipation through phase mixing. Using the Large Plasma Device (LAPD) at the University of California, Los Angeles, we are studying both of these dissipation mechanisms in the laboratory in order to understand their potential roles in coronal heating.

In this study, we investigated the effect of eight weeks of core training on physical and physiological parameters of football players. 44 football players between 18 and 30 years of age, 22 in the experimental group (EG) and 22 in the control group (CG), were included in the study. While the eight-week core training was applied to the EG, normal training continued in the CG. Body composition, leg strength, back strength, flexibility, vertical jump, 20-m speed and VO2max (maximal oxygen consumption capacity) measurements of the groups were taken. The independent t-test was used for paired comparison of the groups and the dependent t-test for the comparison of each group's pre- and post-tests. Significant improvement was observed in all parameters of the EG. A significant improvement was seen in the BMI (Body Mass Index), weight, vertical jump and leg and back strength values of the CG. In the between-group differences, significance at the p<0.05 level was detected in weight, BMI, flexibility, leg and back strength, 20-m speed and VO2max values in favor of the EG. All in all, it can be concluded that core strength training has some positive effects on the physical and physiological parameters.
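
For reference, the dependent (paired) t statistic used for the pre-/post-test comparisons has a simple closed form; the numbers below are invented illustration values, not the study's data:

```python
import math

def paired_t(pre, post):
    """Dependent (paired) t statistic: t = mean(d) / (s_d / sqrt(n)),
    where d are the per-subject post-minus-pre differences."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of d
    return mean / math.sqrt(var / n)

# Hypothetical pre/post vertical-jump scores for five players
t = paired_t([30, 32, 31, 29, 33], [34, 35, 33, 31, 36])
```

The resulting t would then be compared against the t distribution with n−1 degrees of freedom to judge significance at the p<0.05 level.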

ALEGRA is a coupled physics framework originally written to simulate inertial confinement fusion (ICF) experiments being conducted at the PBFA-II facility at Sandia National Laboratories. It has since grown into a large software development project supporting a number of computational programs at Sandia. As the project has grown, so has the development team, from the original two authors to a group of over fifteen programmers crossing several departments. In addition, ALEGRA now runs on a wide variety of platforms, from large PCs to the ASCI Teraflops massively parallel supercomputer. The authors discuss the reasons for ALEGRA's success, which include the intelligent use of object-oriented techniques and the choice of C++ as the programming language. They argue that the intelligent use of development tools, such as build tools (e.g. make), the compiler, the debugging environment (e.g. dbx), the version control system (e.g. cvs), and bug management software (e.g. ClearDDTS), is nearly as important as the choice of language and paradigm.

New physics beyond the Standard Model can lead to extra matter effects on neutrino oscillation if the new interactions distinguish among the three flavors of neutrino. In a previous paper, we argued that a long-baseline neutrino oscillation experiment in which the Fermilab-NUMI beam in its high-energy mode is aimed at the planned Hyper-Kamiokande detector would be capable of constraining the size of those extra effects, provided the vacuum value of sin²2θ₂₃ is not too close to one. In this paper, we discuss how such a constraint would translate into limits on the coupling constants and masses of new particles in various models. The models we consider are: models with generation-distinguishing Z′ bosons such as topcolor-assisted technicolor, models containing various types of leptoquarks, R-parity violating SUSY, and extended Higgs sector models. In several cases, we find that the limits thus obtained could be competitive with those expected from direct searches at the LHC. In the event that any of the pa...

To improve the seismic performance of reinforced concrete core walls, reinforced concrete composite core walls with concealed steel truss were proposed and systematically investigated. Two 1/6-scale core wall specimens, a normal reinforced concrete core wall and a reinforced concrete composite core wall with concealed steel truss, were designed. An experimental study of seismic performance under cyclic loading was carried out. The load-carrying capacity, stiffness, ductility, hysteretic behavior and energy dissipation of the core walls were discussed. The test results showed that the seismic performance of core walls is improved greatly by the concealed steel truss. The calculated results were found to agree well with the measured ones.

We conducted four core-flood experiments on samples of a micritic reef limestone from Abu Dhabi under conditions of constant flow rate. The pore fluid was water in equilibrium with CO2, which, because of its lowered pH, is chemically reactive with the limestone. Flow rates were between 0.03 and 0.1 mL/min. The difference between up- and downstream pore pressures dropped to final values ≪1 MPa over periods of 3-18 h. Scanning electron microscope and microtomography imaging of the starting material showed that the limestone is mostly calcite and lacks connected macroporosity and that the prevailing pores are a few microns across. During each experiment, a wormhole formed by localized dissolution, an observation consistent with the decreases in pressure head between the up- and downstream reservoirs. Moreover, we numerically modeled the changes in permeability during the experiments. We devised a network approach that separated the pore space into competing subnetworks of pipes. Thus, the problem was framed as a competition for flow of the reactive fluid among the adversary subnetworks. The precondition for localization within a certain time is that the leading subnetwork rapidly becomes more transmissible than its competitors. This novel model successfully simulated features of the shape of the wormhole as it grew from a few to about 100 µm, matched the pressure history patterns, and yielded the correct order of magnitude of the breakthrough time. Finally, we systematically studied the impact of changing the statistical parameters of the subnetworks. A larger mean radius and spatial correlation of the leading subnetwork led to faster localization.
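
The competition among subnetworks can be illustrated with a toy model: parallel Hagen-Poiseuille pipes (conductance proportional to r⁴) share a fixed total flow rate, and dissolution enlarges each pipe in proportion to the flux it carries. All quantities here are dimensionless stand-ins, not the paper's calibrated network parameters:

```python
def dissolve(radii, total_q, steps, k=0.01):
    """Toy wormholing model (dimensionless units): parallel pipe
    subnetworks share a constant total flow rate; each pipe's radius grows
    in proportion to the flux it carries (rate constant k is hypothetical)."""
    for _ in range(steps):
        g = [r ** 4 for r in radii]          # Hagen-Poiseuille: g ~ r^4
        g_tot = sum(g)
        q = [total_q * gi / g_tot for gi in g]  # flow splits by conductance
        radii = [r + k * qi for r, qi in zip(radii, q)]
    return radii

# A slightly wider initial pipe captures ever more of the flow (positive
# feedback), mimicking how the leading subnetwork outruns its competitors.
final = dissolve([1.05, 1.0, 1.0], total_q=1.0, steps=200)
```

Since the growth rate scales as r⁴ while the radius itself scales as r, the radius ratio between the leading pipe and the others increases monotonically, which is the localization precondition described above.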

Semiconductor nanocrystals (NCs) (also known as quantum dots, QDs) have attracted immense attention for their size-tunable optical properties, which make them impressive candidates for solar cells, light emitting devices, lasers, as well as biomedical imaging. However, monodispersity, high and consistent photoluminescence, photostability, and biocompatibility are still major challenges. This work focuses on optimizing the photophysical properties and biocompatibility of QDs by forming core-shell nanostructures and encapsulating them in a carrier. Highly luminescent CdS and CdS-ZnS core-shell QDs with 5 nm sizes were synthesized using a facile approach based on pyrolysis of single molecule precursors. After capping the CdS QDs with a thin layer of ZnS to reduce toxicity, the photoluminescence and photostability of the core-shell QDs were significantly enhanced. To make both the bare and core-shell QDs more resistant to photochemical reactions, a mesoporous silica layer was grown on the QDs through a reverse microemulsion technique based on hydrophobic interaction. This encapsulation enhanced the quantum yield and photostability compared to the bare QDs by providing much stronger resistance to oxidation and Ostwald ripening of the QDs. Encapsulation also improved the biocompatibility of the QDs, which was evaluated with human umbilical vein endothelial cell lines (HUVEC).

An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores at varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography in the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena due to the spectral transform method of the CAM Eulerian spectral dynamical core are prominent, and are an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, in both the horizontal and vertical dimensions, have a significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study of the biases produced by GCMs should involve daily (or even hourly) output (rather than monthly means) analyzed over local scales.
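
The object-based detection step can be mimicked in a few lines: threshold the precipitation field, group contiguous grid points (a simple flood fill, standing in for the paper's classification-tree detection), and compute the maximum, mean, and variance the study compares. The field values and threshold below are synthetic:

```python
import numpy as np

def feature_stats(field, threshold):
    """Label contiguous above-threshold regions (4-connectivity flood fill)
    and return per-feature statistics (max, mean, variance)."""
    mask = field > threshold
    labels = np.zeros(field.shape, dtype=int)
    nlab = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        nlab += 1
        stack = [start]
        while stack:
            i, j = stack.pop()
            if not mask[i, j] or labels[i, j]:
                continue
            labels[i, j] = nlab
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < field.shape[0] and 0 <= nj < field.shape[1]:
                    stack.append((ni, nj))
    return [
        {"max": v.max(), "mean": v.mean(), "var": v.var()}
        for lab in range(1, nlab + 1)
        for v in [field[labels == lab]]
    ]

# Synthetic field with two separated precipitation objects
field = np.zeros((10, 10))
field[1:3, 1:3] = 5.0   # intense feature
field[7:9, 7:9] = 2.0   # weaker feature
stats = feature_stats(field, threshold=1.0)
```

Comparing such per-feature statistics between model runs is what lets the method quantify differences between dynamical cores without requiring the features to occur at identical grid points.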

We found that in regions of high mass star formation the CS emission correlates well with the dust continuum emission and is therefore a good tracer of the total mass while the N$_2$H$^+$ distribution is frequently very different. This is opposite to their typical behavior in low-mass cores where freeze-out plays a crucial role in the chemistry. The behavior of other high density tracers varies from source to source but most of them are closer to CS. Radial density profiles in massive cores are fitted by power laws with indices about -1.6, as derived from the dust continuum emission. The radial temperature dependence on intermediate scales is close to the theoretically expected one for a centrally heated optically thin cloud. The velocity dispersion either remains constant or decreases from the core center to the edge. Several cores including those without known embedded IR sources show signs of infall motions. They can represent the earliest phases of massive protostars. There are implicit arguments in favor...
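Power-law radial density profiles like the index of about -1.6 quoted above are typically obtained by a straight-line fit in log-log space. The following is a generic sketch of such a fit on synthetic data, not the authors' pipeline; the radius unit and normalization are arbitrary:

```python
import numpy as np

def fit_power_law_index(r, rho):
    """Fit rho(r) proportional to r**p by least squares in log-log space;
    return the index p (slope of log(rho) vs. log(r))."""
    p, _ = np.polyfit(np.log(r), np.log(rho), 1)
    return p

# Synthetic profile with the index reported for massive cores
r = np.logspace(-2, 0, 50)      # radius (arbitrary unit)
rho = 1e5 * r ** (-1.6)         # density, arbitrary normalization
index = fit_power_law_index(r, rho)
```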

Introduction In March and April 2004, the U.S. Geological Survey (USGS), in cooperation with the North Carolina Geological Survey (NCGS) and the Raleigh Water Resources Discipline (WRD), drilled a stratigraphic test hole and well in Bertie County, North Carolina (fig. 1). The Hope Plantation test hole (BE-110-2004) was cored on the property of Hope Plantation near Windsor, North Carolina. The drill site is located on the Republican 7.5-minute quadrangle at lat 36°01'58"N., long 78°01'09"W. (decimal degrees 36.0329 and 77.0192) (fig. 2). The altitude of the site is 48 ft above mean sea level as determined by a Paulin precise altimeter. The test hole was continuously cored by Eugene F. Cobbs, III and Kevin C. McKinney (USGS) to a total depth of 1094.5 ft. Later, a ground-water observation well was installed with a screened interval between 315 and 329 feet below land surface (fig. 3). Upper Triassic, Lower Cretaceous, Upper Cretaceous, Tertiary, and Quaternary sediments were recovered from the site. The core is stored at the NCGS Coastal Plain core storage facility in Raleigh, North Carolina. In this report, we provide the initial lithostratigraphic summary recorded at the drill site along with core photographs, data from the geophysical logger, calcareous nannofossil biostratigraphic correlations (Table 1), and initial hydrogeologic interpretations. The lithostratigraphy from this core can be compared with previous investigations of the Elizabethtown corehole, near Elizabethtown, North Carolina, in Bladen County (Self-Trail, Wrege, and others, 2004); the Kure Beach corehole, near Wilmington, North Carolina, in New Hanover County (Self-Trail, Prowell, and Christopher, 2004); the Esso #1, Esso #2, Mobil #1, and Mobil #2 cores in the Albemarle and Pamlico Sounds (Zarra, 1989); and the Cape Fear River outcrops in Bladen County (Farrell, 1998; Farrell and others, 2001). This core is the third in a series of planned benchmark coreholes that will be used to elucidate the

LHCf is a small detector installed at the LHC accelerator to measure neutral particle flow in the forward direction of proton-proton (p-p) and proton-nucleus (p-A) interactions. Thanks to the optimal performance that has characterized the LHC collider in recent years, several measurements have been taken since 2009 in different running conditions. After data taking for p-p interactions at √s = 900 GeV, 2.76 TeV and 7 TeV and for proton-lead (p-Pb) interactions at √sNN = 5.02 TeV (the energy of a projectile-target nucleon pair in its center-of-mass frame), LHCf is now going to complete its physics program with the 13 TeV p-p run foreseen in 2015. The complete set of results will become a reference data set of forward physics for the calibration and tuning of the hadronic interaction models currently used in simulations of the atmospheric showers induced by very-high-energy cosmic rays. For this reason we believe that LHCf is making an important contribution to the study of cosmic rays at the highest energies. In this paper the experiment, the published results and the current status are reviewed.

The objective of this study is to evaluate the flexural strength, flexural modulus, compressive strength, curing temperature, curing depth, volumetric shrinkage, water sorption, and hygroscopic expansion of two self-, three dual-, and three light-curing resin-based core materials. Flexural strength and water sorption were measured according to ISO 4049; flexural modulus, compressive strength, curing temperature, and curing depth according to well-proven, literature-known methods; and the volumetric behavior was determined by Archimedes' principle. ANOVA was calculated to find differences between the materials' properties, and the correlation of water sorption and hygroscopic expansion was analysed according to Pearson. One material demonstrated the highest flexural strength (125 ± 12 MPa) and curing depth (15.2 ± 0.1 mm) and, together with Multicore HB, the highest flexural modulus (≈12.6 ± 1.2 GPa). The best compressive strength was measured for Voco Rebilda SC and Clearfil DC Core Auto (≈260 ± 10 MPa). Encore SuperCure Contrast had the lowest water sorption (11.8 ± 3.3 µg mm⁻³) and hygroscopic expansion (0.0 ± 0.2 vol.%). Clearfil Photo Core and Encore SuperCure Contrast demonstrated the lowest shrinkage (≈2.1 ± 0.1 vol.%). Water sorption and hygroscopic expansion showed a very strong positive correlation. The investigated core materials differed significantly in the tested properties. The performance of the materials depended on their formulation as well as on the respective curing process.
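The statistical treatment described (one-way ANOVA across materials, Pearson correlation between water sorption and hygroscopic expansion) can be sketched with SciPy. The numbers below are synthetic and purely illustrative, not data from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative flexural-strength samples (MPa) for three hypothetical materials
mat_a = rng.normal(125, 12, size=10)
mat_b = rng.normal(100, 10, size=10)
mat_c = rng.normal(70, 8, size=10)

# One-way ANOVA: do the group means differ significantly?
f_stat, p_anova = stats.f_oneway(mat_a, mat_b, mat_c)

# Pearson correlation: water sorption vs. hygroscopic expansion
sorption = rng.uniform(10, 40, size=8)                       # µg/mm^3
expansion = 0.02 * sorption + rng.normal(0, 0.02, size=8)    # vol.%
r, p_corr = stats.pearsonr(sorption, expansion)
```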

It is important to understand the behaviors of molten core materials to investigate the progression of a core meltdown accident. In the early stages of bundle degradation, low-melting-temperature liquid phases are expected to form via the eutectic reaction between Zircaloy and stainless steel. The main component of Zircaloy is Zr and those of stainless steel are Fe, Ni, and Cr. Our group has previously reported physical property data such as viscosity, density, and surface tension for Zr-Fe liquid alloys using an electrostatic levitation technique. In this study, we report the viscosity, density, and surface tension of Zr-Ni and Zr-Cr liquid alloys (Zr1-xNix (x = 0.12 and 0.24) and Zr0.77Cr0.23) using the electrostatic levitation technique.

Studies have found that movement can have a positive effect on the linguistic and intellectual capabilities of the brain, indicating that physical fitness is related to academic performance. By allowing elementary students to move around and be involved in physical activity at school, the brain is able to make stronger connections with the material…

Recent trends highlight the connection between engagement in physical activity and cognitive function. This is a key point to consider when designing physical education curricula and the activities they include. By exposing students to material in a variety of ways, students' interest can be sparked, yielding greater learning and understanding…

For detailed reconstructions of atmospheric metal deposition using peat cores from bogs, a comprehensive protocol for working with peat cores is proposed. The first step is to locate and select suitable sampling sites in accordance with the principal goal of the study, the period of time of interest, and the precision required. Using state-of-the-art procedures and field equipment, peat cores are collected in such a way as to provide high-quality records for paleoenvironmental study. Pertinent observations gathered during the fieldwork are recorded in a field report. Cores are kept frozen at -18 °C until they can be prepared in the laboratory. Frozen peat cores are precisely cut into 1 cm slices using a stainless steel band saw with stainless steel blades. The outside edges of each slice are removed with a titanium knife to avoid any contamination which might have occurred during sampling and handling. Each slice is split, with one half kept frozen for future studies (archived) and the other half further subdivided for physical, chemical, and mineralogical analyses. Physical parameters such as ash and water contents, bulk density, and the degree of decomposition of the peat are determined using established methods. A subsample is dried overnight at 105 °C in a drying oven and milled in a centrifugal mill with a titanium sieve. Prior to any expensive and time-consuming chemical procedures and analyses, the resulting powdered samples, after manual homogenisation, are measured for more than twenty-two major and trace elements using non-destructive X-ray fluorescence (XRF) methods. This approach provides a wealth of geochemical data documenting the natural geochemical processes occurring in the peat profiles and their possible effect on the trace metal profiles. The development, evaluation and use of peat cores from bogs as archives of high-resolution records of atmospheric deposition of mineral dust and trace

The purpose of this article is to present the properties of cylindrical lenses and provide some examples of their use in easy school physics experiments. Such experiments could be successfully conducted in the context of science education, in fun experiments that teach physics and in science fair projects, or used to entertain an audience by…

believe that combining visualization with physical models is a step further towards a better understanding of these relationships. We conducted a concept study using natural, artificial and 3D-printed soil cores. Eight natural soil cores (100 cm3) were sampled in a cultivated stagnic Luvisol at two depths...... cores were scanned in a micro x-ray CT scanner at a resolution of 35 μm. The reconstructed image of each soil core was printed with 3D multijet printing technology at a resolution of 29 μm. In some reconstructed digital volumes of the natural soil cores, pores of different sizes (equivalent diameter...... of 35, 70, 100, and 200 μm) were removed before additional 3D printing. Effective air-filled porosity, Darcian air permeability, and oxygen diffusion were measured on all natural, artificial and printed cores. The comparison of the natural and the artificial cores emphasized the difference in pore...

The U.S. Department of Energy's Source Physics Experiments (SPEs) comprise a series of small chemical explosions used to develop a better understanding of seismic energy generation and wave propagation for low-yield explosions. In particular, we anticipate improved understanding of the processes through which shear waves are generated by an explosion source. Three tests, of 100, 1000 and 1000 kg yield respectively, were detonated in the same emplacement hole and recorded on the same networks of ground-motion sensors in the granites of Climax Stock at the Nevada National Security Site. We present results for the analysis and modeling of seismic waveforms recorded close-in on five linear geophone lines extending radially from ground zero, with offsets from 100 to 2000 m and station spacing of 100 m. These records exhibit azimuthal variations in P-wave arrival times and in the phase velocity, spreading, and attenuation properties of high-frequency Rg waves. We construct a 1D seismic body-wave model starting from a refraction analysis of P-waves and adjusting it to fit time-domain and frequency-domain dispersion measurements of Rg waves between 2 and 9 Hz. The shallowest part of the structure is constrained using arrival times recorded by near-field accelerometers within 200 m of the shot hole. We additionally perform a 2D modeling study with the Spectral Element Method (SEM) to investigate which structural features are most responsible for the observed variations, in particular the anomalously weak amplitude decay in some directions at this topographically complicated locality. We find that a thin, near-surface weathered layer of varying thickness and low wave speeds plays a major role in shaping the observed waveforms. We anticipate performing full 3D modeling of the seismic near-field through analysis and validation of waveforms on the five radial receiver arrays.

I review single-molecule experiments (SMEs) in biological physics. Recent technological developments have provided the tools to design and build scientific instruments of high enough sensitivity and precision to manipulate and visualize individual molecules and measure microscopic forces. Using SMEs it is possible to manipulate molecules one at a time and measure distributions describing molecular properties, characterize the kinetics of biomolecular reactions and detect molecular intermediates. SMEs provide additional information about thermodynamics and kinetics of biomolecular processes. This complements information obtained in traditional bulk assays. In SMEs it is also possible to measure small energies and detect large Brownian deviations in biomolecular reactions, thereby offering new methods and systems to scrutinize the basic foundations of statistical mechanics. This review is written at a very introductory level, emphasizing the importance of SMEs to scientists interested in knowing the common playground of ideas and the interdisciplinary topics accessible by these techniques. The review discusses SMEs from an experimental perspective, first exposing the most common experimental methodologies and later presenting various molecular systems where such techniques have been applied. I briefly discuss experimental techniques such as atomic-force microscopy (AFM), laser optical tweezers (LOTs), magnetic tweezers (MTs), biomembrane force probes (BFPs) and single-molecule fluorescence (SMF). I then present several applications of SME to the study of nucleic acids (DNA, RNA and DNA condensation) and proteins (protein-protein interactions, protein folding and molecular motors). Finally, I discuss applications of SMEs to the study of the nonequilibrium thermodynamics of small systems and the experimental verification of fluctuation theorems. I conclude with a discussion of open questions and future perspectives.
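The fluctuation theorems mentioned at the end can be checked numerically. A minimal sketch of the Jarzynski equality, ⟨e^(-W/kT)⟩ = e^(-ΔF/kT), uses a Gaussian work distribution, for which the equality holds exactly with ΔF = ⟨W⟩ - σ_W²/(2kT). This is a generic textbook check under an assumed work distribution, not code from any particular single-molecule experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0                      # thermal energy (all energies in units of kT)
mean_w, sigma_w = 2.0, 0.5    # parameters of the Gaussian work distribution

# For Gaussian work, the Jarzynski equality gives the exact result
dF_exact = mean_w - sigma_w**2 / (2 * kT)

# Estimate the free-energy difference from simulated "pulling" work values
work = rng.normal(mean_w, sigma_w, size=200_000)
dF_estimate = -kT * np.log(np.mean(np.exp(-work / kT)))
```

The exponential average is dominated by rare low-work trajectories, which is why single-molecule estimates of ΔF need many repeats when the work fluctuations are large.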

The main characteristics of the neutron field formed within the massive (512 kg) natural uranium target assembly (TA) QUINTA irradiated by the deuteron beam of the JINR Nuclotron at energies of 1, 2, 4, and 8 GeV, as well as the spatial distributions and integral numbers of (n,f), (n,γ) and (n,xn) reactions, were calculated and compared with experimental data [1]. The MCNPX 2.7e code with the ISABEL/ABLA/FLUKA and INCL4/ABLA models of the intra-nuclear cascade (INC) and experimental cross-sections of the corresponding reactions were used. Special attention was paid to elucidating the role of charged particles (protons and pions) in the fission of the natural uranium of TA QUINTA. Extensive calculations have also been done for the quasi-infinite (very small neutron leakage) depleted-uranium TA BURAN, with a mass of about 20 t, which is intended for use in experiments at the Nuclotron in 2014-2016. As in the case of TA QUINTA, which effectively models the central zone of TA BURAN, the total number of fissions, the number of 239Pu nuclei produced, and the total neutron multiplicities are predicted to be proportional to the proton or deuteron energy up to 12 GeV. The obtained values of the beam power gain, however, are practically constant over the studied incident energy range, at approximately four. These values are in contradiction with the experimental result [2] obtained for a depleted-uranium core weighing three tons at an incident proton energy of 0.66 GeV.

The City of El Centro is proposing the development of a geothermal energy utility core field experiment to demonstrate the engineering and economic feasibility of utilizing moderate-temperature geothermal heat, on a pilot scale, for space cooling, space heating, and domestic hot water. The proposed facility is located on part of a 2.48 acre (1 hectare) parcel owned in fee by the City in the southeastern sector of El Centro in Imperial County, California. Geothermal fluid at an anticipated temperature of about 250 °F (121 °C) will heat a secondary fluid (water) which will be utilized directly, or processed through an absorption chiller, to provide space conditioning and water heating for the El Centro Community Center, a public recreational facility located approximately one-half mile north of the proposed well site. The geothermal production well will be drilled to 8500 feet (2590 m) and an injection well to 4000 feet (1220 m) at the industrially designated City property. Once all relevant permits are obtained, it is estimated that site preparation, facility construction, and the completion and testing of both wells would be finished in approximately 26 weeks. The environmental impacts are described.

Core shooting is the most widely used process for making sand cores, and it plays an important role in the quality of sand cores as well as in the manufacture of complicated castings in the metal casting industry. In this paper, the flow behavior of sand particles in the core box was investigated synchronously with a transparent core box, a high-speed camera, and a pressure measuring system. The flow pattern of sand particles in the shooting head of the core shooting machine was reproduced with variously colored layers of core sand. Taking both kinetic and frictional stress into account, a kinetic-frictional constitutive correlation was established to describe the internal momentum transfer in the solid phase. Two-fluid model (TFM) simulations with a turbulence model were then performed, and good agreement was achieved between the experimental and simulation results on the flow behavior of sand particles in both the shooting head and the core box. Based on the experimental and simulation results, the flow behavior of sand particles in the core box, the formation of a "dead zone" in the shooting head, and the effect of drag force were analyzed in terms of sand volume fraction (αs), sand velocity (Vs), and pressure variation (P).

The wide ranging interest in the development of heavy ion synchrotrons with electron beam cooling is evident from the number of projects presently under way. Although much of the initial motivation for these rings stemmed from nuclear and particle physics, a considerable amount of atomic physics experimentation is planned. This paper surveys some of the new opportunities in atomic physics which may be made available with storage ring systems. 25 refs., 3 tabs.

In this paper we study the gravitational collapse of a molecular hydrogen gas cloud composed of a core plus a gas envelope surrounding the core. We numerically simulate the collapse of four cloud models to follow the time evolution of several dynamical variables, such as the angular momentum and the $aem$ ratio, as well as the ratios of the thermal and rotational energies to the gravitational potential energy, denoted $\alpha$ and $\beta$, respectively, among others. We revisit the models introduced by Arreaga et al. (Astronomy and Astrophysics, Vol. 509, A96, 2010) in order to produce different outcomes of the collapsing cloud characterized in terms of the aforementioned dynamical variables. Such a characterization was missing in that paper, and here we show that the effects of the gas envelope's extension on the collapsing core can be quantitatively compared.
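For a uniform-density, rigidly rotating sphere of mass M, radius R, temperature T, and angular velocity Ω, the energy ratios reduce to the standard closed forms α = 5 k_B T R / (2 G M μ m_H) and β = R³Ω² / (3 G M). The sketch below evaluates them from the constituent energies (cgs units); the specific cloud parameters are illustrative assumptions, not the models of the paper:

```python
G   = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
k_B = 1.381e-16   # Boltzmann constant [erg/K]
m_H = 1.674e-24   # hydrogen atom mass [g]
mu  = 2.0         # mean molecular weight for molecular hydrogen (assumed)

def energy_ratios(M, R, T, Omega):
    """Return (alpha, beta) for a uniform, rigidly rotating sphere."""
    E_grav = 3 * G * M**2 / (5 * R)             # |gravitational energy|
    E_th   = 1.5 * (M / (mu * m_H)) * k_B * T   # thermal energy, 3/2 N k T
    E_rot  = 0.5 * (0.4 * M * R**2) * Omega**2  # rotational energy, I = 2/5 M R^2
    return E_th / E_grav, E_rot / E_grav

# Illustrative cloud: 1 solar mass, ~0.016 pc, 10 K, slow rigid rotation
M, R, T, Omega = 1.989e33, 5e16, 10.0, 1e-13
alpha, beta = energy_ratios(M, R, T, Omega)
```

Clouds with α + β below roughly unity are gravitationally bound, which is why these two ratios are the standard labels for collapse initial conditions.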

For a two-layer model of the Moon consisting of a solid non-spherical mantle and a homogeneous ellipsoidal liquid core, a theory of forced librations under the gravitational torques of the Earth has been developed. The motion of the Moon along its orbit is described by the high-accuracy DE/LE-4 orbital motion theory. Tables have been constructed that present the forced librations of the Moon caused by the second harmonic of its force function, in the neighborhood of its motion according to the generalized Cassini laws. First-order disturbances with respect to the dynamic compressions of the Moon and its core are obtained in analytical form for the Andoyer and Poincare variables; for the projections onto the major central axes of inertia of the angular velocity vector of the mantle's rotation and of the Poincare coordinate system (relative to which the liquid core performs simple motion); and for the classical variables of the lunar libration theory, among others. The constructed tables of the forced libration theory give the amplitudes and periods of the librations and the combinations of arguments of the orbital motion theory that correspond to the libration parameters. An interpretation of the basic variations is given, and a comparison with previous theories is carried out, in particular with the modern empirical theory constructed from laser-ranging observation data.

National guidelines recommend that healthy pregnant women take 30 minutes or more of moderate exercise a day. Most women reduce the level of physical activity during pregnancy but only a few studies of women's experiences of physical activity during pregnancy exist. The aim of the present study w...

In order to study the response of out-of-core detectors, 16 stainless steel plates, each 0.5 cm thick, were placed at the core-reflector interface of the IPEN/MB-01 reactor. BF₃, ¹⁰B and Au foil detectors were located beyond the stainless steel plates in 7 different positions, one of them outside the moderator tank of the reactor to simulate a true PWR out-of-core detector. Calculations were performed for comparison with the experimental results using the TORT code, a three-dimensional discrete-ordinates transport theory code. The experimental model used 16 energy groups, X-Y-Z geometry, S₁₆ discrete ordinates and P₃ cross-sections. The results showed good agreement between measured and calculated reaction rates in the Au foils. The largest discrepancy, a 2.2% deviation, occurred for the case with 16 stainless steel plates. For position 7, outside the moderator tank, the neutron flux was so low that it could not activate the Au foils for the reaction rate measurements. (author)

Investigation of the transport of reactive fluids in porous rocks is an intriguing but challenging task, relevant in several areas of science and engineering such as geology, hydrogeology, and petroleum engineering. We designed and constructed an experimental setup to investigate the physical and chemical processes caused by the flow of reactive and volatile fluids such as supercritical CO₂ and/or H₂S in geological formations. Potential applications are geological sequestration of CO₂ in the context of carbon capture and storage, and acid-gas injection for sulfur disposal and/or enhanced oil recovery. The present paper outlines the design criteria and the realization of reactive transport experiments on the laboratory scale. We focus on the spatial and temporal evolution of rock and fluid composition as a result of chemical rock-fluid interaction and on the coupling of chemistry and fluid flow in porous rocks.

The Common Core State Standards (CCSS) are a focus of state education policy today influencing curriculum implementation and assessment in public schools. The purpose of this narrative inquiry is to understand how high school mathematics teachers experience the transition period. Based on interviews with mathematics teachers in a high school in…

Seasonal storage of excess heat in hot deep aquifers is considered to optimize the usage of commonly available energy sources. The chemical effects of heating the Gassum Sandstone Formation to up to 150 degrees C is investigated by combining laboratory core flooding experiments with petrographic...

This mixed methods study explored elementary teachers' (n = 73) experiences with and perspectives on the recently implemented Common Core State Standards for Mathematics (CCSS-Mathematics) at a high-needs, urban school. Analysis of the survey, questionnaire, and interview data reveals the findings cluster around: familiarity with and preparation…

BACKGROUND: Increasing recess physical activity has been the aim of several interventions, as this setting can provide numerous physical activity opportunities. However, it is unclear if these interventions are equally effective for all children, or if they only appeal to children who are already… …and relations revealed several key factors influencing their recess physical activity: perceived classroom safety, indoor cosiness, lack of attractive outdoor facilities, bodily dissatisfaction, bodily complaints, tiredness, feeling bored, and peer influence. CONCLUSION: We found that the four existential… …the classroom as a space for physical activity, designing schoolyards with smaller secluded spaces and varied facilities, improving children's self-esteem and body image, e.g., during physical education, and creating teacher-organised play activities during recess.

We will present the first results of an innovative program at Texas A&M University that aims to enhance the learning and research experiences of undergraduate and graduate students through their participation in high-profile outreach activities: principally the Texas A&M Physics and Engineering Festival and the Physics Shows. The goals are to enhance students' knowledge of fundamental physics concepts through collaborative hands-on research and educational activities, to teach them effective communication skills and responsibility, and to enhance their opportunities for interaction with their peers and professors outside the classroom. The program activities include (i) students working side by side with their peers and professors on the research, concept, design, and fabrication of physics demonstration experiments, (ii) presentation of these exhibits during the Festival and Shows in teams of several students and faculty members, (iii) assessment of students' teamwork, and (iv) incorporation of new demonstrations into core curriculum classes. The Texas A&M Physics and Engineering Festival is a major annual outreach event at TAMU attracting over 4000 visitors and featuring over 100 interactive exhibits, public lectures by prominent scientists, and various hands-on activities. This program is supported by a Tier One Grant from Texas A&M University.

This report describes an experimental intensive core French program for grades 5 and 6 at Churchill Alternative School in Ottawa (Canada). The aim was to improve the oral French skills of core French students by providing a period of intensive exposure to French and by increasing the total number of hours in French during one program year from 120…

The long-range goal of the Numerical Tokamak Project (NTP) is the reliable prediction of tokamak performance using physics-based numerical tools describing tokamak physics. The NTP is developing the most advanced particle and extended-fluid models on massively parallel processing (MPP) environments as part of a multi-institutional, multi-disciplinary numerical study of tokamak core fluctuations. The NTP is a continuing focus of the Office of Fusion Energy's theory and computation program. Near-term HPCC work concentrates on developing a predictive numerical description of core plasma transport in tokamaks driven by low-frequency collective fluctuations. This work addresses one of the greatest intellectual challenges to our understanding of the physics of tokamak performance and needs the most advanced computational resources to progress. We are conducting detailed comparisons of kinetic and fluid numerical models of tokamak turbulence. These comparisons are stimulating the improvement of each and the development of hybrid models which embody aspects of both. The combination of emerging massively parallel processing hardware and algorithmic improvements will result in an estimated 10^2-10^6 performance increase. Development of information processing and visualization tools is accelerating our comparison of computational models to one another, to experimental data, and to analytical theory, providing a bootstrap effect in our understanding of the target physics. The measure of success is the degree to which the experimentally observed scaling of fluctuation-driven transport may be predicted numerically. The NTP is advancing the HPCC Initiative through its state-of-the-art computational work. We are pushing the capability of high performance computing through our efforts, which are strongly leveraged by OFE support.

Previous geochemical and geophysical experiments have proposed the presence of a small, metallic lunar core, but its composition is still being investigated. Knowledge of core composition can have a significant effect on understanding the thermal history of the Moon, the conditions surrounding the liquid-solid or liquid-liquid field, and siderophile element partitioning between mantle and core. However, experiments on complex bulk core compositions are very limited. One limitation comes from numerous studies that have only considered two or three element systems such as Fe-S or Fe-C, which do not supply a comprehensive understanding for complex systems such as Fe-Ni-S-Si-C. Recent geophysical data suggests the presence of up to 6% lighter elements. Reassessments of Apollo seismological analyses and samples have also shown the need to acquire more data for a broader range of pressures, temperatures, and compositions. This study considers a complex multi-element system (Fe-Ni-S-C) for a relevant pressure and temperature range to the Moon's core conditions.

This article assesses the understanding and impact of hysteresis in the transport relation. The rapid changes of fluxes compared to the slow changes of plasma parameters are reviewed for both edge-barrier and core plasmas. Theoretical approaches to understanding the direct influence of heating power on turbulent transport are addressed. Based on this new theoretical framework, the 'isotope effect' of plasma confinement time is discussed, and a trial explanation is given for this unresolved mystery in plasma confinement. The advanced data analysis method used to study the hysteresis in the gradient-flux relation is also explained.

This study describes the use of the double-slit thought experiment as a diagnostic tool for probing physics teachers' understanding. A total of 9 pre-service teachers and 18 in-service teachers, with varying experience in modern physics teaching at the upper secondary level, responded to a paper-and-pencil test, and three of these teachers were interviewed. The results showed that the physics teachers' thought experiments with classical particles, light, and electrons were often partial. Many teachers also lacked basic ideas and principles of physics, which probably hindered thought experimenting. In particular, understanding the ontological nature of classical particles, light and electrons seemed to be essential to performing the double-slit experiment in an appropriate way. However, the in-service physics teachers who had teaching experience in modern physics were better prepared for the double-slit thought experiment than the pre-service teachers. The results suggest that both thought experiments and the double-slit experiment should be given more weight in physics teacher education, even if experience of modern physics teaching at upper secondary school seems to some extent to develop teachers' abilities.

A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.

This book describes the basic mechanisms, theory, simulations and technological aspects of laser processing techniques. It covers the principles of laser quenching, welding, cutting, alloying, selective sintering, ablation, etc. The main attention is paid to quantitative description. The diversity and complexity of the technological and physical processes are discussed using a unitary approach. The book aims at understanding the cause-and-effect relations in the physical processes underlying laser technologies. It will help researchers and engineers to improve existing laser machining techniques and to develop new ones. The book addresses readers with a certain background in general physics and mathematical analysis: graduate students, researchers and engineers practicing laser applications.

This paper explores the factors that are associated in England with 15-year-old students' intentions to study physics after the age of 16, when it is no longer compulsory. Survey responses were collated from 5,034 Year 10 students, as learners of physics during the academic year 2008-2009, from 137 secondary schools in England. Our analysis uses individual items from the survey rather than constructs (aggregates of items) to explore what it is about physics teachers, physics lessons and physics itself that is most correlated with intended participation in physics after the age of 16. Our findings indicate that extrinsic material-gain motivation in physics was the most important factor associated with intended participation. In addition, an item-level analysis helped to uncover issues around gender inequality in physics educational experiences which were masked by the use of construct-based analyses. Girls' perceptions of their physics teachers were similar to those of boys on many fronts. However, although the encouragement individual students receive from their teachers is a key factor associated with aspirations to continue with physics, girls were statistically significantly less likely to receive such encouragement. We also found that girls had less positive experiences of their physics lessons and physics education than did boys.

The heavy water zero power reactor (HWZPR) is a critical assembly with a maximum power of 100 W that can be operated with different lattice pitches. The last change of core configuration was from a lattice pitch of 18 cm to 20 cm. Based on regulations, prior to the first operation of the reactor, the new core was simulated with the MCNP-4C (Monte Carlo N-Particle) and WIMS (Winfrith Improved Multigroup Scheme)-CITATION codes. To investigate the criticality of this core, the effective multiplication factor (Keff) versus heavy water level, and the critical water level, were calculated. Then, for safety considerations, the reactivity worth of D2O, the reactivity worth of the safety and control rods, and the temperature reactivity coefficients for the fuel and the moderator were calculated. The results show that the relevant criteria in the safety analysis report were satisfied in the new core. Therefore, with the permission of the reactor safety committee, the first criticality operation was conducted, and important physical parameters were measured experimentally. The results were compared with the corresponding values in the original core.
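
The criticality search described above, computing Keff as a function of heavy water level and locating the critical level, can be sketched as a simple interpolation to Keff = 1. The function and the (level, Keff) pairs below are illustrative placeholders, not HWZPR data:

```python
# Illustrative only: estimate the critical heavy-water level by linear
# interpolation of simulated k_eff values to k_eff = 1.  The (level, k_eff)
# pairs below are made-up placeholders, not HWZPR results.

def critical_level(levels_cm, keffs):
    """Water level at which the interpolated k_eff crosses 1.0."""
    pairs = list(zip(levels_cm, keffs))
    for (h0, k0), (h1, k1) in zip(pairs, pairs[1:]):
        if (k0 - 1.0) * (k1 - 1.0) <= 0:      # crossing is bracketed here
            return h0 + (1.0 - k0) * (h1 - h0) / (k1 - k0)
    raise ValueError("k_eff = 1 is not bracketed by the supplied points")

levels = [150.0, 160.0, 170.0, 180.0]   # cm, placeholder
keffs = [0.962, 0.985, 1.004, 1.021]    # placeholder simulation output
print(round(critical_level(levels, keffs), 1))   # → 167.9
```

In practice the Monte Carlo Keff values carry statistical uncertainties, so a fit over several levels is preferred to a two-point interpolation.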

Presents a physical chemistry experiment demonstrating the differences between thermodynamics and kinetics. The experiment used the formation of phenol and acetone from cumene hydroperoxide, also providing an example of an industrially significant process. (CS)

Vertically aligned core/shell nanowire (nanorod) arrays are favorable candidates for many nano-scale devices such as solar cells, detectors, and integrated circuits. The quality of the shell coating around nanowire arrays is as crucial as the quality of the nanowires themselves in device applications. For this reason, we studied different physical vapor deposition (PVD) techniques and conducted Monte Carlo simulations to determine the best deposition technique for a conformal shell coating. Our results show that a small angle (≤ 45°) between the incoming flux of particles and the substrate surface normal is necessary for PVD techniques with a directional incoming flux (e.g. thermal or e-beam evaporation) to achieve a reasonably conformal coating. On the other hand, PVD techniques with an angular flux distribution (e.g. sputtering) can provide a fairly conformal shell coating around nanowire arrays without the need for small-angle deposition. We also studied the effect of array shape on the conformality of the coating and found that arrays of tapered-top nanorods and pyramids can be coated with a more conformal and thicker coating than arrays of flat-top nanowires, due to the larger openings between structures. Our results indicate that conventional PVD techniques, which offer low-cost and large-scale thin film fabrication, can be utilized for highly conformal and uniform shell coating formation in core/shell nanowire device applications. - Highlights: • We examined the shell coating growth in core/shell nanostructures. • We investigated the effect of physical vapor deposition method on the conformality of the shell. • We used Monte Carlo simulations to simulate the shell growth on nanowire templates. • Angular atomic flux (i.e., sputtering at high pressure) leads to conformal and uniform coatings. • A small angle (≤ 45°) to the directional flux needs to be introduced for conformal coatings.
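
The line-of-sight shadowing argument behind these results can be illustrated with a toy two-dimensional Monte Carlo model. This is an assumed geometry for illustration only, not the simulation code used in the study: a sidewall point is coated only if the arriving ray clears the neighbouring wire.

```python
import math
import random

# Toy 2-D Monte Carlo sketch (illustrative model, not the study's code):
# two neighbouring nanowires of height H are separated by a gap G.  A point
# at depth d down a sidewall is coated by an arriving ray of polar angle
# theta (measured from the substrate normal) only if the ray clears the
# opposite wire, i.e. d * tan(theta) <= G.

def sidewall_coverage(theta_deg=None, H=1.0, G=0.25, n=100_000, seed=1):
    """Fraction of sidewall depth reachable by the incoming flux.
    theta_deg=None samples a cosine-law (sputter-like) angular
    distribution instead of a fixed directional beam."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        d = rng.uniform(0.0, H)                         # random sidewall depth
        if theta_deg is None:
            theta = math.asin(math.sqrt(rng.random()))  # cosine-law flux
        else:
            theta = math.radians(theta_deg)
        if d * math.tan(theta) <= G:                    # not shadowed
            hits += 1
    return hits / n

# a steeper (smaller-angle) directional beam coats more of the sidewall
print(sidewall_coverage(theta_deg=45.0) > sidewall_coverage(theta_deg=75.0))
```

For this toy geometry the 45° beam reaches about a quarter of the sidewall depth while the 75° beam reaches far less, mirroring the conclusion that directional-flux PVD needs small incidence angles.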

Base stations developed according to the 3GPP Long Term Evolution (LTE) standard require unprecedented processing power. 3GPP LTE enables data rates beyond hundreds of Mbit/s by using advanced technologies, necessitating a highly complex LTE physical layer. The operating power of base stations is a significant cost for operators, and is currently optimized using state-of-the-art hardware solutions, such as heterogeneous distributed systems. The traditional system design method of porting algorithms to heterogeneous distributed systems based on test-and-refine methods is a manual, and thus time-expensive, task. Physical Layer Multi-Core Prototyping: A Dataflow-Based Approach for LTE eNodeB provides a clear introduction to the 3GPP LTE physical layer and to dataflow-based prototyping and programming. The difficulties in the process of 3GPP LTE physical layer porting are outlined, with particular focus on automatic partitioning and scheduling, load balancing and computation latency reduction, specifically in sys...
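
As one illustration of the load-balancing problem the book automates, the standard longest-processing-time (LPT) greedy heuristic assigns each task to the currently least-loaded core. This is a generic technique, not the book's dataflow method, and the task names and costs below are invented:

```python
import heapq

# Generic longest-processing-time (LPT) greedy scheduling -- a standard
# load-balancing heuristic shown only to illustrate the kind of partitioning
# problem the book automates; the task names and costs are invented, not
# taken from the LTE physical layer itself.

def lpt_schedule(task_costs, n_cores):
    """Give each task, largest first, to the currently least-loaded core."""
    heap = [(0.0, core, []) for core in range(n_cores)]
    heapq.heapify(heap)
    for name, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, core, tasks = heapq.heappop(heap)      # least-loaded core
        heapq.heappush(heap, (load + cost, core, tasks + [name]))
    return sorted(heap, key=lambda entry: entry[1])  # order by core id

tasks = {"fft": 8, "turbo_dec": 7, "demap": 5, "chan_est": 4, "crc": 1}
for load, core, assigned in lpt_schedule(tasks, 2):
    print(core, load, assigned)
```

For these invented costs the two cores end up with loads 13 and 12, close to an even split, which is the kind of balance an automatic dataflow partitioner searches for while also respecting data dependencies that this sketch ignores.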

Direct examination of atomic interactions is difficult. One powerful approach to visualizing atomic interactions is to study near-index-matched colloidal dispersions of microscopic plastic spheres, which can be probed by visible light. Such spheres interact through hydrodynamic and Brownian forces, but they feel no direct force before an infinite repulsion at contact. Through the microgravity flight of the Physics of Hard Spheres Experiment (PHaSE), researchers have sought a more complete understanding of the entropically driven disorder-order transition in hard-sphere colloidal dispersions. The experiment was conceived by Professors Paul M. Chaikin and William B. Russel of Princeton University. Microgravity was required because, on Earth, index-matched colloidal dispersions often cannot be density matched, resulting in significant settling over the crystallization period. This settling makes them a poor model of the equilibrium atomic system, where the effect of gravity is truly negligible. For this purpose, a customized light-scattering instrument was designed, built, and flown by the NASA Glenn Research Center at Lewis Field on the space shuttle (shuttle missions STS-83 and STS-94). This instrument performed both static and dynamic light scattering, with sample oscillation for determining rheological properties. Scattered light from a 532-nm laser was recorded either by a 10-bit charge-coupled device (CCD) camera from a concentric screen covering angles of 0° to 60°, or by sensitive avalanche photodiode detectors, which convert the photons into binary data from which two correlators compute autocorrelation functions. The sample cell was driven by a direct-current servomotor to allow sinusoidal oscillation for the measurement of rheological properties. Significant microgravity research findings include the observation of beautiful dendritic crystals, the crystallization of a "glassy phase" sample in microgravity that did not crystallize for over 1 year in 1g
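
The autocorrelation functions computed by the instrument's hardware correlators can be sketched in software. The estimator below is the standard normalized intensity autocorrelation g2(τ), applied here to a synthetic correlated signal rather than PHaSE data:

```python
import random

# Software sketch of the normalized intensity autocorrelation g2(tau) that
# hardware correlators compute in dynamic light scattering; applied here to
# a synthetic correlated signal, not PHaSE flight data.

def g2(counts, lag):
    """g2(lag) = <I(t) I(t+lag)> / <I>^2 for a sampled intensity record."""
    n = len(counts) - lag
    mean = sum(counts) / len(counts)
    corr = sum(counts[i] * counts[i + lag] for i in range(n)) / n
    return corr / mean ** 2

# synthetic intensity trace with short-range correlations (AR(1) process)
rng = random.Random(0)
intensity, x = [], 0.0
for _ in range(20_000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)        # correlation decays as 0.9**lag
    intensity.append(max(0.0, 2.0 + 0.5 * x))

print(g2(intensity, 1) > g2(intensity, 50))  # correlations decay with lag
```

In dynamic light scattering, the decay of g2(τ) toward 1 encodes the particles' diffusive dynamics; here the synthetic signal simply shows the estimator returning an enhanced value at short lag.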

JUNO is a 20 kt Liquid Scintillator Antineutrino Detector currently under construction in the south of China. This report reviews JUNO's physics programme related to all neutrino sources but reactor antineutrinos, namely neutrinos from supernova burst, solar neutrinos and geoneutrinos.

Attributing keywords can assist in the classification and retrieval of documents in the particle physics literature. As information services face a future with less available manpower and more and more documents being written, the possibility of keyword attribution being assisted by automatic classification software is explored. A project being carried out at CERN (the European Laboratory for Particle Physics) for the development and integration of automatic keywording is described.
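
A minimal flavor of automatic keyword attribution can be given with a TF-IDF scorer. This is a generic technique with an invented mini-corpus and stop-word list, not the CERN system described above:

```python
import math
from collections import Counter

# Toy TF-IDF keyword scorer -- a generic technique, not the CERN software
# described above; the mini-corpus and stop-word list are invented.

STOPWORDS = {"the", "of", "for", "in", "from", "to", "a", "an", "and"}

def tfidf_keywords(doc, corpus, top_n=3):
    """Rank terms that are frequent in `doc` but rare across `corpus`."""
    doc_tokens = doc.lower().split()
    corpus_tokens = [d.lower().split() for d in corpus]
    tf = Counter(t for t in doc_tokens if t not in STOPWORDS)
    scores = {}
    for term, count in tf.items():
        df = sum(term in tokens for tokens in corpus_tokens)
        idf = math.log(len(corpus) / (1 + df))
        scores[term] = (count / len(doc_tokens)) * idf
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

corpus = [
    "measurement of the top quark mass",
    "neutrino oscillation results from long baseline experiments",
    "lattice qcd calculation of hadron masses",
]
print(tfidf_keywords("higgs boson decays to photon pairs higgs boson mass", corpus))
```

Terms frequent in the abstract but rare in the corpus ("higgs", "boson") rank highest, while the corpus-wide word "mass" is suppressed; production keywording systems add controlled vocabularies and supervised classifiers on top of such statistics.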

This article presents the results of testing personality-oriented physical education. The experiment involved 640 students. The greatest gains in physical fitness among young men in the experimental group were found in the flexibility test (6.67%) and in arm flexion-extension (push-ups) (5.75%). The girls showed improvement in the flexibility test (7.09%) and in arm flexion-extension (push-ups) (6.14%). The nature and content of personality-oriented physical education are clarified, especially its use in the physical education of students. Pedagogical conditions are identified for the effective application of personality-oriented physical education in students' self-directed movement towards a healthy lifestyle. Data are presented on the importance of physical culture for the prevention of self-destructive behavior (drug addiction, alcoholism, smoking).

Nonspherical mass motions are a generic feature of core-collapse supernovae, and hydrodynamic instabilities play a crucial role in the explosion mechanism. First successful neutrino-driven explosions could be obtained with self-consistent, first-principles simulations in three spatial dimensions (3D). But 3D models tend to be less prone to explosion than corresponding axisymmetric (2D) ones. This has been explained by 3D turbulence leading to energy cascading from large to small spatial scales, the inverse of the 2D case, thus disfavoring the growth of buoyant plumes on the largest scales. Unless the inertia to explode simply reflects a lack of sufficient resolution in relevant regions, it suggests that some important aspect may still be missing for robust and sufficiently energetic neutrino-powered explosions. Such deficits could be associated with progenitor properties like rotation, magnetic fields or pre-collapse perturbations, or with microphysics that could lead to an enhancement of neutrino heating behin...

The reconstruction of the core of a neutrino interaction at OPERA. The neutrino arriving from the left of the image has interacted with the lead of a brick, producing various particles identifiable by their tracks visible in the emulsion.

Substantial experimental and theoretical efforts worldwide are devoted to exploring the phase diagram of strongly interacting matter. At LHC and top RHIC energies, QCD matter is studied at very high temperatures and nearly vanishing net-baryon densities. There is evidence that a Quark-Gluon Plasma (QGP) was created in experiments at RHIC and LHC. The transition from the QGP back to the hadron gas is found to be a smooth crossover. For larger net-baryon densities and lower temperatures, it is expected that the QCD phase diagram exhibits a rich structure, such as a first-order phase transition between hadronic and partonic matter which terminates in a critical point, or exotic phases like quarkyonic matter. The discovery of these landmarks would be a breakthrough in our understanding of the strong interaction and is therefore the focus of various high-energy heavy-ion research programs. The Compressed Baryonic Matter (CBM) experiment at FAIR will play a unique role in the exploration of the QCD phase diagram in the region of high net-baryon densities, because it is designed to run at unprecedented interaction rates. High-rate operation is the key prerequisite for high-precision measurements of multi-differential observables and of rare diagnostic probes which are sensitive to the dense phase of the nuclear fireball. The goal of the CBM experiment at SIS100 (√s_NN = 2.7-4.9 GeV) is to discover fundamental properties of QCD matter: the phase structure at large baryon-chemical potentials (μ_B > 500 MeV), effects of chiral symmetry, and the equation of state at high density as it is expected to occur in the core of neutron stars. In this article, we review the motivation for and the physics programme of CBM, including activities before the start of data taking in 2024, in the context of the worldwide efforts to explore high-density QCD matter.

Overweight and obese students are often socially and instructionally excluded from physical education and school physical activity opportunities. This article describes teaching strategies from a study of middle school physical education teachers who are committed to providing effective teaching and positive experiences for overweight and obese…

This study takes a stratified random sample of articles published in 2014 from the top 10 journals in the disciplines of biology, chemistry, mathematics, and physics, as ranked by impact factor. Sampled articles were examined for their reporting of original data or reuse of prior data, and were coded for whether the data was publicly shared or otherwise made available to readers. Other characteristics such as the sharing of software code used for analysis and use of data citation and DOIs for data were examined. The study finds that data sharing practices are still relatively rare in these disciplines' top journals, but that the disciplines have markedly different practices. Biology top journals share original data at the highest rate, and physics top journals share at the lowest rate. Overall, the study finds that within the top journals, only 13% of articles with original data published in 2014 make the data available to others.
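
The stratified sampling design can be sketched as follows; the discipline labels and article counts below are placeholders, not the study's actual sampling frame:

```python
import random

# Illustrative stratified random sampling; the discipline labels and
# article counts below are placeholders, not the study's sampling frame.

def stratified_sample(articles_by_discipline, fraction, seed=42):
    """Draw the same fraction of articles from every discipline (stratum)."""
    rng = random.Random(seed)
    sample = {}
    for discipline, articles in articles_by_discipline.items():
        k = max(1, round(fraction * len(articles)))   # at least one per stratum
        sample[discipline] = rng.sample(articles, k)
    return sample

strata = {
    "biology": [f"bio-{i}" for i in range(200)],
    "chemistry": [f"chem-{i}" for i in range(150)],
    "mathematics": [f"math-{i}" for i in range(80)],
    "physics": [f"phys-{i}" for i in range(120)],
}
picked = stratified_sample(strata, 0.10)
print({d: len(v) for d, v in picked.items()})
```

Sampling within strata guarantees that each discipline is represented in proportion to its size, so cross-discipline comparisons of sharing rates are not distorted by one field dominating the sample.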

The experiment program definition and preliminary laboratory concept studies on the zero-G cloud physics laboratory are reported. This program involves the definition and development of an atmospheric cloud physics laboratory and the selection and delineation of a set of candidate experiments that must utilize the unique environment of zero gravity or near-zero gravity.

The purpose of this research is to review the literature about young people's meaningful experiences in physical education and youth sport. We reviewed 50 empirical peer-reviewed articles published in English since 1987. Five themes were identified as central influences to young people's meaningful experiences in physical education and sport:…

The POGIL-PCL project implements the principles of process-oriented, guided-inquiry learning (POGIL) in order to improve student learning in the physical chemistry laboratory (PCL) course. The inquiry-based physical chemistry experiments being developed emphasize modeling of chemical phenomena. In each experiment, students work through at least…

The Argonne National Laboratory--East (ANL-E) Health Physics Section provides direct and/or oversight support to various D&D projects at ANL-E. The health physics problems encountered have been challenging, primarily because they involved the potential for high internal exposures as well as actual high external exposures. The lessons learned are applicable to other radiological facilities. A number of D&D projects being conducted concurrently at ANL-E are described. The problems encountered are then categorized, and lessons learned and recommendations are provided. The main focus will be limited to the support and technical assistance provided by personnel from the ANL Health Physics Section during the course of the work activities.

Soft materials like liquids and polymers are part of everyday life, yet at school this topic is rarely touched. Within the priority program SPP 1064 'Nano- and Microfluidics' of the German Science Foundation, we designed an outreach project that allows pupils (age 14 to 18) to perform hands-on experiments (www.labinabox.de). The experiments allow them, for example, to feel viscosity and viscoelasticity, experience surface tension or see structure formation. We call the modus operandi 'subjective experiments' to contrast them with scientifically objective experiments, which pupils often describe as being boring. Over a dozen different experiments under the topic 'physics of fluids' are collected in a big box that travels to the school. Three other boxes are available: 'physics of light', 'physics of liquid crystals', and 'physics of adhesion and friction'. Each experiment can be performed by 1-3 pupils within 10-20 min. That way, each pupil can perform 6 to 8 different small experiments within one topic. 'Subjective experiments' especially catch the attention of girls without disadvantaging boys. Both are fascinated by the hands-on physics experience and are therefore eager to perform also 'boring' objective experiments. Moreover, before/after polls reveal that their interest in physics has greatly increased. The project can easily be taken over and/or adapted to other topics in the natural sciences. Financial support of the German Science Foundation DFG is acknowledged.

We present the screenplay of a physics show on particle physics by the Physikshow of Bonn University. The show is addressed at non-physicists aged 14+ and communicates basic concepts of elementary particle physics, including the discovery of the Higgs boson, in an entertaining fashion. It also demonstrates a successful outreach activity relying heavily on university physics students. This paper is addressed at anybody interested in particle physics and/or show physics. It is also addressed at fellow physicists working in outreach; maybe the experiments and our choice of simple explanations will be helpful. Furthermore, we are very interested in related activities elsewhere, in particular demonstration experiments relevant to particle physics, as little of this work is published. Our show involves 28 live demonstration experiments. These are presented in an extensive appendix, including photos and technical details. The show is set up as a quest, where 2 students from Bonn with the aid...

The article presents the layout of experiments for observing polarization effects in light passing through crossed polarizers. Adding a third polarizer leads to the appearance of light on the screen. Experiments are described for a laser light source and for light reflected at Brewster's angle. Photos of the setups realizing these effects are included.
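
The three-polarizer effect follows directly from Malus's law, which the sketch below applies polarizer by polarizer (standard optics, ideal polarizers assumed):

```python
import math

# Malus's law sketch (standard optics, ideal polarizers assumed): after each
# polarizer the intensity scales as cos^2 of the angle between the previous
# polarization direction and the new axis.

def transmitted(axes_deg, intensity=1.0):
    """Unpolarized light through ideal polarizers at the given axis angles;
    the first polarizer passes half the incident intensity."""
    out = intensity / 2.0
    for prev, cur in zip(axes_deg, axes_deg[1:]):
        out *= math.cos(math.radians(cur - prev)) ** 2
    return out

print(transmitted([0, 90]))      # crossed polarizers: essentially no light
print(transmitted([0, 45, 90]))  # with a 45° polarizer between: ~1/8 passes
```

Crossed polarizers transmit nothing, but inserting a third polarizer at 45° between them restores I0/2 · cos²45° · cos²45° = I0/8, which is exactly the "appearance of light on the screen" the demonstration shows.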

Measurements of the Joule energy deposition into exploding wires and its relation to the state of the expanding wire core are presented. Wires of nine different metals, with diameters of 10-30 microns, have been exploded by fast (150 A/ns) and slow (20 A/ns) pulses, in vacuum and in air. Interferometry and light-emission measurements show that the expanding wire core can be in different states. Substances with small atomization enthalpy (Ag, Al, Cu, Au) demonstrate full vaporization of the wire core. For the refractory metals (Ti, Pt, Mo, W), the core consists of vapor together with small, hot microparticles. In this case we observe a "firework effect", in which radiation from the wire persists for a time exceeding the energy-deposition time by three orders of magnitude. For non-refractory metals the radiation drops rapidly, on a 100-ns time scale, due to effective adiabatic cooling; this is possible only if the main part of the metal core was vaporized. Interferometric investigation of the refraction coefficient of the expanding metal core confirms this conclusion. It has been shown that the energy deposition before surface breakdown depends strongly on the current rate, surface coatings, environment, wire diameter, and radial electric field. A regime of wire explosion in vacuum without a shunting plasma shell has been realized in the fast explosion mode. In this case we observe anomalously high energy deposition into the wire core, exceeding the regular value by almost a factor of 20. The experimental results for Al wire have been compared with ALEGRA 2D MHD simulations. *Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL8500.
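
The Joule energy deposition is the time integral of I(t)·V(t). A trapezoid-rule sketch over placeholder waveforms, not the experiment's recorded data, is:

```python
# Generic trapezoid-rule estimate of the Joule energy deposited in a wire,
# E = integral of I(t) * V(t) dt; the ramp waveform below is a made-up
# placeholder, not data from the experiment described above.

def joule_energy(t_s, current_a, voltage_v):
    """Trapezoid rule over sampled current and voltage records."""
    energy = 0.0
    for k in range(1, len(t_s)):
        p0 = current_a[k - 1] * voltage_v[k - 1]
        p1 = current_a[k] * voltage_v[k]
        energy += 0.5 * (p0 + p1) * (t_s[k] - t_s[k - 1])
    return energy

# a 150 A/ns current ramp for 1 ns across a constant 10 kV (placeholders)
t = [k * 1e-10 for k in range(11)]      # s, 0 to 1 ns
i = [150e9 * tk for tk in t]            # A
v = [1e4] * len(t)                      # V
print(round(joule_energy(t, i, v), 6))  # → 0.00075 J
```

In the real measurement the resistive voltage must first be separated from the inductive contribution before integrating, which this sketch omits.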

Scaffolds play a critical role in the practical realization of bone tissue engineering. The purpose of this study was to assess whether a core-sheath structured composite scaffold possesses admirable physical properties and biocompatibility in vitro. A novel scaffold composed of a poly(lactic-co-glycolic acid)/β-tricalcium phosphate (PLGA/β-TCP) skeleton wrapped with Type I collagen via low-temperature deposition manufacturing (LDM) was prepared, and bone mesenchymal stem cells (BMSCs) were used to evaluate cell behavior on the scaffold. A bare PLGA/β-TCP skeleton was chosen as the control group. Physical properties were evaluated by porosity ratio, compressive strength, and Young's modulus. Scanning electron microscopy (SEM) was used to study cell morphology. Hydrophilicity was evaluated by water absorption ratio. Cell proliferation was tested by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Osteogenic differentiation of BMSCs was evaluated by alkaline phosphatase (ALP) activity. The results indicated that the physical properties of the novel scaffold were as good as those of the control group, its hydrophilicity was markedly better (P<0.01) than that of the control group, and the proliferation and osteogenic differentiation of BMSCs on the novel scaffold were significantly greater (P<0.05) than on the control group, suggesting that the novel scaffold possesses preferable characteristics and has high value in bone tissue engineering.

All high school students that wish to continue onto college are seeking opportunities to be competitive in the college market. They participate in extra-curricular activities which are seen to foster creativity and the skills necessary to do well in the college environment. In the case of students with an interest in physics, participating in a…

CMS has addressed the challenge of identifying in real time different kinds of physics at the LHC – from the "bread and butter" of Standard Model processes to signals of new particles – with triggers served up according to a carefully designed menu.

The purpose of the current study was to describe and explain the views on teaching English Language Learners (ELLs) held by six elementary physical education (PE) teachers in the Midwest region of the United States. Situated in positioning theory, the research approach was descriptive-qualitative. The primary sources of data were face-to-face…

Experiential education is a teaching methodology employed to facilitate learning. All forms of experiential education require students to "learn by doing" as they participate in activities outside the classroom. Common forms of experiential education utilized by sport and physical educators include field and laboratory activities, service learning…

Background: This paper presents a case study from a physics course at a Norwegian university college, investigating key aspects of a group-work project, so-called learning labs, from the participating students' perspective. Purpose: In order to develop these learning labs further, the students' perspective is important. Which aspects are essential…

Nonspherical mass motions are a generic feature of core-collapse supernovae, and hydrodynamic instabilities play a crucial role in the explosion mechanism. The first successful neutrino-driven explosions could be obtained with self-consistent, first-principles simulations in three spatial dimensions. But three-dimensional (3D) models tend to be less prone to explosion than the corresponding axisymmetric two-dimensional (2D) ones. The reason is that 3D turbulence leads to energy cascading from large to small spatial scales, the inverse of the 2D case, thus disfavoring the growth of buoyant plumes on the largest scales. Unless the inertia to explode simply reflects a lack of sufficient resolution in relevant regions, some important component of robust and sufficiently energetic neutrino-powered explosions may still be missing. Such a deficit could be associated with progenitor properties such as rotation, magnetic fields, or precollapse perturbations, or with microphysics that could cause enhancement of neutrino heating behind the shock. 3D simulations have also revealed new phenomena that are not present in 2D ones, such as spiral modes of the standing accretion shock instability (SASI) and a stunning dipolar lepton-number emission self-sustained asymmetry (LESA). Both impose time- and direction-dependent variations on the detectable neutrino signal. The understanding of these effects and of their consequences is still in its infancy.

The Godson-3B processor is a powerful processor designed for high-performance servers, including Dawning servers. It offers significantly improved performance over previous Godson-3 series CPUs by incorporating eight CPU cores and vector computing units. It contains 582.6 M transistors within a 300 mm2 area in 65 nm technology and is implemented in parallel with fully hierarchical design flows. In Godson-3B, advanced clock distribution mechanisms, including GALS (Globally Asynchronous Locally Synchronous) clocking and a clock mesh, are adopted to obtain an OCV-tolerant clock network. Custom-designed de-skew modules are also implemented to allow further latency balancing after fabrication. The power reduction of Godson-3B is achieved by MLMM (Multi-Level Multi-Mode) clock gating and multi-threshold-voltage cell substitution schemes. The highest frequency of Godson-3B is 1.05 GHz and the peak performance is 128 GFlops (double-precision) or 256 GFlops (single-precision) with 40 W power consumption.

The hydrologic system beneath the Antarctic Ice Sheet is thought to influence both the dynamics and distribution of fast flowing ice streams, which discharge most of the ice lost by the ice sheet. Despite considerable interest in understanding this subglacial network and its effect on ice flow, in situ observations from the ice sheet bed are exceedingly rare. Here we describe the first sediment cores recovered from an active subglacial lake. The lake, known as Subglacial Lake Whillans, is part of a broader, dynamic hydrologic network beneath the Whillans Ice Stream in West Antarctica. Even though "floods" pass through the lake, the lake floor shows no evidence of erosion or deposition by flowing water. By inference, these floods must have insufficient energy to erode or transport significant volumes of sediment coarser than silt. Consequently, water flow beneath the region is probably incapable of incising continuous channels into the bed and instead follows preexisting subglacial topography and surface slope. Sediment on the lake floor consists of till deposited during intermittent grounding of the ice stream following flood events. The fabrics within the till are weaker than those thought to develop in thick deforming beds, suggesting that subglacial sediment fluxes across the ice plain are currently low and unlikely to have a large stabilizing effect on the ice stream's grounding zone.

The cornerstone of the Chinese experimental particle physics program consists of a series of experiments performed in the tau-charm energy region. China began building e+e- colliders at the Institute for High Energy Physics in Beijing more than three decades ago. Beijing Electron Spectrometer, BES, is the common root name for the particle physics detectors operated at these machines. The development of the BES program is summarized and highlights of the physics results across several topical areas are presented.

The purpose of this study is to explore the lived experiences of practicing physical education teachers on the integration of technology in physical education. This study arose from my current experiences as a physical educator and the ongoing integration of technology in education and, more specifically, in physical education. As a current physical…

More than 100 years after their discovery, cosmic rays have been extensively studied, both with balloon experiments and with ground observatories. More recently, the possibility of mounting detectors on satellites or on the International Space Station has allowed for long-duration (several years) continuous observation of primary cosmic rays, i.e. before their interaction with the earth's atmosphere, thus opening a new regime of precision measurements. In this review, recent results from major space experiments, such as Pamela, AMS02 and Fermi, as well as next-generation experiments proposed for the International Space Station, for standalone satellites or for the forthcoming Chinese Space Station, will be presented. The impact of these experiments on the knowledge of cosmic ray propagation will also be discussed.

In the case of a hypothetical core disruptive accident (HCDA) in a liquid metal fast breeder reactor (LMFBR), it is assumed that the core of the nuclear reactor has melted partially and that the chemical interaction between the molten fuel and the liquid sodium has created a high-pressure gas bubble in the core. The violent expansion of this bubble loads and deforms the reactor vessel, thus endangering the safety of the nuclear plant. The experimental test MARA 8 simulates the explosive phenomenon in a mock-up included in a flexible vessel with a flexible roof. This paper presents a numerical simulation of the test and a comparison of the computed results with the experimental results and previous numerical ones.

The present study aims to characterise the physical and chemical properties of the protostellar core Orion B9-SMM3. The APEX telescope was used to perform a follow-up molecular line survey of SMM3. The following species were identified from the frequency range 218.2-222.2 GHz: $^{13}$CO, C$^{18}$O, SO, para-H$_2$CO, and E$_1$-type CH$_3$OH. The on-the-fly mapping observations at 215.1-219.1 GHz revealed that SMM3 is associated with a dense gas core as traced by DCO$^+$ and p-H$_2$CO. Altogether three different p-H$_2$CO transitions were detected with clearly broadened linewidths (8.2-11 km s$^{-1}$ in FWHM). The derived p-H$_2$CO rotational temperature, $64\pm15$ K, indicates the presence of warm gas. We also detected a narrow p-H$_2$CO line (FWHM=0.42 km s$^{-1}$) at the systemic velocity. The p-H$_2$CO abundance for the broad component appears to be enhanced by two orders of magnitude with respect to the narrow line value ($\sim3\times10^{-9}$ versus $\sim2\times10^{-11}$). The detected methanol line shows ...
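As a quick sanity check on the abundance contrast quoted above, a minimal Python sketch using only the two values stated in the abstract:

```python
import math

# Values quoted in the abstract; "two orders of magnitude" is approximate.
broad_abundance = 3e-9    # p-H2CO abundance, broad (warm) component
narrow_abundance = 2e-11  # p-H2CO abundance, narrow (quiescent) component

ratio = broad_abundance / narrow_abundance
orders = math.log10(ratio)

# A 150-fold contrast, i.e. roughly 2.2 dex, consistent with the quoted
# "two orders of magnitude" enhancement.
print(ratio, round(orders, 1))
```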

The non-perturbative nature of the strong interaction leads to spectacular phenomena, such as the formation of hadronic matter, color confinement, and the generation of the mass of visible matter. To get deeper insight into the underlying mechanisms remains one of the most challenging tasks within the field of subatomic physics. The antiProton ANnihilations at DArmstadt (PANDA) collaboration has the ambition to address key questions in this field by exploiting a cooled beam of antiprotons at the High Energy Storage Ring (HESR) at the future Facility for Antiproton and Ion Research (FAIR) combined with a state-of-the-art and versatile detector. This contribution will address some of the unique features of PANDA that give rise to a promising physics program together with state-of-the-art technological developments.

FEL pulse intensities using the transition metal dichalcogenides 1T-TiTe{sub 2} and 1T-TaS{sub 2} as reference systems. ARPES at FLASH is in principle feasible below the SCE limit, and triangular structured radiation damage (in accordance with the crystal structure) occurred only at the highest FEL pulse intensities, which are not usable for PES. With increasing photoelectron densities, increasing energetic shifts and broadenings in the range of several eV were observed. Intensity-dependent XPS measurements on 1T-TaS{sub 2} could be reproduced by the simulations with the Treecode Algorithm, and a linear behavior of the energetic shift and broadening as a function of the electron number was found. Finally, the results of the first time-resolved XPS measurements on the Ta 4f core levels of 1T-TaS{sub 2} in the CDW-insulating phase, using a photon energy of {approx}175 eV (high in comparison with HHG sources), are presented. A time-dependent evolution of the low binding energy edge of the Ta 4f core levels was observed. This effect could almost certainly be attributed to varying FEL intensities, indicating that induced SCEs interfere with possible physical effects. However, with the same setup, our research group repeated the experiment with significantly better temporal resolution and succeeded in measuring directly the charge order dynamics in the complex material 1T-TaS{sub 2} with a temporal resolution of 700 fs and atomic-site sensitivity for the first time.

The physics department at Texas State University has implemented a Learning Assistant (LA) program with reform-based instructional changes in our introductory course sequences. We are interested in how participation in the LA program influences LAs' identity both as physics students and as physics teachers; in particular, how being part of the LA community changes participants' self-concepts and their day-to-day practice. We analyze video of weekly LA preparation sessions and interviews with LAs as well as written artifacts from program applications, pedagogy course reflections, and evaluations. Our analysis of self-concepts is informed by the identity framework developed by Hazari et al., and our analysis of practice is informed by Lave and Wenger's theory of Communities of Practice. Regression models from quantitative studies show that the physics identity construct strongly predicts intended choice of a career in physics; the goal of our current project is to understand the details of the impacts of participation in the LA experience on participants' practice and self-concept, in order to identify critical elements of LA program structure that positively influence physics identity and physics career intentions for students. Our analysis suggests that participation in the LA program impacts LAs in ways that support both stronger ``physics student'' identity and stronger ``physics instructor'' identity, and that these identities are reconciled into a coherent integrated physics identity. In addition to becoming more confident and competent in physics, LAs perceive themselves to have increased competence in communication and a stronger sense of belonging to a supportive and collaborative community; participation in the LA program also changes their ways of learning and of being students, both within and beyond physics. This research and the TXST LA program are supported by NSF DUE-1240036, NSF DUE-1431578, and the Halliburton Foundation.

As a promising and typical semiconductor heterostructure at the nanoscale, the radial Ge/Si NW heterostructure, that is, the Ge-core/Si-shell NW structure, has been widely investigated and used in various nanodevices such as solar cells, lasers, and sensors because of the strong changes in the band structure and increased charge carrier mobility. Therefore, to attain high quality radial semiconductor NW heterostructures, controllable and stable epitaxial growth of core-shell NW structures has become a major challenge for both experiment and theory. Surface roughening is usually undesirable for the epitaxial growth of high quality radial semiconductor NW heterostructures, because it would destroy the core-shell NW structures. For example, the surface of the Ge-core/Si-shell NWs always exhibits a periodic modulation with island-like morphologies, that is, surface roughening, during epitaxial growth. Therefore, a physical understanding of the surface roughening behavior during the epitaxial growth of core-shell NW structures is essential and urgent for theoretically designing and experimentally controlling the growth of high quality radial semiconductor NW heterostructures. Here, we proposed a quantitative thermodynamic theory to address the physical process of epitaxial growth of core-shell NW structures and surface roughening. We showed that the transformation from the Frank-van der Merwe mode to the Stranski-Krastanow mode during the epitaxial growth of radial semiconductor NW heterostructures is the physical origin of surface roughening. We deduced the thermodynamic criterion for the formation of the surface roughening and the phase diagram of growth, and showed that the radius of the NWs and the thickness of the shell layer can not only determine the formation of the surface roughening in a core-shell NW structure, but also control the periodicity and amplitude of the surface roughness. The agreement between the theoretical results and the

In 2011, Daryl Bem published a report of nine parapsychological experiments showing evidence of retrocausal information transfer. Earlier in 2016, the team of Bem, Tressoldi, Rabeyron, and Duggan published the results of a meta-analysis containing 81 independent replications of the original Bem experiments (total of 90 with the originals).[1] This much larger database continues to show positive results of generally comparable effect size, thus demonstrating that the effects claimed by Bem can be replicated by independent researchers and greatly strengthening the case for empirically observed retrocausation. Earlier (2011) work by this author showed how a modification of one of Bem's original experiments could be used to test the mechanism implicitly proposed by Echeverria, Klinkhammer, and Thorne to explain how retrocausal phenomena can exist without any risk of self-contradictory event sequences (time paradoxes). In light of the new publication and new evidence, the current work generalizes the previous analysis which was restricted to only one of Bem's experimental genres (precognitive approach and avoidance). The current analysis shows how minor modifications can be made in Bem's other experimental genres of retroactive priming, retroactive habituation, and retroactive facilitation of recall to test the EKT anti-paradox mechanism. If the EKT hypothesis is correct, the modified experiments, while continuing to show replicable retrocausal phenomena, will also show a characteristic pattern of distortion in the statistics of the random selections used to drive the experiments.

The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) near Geneva, Switzerland, is now the highest energy accelerator in the world, colliding protons with protons. On July 4, 2012, the two general-purpose experiments, ATLAS and the Compact Muon Solenoid (CMS) experiment, announced the observation of a particle consistent with the world’s most sought-after particle, the Higgs boson, at a mass of about 125 GeV (approximately 125 times the mass of the proton). The Higgs boson is the final missing ingredient of the standard model, in which it is needed to allow most other particles to acquire mass through the mechanism of electroweak symmetry breaking. We are members of the team in the CMS experiment that found evidence for the Higgs boson through its decay to two photons, the most sensitive channel at the LHC. We are proposing to carry out studies to determine whether the new particle has the properties expected for the standard model Higgs boson or whether it is something else. The new particle can still carry out its role in electroweak symmetry breaking but have other properties as well. Most theorists think that a single standard model Higgs boson cannot be the complete solution – there are other particles needed to answer some of the remaining questions, such as the hierarchy problem. The particle that has been observed could be one of several Higgs bosons, for example, or it could be composite. One model of physics beyond the standard model is supersymmetry, in which every ordinary particle has a superpartner with opposite spin properties. In supersymmetric models, there must be at least five Higgs bosons. In the most popular versions of supersymmetry, the lightest supersymmetric particle does not decay and is a candidate for dark matter. This proposal covers the period from June 1, 2013, to March 31, 2016. During this period the LHC will finally reach its design energy, almost twice the energy at which it now runs. We will

This report on progress explores recent advances in our theoretical and experimental understanding of the physics of open quantum systems (OQSs). The study of such systems represents a core problem in modern physics that has evolved to assume an unprecedented interdisciplinary character. OQSs consist of some localized, microscopic, region that is coupled to an external environment by means of an appropriate interaction. Examples of such systems may be found in numerous areas of physics, including atomic and nuclear physics, photonics, biophysics, and mesoscopic physics. It is the latter area that provides the main focus of this review, an emphasis that is driven by the capacity that exists to subject mesoscopic devices to unprecedented control. We thus provide a detailed discussion of the behavior of mesoscopic devices (and other OQSs) in terms of the projection-operator formalism, according to which the system under study is considered to be comprised of a localized region (Q), embedded into a well-defined environment (P) of scattering wavefunctions (with Q + P = 1). The Q subspace must be treated using the concepts of non-Hermitian physics, and of particular interest here is: the capacity of the environment to mediate a coupling between the different states of Q; the role played by the presence of exceptional points (EPs) in the spectra of OQSs; the influence of EPs on the rigidity of the wavefunction phases, and; the ability of EPs to initiate a dynamical phase transition (DPT). EPs are singular points in the continuum, at which two resonance states coalesce, that is where they exhibit a non-avoided crossing. DPTs occur when the quantum dynamics of the open system causes transitions between non-analytically connected states, as a function of some external control parameter. Much like conventional phase transitions, the behavior of the system on one side of the DPT does not serve as a reliable indicator of that on the other. In

The Mu2e experiment at Fermilab will search for a signature of charged lepton flavor violation, an effect far too small to be observed within the Standard Model of particle physics; its observation would therefore be a signal of new physics. The signature that Mu2e will search for is the ratio of the rate of neutrinoless coherent conversion of muons into electrons in the field of a nucleus to the muon capture rate by the nucleus. The conversion process is an example of charged lepton flavor violation. The experiment aims at a sensitivity four orders of magnitude higher than that of previous related experiments. The desired sensitivity implies highly demanding requirements on the accuracy of the design and conduct of the experiment. It is therefore important to investigate the tolerance of the experiment to instrumental uncertainties and provide specifications that the design and construction must meet. This is the core of the work reported in this thesis. The design of the experiment is based on three superconducting solenoid magnets. The most important uncertainties in the magnetic field of the solenoids can arise from misalignments of the Transport Solenoid, which transfers the beam from the muon production area to the detector area and eliminates beam-originating backgrounds. In this thesis, the field uncertainties induced by possible misalignments and their impact on the physics parameters of the experiment are examined. The physics parameters include the muon and pion stopping rates and the scattering of beam electrons off the capture target, which determine the signal, intrinsic background and late-arriving background yields, respectively. Additionally, a possible test of the Transport Solenoid alignment with low momentum electrons is examined, as an alternative to measuring its field with conventional probes, which is technically difficult due to mechanical interference. Misalignments of the Transport Solenoid were simulated using standard

The best estimate method of safety analysis involves choosing a realistic set of input parameters for a proposed safety case and evaluating the uncertainty in the results. Determining the uncertainty in code outputs remains a challenge and is the subject of a benchmarking exercise proposed by the Organization for Economic Cooperation and Development. The work proposed in this paper will contribute to this benchmark by assessing the uncertainty in a depletion calculation of the final nuclide concentrations for an experiment performed in the Fukushima-2 reactor. This will be done using lattice transport code DRAGON and a tool known as DINOSAUR. (author)

The Kerr index and dispersion parameter of a small-core chalcogenide photonic crystal fiber are estimated via four-wave mixing near 2 μm. From these values, a new fiber design is proposed to efficiently generate idlers in the mid-infrared.

A device which provides an inexpensive means for making precise studies of the kinetics of gas molecules is discussed. Two experiments are described: (1) Measurement of the vibrational relaxation time of gas molecules and (2) determination of intermolecular forces in molecules. (Author/DF)

Studies the polarization effect on water by cations and anions. Describes an experiment to illustrate the polarization effect of sodium, lithium, calcium, and strontium ions on the water molecule in the hydration spheres of the ions. Analysis is performed by proton NMR. (MVL)

We review the status of the Baikal neutrino telescope, which has been operating in Lake Baikal since 1998 and was upgraded to the 10-Mton detector NT200+ in 2005. We present selected physics results on searches for upward-going neutrinos, relativistic magnetic monopoles and very high-energy neutrinos. We describe the strategy for creating a detector on the Gigaton (km{sup 3}) scale at Lake Baikal. First steps towards a km{sup 3} Baikal neutrino telescope are discussed.

month post-intervention. We analyzed interview data using Systematic Text Condensation. Findings: Participants learned to use their bodies in new ways. Group training permitted social breaks from work, reinforcing colleague unity. Participants did not perceive training as stressful, although working … for implementation seem to be important for sustained effects of health-promotion interventions in the workplace. Originality: The social character of the physical training facilitated a community of practice, which potentially supported the learning of new competencies, and how to improve the organization …

[Purpose] The aim of the present study was to investigate the effects of core stability exercise (CSE) on the physical and psychological functions of elderly women while negotiating general obstacles. [Subjects and Methods] After allocating 10 elderly women each to the core stability training group and the control group, we carried out Performance-Oriented Mobility Assessment (POMA) and measured crossing velocity (CV), maximum vertical heel clearance (MVHC), and knee flexion angle for assessing physical performances. We evaluated depression and fear of falling for assessing psychological functions. [Results] Relative to the control group, the core stability training group showed statistically significant overall changes after the training session: an increase in POMA scores, faster CV, lower MVHC, and a decrease in knee flexion angle. Furthermore, depression and fear of falling decreased significantly. [Conclusion] CSE can have a positive effect on the improvement of physical and psychological performances of older women who are vulnerable to falls as they negotiate everyday obstacles. PMID:25435680

What follows is a description of the procedure for and results of a simple experiment on the formation of impact craters designed for the laboratory portions of lower mathematical-level general education science courses such as conceptual physics or descriptive astronomy. The experiment provides necessary experience with data collection and…

The health physics activities related to the removal and disposal of a thermal shield at a nuclear power plant and subsequent repairs to the core support barrel required increased planning relative to a normal refueling/maintenance outage. The repair of the core support barrel was a "first" in the nuclear power industry. Pre-job planning was of great concern because of extremely high radiation levels associated with the irradiated stainless steel thermal shield and core support barrel. ALARA techniques used in the preparation of the thermal shield for removal and shipment to the disposal site are discussed.

In this paper we present an overview of the results and conclusions of our most recent divertor physics and development work. Using an array of new divertor diagnostics we have measured the plasma parameters over the entire divertor volume and gained new insights into several divertor physics issues. We present direct experimental evidence for momentum loss along the field lines, large heat convection, and copious volume recombination during detachment. These observations are supported by improved UEDGE modeling incorporating impurity radiation. We have demonstrated divertor exhaust enrichment of neon and argon by action of a forced scrape off layer (SOL) flow and demonstrated divertor pumping as a substitute for conventional wall conditioning. We have observed a divertor radiation zone with a parallel extent that is an order of magnitude larger than that estimated from a 1-D conduction limited model of plasma at coronal equilibrium. Using density profile control by divertor pumping and pellet injection we have attained H-mode confinement at densities above the Greenwald limit. Erosion rates of several candidate ITER plasma facing materials are measured and compared with predictions of a numerical model.

Insects and spiders rely on gas-filled airways for respiration in air. However, some diving species take a tiny air-store bubble from the surface that acts as a primary O2 source and also as a physical gill to obtain dissolved O2 from the water. After a long history of modelling, recent work with O2-sensitive optodes has tested the models and extended our understanding of physical gill function. Models predict that compressible gas gills can extend dives more than eightfold, but this limit is never reached, because the animals surface long before the bubble is exhausted. Incompressible gas gills are theoretically permanent. However, neither compressible nor incompressible gas gills can support even resting metabolic rate unless the animal is very small, has a low metabolic rate or ventilates the bubble's surface, because the volume of gas required to produce an adequate surface area is too large to permit diving. Diving-bell spiders appear to be the only large aquatic arthropods that can have gas gill surface areas large enough to supply resting metabolic demands in stagnant, oxygenated water, because they suspend a large bubble in a submerged web.
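The compressible gas-gill mechanism described above can be illustrated with a toy quasi-static model: O2 invades the shrinking bubble faster than N2 escapes, so the bubble delivers several times its initial O2 content before it collapses. All rate constants and the metabolic rate below are purely illustrative assumptions, not values from the paper, so the extension factor printed is not the eightfold model limit quoted above.

```python
# Toy physical-gill model: a bubble at 1 atm exchanging O2 and N2 with
# air-equilibrated water while the animal consumes O2 at a fixed rate.
# All parameter values are illustrative, chosen only so the qualitative
# behaviour (O2 delivered >> O2 initially stored) is visible.
def simulate_gas_gill(n_o2=0.21, n_n2=0.79, m_dot=0.02,
                      k_o2=0.3, k_n2=0.1, dt=0.01):
    """Euler-integrate bubble gas exchange until the bubble is exhausted.

    Returns total O2 delivered to the animal divided by the O2 initially
    stored in the bubble; a value > 1 means the bubble acted as a gill,
    not just as an air store."""
    p_o2_water, p_n2_water = 0.21, 0.79  # dissolved gas tensions
    initial_o2 = n_o2
    delivered = 0.0
    t = 0.0
    while n_o2 + n_n2 > 1e-4 and n_o2 > 0 and t < 2e3:
        total = n_o2 + n_n2
        p_o2_bubble = n_o2 / total   # partial pressures at 1 atm total
        p_n2_bubble = n_n2 / total
        # O2 consumption lowers p_o2_bubble below the water tension, so O2
        # diffuses in; p_n2_bubble rises above 0.79, so N2 slowly leaks out
        # (k_o2 > k_n2) and the bubble gradually shrinks.
        n_o2 += dt * (k_o2 * (p_o2_water - p_o2_bubble) - m_dot)
        n_n2 += dt * k_n2 * (p_n2_water - p_n2_bubble)
        delivered += dt * m_dot
        t += dt
    return delivered / initial_o2

print(round(simulate_gas_gill(), 2))
```

With these assumed constants the bubble supplies many times its initial O2 store; the key design point is simply that the O2 exchange coefficient exceeds the N2 one, which is what makes the bubble a temporary gill rather than a fixed reservoir.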

An inductively heated experiment, SURC-2, using prototypic UO{sub 2}-ZrO{sub 2} materials was executed as part of the Integral Core-Concrete Interactions Experiments Program. The purpose of this experimental program was to measure and assess the variety of source terms produced during core debris/concrete interactions. These source terms include thermal energy released to both the reactor basemat and the containment environment, as well as flammable gas, condensable vapor and toxic or radioactive aerosols generated during the course of a severe reactor accident. The SURC-2 experiment eroded a total of 35 cm of basaltic concrete during 160 minutes of sustained interaction using 203.9 kg of prototypic UO{sub 2}-ZrO{sub 2} core debris material that included 18 kg of Zr metal and 3.4 kg of fission product simulants. The melt pool temperature ranged from 2400--1900{degrees}C during the first 50 minutes of the test, followed by steady temperatures of 1750--1800{degrees}C during the middle portion of the test and increased temperatures of 1800--1900{degrees}C during the final 50 minutes of testing. The total erosion during the first 50 minutes was 15 cm, with an additional 7 cm during the middle part of the test and 13 cm of ablation during the final 50 minutes. Comprehensive gas flowrates, gas compositions, and aerosol release rates were also measured during the SURC-2 test. When combined with the SURC-1 results, SURC-2 forms a complete data base for prototypic UO{sub 2}-ZrO{sub 2} core debris interactions with concrete.
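The erosion figures quoted above imply per-phase average ablation rates; a small sketch (the middle-interval duration of 60 minutes is inferred from the stated 160-minute total, since the first and final intervals are 50 minutes each):

```python
# Per-phase average ablation rates from the SURC-2 erosion figures.
phases = [
    ("first 50 min",  15.0, 50.0),   # cm eroded, duration in minutes
    ("middle 60 min",  7.0, 60.0),   # duration inferred: 160 - 50 - 50
    ("final 50 min",  13.0, 50.0),
]

total_erosion = sum(depth for _, depth, _ in phases)    # 35 cm, as stated
total_time = sum(minutes for _, _, minutes in phases)   # 160 min, as stated
rates = {name: depth / minutes for name, depth, minutes in phases}

print(total_erosion, total_time)
for name, rate in rates.items():
    print(f"{name}: {rate:.2f} cm/min")
```

The rates (roughly 0.30, 0.12 and 0.26 cm/min) track the quoted melt pool temperature history: erosion slows in the cooler middle portion and accelerates again as the melt reheats.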

Objective: Family medicine, epidemiology, health management and health promotion are the core disciplines of community medicine. In this paper, we discuss the development of a community posting program within the framework of the community medicine core disciplines at a primary health centre attached to a teaching hospital in Puducherry, India. Methods: This is a process documentation of our experience. Results: There were some shortcomings, which revolved around the central theme that postings were conducted with the department in the teaching hospital as the focal point, not the primary health centre (PHC). To address the shortcomings, we made some changes to the existing community posting program in 2013. Student feedback aimed at Kirkpatrick level 1 (satisfaction) evaluation revealed that students appreciated the benefits of having the posting with the PHC as the focal point. Feedback recommended some further changes to the community posting, which could be addressed through complete administrative control of the primary health centre as the urban health and training centre of the teaching hospital, and also through practice of the core disciplines of community medicine by the faculty of community medicine. Conclusion: It is important to introduce medical undergraduates to the core disciplines of community medicine early through community postings. Community postings should be conducted with the primary health centre or urban health and training centre as the focal point.

This paper explores recent advances in experimental methodology to analyze elite behavior. Using an email experiment conducted in the context of the Brazilian 2008 municipal elections, we studied whether candidates target "swing" or "core" voters during campaigns. Candidates from all parties – 1,000 candidates in all – were contacted by randomly generated citizens who identified themselves as either core or swing voters. Additionally, we randomized senders' past voting behavior and their gender. To identify the baseline answer rate, we employed a placebo treatment with no reference to the elections. Our results show that Brazilian candidates target any sender as long as she identifies herself as a potential voter. Within this general finding, models with city-specific fixed effects indicate that Brazilian politicians tend to target core voters. The paper contributes to the general experimental literature by providing an easily replicable design that can test the behavior of elite interaction with the public. At the same time, the paper extends the literature on core versus swing voters by providing an empirical test that can shed light on the effects of a specific political environment (type of election, voting rule, and party structure), and how it affects the relationship between candidates and voters during elections.

The Facility for Antiproton and Ion Research (FAIR) is under construction at Darmstadt, Germany. It will deliver high intensity beams of ions and antiprotons for experiments in the fields of atomic physics, plasma physics, nuclear physics, hadron physics, nuclear matter physics, material physics and biophysics. One of the scientific pillars of FAIR is the Compressed Baryonic Matter (CBM) experiment which is designed for the study of high density nuclear matter as it exists in the core of neutron stars. In this article the scientific program of FAIR will be reviewed with emphasis on the CBM experiment.

Studies of attrition in science education show that students who leave are often extrinsically motivated, whereas students who stay are often intrinsically motivated. Furthermore, students (in Scandinavia) tend to use an introvert discourse when explaining their motives for leaving. A longitudinal study of 26 first year students of physics, who were interviewed on two to seven occasions over a year, shows that even the intrinsically motivated students struggle with their studies. They experience a pressure for using a surface approach to studying, which they find inappropriate. Although students use an introspective discourse, analysis of the interviews shows that they experience a conflict between their intrinsic interest in physics and the curriculum. This can be interpreted as a problem with the didactical transposition; the ‘physics taught’ is too distant from ‘research physics’.

Intense beams of heavy ions are capable of heating volumetric samples of matter to high energy density. Experiments are performed on the resulting warm dense matter (WDM) at the NDCX-I ion beam accelerator. The 0.3 MeV, 30 mA K(+) beam from NDCX-I heats foil targets by combined longitudinal and transverse neutralized drift compression of the ion beam. Both the compressed and uncompressed parts of the NDCX-I beam heat targets. The exotic state of matter (WDM) in these experiments requires specialized diagnostic techniques. We have developed a target chamber and fielded target diagnostics including a fast multichannel optical pyrometer, optical streak camera, laser Doppler-shift interferometer (Velocity Interferometer System for Any Reflector), beam transmission diagnostics, and high-speed gated cameras. We also present plans and opportunities for diagnostic development and a new target chamber for NDCX-II.

Two experiments of I. K. Kikoin—the correlation between superconductivity and the galvanomagnetic properties of metals (1933), and the gyromagnetic effect in superconductors (1938)—which were carried out long before the appearance of the microscopic theory of superconductivity, anticipated two of its principal conclusions. Established were: 1) the determining role of electron-phonon interaction; 2) the orbital nature of diamagnetism in superconductors.

The Extreme Energy Events Project is an experiment for the detection of Extensive Air Showers which exploits the Multigap Resistive Plate Chamber technology. At the moment 40 EEE muon telescopes, distributed all over the Italian territory, are taking data, and the related analyses have produced the first interesting results, which are reported here. Moreover, this Project has a strong added value thanks to its effectiveness in terms of scientific communication, which derives from the peculiar way it was planned and carried out.

For all the time and frustration that high energy physicists expend interacting with computers, it is surprising that more attention is not paid to the critical role computers play in the science. With large, expensive colliding beam experiments now dependent on complex programs working at startup, questions of reliability -- the trustworthiness of software -- need to be addressed. This issue is most acute in triggers, used to select data to record -- and data to discard -- in the real time environment of an experiment. High level triggers are built on codes that now exceed 2 million source lines -- and for the first time experiments are truly dependent on them. This dependency will increase at the accelerators planned for the new millennium (SSC and LHC), where cost and other pressures will reduce tolerance for first run problems, and the high luminosities will make this on-line data selection essential. A sense of this incipient crisis motivated the unusual juxtaposition of topics in these lectures. 37 refs., 1 fig.

People with mental illness have higher rates of physical health problems and consequently live significantly shorter lives. This issue is not yet viewed as a national health priority, and research about mental health consumers' views on accessing physical health care is lacking. The aim of this study is to explore the experience of mental health consumers in utilizing health services for physical health needs. A qualitative exploratory design was utilized. Semistructured focus groups were held with 31 consumer participants. Thematic analysis revealed three main themes: scarcity of physical health care, with problems accessing diagnosis, advice or treatment for physical health problems; disempowerment due to scarcity of physical health care; and tenuous empowerment, describing the survival resistance strategies utilized. Mental health consumers were concerned about physical health and the nonresponsive health system. A specialist physical health nurse consultant within mental health services could potentially redress this gap in health care provision.

The educational use of video and multimedia is increasing rapidly in secondary and higher education across all disciplines. Videos for physics education can be found on the websites of many universities and other educational institutions all over the world. In the area of experimental physics, the available videos demonstrate mainly physical phenomena or physics experiments, and only a few of them allow for the quantitative estimation of physical parameters. In this work, we present characteristic videos of an ongoing project aiming at the development of a collection of educational videos that guide students to measure data and to analyze them in order to calculate physical quantities. These videos can be used for physics teaching, as a demonstration, as a supplementary educational tool for the students' pre-lab preparation, and also in the physics lab, if the necessary equipment is not available or in case of time-consuming measurements. The pilot use of a video related to the measurement of the lead attenuation coefficient...

This paper introduces the results of selecting and breeding a micro-organism, Strain I, and its core model experiment investigation for microbial enhanced oil recovery (MEOR). Strain I was separated from the formation water of the Dagang oil field, with analytical results showing that Strain I is a gram-positive bacillus. A further study revealed that this strain has an excellent tolerance of environmental stresses: it can survive at 70℃, 30 wt% salinity, and pH 3.5–9.4. Strain I can produce biosurfactants that increase the oil recovery ratio, use crude oil as its single carbon source, and decompose long-chain paraffins of large molecular weight into short-chain paraffins of small molecular weight. The core model experiment shows that Strain I enhances oil recovery well: using 2 vol% of the fermentation solution of Strain I to displace the crude oil in the synthetic plastic bonding core increased the recovery ratio by 21.6%.

Purpose: In 2004, the American Society for Radiation Oncology (ASTRO) published its first physics education curriculum for residents, which was updated in 2007. A committee composed of physicists and physicians from various residency program teaching institutions was reconvened again to update the curriculum in 2009. Methods and Materials: Members of this committee have associations with ASTRO, the American Association of Physicists in Medicine, the Association of Residents in Radiation Oncology, the American Board of Radiology (ABR), and the American College of Radiology. Members reviewed and updated assigned subjects from the last curriculum. The updated curriculum was carefully reviewed by a representative from the ABR and other physics and clinical experts. Results: The new curriculum resulted in a recommended 56-h course, excluding initial orientation. Learning objectives are provided for each subject area, and a detailed outline of material to be covered is given for each lecture hour. Some recent changes in the curriculum include the addition of Radiation Incidents and Bioterrorism Response Training as a subject and updates that reflect new treatment techniques and modalities in a number of core subjects. The new curriculum was approved by the ASTRO board in April 2010. We anticipate that physicists will use this curriculum for structuring their teaching programs, and subsequently the ABR will adopt this educational program for its written examination. Currently, the American College of Radiology uses the ASTRO curriculum for their training examination topics. In addition to the curriculum, the committee updated suggested references and the glossary. Conclusions: The ASTRO physics education curriculum for radiation oncology residents has been updated. To ensure continued commitment to a current and relevant curriculum, the subject matter will be updated again in 2 years.

Gold nanoshells (GNSs) on silica cores are widely used in various biomedical applications that need spectral tunability and a controlled absorption/scattering ratio. However, the plasmonic quality of experimental extinction spectra of GNS colloids differs from that predicted by Mie theory. In this work, we fabricated highly monodisperse silica nanospheres to use them further as cores for the synthesis of silica/gold nanoshells. Four GNS samples with 116-nm cores and gold shell thickness ranging from 16 to 34 nm (116/16, 18, 25, 34) were additionally separated in glycerol gradient solutions to obtain fractions with a dominant percentage of single particles or aggregates of various sizes. The separated samples demonstrated extinction spectra with a high extinction maximum to minimum ratio of about 3. Optical properties of GNS monomers and aggregates with fixed and random orientations were calculated by Mie theory for polydisperse GNSs, by a generalized multiparticle Mie (GMM) theory for aggregates of separated GNSs, and by the finite-difference time-domain (FDTD) method for aggregates of overlapped GNSs. The extinction spectra of the upper fractions from the 116/25 and 116/34 samples are shown to be well described by Mie theory for GNSs with polydisperse shell thickness. However, for the as-prepared 116/16 sample this approach fails because of a strong near infrared (NIR) contribution from GNS dimers and trimers. The formation of such aggregates is due to coupling of silica cores at early stages of nanoshell synthesis, thus leading to peanut structures with overlapped gold shells. We suggested a TEM-based ensemble model with single particles and small dimer and trimer aggregates, which gives satisfactory agreement between measured and FDTD-simulated spectra in the vis-NIR region. Thus, the proposed synthetic technology produces high-quality gold nanoshells, whose remarkable optical properties are in good agreement with electromagnetic simulations based on TEM data.
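
A TEM-based ensemble model of this kind amounts to weighting the computed spectrum of each particle species (monomer, dimer, trimer) by its number fraction from TEM counting. A minimal sketch of that weighting step; the spectra and fractions below are illustrative placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical per-species extinction spectra (arbitrary units) on a common
# wavelength grid; in practice these come from Mie/GMM/FDTD calculations.
wavelengths = np.linspace(400, 1000, 7)                    # nm
monomer = np.array([1.0, 1.8, 3.0, 2.2, 1.2, 0.8, 0.6])
dimer   = np.array([0.8, 1.2, 2.0, 2.6, 2.4, 1.8, 1.4])   # red-shifted
trimer  = np.array([0.6, 0.9, 1.5, 2.2, 2.6, 2.4, 2.0])   # stronger NIR tail

# Number fractions from TEM particle counting (must sum to 1)
fractions = {"monomer": 0.80, "dimer": 0.15, "trimer": 0.05}

# Ensemble spectrum = number-fraction-weighted sum of species spectra
ensemble = (fractions["monomer"] * monomer
            + fractions["dimer"] * dimer
            + fractions["trimer"] * trimer)
```

Even a small dimer/trimer fraction visibly lifts the NIR side of the ensemble spectrum, which is the qualitative effect the abstract attributes to the as-prepared 116/16 sample.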

The explosion of a core-collapse supernova depends on a sequence of events taking place in less than a second in a region of a few hundred kilometers at the center of a supergiant star, after the stellar core approaches the Chandrasekhar mass and collapses into a proto-neutron star, and before a shock wave is launched across the stellar envelope. Theoretical efforts to understand stellar death focus on the mechanism which transforms the collapse into an explosion. Progress in understanding this mechanism is reviewed with particular attention to its asymmetric character. We highlight a series of successful studies connecting observations of supernova remnants and pulsar properties to the theory of core collapse using numerical simulations. The encouraging results from first-principles models in axisymmetric simulations are tempered by new puzzles in 3D. The diversity of explosion paths and the dependence on the pre-collapse stellar structure is stressed, as well as the need to gain a better understanding of hydr...

The demonstration of efficient core heating is the main purpose of the FIREX-I project, in which an Au cone-attached solid-ball CD target is used. For the guiding of the fast electron beam generated by relativistic laser-plasma interactions, a kilo-Tesla-class longitudinal magnetic field is applied by a capacitor-coil target and a kJ-class, ns-duration high power laser. In addition, to reduce the collisional effects (energy loss and scattering of fast electrons) during propagation in the Au cone tip, we introduced an opened-tip cone (tipless cone). To evaluate the core heating properties, we carried out integrated simulations, which show an enhancement of the core heating efficiency by a factor of three due to the magnetic guiding and the opened-tip cone. These simulation results will be shown and compared with the experimental results. JSPS KAKENHI (26400532, 15H03758, 16H02245, 15K21767), NIFS Collaboration Research program (NIFS12KUGK05, NIFS14KNSS054), and FIREX project.

Hypernuclear research will be one of the main topics addressed by the PANDA experiment at the planned Facility for Antiproton and Ion Research (FAIR) at Darmstadt, Germany (http://www.gsi.de, http://www.gsi.de/fair/). Thanks to the use of stored $\overline{p}$ beams, copious production of double Λ hypernuclei is expected at the PANDA experiment, which will enable high precision γ spectroscopy of such nuclei for the first time, and consequently a unique chance to explore the hyperon-hyperon interaction. In particular, ambiguities of past experiments in determining the strength of the ΛΛ interaction will be avoided thanks to the excellent energy precision of a few keV (FWHM) achieved by germanium detectors. Such a resolution capability is particularly needed to resolve the small energy spacing, of the order of 10–100 keV, characteristic of the spin doublets in hypernuclei, the so-called "hypernuclear fine structure". In comparison to previous experiments, PANDA will benefit from a novel technique to assign the various observable γ-transitions in a unique way to specific double hypernuclei by exploring various light targets. Nevertheless, the ability to carry out unique assignments requires a dedicated hypernuclear detector setup. This consists of a primary nuclear target for the production of $\Xi^-\overline{\Xi}$ pairs, a secondary active target for hypernuclei formation and the identification of associated decay products, and a germanium array detector to perform γ spectroscopy. Moreover, one of the most challenging issues of this project is the fact that all detector systems need to operate in the presence of a high magnetic field and a large hadronic background. Accordingly, the need for an innovative detector concept will require dramatic improvements to fulfil these conditions, which will likely lead to a new generation of detectors. In the present talk, details concerning the current status of the activities related to the detector developments will be presented.

The discovery of a Higgs-like boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a much larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, we outline the current results from the ATLAS Experiment regarding Beyond the Standard Model (BSM) Higgs hypothesis tests. Searches for additional Higgs bosons are presented and interpreted in well motivated BSM Higgs frameworks, such as two-Higgs-doublet models and the Minimal Supersymmetric Standard Model.

AIM: The purpose of the study was to compare physical activity in patients with FES with healthy controls; to investigate changes in physical activity over 1 year of follow-up; and to explore the correlations of physical activity and anomalous bodily experiences reported by patients with FES. METHODS: Both physical activity and aerobic fitness were measured. Anomalous bodily experiences were measured by selected items from the Examination of Anomalous Self-Experience and The Body Awareness Scale. Psychopathological data comprising negative and positive symptoms and data on psychotropic medication were obtained from medical records of all patients. RESULTS: Physical activity and aerobic fitness were significantly lower in patients with FES compared with healthy controls (p … fitness. Patients with more severe …

There is a growing concern about coupling among physical components in NWP models. The physics package of the NCEP Global Forecast System (GFS) has been considerably tuned and the connections among its various components are well considered. Thus, the full GFS physics package was implemented into GRAPES-MESO and its single-column version as well. Using the data collected at the ARM Southern Great Plains site during the summer 1997 Intensive Observing Period, several single-column model (SCM) experiments were conducted to test the performance of the original set of GRAPES physical processes (CTL experiment) and the implemented GFS physics package (GFS experiment). Temperature, moisture, radiation, surface heat flux, surface air temperature and precipitation are evaluated. It is found that potential temperature and vapor mixing ratio simulated by the GFS experiment are more accurate than those of the CTL experiment. Errors of surface downward solar and long-wave radiation simulated by the GFS experiment are smaller than those of the CTL experiment, and the upward latent and sensible heat fluxes also agree better with observations. The maximum and minimum 2-m air temperatures of the GFS experiment are closer to observations than those of the CTL experiment. Analysis of the simulated precipitation shows that both sets of physical processes reproduce heavy rainfall events well. Failure and delay of moderate rainfall events and over-prediction of drizzle events are commonly found for both sets of experiments. For the case of three rainfall events, the errors of potential temperature and vapor mixing ratio simulated by the GFS experiment were smaller than those of the CTL experiment. It is shown that the late occurrences of rainfall result from a more stable temperature profile and lower moisture in the simulated boundary layer than observed prior to rainfall. When the simulated rainfall occurs, the simulated temperature and moisture become more favorable to precipitation than observed.
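
Evaluations like the one above typically reduce each simulated field (temperature, moisture, fluxes) to bias and RMSE scores against observations on the same times or levels. A minimal sketch of those two metrics, with illustrative numbers rather than the ARM data:

```python
import numpy as np

def bias_rmse(simulated, observed):
    """Mean bias and root-mean-square error of a simulated field
    against observations on the same grid/times."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    diff = sim - obs
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Illustrative 2-m temperatures (K), not actual CTL/GFS output
obs = np.array([300.0, 301.0, 302.0, 303.0])
sim = np.array([300.5, 301.5, 301.5, 303.5])
b, r = bias_rmse(sim, obs)   # warm bias of 0.25 K, RMSE of 0.5 K
```

Comparing these scores between the CTL and GFS experiments is exactly the kind of "errors ... are smaller" statement made in the abstract.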

Although the Standard Model (SM) has been strongly confirmed by the Higgs discovery, several experimental facts are still not explained. The SHiP experiment (Search for Hidden Particles), a beam dump experiment at CERN, aims at the observation of long-lived particles very weakly coupled to ordinary matter. These particles of the GeV mass scale, foreseen in many extensions of the SM, might come from the decay of charmed hadrons produced in the collision of a 400 GeV proton beam on a target. High rates of all three active neutrinos are also expected. For the first time the properties and the cross section of the ντ will be studied, thanks to a detector based on nuclear emulsions with the micrometric resolution needed to identify the tau lepton produced in neutrino interactions. Measuring the charge of the tau daughters will enable the first observation of the ν̄τ and the study of its cross section.

With the aid of modern electronics, the speed of light was directly measured by timing the delay of a light pulse from a laser reflecting from a mirror, in an experiment performed in an educational physics laboratory.
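
The underlying arithmetic is the round-trip relation c = 2d/Δt: the pulse travels to the mirror and back, so the path length is twice the mirror distance. A sketch with illustrative numbers, not the actual apparatus values:

```python
# Round-trip time-of-flight estimate of the speed of light.
distance_m = 15.0          # laser-to-mirror distance in metres (illustrative)
delay_s = 100.1e-9         # measured pulse delay in seconds (illustrative)

# The pulse covers 2 * distance in the measured delay.
c_measured = 2 * distance_m / delay_s   # m/s, close to 3.0e8
```

At these scales the delay is on the order of 100 ns, which is why the measurement needs fast modern electronics rather than mechanical timing.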

Presents a physical inorganic experiment in which large single crystals of the alkali halides doped with divalent ion impurities are prepared easily. Demonstrates the ion pairing of inorganic ions in solid solution. (CS)

The accumulator is a passive safety injection device for emergency core cooling systems. As an important safety feature that provides a high-speed injection flow to the core by compressed nitrogen gas pressure during a loss-of-coolant accident (LOCA), the accumulator injects its precharged nitrogen into the system after its coolant has been emptied. Attention has been drawn to the possible negative effects caused by such a nitrogen injection in passive safety nuclear power plants. Although some experimental work on nitrogen injection has been done, there have been no comparative tests in which the effects on the system responses and the core safety have been clearly assessed. In this study, a new thermal hydraulic integral test facility—the advanced core-cooling mechanism experiment (ACME)—was designed and constructed to support the CAP1400 safety review. The ACME test facility was used to study the nitrogen injection effects on the system responses to the small-break loss-of-coolant accident (SBLOCA) transient. Two comparison test groups—a 2-inch cold leg break and a double-ended direct-vessel-injection (DEDVI) line break—were conducted. Each group consists of a nitrogen injection test and a nitrogen isolation comparison test with the same break conditions. To assess the nitrogen injection effects, the experimental data that are representative of the system responses and the core safety were compared and analyzed. The results of the comparison show that the effects of nitrogen injection on system responses and core safety are significantly different between the 2-inch and DEDVI breaks. The mechanisms of the different effects on the transient were also investigated. The amount of nitrogen injected, along with its heat absorption, was likewise evaluated in order to assess its effect on the system depressurization process. The results of the comparison and analyses in this study are important for recognizing and understanding the
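
The depressurization role of the precharged nitrogen can be bounded with a simple ideal-gas estimate, treating the gas expansion during blowdown as adiabatic. A sketch under those simplifying assumptions; all numbers are illustrative, not ACME or CAP1400 parameters:

```python
# Ideal-gas, adiabatic estimate of accumulator nitrogen behaviour during
# blowdown (gamma for diatomic N2 ~ 1.4). Illustrative numbers only.
GAMMA = 1.4
R = 8.314          # universal gas constant, J/(mol K)

p1, v1, t1 = 4.5e6, 6.0, 320.0   # initial pressure (Pa), gas volume (m^3), T (K)
v2 = 12.0                        # gas volume once the coolant is expelled (m^3)

n_moles = p1 * v1 / (R * t1)             # amount of precharged nitrogen
p2 = p1 * (v1 / v2) ** GAMMA             # pressure after adiabatic expansion
t2 = t1 * (v1 / v2) ** (GAMMA - 1)       # gas temperature drop on expansion
```

The expansion both lowers the driving pressure and cools the gas; the cold nitrogen then absorbs heat from the system, which is the effect the study evaluates against the measured depressurization.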

New Zealand physical education has a strong historical association with the Teaching Personal and Social Responsibility (TPSR) model. This paper looks at this history and the relationship of the model with the New Zealand Curriculum, and reports on the experiences of one secondary school physical education teacher who has been teaching personal and social responsibility through physical education for many years.

The processes $e^+e^-\to e^+e^-X$, with $X$ being either the $\eta$ meson or $\pi^0\pi^0$, are studied at DA$\Phi$NE, with $e^+e^-$ beams colliding at $\sqrt{s}\simeq1$ GeV, below the $\phi$ resonance peak. The data sample is from an integrated luminosity of 240 pb$^{-1}$, collected by the KLOE experiment without tagging of the outgoing $e^+e^-$. Preliminary results are presented on the observation of the $\gamma\gamma\to\eta$ process, with both $\eta\to\pi^+\pi^-\pi^0$ and $\eta\to\pi^0\pi^0\pi^0$ channels, and the evidence for $\gamma\gamma\to\pi^0\pi^0$ production at low $\pi^0\pi^0$ invariant mass.

Human space flight requires protecting astronauts from the harmful effects of space radiation. The availability of measured nuclear cross section data needed for these studies is reviewed in the present paper. The energy range of interest for radiation protection is approximately 100 MeV/n to 10 GeV/n. The majority of data are for projectile fragmentation partial and total cross sections, including both charge changing and isotopic cross sections. The cross section data are organized into categories which include charge changing, elemental, isotopic for total, single and double differential with respect to momentum, energy and angle. Gaps in the data relevant to space radiation protection are discussed and recommendations for future experiments are made.

The Physics of Colloids in Space (PCS) experiment is a Microgravity Fluids Physics investigation that is presently located in an Expedite the Process of Experiments to Space Station (EXPRESS) Rack on the International Space Station. PCS was launched to the International Space Station on April 19, 2001, activated on May 31, 2001, and will continue to operate about 90 hr per week through May 2002.

There have long been concerns about the use of physical restraint in residential care. This article presents the findings of a qualitative study that explores the experiences of children, young people and residential workers of physical restraint. The research identifies the dilemmas and ambiguities for both staff and young people, and…

Following recent education policy and curriculum changes in England, the notion of inclusion of children with special educational needs in physical education has increasingly become a topic of research interest and concern. It was the aim of this study to explore personal experiences and perspectives of inclusion in physical education. To this end…

"The particle physics group at Liverpool University has purchased an LRXPlus single-column materials testing machine from Lloyd Instruments, which will be used to help characterise the carbon-fibre support frames for detectors used for state-of-the-art particle physics experiments." (1 page)

This study focuses on 15 foreign-born students majoring in physics who are also racial/ethnic minorities. We address the research question: What are the acculturation experiences of foreign-born Students of Color majoring in physics? Berry's (2003) theory of acculturation and Bandura's (1994) theory of self-efficacy were substantive…

The purpose of this research was to examine university supervisors' experiences and perceptions of a cooperating physical education teacher education (COPET) programme while on teaching practice. Teaching practice is a central tenet of physical education teacher education (PETE) preparation. The COPET programme was designed to support the triad…

This report discusses the following topics on the conceptual design of the Tokamak Physics Experiment: role and mission of TPX; overview of design; physics design assessment; engineering design assessment; evaluation of cost, schedule, and management plans; and environment, safety, and health.

This article provides an analysis of developmental discourses underpinning preschool physical education in Scotland's Curriculum for Excellence. Implementing a post-structural perspective, the article examines the preschool experiences and outcomes related to physical education as presented in the Curriculum for Excellence "health and…

Purpose: Single motherhood has been associated with negative health consequences such as depression and cardiovascular disease. Physical activity might reduce these consequences, but little is known about physical activity experiences and beliefs that might inform interventions and programs for single mothers. The present study used…

The mobile acceleration sensor has been used in physics experiments on free and damped oscillations. Results for the period, frequency, spring constant and damping constant match measurements obtained by other methods very well. The Accelerometer Monitor application for Android has been used to read the outputs of the sensor. Perspectives for the physics laboratory are also discussed.
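
With the sensor trace in hand, the damping constant can be recovered from the logarithmic decrement between amplitudes one period apart. A self-contained sketch on a synthetic damped-oscillation trace (parameters are illustrative, not the experiment's values):

```python
import numpy as np

# Synthetic trace of a damped oscillation x(t) = exp(-gamma t) cos(2 pi f t)
gamma_true = 0.15            # damping constant, 1/s (illustrative)
f = 2.0                      # oscillation frequency, Hz (illustrative)
dt = 1e-4                    # sampling interval, s
t = np.arange(0.0, 5.0, dt)
x = np.exp(-gamma_true * t) * np.cos(2 * np.pi * f * t)

period = 1.0 / f
# Sample the trace at t = 0, T, 2T, ..., where the cosine factor equals 1,
# so successive samples are the decaying amplitudes.
idx = (period * np.arange(5) / dt).round().astype(int)
amps = x[idx]

# Logarithmic decrement delta = ln(A_n / A_{n+1}); gamma = delta / T
delta = np.log(amps[:-1] / amps[1:]).mean()
gamma_est = delta / period
```

On real accelerometer data one would locate the peaks numerically instead of sampling at known times, but the decrement arithmetic is identical.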

We remodeled our sophomore curriculum extensively, both in the laboratories and the lectures. Our Experimental Contemporary Physics laboratory (PHY293) was almost completely re-built in both curriculum and pedagogy. Among the new experiments that we introduced are Nanoparticle plasmon resonance, Saturated absorption and fluorescence in iodine molecules, Quantized conductance in atomic-scale constrictions, and Water droplet behavior and manipulation on metal surfaces. This presentation will focus on the last two experiments. Quantized conductance in a constriction in a gold wire being pulled slowly is a unique direct application of one-dimensional potential wells. Unlike most experiments on quantum mechanics, which use optics, this experiment is transport-based, conceptually simple, and robust, in addition to being low-cost. The transport properties of the wire span multiple transport regimes while it is being pulled. It is quite valuable for students (a significant fraction of whom are biological physics and engineering physics majors) to understand the behavior of water droplets on different surfaces. Water is the medium in which biological activities occur and is important in many other applications like air conditioning and refrigeration. We design simple gradients in the hydrophobic/hydrophilic properties of metal surfaces in order to move water droplets in a controlled way, even against gravity. Students explore the effects of surface tension and metal roughness on droplets.
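
The conductance plateaus observed while the wire is pulled sit at integer multiples of the conductance quantum G0 = 2e²/h. A minimal sketch of that bookkeeping, mapping a measured conductance to the number of open 1-D channels:

```python
# Conductance quantization in a point contact: G = n * G0, with the
# conductance quantum G0 = 2 e^2 / h (exact in the 2019 SI).
E = 1.602176634e-19     # elementary charge, C
H = 6.62607015e-34      # Planck constant, J s
G0 = 2 * E**2 / H       # siemens, ~7.748e-5 S

def channels(conductance_S):
    """Nearest integer number of open 1-D channels for a measured conductance."""
    return round(conductance_S / G0)
```

As the constriction narrows to the last few atoms, the measured conductance steps down through n = 3, 2, 1 before the contact breaks, which is the staircase students observe.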

The partitioning of potassium between roedderite, K2Mg5Si12O30, and an Fe-FeS melt was investigated at temperatures about 40 °C above the Fe-FeS eutectic. Roedderite was considered a prime candidate for one of the potassium-bearing phases in the primitive earth because roedderite and merrihueite are the only two silicates containing essential potassium which have been identified in stony meteorites. Application of the results to a primitive chondritic earth is discussed, and it is concluded that extraction of most of the earth's potassium into the Fe-FeS core would occur under the conditions in the early earth.

The importance of laser-driven indirect drive for high energy density physics experiments was recognised at AWE in 1971. The two-beam 1 TW HELEN laser was procured to work in this area, and experiments with this system began in 1980. Early experiments in hohlraum coupling and performance scaling at both 1.06 μm and 0.53 μm will be described, together with experiments specifically designed to confirm the understanding of radiation wave propagation, hohlraum heating and hohlraum plasma filling. The use of indirect drive for early experiments to study spherical and cylindrical implosions, opacity, EOS, mix and planar radiation hydrodynamics will also be described.

Direct measurements of charged cosmic radiation with instruments in Low Earth Orbit (LEO), or flying on balloons above the atmosphere, require the identification of the incident particle, the measurement of its energy and possibly the determination of its sign-of-charge. The latter information can be provided by a magnetic spectrometer together with a measurement of momentum. However, magnetic deflection in space experiments is at present limited to values of the Maximum Detectable Rigidity (MDR) hardly exceeding a few TV. Advanced calorimetric techniques are, at present, the only way to measure charged and neutral radiation at higher energies in the multi-TeV range. Despite their mass limitation, calorimeters may achieve a large geometric factor and provide an adequate proton background rejection factor, taking advantage of a fine granularity and imaging capabilities. In this lecture, after a brief introduction on electromagnetic and hadronic calorimetry, an innovative approach to the design of a space-borne, large acceptance, homogeneous calorimeter for the detection of high energy cosmic rays will be described.
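
The MDR quoted above follows from the sagitta of a track in the spectrometer: s = 0.3 B L² / (8 R), with R the rigidity in GV, B in tesla and L in metres; the MDR is the rigidity at which the sagitta equals its measurement error. A sketch with illustrative instrument parameters:

```python
# Maximum Detectable Rigidity from the track sagitta in a magnetic
# spectrometer. Numbers below are illustrative, not a specific instrument.
def mdr_gv(b_tesla, l_metres, sigma_s_metres):
    """MDR in GV: rigidity at which the sagitta s = 0.3 B L^2 / (8 R)
    equals the sagitta measurement error sigma_s."""
    return 0.3 * b_tesla * l_metres**2 / (8.0 * sigma_s_metres)

# e.g. a 0.15 T field over a 1 m lever arm with 10-micron sagitta resolution
example = mdr_gv(0.15, 1.0, 10e-6)   # a few hundred GV
```

With realistic fields, lever arms and position resolutions the MDR lands in the hundreds-of-GV to few-TV range, which is the limitation that motivates calorimetric techniques at multi-TeV energies.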

Axions are expected to be produced in the Sun via the Primakoff process. They may be detected through the inverse process in the laboratory, under the influence of a strong magnetic field, giving rise to X-rays with energies in the range of a few keV. Such an axion detector is the CERN Axion Solar Telescope (CAST), collecting data since 2003. Results have been published, pushing the axion-photon coupling $g_{a\gamma}$ below the 10$^{-10}$ GeV$^{-1}$ limit at 95% CL, for axion masses less than 0.02 eV. This limit is nearly an order of magnitude lower than previous experimental limits and surpassed for the first time limits set from astrophysical arguments based on the energy-loss concept. The experiment is currently exploring axion masses in the range of 0.02 eV $< m_a$
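
The sensitivity of a helioscope like CAST rests on the coherent axion-to-photon conversion probability in a transverse magnetic field, P = (g B L / 2)² in natural units (vacuum, coherent limit with the oscillation form factor set to 1). A sketch with the standard unit conversions; the field and length values are CAST-like but illustrative:

```python
# Coherent axion-photon conversion probability in vacuum (qL << 1):
#   P = (g * B * L / 2)^2   in natural units.
# Unit conversions to natural units:
T_TO_EV2 = 195.35            # 1 tesla in eV^2
M_TO_INV_EV = 5.0677e6       # 1 metre in eV^-1
GEV_INV_TO_EV_INV = 1e-9     # 1 GeV^-1 in eV^-1

def conversion_probability(g_agamma_per_gev, b_tesla, l_metres):
    g = g_agamma_per_gev * GEV_INV_TO_EV_INV
    b = b_tesla * T_TO_EV2
    l = l_metres * M_TO_INV_EV
    return (g * b * l / 2.0) ** 2

# Illustrative magnet parameters (9 T over 9.26 m) at g = 1e-10 GeV^-1
p = conversion_probability(1e-10, 9.0, 9.26)   # of order 1e-17
```

A conversion probability this small is why the search needs the full solar axion flux, long exposures and very low X-ray backgrounds.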

In 2014, the practice of user experience design in academic libraries continues to evolve. It is typically applied in the context of interactions with digital interfaces. Some academic librarians are applying user experience approaches more broadly to design both environments and services with human-centered strategies. As the competition for the…

An investigation was conducted using a 1.2 MW RF induction heater facility to aid in developing the technology necessary for designing a self critical fissioning uranium plasma core reactor. Pure, high temperature uranium hexafluoride (UF6) was injected into an argon fluid mechanically confined, steady state, RF heated plasma while employing different exhaust systems and diagnostic techniques to simulate and investigate some potential characteristics of uranium plasma core nuclear reactors. The development of techniques and equipment for fluid mechanical confinement of RF heated uranium plasmas with a high density of uranium vapor within the plasma, while simultaneously minimizing deposition of uranium and uranium compounds on the test chamber peripheral wall, endwall surfaces, and primary exhaust ducts, is discussed. The material tests and handling techniques suitable for use with high temperature, high pressure, gaseous UF6 are described and the development of complementary diagnostic instrumentation and measurement techniques to characterize the uranium plasma, effluent exhaust gases, and residue deposited on the test chamber and exhaust system components is reported.

CALET is a space mission of the Japanese Aerospace Agency (JAXA) in collaboration with the Italian Space Agency (ASI) and NASA. The CALET instrument (CALorimetric Electron Telescope) is planned for a long exposure on the JEM-EF, an external platform of the Japanese Experiment Module KIBO, aboard the International Space Station (ISS). The main science objectives include high precision measurements of the inclusive electron (+positron) spectrum below 1 TeV and the exploration of the energy region above 1 TeV, where the shape of the high end of the spectrum might reveal the presence of nearby sources of acceleration. With an excellent energy resolution and low background contamination CALET will search for possible spectral signatures of dark matter with both electrons and gamma rays. It will also measure the high energy spectra and relative abundance of cosmic nuclei from proton to iron and detect trans-iron elements up to Z ~ 40. With a large exposure and high energy resolution, CALET will be able to verify and complement the observations of CREAM, PAMELA and AMS-02 on a possible deviation from a pure power-law of proton and He spectra in the region of a few hundred GeV and to extend the study to the multi-TeV region. CALET will also contribute to clarify the present experimental picture on the energy dependence of the boron/carbon ratio, below and above 1 TeV/n, thereby providing valuable information on cosmic-ray propagation in the galaxy. Gamma-ray transients will be studied with a dedicated Gamma-ray Burst Monitor (GBM).

On July 4, 2005 the Deep Impact experiment produced an impact event on the surface of Comet 9P/Tempel 1, using a 360 kg (primarily copper) impactor striking the comet at a velocity of 10.2 km/sec. In addition to images taken from the flyby spacecraft (500 km closest approach distance), images of the target were also returned from the impactor spacecraft, which show that the impactor hit the comet's surface at an oblique angle of roughly 60 degrees from the surface normal. The impactor struck the comet at an ideal location for viewing the cratering process by the flyby spacecraft, both during the 800-second post-impact imaging phase and during the "look-back" imaging phase (beginning ~45 minutes after impact). Within a fraction of a second of impact, an incandescent vapor plume emerged from the impact site, cooling rapidly and moving away from the comet at a speed of ~5 km/sec. This vapor emission was followed by the emergence and rapid growth of a prominent, conical ejecta plume, indicating crater excavation flow. This ejecta plume was more opaque (composed of finer material) than predicted, obscuring clear observations of the impact crater itself (extraction efforts continue). However, the behavior of the plume during both its growth and fallback stages is consistent with a gravity-scaled cratering event into a very weak (post-shock) target material. The expansion state of the plume during the look-back phase will also allow us to place constraints on the comet's gravity field (and by extension its mass and density).

The first representatives of star-shaped molecules having 3-alkylrhodanine (alkyl-Rh) electron-withdrawing groups, linked through bithiophene pi-spacer with electron-donating either triphenylamine (TPA) or tris(2-methoxyphenyl)amine (m-TPA) core were synthesized. The physical properties and photovol

The search for New Physics in final states with an energetic jet and large missing transverse momentum plays a major role in the physics program of the LHC experiments. This experimental signature is sensitive to a range of New Physics models, including different scenarios of supersymmetry, models that predict the existence of extra dimensions, and the production of Weakly Interacting Dark Matter candidates. Results based on the LHC Run-1 dataset, corresponding to 20.3 fb$^{-1}$, and first performance plots based on the data collected at a center-of-mass energy of 13 TeV with the ATLAS experiment at the LHC are presented.

As a result, washing alone could not achieve a satisfactory decontamination rate for uranium-contaminated gravels. Because uranium was judged to have penetrated into the gravels during the long oxidation process, we tried to increase the decontamination rate by fragmentizing the gravels into pieces and then washing them. Fragmentizing the gravels and washing them gave a satisfactory result, and for gravels with high uranium concentration the desired concentration could be obtained by breaking them further into even smaller pieces. However, decontamination using a soil-washing process is complicated and requires multiple washing steps, generating a correspondingly large amount of waste fluid. The increase in waste fluid leads in turn to an increase in by-products of the final disposal process, which is economically unfavorable. Furthermore, given that the decontamination rate during soil washing is 65%, gravel washing is expected to show a similar result; complete decontamination by soil washing alone is considered difficult. The grinding method is actually used in the primary decontamination step for radioactively contaminated concrete: the contaminated area of the concrete surface is ground away with decontamination equipment, which enables near-complete decontamination of relatively thinly contaminated surfaces. This research therefore verified the degree of decontamination achieved by applying the grinding method, compared it with the fragmentizing-and-washing method, and attempted to find a way to decontaminate uranium-contaminated gravels more effectively. To compare with the existing washing method, we conducted a decontamination experiment with a grinding method that grinds the gravel surface. As a

Pore system architecture is a key feature for understanding physical, biological and chemical processes in soils. The development of visualisation techniques, especially X-ray CT, during recent years has been useful in describing the complex relationships between soil architecture and soil functions. We believe that combining visualisation with physical models is a step further towards a better understanding of these relationships. We conducted a concept study using natural, artificial and 3D-printed soil cores. Eight natural soil cores (100 cm3) were sampled in a cultivated stagnic Luvisol at two depths (topsoil and subsoil), representing contrasting soil pore systems. Cylinders (100 cm3) were produced from plastic or from autoclaved aerated concrete (AAC). Holes of diameters 1.5 and 3 mm were drilled along the cylinder axis in the plastic cylinder and in one of the AAC cylinders. All natural and artificial cores were scanned in a micro X-ray CT scanner at a resolution of 35 µm. The reconstructed image of each soil core was printed with 3D multijet printing technology at a resolution of 29 µm. In some reconstructed digital volumes of the natural soil cores, pores of different sizes (equivalent diameters of 35, 70, 100, and 200 µm) were removed before additional 3D printing. Effective air-filled porosity, Darcian air permeability, and oxygen diffusion were measured on all natural, artificial and printed cores. The comparison of the natural and the artificial cores emphasized the difference in pore architecture between topsoil (sponge-like) and subsoil (dominated by large vertical macropores). This study showed the high potential of using printed soil cores for understanding soil pore functions. The results confirm the suitability of the Ball model, which partitions the pore system into arterial, marginal and remote pores, to describe effects of soil structure on gas transport.

The area of energetic particle (EP) physics in fusion research has been actively and extensively studied in recent decades. The progress achieved in advancing and understanding EP physics has been substantial since the last comprehensive review on this topic by W.W. Heidbrink and G.J. Sadler [1]. That review coincided with the start of deuterium-tritium (DT) experiments on the Tokamak Fusion Test Reactor (TFTR) and full-scale fusion alpha physics studies. Fusion research in recent years has been influenced by EP physics in many ways, including the limitations imposed by the "sea" of Alfven eigenmodes (AE), in particular by the toroidicity-induced AE (TAE) modes and reversed-shear Alfven eigenmodes (RSAE). In the present paper we attempt a broad review of EP physics progress in tokamaks and spherical tori (STs) since the first DT experiments on TFTR and JET (Joint European Torus), including helical/stellarator devices. Introductory discussions on the basic ingredients of EP physics, i.e. particle orbits in STs, fundamental diagnostic techniques for EPs and instabilities, wave-particle resonances and others, are given to aid understanding of the advanced topics of EP physics. At the end we cover important and interesting physics issues related to burning plasma experiments such as ITER (the International Thermonuclear Experimental Reactor).

In this dissertation, I investigate the effects of a grounded learning experience on college students' mental models of physics systems. The grounded learning experience consisted of a priming stage and an instruction stage, and within each stage, one of two different types of visuo-haptic representation was applied: visuo-gestural simulation…

This article narrates the lived experiences of a Physical Science teacher named Thobani (pseudonym) in implementing a new curriculum in South Africa. Drawing on the work of Husserl and Heidegger, the article describes the objects of direct experience in Thobani's consciousness about his life as a learner and teacher as revealed during an in-depth…

Recent years have seen an increase in scholarly attention to minority pupils and their experience of physical education (PE). UK research identifies specific challenges related to Muslim pupils' participation in PE. In Norway, little research has been undertaken on Muslim pupils' experiences in PE, something this paper hopes to redress in part. In…

The existence of gender-STEM (science, technology, engineering, and mathematics) stereotypes has been repeatedly documented. This article examines physics teachers' gender bias in grading and the influence of teaching experience in Switzerland, Austria, and Germany. In a 2 × 2 between-subjects design, with years of teaching experience included as…

Provides background information, procedures, and results of an undergraduate physical chemistry experiment on the polarization of absorption spectra of cyanine dyes in stretched polyvinyl alcohol films. The experiment gives a simple demonstration of the concept of linear dichroism and the validity of the TEM method used in the analyses. (JN)

This article presents the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering on Glass). This experiment uses a Tevatron-based neutrino beam to obtain over an order of magnitude higher statistics than presently available for the purely weak processes $\

As part of an effort to infuse our physical chemistry laboratory with biologically relevant, investigative experiments, we detail four integrated thermodynamic experiments that characterize the denaturation (or unfolding) and self-interaction of hen egg white lysozyme as a function of pH and ionic strength. Students first use Protein Explorer to…

We developed a modular pair of experiments for use in the undergraduate physical chemistry and biochemistry laboratories. Both experiments examine the thermodynamics of the binding of a small molecule, eosin Y, to the protein lysozyme. The assay for binding is the quenching of lysozyme fluorescence by eosin through resonant energy transfer. In…

Socio-scientific issues (SSI) are recommended by many science educators worldwide so that learners acquire first-hand experience applying what they learned in class. This study investigated the experiences of a teacher-researcher and students in using SSI in Physical Science, Second Semester, School Year 2012-2013. Latest and controversial news articles on…

Many survivors of child sexual abuse who engage in psychotherapy also experience physical health problems. This article summarizes the findings of a multiphased qualitative study about survivors' experiences in healthcare settings. The study informed the development of the "Handbook on Sensitive Practice for Health Care Practitioners: Lessons…

The article investigates the state of educational computer simulation and its modern features. It deals with psychological and didactic approaches to modeling in physics education and the school physical experiment. Possible classifications of computer models for distance-learning systems are considered, and ways of implementing virtual experiments in distance education in physics are proposed. The main types of virtual modeling and the most widely used computer support systems for teaching physics, with their possible applications in teaching secondary-school students, are characterized. The peculiarities of distance education of future physics teachers by means of electronic teaching methods, as a combination of integrated electronic educational resources and services, are highlighted.

Fluorescent-core microcapillaries (FCMs) present a robust basis for the application of optical whispering gallery modes toward refractometric sensing. An important question concerns whether these devices can be rendered insensitive to local temperature fluctuations, which may otherwise limit their refractometric detection limits, mainly as a result of thermorefractive effects. Here, we first use a standard cylindrical cavity formalism to develop the refractometric and thermally limited detection limits for the FCM structure. We then measure the thermal response of a real device with different analytes in the channel and compare the result to the theory. Good stability against temperature fluctuations was obtained for an ethanol solvent, with a near-zero observed thermal shift for the transverse magnetic modes. Similarly good results could in principle be obtained for any other solvent (e.g., water), if the thickness of the fluorescent layer can be sufficiently well controlled.
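The thermal response discussed above can be summarized by the standard first-order expression for the resonance-wavelength shift of a whispering gallery mode (a textbook relation, not a formula taken from this paper): for a mode of wavelength $\lambda$ in a cavity of radius $R$ and effective index $n_{\mathrm{eff}}$,

```latex
% Resonance condition for azimuthal mode number m:
%   m \lambda_m \approx 2 \pi R \, n_{\mathrm{eff}}
\frac{1}{\lambda}\frac{d\lambda}{dT}
  \;\approx\;
  \underbrace{\frac{1}{R}\frac{dR}{dT}}_{\text{thermal expansion}}
  \;+\;
  \underbrace{\frac{1}{n_{\mathrm{eff}}}\frac{dn_{\mathrm{eff}}}{dT}}_{\text{thermorefractive term}}
```

A near-zero thermal shift, as reported for the transverse magnetic modes with ethanol in the channel, corresponds to the two terms approximately cancelling: $dn/dT$ is typically negative for solvents, while the expansion term is positive.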

This article presents the author's thoughts on work in particle physics from his own perspective. It is not a summary of his work on the tau lepton, but rather a look at what makes good science, experimental and theoretical, drawn from his experiences in the field. The section titles give a good summary of the topics the author chooses to touch upon: the state of elementary particle physics; getting good ideas in experimental science; a difficult field; experiments and experimenting; 10% of the money and 30% of the time; the dictatorship of theory; technological dreams; last words.

Control-based continuation is a technique for tracking the solutions and bifurcations of nonlinear experiments. The idea is to apply the method of numerical continuation to a feedback-controlled physical experiment such that the control becomes non-invasive. Since in an experiment it is not (generally) possible to set the state of the system directly, the control target becomes a proxy for the state. Control-based continuation enables the systematic investigation of the bifurcation structure of a physical system, much as if it were a numerical model. However, stability information (and hence bifurcation detection and classification) is not readily available due to the presence of stabilising feedback control. This paper uses a periodic auto-regressive model with exogenous inputs (ARX) to approximate the time-varying linearisation of the experiment around a particular periodic orbit, thus providing the missing stability information. The method is demonstrated using a physical nonlinear tuned mass damper.
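The idea of recovering stability from a fitted ARX model can be illustrated with a minimal sketch. The paper fits a periodic (time-varying) ARX model; for simplicity the sketch below uses a time-invariant ARX fitted by ordinary least squares, with stability read off from the eigenvalues of the companion matrix of the autoregressive part. All function names and model orders are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch: fit an ARX model
#   y[t] = sum_i a_i * y[t-i] + sum_j b_j * u[t-j] + e[t]
# by ordinary least squares, then read off linear stability from the
# eigenvalues of the companion matrix of the AR part.

def fit_arx(y, u, na=2, nb=2):
    """Least-squares ARX fit; returns (a, b) coefficient arrays."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        # Regressor: [y[t-1]..y[t-na], u[t-1]..u[t-nb]]
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

def arx_stable(a):
    """Stable iff all companion-matrix eigenvalues lie inside the unit circle."""
    comp = np.diag(np.ones(len(a) - 1), -1)
    comp[0, :] = a
    return bool(np.all(np.abs(np.linalg.eigvals(comp)) < 1.0))
```

In the experimental setting, `y` would be the measured response and `u` the (small) control perturbation; the periodic version of the model assigns separate coefficients to each phase of the orbit, but the stability test is of the same companion-matrix form.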

This report contains the results of reactor design and performance for conversion of the University of Missouri Research Reactor (MURR) from the use of highly-enriched uranium (HEU) fuel to the use of low-enriched uranium (LEU) fuel. The analyses were performed by staff members of the Global Threat Reduction Initiative (GTRI) Reactor Conversion Program at the Argonne National Laboratory (ANL) and the MURR Facility. The core conversion to LEU is being performed with financial support of the U. S. government.

Numerous geophysical and geochemical studies have suggested the existence of a small metallic lunar core, but the composition of that core is not known. Knowledge of the composition can have a large impact on the thermal evolution of the core, its possible early dynamo creation, and its overall size and fraction of solid and liquid. Thermal models predict that the current temperature at the core-mantle boundary of the Moon is near 1650 K. Re-evaluation of Apollo seismic data has highlighted the need for new data over a broader range of bulk core compositions in the P-T range of the lunar core. Geochemical measurements have suggested a more volatile-rich Moon than previously thought. And GRAIL mission data may allow much better constraints on the physical nature of the lunar core. All of these factors have led us to undertake new phase-equilibrium experimental studies in the Fe-Ni-S-C-Si system in the relevant P-T range of the lunar core that will help constrain the composition of the Moon's core.

The purpose of the current study was to describe the maltreatment experiences of a sample of urban youths identified as physically abused using the Maltreatment Case Record Abstraction Instrument (MCRAI). The sample (n=303) of 9-12 year old youths was recruited from active child protective services (CPS) cases in 2002-2005, and five years of child protective service records were reviewed. The demographic and maltreatment experiences of MCRAI-identified youths with physical abuse were compared to maltreated youths who were not physically abused and to youths who were identified as physically abused by CPS when they entered this longitudinal study. T-tests and chi-square tests were used to compare, by gender, the demographics and maltreatment experiences of the youths MCRAI-identified as physically abused with those MCRAI-identified as non-physically abused but maltreated. Of the total sample, 156 (51%) were identified by the MCRAI as physically abused, and 96.8% of these youths also experienced other types of maltreatment, whereas youths with an initial CPS identification of physical abuse showed little co-occurrence (37.7%) with other forms of maltreatment. The MCRAI-identified physically abused youths had a significantly higher mean number of CPS reports and a higher mean number of incidents of maltreatment than MCRAI-identified non-physically maltreated youths. Lifeline plots of case record history from the time of first report to CPS to entry into the study found substantial individual variability in maltreatment experiences for both boys and girls. Thus, obtaining maltreatment information from a single report vastly underestimates the prevalence of physical abuse and the co-occurrence of other maltreatment types.

Objective Delusional-like experiences (DLE) are prevalent in the community. Recent community based studies have found that DLE are more common in those with depression and anxiety disorders, and in those with subclinical symptoms of depression and anxiety. Chronic physical disorders are associated with comorbid depression and anxiety; however, there is a lack of evidence about the association of DLE with common physical conditions. The aim of this study was to explore associations between common physical disorders and DLE using a large population sample. Methods Subjects were drawn from the Australian National Survey of Mental Health and Wellbeing 2007, a national household survey of 8841 residents aged between 16 and 85 years. The presence of DLE, selected common physical disorders and symptoms were assessed using a modified World Mental Health Composite International Diagnostic Interview (CIDI) schedule. We examined the relationship between DLE and physical health-related variables using logistic regression, with adjustments for potential confounding factors. Results Of the 8771 respondents, 776 (8.4%) positively endorsed one or more DLE. Of the six physical disorders examined, only diabetes and arthritis were significantly associated with the endorsement of DLE. Of the seven broad physical symptoms explored, only hearing problems were consistently associated with DLE. Conclusion Delusional-like experiences are common in the Australian community, and are associated with selected chronic physical disorders and with impaired hearing. The direction of causality between these variables warrants closer research scrutiny. PMID:21541344

This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores and to quantitatively investigate the evolution of such structure through the star and planet formation process. Progress toward these goals during the second year of this grant is discussed.

This grant funds a research program to use infrared extinction measurements to probe the detailed structure of dark molecular cloud cores and investigate the physical conditions which give rise to star and planet formation. The goals of this program are to acquire, reduce and analyze deep infrared and molecular-line observations of a carefully selected sample of nearby dark clouds in order to determine the detailed initial conditions for star formation from quantitative measurements of the internal structure of starless cloud cores and to quantitatively investigate the evolution of such structure through the star and planet formation process.

With the advent of space experiments it was demonstrated that cosmic sources emit energy across practically the whole electromagnetic spectrum via different physical processes. Several physical quantities bear witness to these processes, which usually are not stationary; those observable quantities are therefore generally variable. Simultaneous multifrequency observations are thus strictly necessary in order to understand the actual behaviour of cosmic sources. Space experiments have opened practically all the electromagnetic windows on the Universe. A discussion of the most important results coming from multifrequency photonic astrophysics experiments will provide new inputs for advancing our knowledge of the physics, very often in its most extreme conditions. A multitude of high-quality data across practically the whole electromagnetic spectrum became available to the scientific community within a few years of the beginning of the Space Era. With these data we are attempting to explain the physics governing the Universe and, moreover, its origin, which has been and still is a matter of the greatest curiosity for humanity. In this paper we try to describe the latest steps of the investigation born with the advent of space experiments, to remark upon the most important results and the open problems that still exist, and to comment upon the perspectives we can reasonably expect. Once the idea of this paper was agreed upon, we faced the problem of how to plan the exposition. Indeed, the results can be presented in different ways, following several points of view, according to: - a division into diffuse and discrete sources; - different classes of cosmic sources; - different spectral ranges, which implies in turn a sub-classification in accordance with different observational techniques; - different physical emission mechanisms of electromagnetic radiation; - different vehicles used for launching the experiments (aircraft, balloons, rockets

The availability of data and methods for the design of a metallic-fueled LMFBR is examined using the experimental results of FCA assembly XVI-1. The experiments included criticality and reactivity coefficients such as Doppler, sodium void, fuel shifting and fuel expansion. Reaction rate ratios, sample worths and control rod worths were also measured. The analysis was made using three-dimensional diffusion calculations and JENDL-2 cross sections. Predictions of the assembly XVI-1 reactor physics parameters agree reasonably well with the measured values, but for some reactivity coefficients, such as Doppler, large-zone sodium void and fuel shifting, further improvement of the calculation method is needed. (author)

Experiments in particle physics produce enormous quantities of data that must be analyzed and interpreted by teams of physicists. This analysis is often exploratory, where scientists are unable to enumerate the possible types of signal prior to performing the experiment. Thus, tools for summarizing, clustering, visualizing and classifying high-dimensional data are essential. In this work, we show that meaningful physical content can be revealed by transforming the raw data into a learned high-level representation using deep neural networks, with measurements taken at the Daya Bay Neutrino Experiment as a case study. We further show how convolutional deep neural networks can provide an effective classification filter with greater than 97% accuracy across different classes of physics events, significantly better than other machine learning approaches.
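As a rough illustration of the kind of classifier described above (not the actual Daya Bay network, whose architecture and input format are not given here), the forward pass of a minimal convolutional classifier over a single-channel detector image might look as follows; all layer sizes and names are assumptions for the sketch.

```python
import numpy as np

# Minimal convolutional classifier sketch: one bank of 2-D kernels, ReLU,
# global average pooling, then a linear layer with softmax over classes.
# The 8x24 "PMT map" input shape is an illustrative assumption.

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def classify(img, kernels, weights, bias):
    """Return class probabilities for one image.

    One scalar feature per kernel: ReLU, then global average pooling;
    the pooled features are the 'learned high-level representation'.
    """
    feats = np.array([np.maximum(conv2d(img, k), 0.0).mean() for k in kernels])
    return softmax(weights @ feats + bias)
```

In practice the kernels, weights, and bias would be learned by gradient descent in a deep-learning framework; the sketch only shows why the pooled convolutional features form a compact representation on which a simple linear classifier can operate.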

SNiPER (Software for Non-collider Physics ExpeRiments) has been developed based on common requirements from both nuclear reactor neutrino and cosmic ray experiments. The design and implementation of SNiPER are described in these proceedings. Compared to the existing offline software frameworks in the high energy physics domain, the design of SNiPER is more focused on execution efficiency and flexibility. SNiPER has an open structure: user applications are executed as plug-ins on top of it. The framework contains a compact kernel for software-component management, event execution control, job configuration, common services, etc. Some specific features are attractive to non-collider physics experiments.