Sample records for masterless charge-control scheme

A new approach to masterless, distributed, digital charge control for batteries that require charge control has been developed and implemented. This approach is needed for battery chemistries that require cell-level charge control for safety, and it is characterized by the use of one controller per cell, providing redundant sensors for critical measurements such as voltage, temperature, and current. The charge controllers in a given battery interact in a masterless fashion for cell balancing, charge control, and state-of-charge estimation, making the battery system fault-tolerant. The single-point failure associated with using a single charge controller (CC) was eliminated by implementing one CC per cell and linking them via an isolated communication bus [e.g., a controller area network (CAN)] in a masterless fashion, so that the failure of one or more CCs does not affect the remaining functional CCs. Each microcontroller-based CC digitizes the cell voltage (V_cell), two cell temperatures, and the voltage across the bypass switch (V); the latter is used together with V_cell to estimate the bypass current for a given bypass resistor. In addition, CC1 digitizes the battery current (I1) and battery voltage (V_batt), and CC5 digitizes a second battery current (I2). As a result, redundant readings are obtained for temperature, battery current, and battery voltage, the latter through summation of the individual cell voltages, since each CC knows the voltages of the other cells. For cell balancing, each CC periodically and independently transmits its cell voltage and stores the received cell voltages of the other cells in an array, with the position in the array determined by the identifier (ID) of the transmitting CC. After eight cell-voltage receptions, the array is checked to see whether one or more cells did not transmit. If one or more transmissions are missing, the missing cell(s) is (are) eliminated from cell balancing.
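The per-controller bookkeeping described above can be sketched as follows. This is a hypothetical reduction, not the flight algorithm: each charge controller (CC) stores received cell voltages in an array indexed by the transmitter's CAN ID, then excludes any cell that failed to transmit before making its balancing decision; the 15 mV deadband rule and all voltages are invented for illustration.

```python
# Hypothetical sketch of the masterless cell-balancing bookkeeping:
# one array slot per CC, indexed by CAN ID; missing transmitters are
# excluded from the balancing decision.

N_CELLS = 8
MISSING = None            # slot value for a cell that never transmitted

def record_voltage(array, can_id, v_cell):
    """Store a received cell voltage at the position given by the CAN ID."""
    array[can_id] = v_cell

def bypass_decisions(array, deadband=0.015):
    """Decide, per cell, whether to bypass (balance down), ignoring
    cells whose transmissions are missing. The deadband is an assumed
    balancing rule, not taken from the abstract."""
    present = [v for v in array if v is not MISSING]
    if not present:
        return [False] * len(array)
    v_min = min(present)
    return [v is not MISSING and (v - v_min) > deadband for v in array]

# One reporting round in which cell 3 never transmits.
voltages = [MISSING] * N_CELLS
for cid, v in [(0, 3.92), (1, 3.98), (2, 3.91), (4, 4.00),
               (5, 3.91), (6, 3.93), (7, 3.97)]:
    record_voltage(voltages, cid, v)

bypass = bypass_decisions(voltages)
```

Here cell 3 is silently dropped from the decision, mirroring the abstract's elimination of non-transmitting cells.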

Transparent conductive thin films of indium oxide and indium-tin oxide were evaluated for their ability to control charge buildup on satellite materials. Both oxide coatings were evaluated for their uniformity, stability, reproducibility, and characteristics on various substrate materials such as FEP Teflon, Kapton, and glass. The process development toward optimization and characterization of these thin semiconductor oxide coatings, and their evaluation at large sizes for qualification for use on thermal-control satellite materials, is described. The materials have been characterized in a multiple-energy electron plasma environment and at low temperatures. All radiation measurements of the coatings under simulated substorm conditions have exhibited the characteristics of stable charge control. Measurements of surface potential during and after irradiation by electrons of up to 30 keV and by ionizing gamma radiation show an effective, stable grounding surface.

Life-test data for lithium-ion battery cells are critical for establishing their performance capabilities for NASA missions and exploration goals. Lithium-ion cells have the potential to replace rechargeable alkaline cells in aerospace applications, but they require a more complex charging scheme than is typically needed for alkaline cells. To address these requirements in our Lithium-Ion Cell Test Verification Program, a Lithium-Ion Cell Charge Control Unit was developed by NASA Glenn Research Center (GRC). This unit gives researchers the ability to test cells together as a pack while allowing each cell to charge individually. This allows the inherent cell-to-cell variations in a series string of cells to be addressed and results in a substantial reduction in test costs compared to individual cell testing. The Naval Surface Warfare Center at Crane, Indiana, developed a power-reduction scheme that works in conjunction with the Lithium-Ion Cell Charge Control Unit. This scheme minimizes the power dissipated by the circuitry in order to prolong circuit life and improve its reliability.

NASA and the Columbia Scientific Balloon Facility are interested in updating the design of the charge controller on their long-duration balloon (LDB) so that the charge controllers can be interfaced directly via RS-232 serial communication with ground-test computers and the balloon's flight computer, without the need for an external electronics stack. The design involves creating a board that interfaces with the existing boards in the charge controller, in order to receive telemetry from and send commands to those boards, and that interfaces with a computer through serial communication. The inputs to the board are digital status inputs indicating, for example, whether the photovoltaic panels are connected or disconnected, and analog inputs carrying information such as the battery voltage and temperature. The outputs of the board are 100-ms command pulses that switch relays to perform actions such as connecting the photovoltaic panels. The main component of this design is a PIC microcontroller, which translates the outputs of the existing charge controller into serial data when interrogated by a ground-test or flight computer. Other components in the design include an AD7888 12-bit analog-to-digital converter, a MAX3232 serial transceiver, and various other ICs, capacitors, resistors, and connectors.

A charge-control unit was developed as part of a program to validate Li-ion cells packaged together in batteries for aerospace use. The lithium-ion cell charge-control unit will be useful to anyone who tests battery cells for aerospace and non-aerospace uses and to anyone who manufactures battery test equipment. This technology reduces the number of costly power supplies and independent channels needed for test programs in which multiple cells are tested. Battery-test-equipment manufacturers can integrate the technology into their products as a method of managing the charging of multiple cells in series. The unit manages the complex scheme required for charging Li-ion cells electrically connected in series. It makes it possible to evaluate cells together as a pack using a single primary test channel, while also making it possible to charge each cell individually. Hence, inherent cell-to-cell variations in a series string of cells can be addressed, and yet the cost of testing is reduced substantially below that of testing each cell as a separate entity. The unit consists of electronic circuits and thermal-management devices housed in a common package. It also includes isolated annunciators that signal when the cells are being actively bypassed. These annunciators can be used by external charge managers or can be connected in series to signal that all cells have reached maximum charge. The charge-control circuitry for each cell amounts to regulator circuitry and is powered by that cell, eliminating the need for an external power source or controller. A 110-VAC source of electricity is required to power the thermal-management portion of the unit. A small direct-current source can be used to supply power for an annunciator signal, if desired.

The effect of charge control on the performance of nickel-cadmium batteries is very important. The results of three tests performed in the Battery Test Centre of ESTEC are described. Two techniques were employed: (1) the tapering method, well known for space applications, and (2) the temperature derivative technique (TDT) developed by ESTEC. In addition, a comparative study was made between the behavior of a group of three batteries charged and discharged in parallel and that of an identical group discharged in parallel but charged individually. An approach to evolution laws for the main electrical characteristics of the cells is presented.

Battery systems coupled to photovoltaic (PV) modules fulfill one major function: they locally decouple PV generation from the consumption of electrical power, leading to two major effects. First, they reduce the grid load, especially at peak times, and thereby reduce the need for network expansion. Second, they increase self-consumption in households and thereby help reduce energy expenses. For the management of PV batteries, charge-control strategies need to be developed that serve the goals of both the distribution system operators and the local power producers. In this work, optimal control strategies for various optimization goals are developed on the basis of predicted household load and PV generation profiles using the method of dynamic programming. The resulting charge curves are compared and essential differences are discussed. Finally, a multi-objective optimization shows that charge-control strategies can be derived that take all optimization goals into account.
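The dynamic-programming formulation described above can be sketched in miniature. All profiles, the price signal, and the battery parameters below are invented for illustration, and the single objective here is simply to minimize the cost of energy drawn from the grid:

```python
# A minimal DP sketch of PV-battery charge scheduling: state = discrete
# battery SOC, decision = SOC at the next hour, stage cost = priced
# grid import. Invented toy numbers, not the paper's model.

def optimal_schedule(pv, load, price, cap=4, step=1):
    """DP over discrete battery SOC levels 0..cap in units of `step`.

    Grid import at hour t is load[t] - pv[t] + charge (negative values
    mean export, valued at zero). Returns (min_cost, soc_trajectory).
    """
    T = len(pv)
    n = cap // step + 1
    INF = float("inf")
    cost = [INF] * n          # cost[s]: cheapest way to reach SOC s*step
    cost[0] = 0.0             # battery starts empty
    parents = []
    for t in range(T):
        new = [INF] * n
        par = [0] * n
        for s in range(n):
            if cost[s] == INF:
                continue
            for s2 in range(n):                  # candidate next SOC
                charge = (s2 - s) * step         # battery energy change
                grid = load[t] - pv[t] + charge
                c = cost[s] + price[t] * max(grid, 0.0)
                if c < new[s2]:
                    new[s2], par[s2] = c, s
        cost = new
        parents.append(par)
    s = min(range(n), key=lambda i: cost[i])     # cheapest final SOC
    best = cost[s]
    traj = [s]
    for par in reversed(parents):                # backtrack the policy
        s = par[s]
        traj.append(s)
    return best, [x * step for x in reversed(traj)]

pv    = [0, 3, 3, 0]    # midday PV surplus
load  = [1, 1, 1, 2]
price = [1, 1, 1, 3]    # evening peak tariff
cost_val, soc = optimal_schedule(pv, load, price)
```

With these toy profiles the optimizer charges two units during the second surplus hour and discharges them at the evening peak, so only the first hour's unit of load is purchased from the grid.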

Battery charge control for orbiting spacecraft with mission durations of three to ten years is a critical design feature and is discussed here. Starting in 1974, the General Electric Space Systems Division designed, manufactured, and tested battery systems for six different space programs. Three of these are geosynchronous missions, two are medium-altitude missions, and one is a near-earth mission. All six power subsystems contain nickel-cadmium batteries, which are charged using a temperature-compensated voltage limit. This charging method was found to be successful in extending the life of nickel-cadmium batteries in all three types of earth orbit. Test data and flight data are presented for each type of orbit.
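A temperature-compensated voltage limit ("V/T" charging) of the kind described above can be illustrated as follows. The reference voltage and the negative temperature coefficient are typical-of-NiCd guesses, not General Electric flight values:

```python
# Illustrative V/T charging: the per-cell voltage limit is lowered as
# the battery warms, and the charge current tapers as the cell
# approaches that limit. All numbers are assumed, not flight values.

def vt_limit(temp_c, v_ref=1.45, coeff_v_per_c=-0.0023, t_ref=20.0):
    """Per-cell charge-voltage limit (V) at battery temperature temp_c."""
    return v_ref + coeff_v_per_c * (temp_c - t_ref)

def charge_current(v_cell, temp_c, i_max=10.0, gain=200.0):
    """Taper the charge current to zero as the cell approaches its
    temperature-compensated limit."""
    headroom = vt_limit(temp_c) - v_cell
    return max(0.0, min(i_max, gain * headroom))
```

The negative coefficient protects a warm battery: a hotter cell reaches the (lower) limit sooner, cutting back overcharge and heat generation.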

A dc model for MODFETs accounting for two-dimensional effects is proposed. In this model, charge control is realized by solving the two-dimensional Poisson equation in the depleted AlGaAs region. The transport picture used for the two-dimensional electron gas (2-DEG) at the AlGaAs/GaAs heterojunction relies on the quasi-Fermi level together with a field-dependent mobility and therefore includes 2-DEG diffusion effects. The approach reduces the analysis to a single integral equation. I-V curves that provide a good fit to the reported experimental data are obtained using a smooth velocity-field curve. The channel voltage, 2-DEG concentration, parallel electric field, and drift velocity along the channel are given in this study and provide a clear picture of current saturation. The model is consistent with the approximate two-region saturation picture but provides a smoother transition.

We consider a single Josephson junction in the presence of a time-varying gate charge and examine the nonequilibrium work done by the charge control in the framework of fluctuation theorems. Assuming first a high-quality junction with negligible Ohmic current, we obtain the probability distribution functions of the work and confirm the Crooks relation, which yields an estimate of the free-energy change ΔF=0. The reliability of ΔF estimated from the Jarzynski equality depends crucially on the protocol parameters, while Bennett's acceptance-ratio method consistently yields ΔF=0. We examine the behavior of the work average and point out its relation to the heat and entropy production associated with the circuit control. Finally, considering finite tunnel resistance, we discuss dissipation effects on the work statistics. PMID:24032811
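For reference, the two fluctuation relations invoked above take their standard textbook forms (β the inverse temperature, W the work, ΔF the free-energy change); these are general results, not findings of the paper:

```latex
% Crooks relation between forward and reverse work distributions
\frac{P_F(W)}{P_R(-W)} = e^{\beta\,(W - \Delta F)}

% Jarzynski equality (its integrated consequence)
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
```

With the estimated ΔF = 0, the Crooks relation reduces to P_F(W)/P_R(-W) = e^{βW}.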

Test masses inside the LISA gravitational reference sensor must maintain almost pure geodesic motion for gravitational waves to be successfully detected. LISA requires residual test-mass accelerations below 3 fm/s²/√Hz at all frequencies between 0.1 and 3 mHz. One of the well-known noise sources is associated with the charges on the test masses, which couple to stray electrical potentials and external electromagnetic fields. LISA Pathfinder will use Hg-discharge lamps emitting mostly around 254 nm to discharge the test masses via photoemission in its 2015/16 flight. A future LISA mission, launched around 2030, will likely replace the lamps with newer UV-LEDs. Presented here is a preliminary study of the effectiveness of charge control using latest-generation UV-LEDs, which produce light at 240 nm with energy above the work function of pure Au. Their lower mass, better power efficiency, and small size make them an ideal replacement for Hg lamps.

The test masses inside the LISA gravitational reference sensors (GRS) must maintain almost pure geodesic motion for gravitational waves to be successfully detected. The residual accelerations have to stay below 3 fm/s²/√Hz at all frequencies between 0.1 and 3 mHz. One of the well-known noise sources is associated with the charges on the test masses, which couple to stray electrical potentials and external electromagnetic fields. LISA Pathfinder (LPF) will use Hg-discharge lamps emitting mostly around 253 nm to discharge the test masses via photoemission in its 2015/16 flight. A future LISA mission, launched around 2030, will likely replace the lamps with newer UV-LEDs. UV-LEDs have a lower mass and a better power efficiency, and are smaller than their Hg counterparts. Furthermore, the latest generation produces light at 240 nm, with energy well above the work function of pure gold. I will describe a preliminary design for effective charge control through the photoelectric effect using these LEDs. The effectiveness of this method is verified by quantum efficiency (QE) measurements, which relate the number of electrons emitted to the number of photons incident on the Au test-mass surface. This presentation addresses our initial results and future plans, which include implementation and testing in the UF torsion pendulum and space qualification in a small satellite mission launching in the summer of 2014, through a collaboration with Stanford, KACST, and NASA Ames Research Center.
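The quantum-efficiency figure used above can be computed directly from a photocurrent and an optical power: QE = (I/e) / (Pλ/(hc)). The photocurrent and power below are invented numbers for a 240 nm source, chosen only to land in the ~1e-5 range typical of gold photoemission:

```python
# QE = electrons emitted per second / photons incident per second.
# The 2 nA / 1 mW operating point is an illustrative assumption.

E_CHARGE = 1.602176634e-19    # elementary charge, C
H_PLANCK = 6.62607015e-34     # Planck constant, J*s
C_LIGHT  = 299792458.0        # speed of light, m/s

def quantum_efficiency(photocurrent_a, optical_power_w, wavelength_m):
    electrons_per_s = photocurrent_a / E_CHARGE
    photons_per_s = optical_power_w * wavelength_m / (H_PLANCK * C_LIGHT)
    return electrons_per_s / photons_per_s

qe = quantum_efficiency(2.0e-9, 1.0e-3, 240e-9)   # 2 nA from 1 mW
```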

An amp-hour-counting battery charge-control algorithm has been defined and tested using the Digital Solar Technologies MPR-9400 microprocessor-based photovoltaic hybrid charge controller. This work included extensive laboratory and field testing of the charge algorithm on vented lead-antimony and valve-regulated lead-acid batteries. The test results have shown that, with proper setup, amp-hour-counting charge control is more effective than conventional voltage-regulated sub-array shedding in returning the lead-acid battery to a high state of charge.
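The core of amp-hour counting can be rendered in a few lines: discharge amp-hours are counted out, and charging stays enabled until slightly more than that amount has been returned. The 1.05 recharge factor below is an assumed overcharge allowance, not the MPR-9400's setting:

```python
# Toy amp-hour-counting regulator: charge until ~105% of the
# discharged amp-hours have been returned to the battery.

def charge_enabled(ah_in, ah_out, recharge_factor=1.05):
    """Permit charging until recharge_factor times the discharged
    amp-hours have been returned."""
    return ah_in < recharge_factor * ah_out

# Discharge 20 Ah, then return charge in 1 Ah increments until the
# regulator opens the charge circuit.
ah_out, ah_in = 20.0, 0.0
while charge_enabled(ah_in, ah_out):
    ah_in += 1.0
```

Unlike a pure voltage threshold, the counter tracks the actual energy deficit, which is why it returns the battery to a higher state of charge.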

Simple charge controllers connect photovoltaic modules directly to the battery bank, resulting in a significant power loss if the battery bank voltage differs greatly from the PV maximum power point (MPP) voltage. Recent modeling work at AES has shown that dc-dc converter-type MPP-tracking charge controllers can deliver more than 30% more energy from PV modules to the battery when the PV modules are cool and the battery state of charge is low; this is typically both the worst-case condition (i.e., winter) and the design condition that determines the PV array size. Economic modeling based on typical telecom-system installed costs shows benefits of more than $3/Wp for MPPT over conventional charge controllers in this application, a value that greatly exceeds the additional cost of the dc-dc converter.
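One common way an MPP-tracking controller finds the operating point is the perturb-and-observe hill climb, sketched below against a purely illustrative quadratic panel curve (the abstract does not say which tracking method AES modeled):

```python
# Perturb-and-observe MPPT on a toy P(V) curve with its maximum power
# point at 17 V. The panel model and step size are assumptions.

def panel_power(v):
    """Toy PV curve: peak power 120 W at 17 V."""
    return max(0.0, 120.0 - (v - 17.0) ** 2)

def track_mpp(v=12.0, dv=0.5, iterations=60):
    """Keep perturbing the operating voltage in the direction that
    increased power; reverse when power drops."""
    p = panel_power(v)
    for _ in range(iterations):
        v_new = v + dv
        p_new = panel_power(v_new)
        if p_new < p:
            dv = -dv          # stepped past the peak; reverse direction
        v, p = v_new, p_new
    return v

v_op = track_mpp()
```

The tracker settles into a small oscillation around the MPP; a dc-dc converter then holds the panel at this voltage while delivering charge at the battery's voltage.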

A control regime for NiCad and lead-acid batteries that can evaluate the energy deliverable by the battery at any time is reported. The use of battery cell impedance, state of charge, incremental slope tests, a charge-control regime, a discharge monitor, and a charge-control circuit to monitor the battery is discussed. It is shown how the battery's state of readiness can be established with reasonable accuracy for both types of battery, and how the control regime can be continually optimized for best performance.

Battery charging-control methods, electric vehicle charging methods, battery charging apparatuses, and rechargeable battery systems. According to one aspect, a battery charging-control method includes accessing information regarding a presence of at least one of a surplus and a deficiency of electrical energy upon an electrical power distribution system at a plurality of different moments in time, and, using the information, controlling an adjustment of an amount of the electrical energy provided from the electrical power distribution system to a rechargeable battery to charge the rechargeable battery.
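A minimal rendering of the control step claimed above is to scale the charging power with the reported grid surplus (or deficiency) at each moment. The linear rule and all limits below are illustrative assumptions, not the patented method:

```python
# Scale charging power with reported grid surplus/deficiency.
# Base power, cap, and gain are invented for illustration.

def charge_power(surplus_kw, p_base=3.0, p_max=7.0, gain=0.5):
    """Charging power in kW: surplus_kw > 0 means the grid has spare
    energy (charge faster); surplus_kw < 0 means a deficiency."""
    return max(0.0, min(p_max, p_base + gain * surplus_kw))

# Power drawn at four reported surplus/deficiency readings.
schedule = [charge_power(s) for s in [4.0, 0.0, -6.0, 10.0]]
```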

Thermal-field emitters of lanthanum (or perhaps cerium) hexaboride (LaB6), with temperature variability up to about 1500 K, are suggested for spacecraft charging control. Such emitters operate at much lower voltages with considerably more control and add plasma-diagnostic versatility. These gains should outweigh the additional complexity of providing heat for the LaB6 thermal-field emitter.

In connection with existing theoretical concepts, it was difficult to explain the negative potentials found in sunlight, first on Applied Technology Satellite-5 (ATS-5) and then on ATS-6. The problem became important when an association between spacecraft charging and anomalies in spacecraft behavior was observed. A study of daylight charging phenomena on ATS-6 was conducted, and an investigation was performed to determine effective methods of charge control, taking into account the feasibility of utilizing the ATS-5 and ATS-6 ion engines as current sources. In the present paper, data and analysis for the ion-engine experiments on ATS-5 and ATS-6 are presented. It is shown that electron emission from a satellite with insulating surfaces is not an effective method of charge control, because the resulting increase in differential charging limits the effectiveness of electron emitters and increases the possibility of electrostatic discharges between surfaces at different potentials.

Under the sponsorship of the Department of Energy, Office of Utility Technologies, the Battery Analysis and Evaluation Department and the Photovoltaic System Assistance Center of Sandia National Laboratories (SNL) initiated a U.S. industry-wide PV Energy Storage System Survey. Arizona State University (ASU) was contracted by SNL in June 1995 to conduct the survey. The survey included three separate segments tailored to: (a) PV system integrators, (b) battery manufacturers, and (c) PV charge-controller manufacturers. The overall purpose of the survey was to: (a) quantify the market for batteries shipped with (or for) PV systems in 1995, (b) quantify the PV market segments by battery type and application for PV batteries, (c) characterize and quantify the charge controllers used in PV systems, (d) characterize the operating environment for energy storage components in PV systems, and (e) estimate the PV battery market for the year 2000. All three segments of the survey were mailed in January 1996. This report discusses the purpose, methodology, results, and conclusions of the survey.

In this paper, a carbon-nanotube-based charge-controlled speed-regulating nanoclutch (CNT-CC-SRNC), composed of an inner carbon nanotube (CNT), an outer CNT, and water confined between the two CNT walls, is proposed, exploiting the electrowetting-induced improvement of the friction at the interfaces between the water and the CNT walls. With the inner CNT as the driving axle, molecular dynamics simulation results demonstrate that the CNT-CC-SRNC is in the disengaged state when the CNTs are uncharged, whereas the water confined between the two charged CNT walls can transmit torque from the inner tube to the outer tube. Importantly, the proposed CNT-CC-SRNC can perform a stepless speed-regulating function by changing the magnitude of the charge assigned to the CNT atoms.

An updated version of the American Society for Testing and Materials (ASTM) guide E 1523 on charge-control and charge-referencing techniques in x-ray photoelectron spectroscopy has been released by ASTM [Annual Book of ASTM Standards: Surface Analysis (American Society for Testing and Materials, West Conshohocken, PA, 2004), Vol. 03.06]. The guide is meant to acquaint x-ray photoelectron spectroscopy (XPS) users with the various charge-control and charge-referencing techniques that are and have been used in the acquisition and interpretation of XPS data from the surfaces of insulating specimens. The current guide has been expanded to include new references as well as recommendations for reporting information on charge control and charge referencing. The previous version of the document was published in 1997 [D. R. Baer and K. D. Bomben, J. Vac. Sci. Technol. A 16, 754 (1998)].

This report presents the results of a development effort to design, test and begin production of a new class of small photovoltaic (PV) charge controllers. Sandia National Laboratories provided technical support, test data and financial support through a Balance-of-System Development contract. One of the objectives of the development was to increase user confidence in small PV systems by improving the reliability and operating life of the system controllers. Another equally important objective was to improve the economics of small PV systems by extending the battery lifetimes. Using new technology and advanced manufacturing techniques, these objectives were accomplished. Because small stand-alone PV systems account for over one third of all PV modules shipped, the positive impact of improving the reliability and economics of PV systems in this market segment will be felt throughout the industry. The results of verification testing of the new product are also included in this report. The initial design goals and specifications were very aggressive, but the extensive testing demonstrates that all the goals were achieved. Production of the product started in May at a rate of 2,000 units per month. Over 40 Morningstar distributors (5 US and 35 overseas) have taken delivery in the first 2 months of shipments. Initial customer reactions to the new controller have been very favorable.

Charge-control tests were carried out on a ground-based Marine Corps helicopter to determine whether control of the electric fields acting on the engine exhaust gases could be used to reduce the electrification of the helicopter when operating in a dusty atmosphere. The test aircraft was flown to a dusty, unpaved area and was then isolated electrically from the earth. When the helicopter engines were operated at ground idle with the rotor locked, the isolated aircraft charged positively, as had been observed previously. However, when the rotor brake was released and the turning rotor created a downdraft that raised dust clouds, the aircraft always became charged more positively, to potentials ranging from +30 to +45 kV. The dust clouds raised by the rotor downwash invariably carried negative space charge, with concentrations of up to -100 nC/cu m, and caused surface electric fields with strengths of up to 10 kV/m immediately downwind of the aircraft. The natural charging of the helicopter operating in these dust clouds was successfully opposed by control of the electric fields acting on the hot, electrically conductive exhaust gases. The control was achieved by placing electrostatic shields around the exhausts.

The lead-acid battery, widely used in stand-alone solar systems, is easily damaged by poor charging control that causes overcharging. Battery charging control is thus usually designed to stop charging after the overcharge point. This reduces the stored energy capacity and shortens the service time of the electricity supply. The design of a charging-control system, however, first requires a good understanding of the dynamic behavior of the battery. In the present study, a first-order system-dynamics model of a lead-acid battery at different operating points near the overcharge voltage was derived experimentally, from which a charging-control system based on a PI algorithm was developed using a PWM charging technique. The feedback control system for battery charging after the overcharge point (14 V) was designed as a compromise between set-point response and disturbance rejection. The experimental results show that the control system can suppress the battery-voltage overshoot to within 0.1 V when the solar irradiation suddenly changes from 337 to 843 W/m². A long-term outdoor test of a solar LED lighting system shows that the battery voltage never exceeded 14.1 V for the 14 V set point, and that the control system can prevent the battery from overcharging. The test results also indicate that the control system is able to increase the charged energy by 78% compared to the case in which charging stops at the overcharge point (14 V).
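A discrete PI voltage regulator of the general form described above can be sketched as follows. The gains, the step size, and the first-order battery response are invented for illustration; only the 14 V set point is taken from the abstract:

```python
# Discrete PI control of battery voltage via PWM duty cycle, against a
# toy first-order plant. All gains and plant constants are assumptions.

def simulate_charge(setpoint=14.0, kp=0.4, ki=0.8, dt=0.1, steps=400):
    v = 13.0          # battery terminal voltage, V
    integ = 0.0       # PI integrator state
    for _ in range(steps):
        err = setpoint - v
        integ += err * dt
        # PWM duty cycle, clamped to the physical range 0..1
        duty = min(1.0, max(0.0, kp * err + ki * integ))
        # toy plant: voltage relaxes toward 13 V + 2 V * duty
        v += dt * ((13.0 + 2.0 * duty) - v)
    return v

v_final = simulate_charge()
```

The integral term drives the steady-state error to zero (the loop settles with duty near 0.5 in this toy plant), which is what lets the real controller hold the battery near 14 V despite irradiation disturbances.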

This paper proposes a decentralized charging-control strategy for a large population of plug-in electric vehicles (PEVs) to neutralize wind-power fluctuations and thereby improve the regulation of system frequency. Without relying on a central control entity, each PEV autonomously adjusts its charging or discharging power in response to a communal virtual price signal and based on its own urgency level of charging. Simulation results show that, under the proposed charging control, the aggregate PEV power can effectively neutralize wind-power fluctuations in real time, while differential allocation of neutralization duties among the PEVs can be realized to meet the PEV users' charging requirements. Harmful wind-induced cyclic operations in thermal units can also be mitigated. As shown in the economic analysis, the proposed strategy can create cost-saving opportunities for both PEV users and the utility.
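The price-and-urgency response can be illustrated with a deliberately simple linear rule; this is a sketch of the idea, not the paper's controller, and all numbers are invented:

```python
# Each PEV responds to a shared virtual price and its own urgency
# (fraction of charge still needed). Linear rule and limits assumed.

def pev_power(price, urgency, p_max=7.0):
    """Charging power in kW; price and urgency both lie in [0, 1].
    Urgent vehicles keep charging even at high price; relaxed ones
    back off, and may discharge to support the grid."""
    p = p_max * (urgency - price)
    return max(-p_max, min(p_max, p))

def fleet_power(price, urgencies):
    return sum(pev_power(price, u) for u in urgencies)

urgencies = [0.9, 0.5, 0.2, 0.7]
low_wind  = fleet_power(0.8, urgencies)   # wind drop -> high price
high_wind = fleet_power(0.1, urgencies)   # wind surplus -> low price
```

When wind output drops and the virtual price rises, the fleet as a whole sheds load (and even feeds back power), while the most urgent vehicle keeps charging; this is the differential allocation of neutralization duty.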

This study proposes a model for current conduction in metal-insulator-semiconductor (MIS) capacitors, assuming the presence of two sheets of charge in the insulator, and derives analytical formulae for field emission (FE) currents under both negative and positive bias. Since it is affected by the space charge in the insulator, this particular FE differs from conventional FE and is accordingly named space-charge-controlled (SCC) FE. The gate insulator of this study was a stack of atomic-layer-deposition Al2O3 and underlying chemical SiO2 formed on Si substrates. The current-voltage (I-V) characteristics simulated using the SCC-FE formulae quantitatively reproduced the experimental results obtained by measuring Au- and Al-gated Al2O3/SiO2 MIS capacitors under both biases. The two sheets of charge in the Al2O3 films were estimated to be positive and located at depths of greater than 4 nm from the Al2O3/SiO2 interface and less than 2 nm from the gate. The density of the former is approximately 1 × 10^13 cm^-2 in units of electronic charge, regardless of the type of capacitor. The latter forms a sheet of dipoles together with image charges in the gate and hence causes potential jumps of 0.4 V and 1.1 V in the Au- and Al-gated capacitors, respectively. Within a margin of error, this sheet of dipoles is ideally located at the gate/Al2O3 interface and effectively reduces the work function of the gate by the magnitude of the potential jumps mentioned above. These facts indicate that the currents in the Al2O3/SiO2 MIS capacitors are enhanced compared to those in ideal capacitors, and that the currents in the Al-gated capacitors under negative bias (electron emission from the gate) are more markedly enhanced than those in the Au-gated capacitors. The larger number of gate-side dipoles in the Al-gated capacitors is possibly caused by a reaction between the Al and the Al2O3, and therefore gate materials that do not react with the underlying gate insulator should be chosen.

A study consisting of electrochemical characterization and low-Earth-orbit (LEO) cycling of Li-ion cells from three vendors was initiated in 1999 to determine their cycling performance and to infuse the new technology into future NASA missions. The 8-cell batteries included in this evaluation are prismatic cells manufactured by Mine Safety Appliances Company (MSA), cylindrical cells manufactured by SAFT, and prismatic cells manufactured by Yardney Technical Products, Inc. (YTP). The three batteries were cycle-tested in the LEO regime at 40% depth of discharge, under a charge-control technique consisting of a battery-voltage clamp with a current taper. The initial testing was conducted at 20 C; however, the batteries were also cycled intermittently at low temperatures. The YTP 20 Ah cells consisted of a mixed-oxide (Co and Ni) positive, a graphitic-carbon negative, and LiPF6 salt mixed with organic carbonate solvents. The battery voltage clamp was 32 V. The low-temperature cycling tests started after 4575 cycles at 20 C. The cells were not capable of cycling at low temperature, since the charge acceptance at the battery level was poor. One cell in the battery showed too high an end-of-charge (EOC) voltage, thereby limiting the ability to charge the rest of the cells in the battery. The battery has completed 6714 cycles. The SAFT 12 Ah cells consisted of a mixed-oxide (Co and Ni) positive, a graphitic-carbon negative, and LiPF6 salt mixed with organic carbonate solvents. The battery voltage clamp was 30.8 V. The low-temperature cycling tests started after 4594 cycles at 20 C. A cell that showed low end-of-discharge (EOD) and EOC voltages, and three other cells that showed higher EOC voltages, limited the charge acceptance at the selected voltage limit during charge. The cells were capable of cycling at 10 C and 0 C, but the charge voltage limit had to be increased to 34.3 V (4.3 V per cell). The low temperature cycling may have induced poor chargeability since the voltage had to
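The "voltage clamp with current taper" charge control used in these tests is the familiar CC-CV profile, sketched below for the 32 V clamp case. The internal resistance, open-circuit-voltage curve, and taper cutoff are illustrative assumptions, not test parameters:

```python
# CC-CV charging against a toy battery model: constant current until
# the terminal voltage reaches the clamp, then constant voltage while
# the current tapers to a cutoff. All model constants are assumed.

def charge(v_clamp=32.0, i_max=10.0, i_cut=0.4, dt=0.01):
    soc = 0.5                  # start half charged
    r = 0.08                   # battery internal resistance, ohms (toy)
    cap = 20.0                 # amp-hours
    history = []               # (terminal voltage, current) per step
    while True:
        ocv = 28.0 + 4.0 * soc                         # toy OCV curve
        i = min(i_max, max(0.0, (v_clamp - ocv) / r))  # clamp-limited
        if i <= i_cut:
            break
        soc = min(1.0, soc + i * dt / cap)
        history.append((ocv + i * r, i))
    return soc, history

soc_end, hist = charge()
```

The clamp keeps the terminal voltage from ever exceeding 32 V, while the tapering current tops the cells off; a single high-EOC cell, as seen in the YTP battery, forces the clamp to be reached before the other cells are full.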

We investigated a blue fluorescent organic light-emitting diode (OLED) with a charge-control layer (CCL) to produce high efficiency and improve the half-decay lifetime. Three types of devices (devices A, B, and C) were fabricated according to the number of CCLs within the emitting layer (EML), while maintaining the thickness of the whole EML. The CCL and host material, 2-methyl-9,10-di(2-naphthyl)anthracene, which has a bipolar property, was able to control the carrier movement with ease inside the EML. Device B demonstrated a maximum luminous efficiency (LE) and external quantum efficiency (EQE) of 9.19 cd/A and 5.78%, respectively. Its half-decay lifetime, measured at an initial luminance of 1,000 cd/m2, was 1.5 times longer than that of the conventional structure. A hybrid white OLED (WOLED) was also fabricated using a phosphorescent red emitter, bis(2-phenylquinoline)-acetylacetonate iridium(III), doped in 4,4'-N,N'-dicarbazolyl-biphenyl. The hybrid WOLED with CCL showed a maximum LE and EQE of 13.46 cd/A and 8.32%, respectively. It also showed white emission with Commission Internationale de l'Éclairage coordinates of (x = 0.41, y = 0.33) at 10 V. PMID:25936005

Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both to improve the efficiency of programs and to make tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
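The key idea — tabling answers so that recursion over a cyclic definition terminates — can be reduced to a small fixed-point computation. This is an illustrative reduction in Python, not the paper's continuation-based Scheme implementation:

```python
# Tabled evaluation in miniature: one table entry of answers per call,
# grown to a fixed point, so a reachability query over a cyclic graph
# terminates where naive recursion would loop forever.

def reachable(query, edges, nodes):
    """Transitive closure via tabling: table[n] holds the answers
    accumulated so far for the call reachable(n)."""
    table = {n: set() for n in nodes}
    changed = True
    while changed:                       # repeat until no table grows
        changed = False
        for n in nodes:
            new = set()
            for a, b in edges:
                if a == n:
                    new.add(b)
                    new |= table[b]      # reuse previously tabled answers
            if new != table[n]:
                table[n] = new
                changed = True
    return table[query]

# A cyclic graph: a naive recursive definition of reachability would
# not terminate on the a -> b -> c -> a loop.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
closure_a = reachable("a", edges, ["a", "b", "c", "d"])
```

The table plays the role of the stored answers; termination follows because the answer sets only grow and are bounded by the node set.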

Visual cryptography (VC) is a cryptographic scheme for images. In encryption, an image carrying a message is encoded into N sub-images, and any K of the sub-images can decode the message under specific rules (N >= 2, 2 <= K <= N). When any K of the N sub-images are printed on transparencies and stacked exactly, the message of the original image is decrypted by the human visual system, but any K-1 of them reveal no information about it. This cryptographic scheme can decode concealed images without any cryptographic computation and has high security. However, the scheme lacks concealment because the sub-images have obvious visual features. In this paper, we introduce the indirect visual cryptography scheme (IVCS), which encodes sub-images into pure phase images without visible structure, based on the encoding of visual cryptography. The pure phase images are the final ciphertexts. The indirect visual cryptography scheme not only inherits the merits of visual cryptography but also adds indirection, concealment, and security. Meanwhile, accurate alignment is no longer required, which gives the scheme strong anti-interference capacity and robustness. The decryption system can be highly integrated and conveniently operated, and its decryption process is dynamic and fast, giving the scheme good practical potential.
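The base (2, 2) visual-cryptography construction that such schemes build on can be simulated directly. Each secret pixel becomes a pair of subpixels in two shares; stacking (an OR of black subpixels) shows black pixels as fully black and white pixels as half black:

```python
# (2, 2) visual cryptography on a 1-D "image": white pixels get the
# same random subpixel pattern in both shares, black pixels get
# complementary patterns. A fixed seed keeps the demo reproducible.

import random

def make_shares(secret, rng=None):
    """secret: list of 0 (white) / 1 (black). Returns two shares, each
    a list of subpixel pairs with 1 = black subpixel."""
    rng = rng or random.Random(0)
    s1, s2 = [], []
    for pixel in secret:
        pattern = rng.choice([(1, 0), (0, 1)])
        s1.append(pattern)
        # white: same pattern (stacks to half black);
        # black: complementary pattern (stacks to all black)
        s2.append(pattern if pixel == 0 else (1 - pattern[0], 1 - pattern[1]))
    return s1, s2

def stack(s1, s2):
    """Simulate overlaying the printed transparencies (black wins)."""
    return [(a0 | b0, a1 | b1) for (a0, a1), (b0, b1) in zip(s1, s2)]

secret = [1, 0, 1, 1, 0]
sh1, sh2 = make_shares(secret)
overlay = stack(sh1, sh2)
recovered = [1 if pair == (1, 1) else 0 for pair in overlay]
```

Each share in isolation is a uniformly random pattern (every subpixel pair has exactly one black element), which is the information-theoretic security property; it is also why the raw shares look conspicuously noisy, the concealment problem IVCS addresses.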

The major research activities of this proposal center on the construction and analysis of nonstandard finite-difference schemes for ordinary and partial differential equations. In particular, we investigate schemes that either have zero truncation errors (exact schemes) or possess other significant features of importance for numerical integration. Our eventual goal is to bring these methods to bear on problems that arise in the modeling of various physical, engineering, and technological systems. At present, these efforts are directed at understanding the exact nature of these nonstandard procedures and extending their use to more complicated model equations. Our presentation will give a listing (obtained to date) of the nonstandard rules and their application to a number of linear and nonlinear, ordinary and partial differential equations. In certain cases, numerical results will be presented.
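
For the decay equation du/dt = -lambda*u, for example, an exact nonstandard scheme keeps the forward-difference form but replaces the step size h in the denominator by phi(h) = (1 - exp(-lambda*h))/lambda, after which the discrete solution matches the true solution at every grid point. A minimal sketch (lambda, h, and step count chosen arbitrarily):

```python
import math

lam, h, steps = 2.0, 0.1, 10
u_euler = u_exact_scheme = 1.0
phi = (1.0 - math.exp(-lam * h)) / lam   # nonstandard denominator function

for _ in range(steps):
    u_euler += h * (-lam * u_euler)                  # standard forward Euler
    u_exact_scheme += phi * (-lam * u_exact_scheme)  # exact nonstandard scheme

u_true = math.exp(-lam * h * steps)
# the nonstandard scheme reproduces the true solution to round-off,
# while forward Euler carries an O(h) error
```

Expanding the update shows why: u_{n+1} = u_n (1 - lambda*phi) = u_n exp(-lambda*h), which is the exact propagator of the continuous equation over one step.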

Discusses the study of identification codes and check-digit schemes as a way to show students a practical application of mathematics and introduce them to coding theory. Examples include postal service money orders, parcel tracking numbers, ISBN codes, bank identification numbers, and UPC codes. (MKR)
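
Two of the schemes mentioned can be written down directly: UPC-A alternates weights 3 and 1 and picks the check digit that makes the total a multiple of 10, while ISBN-10 weights the digits 10 down to 1 and checks the sum modulo 11:

```python
def upc_check_digit(digits11):
    """UPC-A: odd positions (1st, 3rd, ...) weigh 3, even positions weigh 1;
    the check digit brings the total to a multiple of 10."""
    s = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(digits11))
    return (-s) % 10

def isbn10_is_valid(digits10):
    """ISBN-10: 10*d1 + 9*d2 + ... + 1*d10 must be 0 mod 11
    (the last 'digit' may be 10, printed as X)."""
    return sum((10 - i) * d for i, d in enumerate(digits10)) % 11 == 0

# e.g., the first 11 digits of the UPC 036000291452 yield check digit 2
assert upc_check_digit([0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]) == 2
```

The modulo-11 ISBN scheme catches all single-digit errors and all adjacent transpositions, which the modulo-10 UPC scheme cannot quite match; that trade-off is exactly the kind of discussion the article uses to motivate coding theory.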

The concept of an optimum hybridization scheme for cluster compounds is developed with particular reference to electron counting. The prediction of electron counts for clusters and the interpretation of the bonding is shown to depend critically upon the presumed hybridization pattern of the cluster vertex atoms. This fact has not been properly appreciated in previous work, particularly in applications of Stone's tensor surface harmonic (TSH) theory, but is found to be a useful tool when dealt with directly. A quantitative definition is suggested for the optimum cluster hybridization pattern based directly upon the ease of interpretation of the molecular orbitals, and results are given for a range of species. The relationship of this scheme to the detailed cluster geometry is described using Löwdin's partitioned perturbation theory, and the success and range of application of TSH theory are discussed.

Traffic classification techniques were evaluated using data from a 1993 investigation of the traffic flow patterns on I-20 in Georgia. First we improved the data by sifting through the data base, checking questionable events against the original video and removing and/or repairing them. We used this data base to critique quantitatively the performance of a classification method known as Scheme F. As a context for improving the approach, we show in this paper that Scheme F can be represented as a McCulloch-Pitts neural network, or as an equivalent decomposition of the plane. We found that Scheme F, among other things, severely misrepresents the number of vehicles in Class 3 by labeling them as Class 2. After discussing the basic classification problem in terms of what is measured and what the desired prediction goal is, we set forth desirable characteristics of the classification scheme and describe a recurrent neural network system that partitions the high-dimensional space into bins for each axle separation. The collection of bin numbers, one for each of the axle separations, specifies a region in the axle space called a hyper-bin. All the vehicles counted that have the same set of bin numbers are in the same hyper-bin. The probability of the occurrence of a particular class in that hyper-bin is the relative frequency with which that class occurs in that set of bin numbers. This type of algorithm produces classification results that are much more balanced and uniform with respect to Classes 2 and 3 and Class 10. In particular, the cancellation of classification errors that occurs is for many applications the ideal classification scenario. The neural network results are presented in the form of a primary classification network and a reclassification network, the performance matrices for which are presented.
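
The hyper-bin idea can be sketched as follows: quantize each axle separation into a bin, use the tuple of bin numbers as a key, and classify by the most frequent class observed under that key (the bin width, separations, and class labels below are made up for illustration):

```python
from collections import Counter, defaultdict

def to_hyperbin(axle_separations, bin_width=0.5):
    """Quantize each axle separation (hypothetical bin width, in metres);
    the tuple of bin numbers is the hyper-bin key."""
    return tuple(int(s // bin_width) for s in axle_separations)

def train(samples):
    """samples: list of (axle_separations, vehicle_class).
    Store class frequencies per hyper-bin."""
    counts = defaultdict(Counter)
    for seps, cls in samples:
        counts[to_hyperbin(seps)][cls] += 1
    return counts

def classify(counts, seps):
    """Predict the most frequent class seen in this hyper-bin,
    i.e. the class with highest relative frequency."""
    c = counts.get(to_hyperbin(seps))
    return c.most_common(1)[0][0] if c else None

training = [([2.7], 2), ([2.7], 2), ([2.9], 3), ([6.1, 1.3], 10)]
model = train(training)
```

A vehicle whose axle separations fall in an unseen hyper-bin gets no prediction, which mirrors the need for the reclassification network described in the abstract.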

Discusses the growth, survival and future of library classification schemes. Concludes that to survive, a scheme must constantly update its policies, and readily adapt itself to accommodate growing disciplines and changing terminology. (AEF)

In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report their problems in fulfilling their roles. From these grass-roots experiences, they made an analysis of the job, determining what knowledge, attitudes, and skills it required. Analyses of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. Participants also learned how to take the problems encountered through government structures for policy making and decisions. The tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries, and project histories; to analyze the particular community by village profiles; to produce indigenous teaching materials; and to practice the role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers were written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries. PMID:12265562

Recognizing schemes, which are different from strategies, can help teachers understand their students' thinking about fractions. Using Steffe's advanced fraction schemes, the authors describe a progression of development that upper elementary and middle school students might follow in understanding fractions. Each scheme can be viewed as a…

A 50-ampere hour nickel cadmium cell test pack was operated in a power profile simulating the orbit of the Earth Radiation Budget Satellite (ERBS). The objective was to determine the ability of the temperature compensated voltage limit (V sub T) charge-control system to maintain energy balance in the half sine wave-type current profile expected of this mission. The four-cell pack (50 E) was tested at the Naval Weapons Support Center (NWSC) at Crane, Indiana. The ERBS evaluation test consisted of two distinct operating sequences, each having a specific purpose. The first phase was a parametric test involving the effect of V sub T level, temperature, and Beta angle on the charge/discharge (C/D) ratio, an indicator of the amount of overcharge. The second phase of testing made use of the C/D ratio limit to augment the V sub T charge limit control. When the C/D limit was reached, the current was switched from the taper mode to a C/67 (0.75 A) trickle charge. The use of an ampere hour integrator limiting the overcharge to a C/67 rate provided a fine tuning of the charge-control technique which eliminated the sensitivity problems noted in the initial operating sequence.
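
The second-phase logic amounts to a simple rule on the amp-hour integrator: once the charge/discharge ratio reaches its limit, switch from the taper current to the C/67 trickle rate. A sketch (the C/D limit value of 1.05 is illustrative, not a figure reported from the test):

```python
def charge_current(ah_in, ah_out, taper_current, capacity_ah=50.0,
                   cd_limit=1.05):
    """Amp-hour-integrator charge control: stay in taper mode until the
    charge/discharge (C/D) ratio reaches its limit, then drop to a C/67
    trickle. The cd_limit value is an illustrative assumption."""
    trickle = capacity_ah / 67.0          # C/67, about 0.75 A for 50 Ah
    cd_ratio = ah_in / ah_out if ah_out > 0 else 0.0
    return trickle if cd_ratio >= cd_limit else taper_current
```

With 10.0 Ah discharged and 10.0 Ah returned the ratio is below the limit and the taper current is kept; once 10.6 Ah have been returned, the controller falls back to the roughly 0.75 A trickle.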

It is known that conventional metal-oxide-silicon (MOS) devices will have gate tunneling related problems at very thin oxide thicknesses. Various high-dielectric-constant materials are being examined to suppress the gate currents. In this article we present theoretical results of a charge-control and gate tunneling model for a ferroelectric-oxide-silicon field effect transistor and compare them to results for a conventional MOS device. The potential of high polarization charge to induce inversion without doping and of high dielectric constant to suppress tunneling current is explored. The model is based on a self-consistent solution of the quantum problem and includes the ferroelectric hysteresis response self-consistently. We show that the polarization charge associated with ferroelectrics can allow greater controllability of the inversion layer charge density. Also, the high dielectric constant of ferroelectrics results in greatly suppressed gate current.

[Figures 1 and 2 removed for brevity; see original site]

These two graphics are planning tools used by Mars Exploration Rover engineers to plot and scheme the perfect location to place the rock abrasion tool on the rock collection dubbed 'El Capitan' near Opportunity's landing site. 'El Capitan' is located within a larger outcrop nicknamed 'Opportunity Ledge.'

The rover visualization team from NASA Ames Research Center, Moffett Field, Calif., initiated the graphics by putting two panoramic camera images of the 'El Capitan' area into their three-dimensional model. The rock abrasion tool team from Honeybee Robotics then used the visualization tool to help target and orient their instrument on the safest and most scientifically interesting locations. The blue circle represents one of two current targets of interest, chosen because of its size, lack of dust, and most of all its distinct and intriguing geologic features. To see the second target location, see the image titled 'Plotting and Scheming.'

The rock abrasion tool is sensitive to the shape and texture of a rock, and must safely sit within the 'footprint' indicated by the blue circles. The rock area must be large enough to fit the contact sensor and grounding mechanism within the area of the outer blue circle, and the rock must be smooth enough to get an even grind within the abrasion area of the inner blue circle. If the rock abrasion tool were not grounded by its support mechanism or if the surface were uneven, it could 'run away' from its target. The rock abrasion tool is located on the rover's instrument deployment device, or arm.

Over the next few martian days, or sols, the rover team will use these and newer, similar graphics created with more recent, higher-resolution panoramic camera images and super-spectral data from the miniature thermal emission spectrometer. These data will be used to pick the best

A sports counseling scheme for young people on criminal probation in Hampshire (England) was developed in the 1980s as a partnership between the Sports Council and the Probation Service. The scheme aims to encourage offenders, aged 14 and up, to make constructive use of their leisure time; to allow participants the opportunity to have positive…

A frame change data driving scheme (FCDDS) for ferroelectric LCDs (FLCDs) with matrix addressing is developed which uses only positive voltages for the row and column waveforms to achieve bipolar driving waveforms on the FLCD pixels. Thus the required supply voltage for the driver chips is half that of the conventional driving scheme. Each scan line is addressed in only twice the switching time (tau) (the minimum response time of the FLC), so this scheme is suitable for high duty ratio panels. In order to satisfy the bistable electro-optic effect of the FLCD and maintain zero net dc voltage across each pixel of the liquid crystal, pixels are turned on and off in different time slots and frame slots. This driving scheme can be easily implemented using commercially available STN LCD drivers plus a small external circuit, or by making an ASIC that is a slight modification of the STN driver. Both methods are discussed.

Following Quirk's analysis of Roe's scheme, general criteria are derived to predict odd-even decoupling. This analysis is applied to Roe's scheme, Pullin's EFM scheme, Macrossan's EIM scheme, and Liou's AUSM scheme. Strict stability is shown to be desirable to avoid most of these flaws. Finally, the link between marginal stability and accuracy on shear waves is established.

Two relaxation schemes for Chebyshev spectral multigrid methods are presented for elliptic equations with Dirichlet boundary conditions. The first scheme is a pointwise-preconditioned Richardson relaxation scheme and the second is a line relaxation scheme. The line relaxation scheme provides an efficient and relatively simple approach for solving two-dimensional spectral equations. Numerical examples and comparisons with other methods are given.

In this paper, we report a three orders-of-magnitude increase in the speed of a space-charge-controlled KTN beam deflector achieved by eliminating the electric field-induced phase transition (EFIPT) in a nanodisordered KTN crystal. Previously, to maximize the electro-optic effect, a KTN beam deflector was operated at a temperature slightly above the Curie temperature. The electric field could cause the KTN to undergo a phase transition from the paraelectric phase to the ferroelectric phase at this temperature, which causes the deflector to operate in the linear electro-optic regime. Since the deflection angle of the deflector is proportional to the space charge distribution but not the magnitude of the applied electric field, the scanning speed of the beam deflector is limited by the electron mobility within the KTN crystal. To overcome this speed limitation caused by the EFIPT, we propose to operate the deflector at a temperature above the critical end point. This results in a significant increase in the scanning speed from the microsecond to nanosecond regime, which represents a major technological advance in the field of high-speed beam scanners. This can be highly beneficial for many applications including high-speed imaging, broadband optical communications, and ultrafast laser display and printing. PMID:27610923

A class of new explicit second order accurate finite difference schemes for the computation of weak solutions of hyperbolic conservation laws is presented. These highly nonlinear schemes are obtained by applying a nonoscillatory first order accurate scheme to an appropriately modified flux function. The so-derived second order accurate schemes achieve high resolution while preserving the robustness of the original nonoscillatory first order accurate scheme. Numerical experiments are presented to demonstrate the performance of these new schemes.

Over the past decade, most secret image sharing schemes have been based on Shamir's technique, which uses linear combination polynomial arithmetic. Although secret image sharing schemes based on Shamir's technique are efficient and scalable for various environments, they are exposed to security threats such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average values of PSNR and embedding capacity are 44.78 (dB) and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively. PMID:25140334
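
The linear Shamir baseline these schemes start from can be sketched for one pixel value: hide the secret as the constant term of a random degree-(t-1) polynomial over a prime field, hand out point evaluations as shares, and recover by Lagrange interpolation at zero (this is the classic (t, n) technique, not the paper's nonlinear variant; the prime 257 is chosen only to cover 8-bit pixels):

```python
import random

P = 257  # prime just above the 8-bit pixel range

def share_pixel(secret, t, n, rng=random.Random(42)):
    """Shamir (t, n): the pixel is the constant term of a random
    degree-(t-1) polynomial over GF(P); share i is (i, poly(i))."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    t shares; fewer than t shares reveal nothing."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share_pixel(173, t=3, n=5)
assert reconstruct(shares[:3]) == 173   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 173
```

The Tompa-Woll attack mentioned above exploits exactly this linearity: a cheating participant can submit a doctored share that shifts the interpolated secret predictably, which motivates the nonlinear construction.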

Two closely related energy partitioning schemes, in which the total energy is presented as a sum of atomic and diatomic contributions by using the "atomic decomposition of identity", are compared on the example of N,N-dimethylformamide, a simple but chemically rich molecule. Both schemes account for different intramolecular interactions, for instance they identify the weak C-H...O intramolecular interactions, but give completely different numbers. (The energy decomposition scheme based on the virial theorem is also considered.) The comparison of the two schemes resulted in a dilemma which is especially striking when these schemes are applied for molecules distorted from their equilibrium structures: one either gets numbers which are "on the chemical scale" and have quite appealing values at the equilibrium molecular geometries, but exhibiting a counter-intuitive distance dependence (the two-center energy components increase in absolute value with the increase of the interatomic distances)--or numbers with too large absolute values but "correct" distance behaviour. The problem is connected with the quick decay of the diatomic kinetic energy components. PMID:17328441

The problem of robot control is viewed at the level of communicating high-level commands produced by intelligent algorithms to the actuator/sensor controllers. Four topics are considered in the design of an integrated control and communications scheme for an intelligent robotic system: the use of abstraction spaces, hierarchical versus heterarchical control, distributed processing, and the interleaving of the steps of plan creation and plan execution. A scheme is presented for an n-level distributed hierarchical/heterarchical control system that effectively interleaves intelligent planning, execution, and sensory feedback. A three-level version of this scheme has been successfully implemented in the Intelligent Systems Research Lab at NASA Langley Research Center. This implementation forms the control structure for DAISIE (Distributed Artificially Intelligent System for Interacting with the Environment), a testbed system integrating AI software with robotics hardware.

Fundamental limitations in acceleration gradient, emittance, alignment, and polarization are considered as they apply to novel acceleration schemes, including laser-plasma and structure-based schemes. Problems for each method are underlined wherever possible. Main attention is paid to the scheme with a tilted laser bunch.

We propose a scheme for symmetrization verification in two-particle systems, based on one-particle detection and state determination. In contrast to previous proposals, it does not follow a Hong-Ou-Mandel-type approach. Moreover, the technique can be used to generate superposition states of single particles.

A digital signature does not allow any alteration of the document to which it is attached. Appropriate alteration of some signed documents, however, should be allowed because there are security requirements other than the integrity of the document. In the disclosure of official information, for example, sensitive information such as personal information or national secrets is masked when an official document is sanitized so that its nonsensitive information can be disclosed when it is requested by a citizen. If this disclosure is done digitally by using the current digital signature schemes, the citizen cannot verify the disclosed information because it has been altered to prevent the leakage of sensitive information. The confidentiality of official information is thus incompatible with the integrity of that information, and this is called the digital document sanitizing problem. Conventional solutions such as content extraction signatures and digitally signed document sanitizing schemes with disclosure condition control can either let the sanitizer assign disclosure conditions or hide the number of sanitized portions. The digitally signed document sanitizing scheme we propose here is based on the aggregate signature derived from bilinear maps and can do both. Moreover, the proposed scheme can sanitize a signed document invisibly, that is, no one can distinguish whether the signed document has been sanitized or not.

Geophysical investigation is a powerful tool that allows non-invasive and non-destructive mapping of subsurface states and properties. However, the non-uniqueness associated with the inversion process prevents the quantitative use of these methods. One major direction researchers are pursuing is constraining the inverse problem with hydrological observations and models. An alternative to the commonly used direct inversion methods are global optimization schemes (such as genetic algorithms and Markov chain Monte Carlo methods). However, the major limitation here is the desired high resolution of the tomographic image, which leads to a large number of parameters and an unreasonably high computational effort when using global optimization schemes. Two innovative schemes are presented here. First, a hierarchical approach is used to reduce the computational effort of the global optimization: a solution is obtained at coarse spatial resolution and then used as the starting point for the finer scheme. We show that the computational effort is reduced dramatically in this way. Second, we use a direct ERT inversion as the starting point for global optimization. In this case preliminary results show that the outcome is not necessarily beneficial, probably because of spatial mismatch between the results of the direct inversion and the true resistivity field.
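
The hierarchical trick can be sketched on a toy 1D profile, with a generic random-search optimizer standing in for the genetic/MCMC schemes (the target profile, step counts, and perturbation scales are all invented): solve coarsely, prolong the coarse solution to the fine grid, and use it to seed the fine search:

```python
import random

def misfit(model, target):
    return sum((m - t) ** 2 for m, t in zip(model, target))

def refine(model):
    """Prolong to double resolution by repeating each coarse cell."""
    return [v for v in model for _ in range(2)]

def random_search(start, target, steps, rng, scale=1.0):
    """Toy global optimizer (accept-if-better random perturbation),
    standing in for genetic algorithms or MCMC."""
    best, best_f = list(start), misfit(start, target)
    for _ in range(steps):
        cand = [v + rng.gauss(0, scale) for v in best]
        f = misfit(cand, target)
        if f < best_f:
            best, best_f = cand, f
    return best

rng = random.Random(1)
target_fine = [1.0, 1.0, 3.0, 3.0, 2.0, 2.0, 5.0, 5.0]   # 8 fine cells
target_coarse = [1.0, 3.0, 2.0, 5.0]                      # 4 coarse cells

coarse = random_search([0.0] * 4, target_coarse, 400, rng)
fine = random_search(refine(coarse), target_fine, 400, rng, scale=0.3)
# the coarse solution seeds the fine search, cutting the fine-scale work
```

The fine search starts near the answer instead of from scratch, which is the source of the computational savings claimed above; the second scheme simply swaps the coarse seed for a direct-inversion seed.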

A class of explicit and implicit total variation diminishing (TVD) schemes for the compressible Euler and Navier-Stokes equations was developed. They do not generate spurious oscillations across shocks and contact discontinuities. In general, shocks can be captured within 1 to 2 grid points. For the inviscid case, these schemes are divided into upwind TVD schemes and symmetric (nonupwind) TVD schemes. The upwind TVD scheme is based on the second-order TVD scheme. The symmetric TVD scheme is a generalization of Roe's and Davis' TVD Lax-Wendroff scheme. The performance of these schemes on some viscous and inviscid airfoil steady-state calculations is investigated. The symmetric and upwind TVD schemes are compared.
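
A minimal member of this family, a second-order upwind scheme with a minmod-limited slope for the scalar advection equation u_t + u_x = 0, shows the non-oscillatory behavior directly (grid size, CFL number, and the step-profile initial data are illustrative):

```python
def minmod(a, b):
    """Limiter: zero at extrema, the smaller-magnitude slope otherwise."""
    if a * b <= 0:
        return 0.0
    return min(a, b) if a > 0 else max(a, b)

def step_tvd(u, cfl):
    """One step of a minmod-limited second-order upwind scheme for
    u_t + u_x = 0 on a periodic grid (TVD for 0 <= cfl <= 1)."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # upwind flux at i+1/2 with a limited second-order correction
    flux = [u[i] + 0.5 * (1.0 - cfl) * slope[i] for i in range(n)]
    return [u[i] - cfl * (flux[i] - flux[i - 1]) for i in range(n)]

u = [0.0] * 20 + [1.0] * 20 + [0.0] * 20
for _ in range(30):
    u = step_tvd(u, cfl=0.5)

def total_variation(u):
    return sum(abs(u[i] - u[i - 1]) for i in range(len(u)))
```

After 30 steps the total variation of `u` has not grown beyond its initial value of 2 and no new extrema appear, which is the defining TVD property; the limiter drops to first order only at the two jumps, so they stay sharp without oscillation.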

A readout scheme for measuring the output from a SQUID-based sensor-array using an improved subranging architecture that includes multiple resolution channels (such as a coarse resolution channel and a fine resolution channel). The scheme employs a flux sensing circuit with a sensing coil connected in series to multiple input coils, each input coil being coupled to a corresponding SQUID detection circuit having a high-resolution SQUID device with independent linearizing feedback. A two-resolution configuration (coarse and fine) is illustrated with a primary SQUID detection circuit for generating a fine readout, and a secondary SQUID detection circuit for generating a coarse readout, both having feedback current coupled to the respective SQUID devices via feedback/modulation coils. The primary and secondary SQUID detection circuits function independently, each deriving its own feedback. Thus, the SQUID devices may be monitored independently of each other (and read simultaneously) to dramatically increase slew rates and dynamic range.
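
Independent of the SQUID hardware, the subranging arithmetic itself is simple: the coarse channel supplies a large-step code, the fine channel digitizes only the small residual, and reading both at once recovers the signal at fine resolution over the coarse range (the LSB sizes below are illustrative, not instrument parameters):

```python
def subranging_readout(flux, coarse_lsb=1.0, fine_lsb=0.01):
    """Two-resolution sketch: quantize coarsely, then digitize only the
    residual in the fine channel. LSB values are invented for
    illustration."""
    coarse_code = round(flux / coarse_lsb)
    residual = flux - coarse_code * coarse_lsb   # small signal, fine channel
    fine_code = round(residual / fine_lsb)
    return coarse_code, fine_code

def recombine(coarse_code, fine_code, coarse_lsb=1.0, fine_lsb=0.01):
    """Reading both channels simultaneously reconstructs the signal
    to within half a fine LSB."""
    return coarse_code * coarse_lsb + fine_code * fine_lsb

c, f = subranging_readout(7.38)
# recombine(c, f) recovers 7.38 to within half a fine LSB
```

Because the fine channel never has to track the full signal swing, its feedback loop can stay fast, which is the slew-rate advantage the scheme claims.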

In this paper we study the relationship between two different compactifications of the space of vector bundle quotients of an arbitrary vector bundle on a curve. One is Grothendieck's Quot scheme, while the other is a moduli space of stable maps to the relative Grassmannian. We establish an essentially optimal upper bound on the dimension of the two compactifications. Based on that, we prove that for an arbitrary vector bundle, the Quot schemes of quotients of large degree are irreducible and generically smooth. We precisely describe all the vector bundles for which the same thing holds in the case of the moduli spaces of stable maps. We show that there are in general no natural morphisms between the two compactifications. Finally, as an application, we obtain new cases of a conjecture on effective base point freeness for pluritheta linear series on moduli spaces of vector bundles.

How to apply the entropy of biometrics to encryption and remote authentication schemes, so as to simplify key management, is a hot research area. Utilizing Dodis's fuzzy extractor method and Liu's original signcryption scheme, a biometric identity based signcryption scheme is proposed in this paper. The proposed scheme is more efficient than most previously proposed biometric signcryption schemes because it needs neither bilinear pairing computation nor modular exponentiation computation, both of which are largely time-consuming. The analysis results show that under the CDH and DL hard problem assumptions, the proposed scheme has the features of confidentiality and unforgeability simultaneously.

Oxidation schemes for the in-situ destruction of chlorinated solvents, using potassium permanganate, are receiving considerable attention. Indications from field studies and from our own work are that permanganate oxidation schemes have inherent problems that could severely limit...

This paper studies the preferences among healthcare workers towards pay schemes involving different levels of risk. It identifies which pay scheme individuals would prefer for themselves, and which they think is best in furthering health policy objectives. The paper adds, methodologically, a way of defining pay schemes that include different levels of risk. A questionnaire was mailed to a random sample of 1111 dentists. Respondents provided information about their current and preferred pay schemes, and indicated which pay scheme, in their opinion, would best further overall health policy objectives. A total of 504 dentists (45%) returned the questionnaire, and there was no indication of systematic non-response bias. All public dentists had a current pay scheme based on a fixed salary and the majority of individuals preferred a pay scheme with more income risk. Their preferred pay schemes coincided with the ones believed to further stabilise healthcare personnel. The predominant current pay scheme among private dentists was based solely on individual output, and the majority of respondents preferred this pay scheme. In addition, their preferred pay schemes coincided with the ones believed to further efficiency objectives. Both public and private dentists believed that pay schemes, furthering efficiency objectives, had to include more performance-related pay than the ones believed to further stability and quality objectives. PMID:20565995

In this paper, we propose a quantum signature scheme with a weak arbitrator to sign classical messages. This scheme preserves the merits of the original arbitrated scheme with some entanglement resources, while providing higher transmission efficiency and reducing the complexity of implementation. The arbitrator is costless and is only involved in the disagreement case.

Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th order Runge-Kutta time stepping and 4th order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.

This paper analyses current bandwidth allocation schemes and proposes a novel dynamic bandwidth allocation scheme for EPON. Under this scheme, we define four kinds of multimedia services: Unsolicited Request Service (URS), Real-time Service (rt-S), Non-Real-time Service (nrt-S), and Best Effort (BE). Different kinds of services have different Quality of Service (QoS) requirements. Our scheme considers the diverse QoS requirements, e.g., delay for rt-S, throughput for nrt-S, and fairness for BE. The simulation results show that this novel scheme can ensure QoS and improve bandwidth utilization.
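
One plausible reading of such a policy, strict priority for the three guaranteed classes followed by an equal split of the remainder among BE requesters, can be sketched as follows (the request format and the numbers are invented; the paper's exact algorithm may differ):

```python
def allocate(requests, capacity):
    """Grant bandwidth in strict class priority (URS, rt-S, nrt-S),
    then split what is left equally among BE requesters. A simplified
    sketch of the stated QoS/fairness goals, not the paper's algorithm."""
    grants = {}
    for cls in ("URS", "rt-S", "nrt-S"):
        for onu, (c, req) in requests.items():
            if c == cls:
                grants[onu] = min(req, max(0, capacity))
                capacity -= grants[onu]
    best_effort = [onu for onu, (c, _) in requests.items() if c == "BE"]
    fair = max(0, capacity) // len(best_effort) if best_effort else 0
    for onu in best_effort:
        grants[onu] = min(requests[onu][1], fair)   # fairness for BE
    return grants

grants = allocate({"onu1": ("rt-S", 30), "onu2": ("BE", 50),
                   "onu3": ("BE", 10), "onu4": ("nrt-S", 40)}, 100)
```

With 100 units of capacity, the rt-S and nrt-S requests are served in full (30 and 40), and the remaining 30 units are capped at an equal 15-unit share per BE requester, so the large BE request cannot starve the small one.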

A new third-order Energy Stable Weighted Essentially NonOscillatory (ESWENO) finite difference scheme for scalar and vector linear hyperbolic equations with piecewise continuous initial conditions is developed. The new scheme is proven to be stable in the energy norm for both continuous and discontinuous solutions. In contrast to the existing high-resolution shock-capturing schemes, no assumption that the reconstruction should be total variation bounded (TVB) is explicitly required to prove stability of the new scheme. A rigorous truncation error analysis is presented showing that the accuracy of the 3rd-order ESWENO scheme is drastically improved if the tuning parameters of the weight functions satisfy certain criteria. Numerical results show that the new ESWENO scheme is stable and significantly outperforms the conventional third-order WENO finite difference scheme of Jiang and Shu in terms of accuracy, while providing essentially nonoscillatory solutions near strong discontinuities.

In this paper, we further analyze, test, modify and improve the high order WENO (weighted essentially non-oscillatory) finite difference schemes of Liu, Osher and Chan. It was shown by Liu et al. that WENO schemes constructed from the r-th order (in L1 norm) ENO schemes are (r+1)-th order accurate. We propose a new way of measuring the smoothness of a numerical solution, emulating the idea of minimizing the total variation of the approximation, which results in a 5-th order WENO scheme for the case r = 3, instead of the 4-th order with the original smoothness measurement by Liu et al. This 5-th order WENO scheme is as fast as the 4-th order WENO scheme of Liu et al., and both schemes are about twice as fast as the 4-th order ENO schemes on vector supercomputers and as fast on serial and parallel computers. For Euler systems of gas dynamics, we suggest computing the weights from pressure and entropy instead of the characteristic values to simplify the costly characteristic procedure. The resulting WENO schemes are about twice as fast as the WENO schemes using the characteristic decompositions to compute weights, and work well for problems which do not contain strong shocks or strong reflected waves. We also prove that, for conservation laws with smooth solutions, all WENO schemes are convergent. Many numerical tests, including the 1D steady state nozzle flow problem and 2D shock entropy wave interaction problem, are presented to demonstrate the remarkable capability of the WENO schemes, especially the WENO scheme using the new smoothness measurement, in resolving complicated shock and flow structures. We have also applied Yang's artificial compression method to the WENO schemes to sharpen contact discontinuities.
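
The smoothness-measurement idea can be made concrete with the now-standard 5th-order WENO reconstruction built from three sub-stencils. The sketch below uses the Jiang-Shu smoothness indicators and ideal weights (1/10, 6/10, 3/10); it is a textbook formulation, not a reproduction of the paper's code:

```python
def weno5_reconstruct(v, eps=1e-6):
    """5th-order WENO reconstruction of the cell-interface value from a
    5-point stencil v = (v[i-2], v[i-1], v[i], v[i+1], v[i+2])."""
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate 3rd-order reconstructions on the three sub-stencils
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6
    # Smoothness indicators (total-variation-style measurement)
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights built from the ideal weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1 / (eps + b0)**2, 0.6 / (eps + b1)**2, 0.3 / (eps + b2)**2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```

On smooth data the nonlinear weights approach the ideal weights, recovering the full 5th-order stencil; near a jump the weight of any sub-stencil crossing the discontinuity collapses, which is what makes the scheme essentially non-oscillatory.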

A general approach describing quantum decision procedures is developed. The approach can be applied to quantum information processing, quantum computing, the creation of artificial quantum intelligence, as well as to analyzing decision processes of human decision makers. Our basic point is to consider an active quantum system possessing its own strategic state. Processing information by such a system is analogous to the cognitive processes associated with decision making by humans. The algebra of probability operators, associated with the possible options available to the decision maker, plays the role of the algebra of observables in quantum theory of measurements. A scheme is advanced for a practical realization of decision procedures by thinking quantum systems. Such thinking quantum systems can be realized by using spin lattices, systems of magnetic molecules, cold atoms trapped in optical lattices, ensembles of quantum dots, or multilevel atomic systems interacting with electromagnetic field.

A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.
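
For contrast with the quantum case, the classical, matroid-induced schemes referred to above include the familiar threshold construction. The sketch below is Shamir's classical threshold scheme over a prime field, given purely as an illustration of classical secret sharing, not as the paper's quantum construction:

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is mod PRIME

def share(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it:
    evaluate a random degree-(k-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

The access structure here (all subsets of size at least k) is exactly the kind induced by a uniform matroid; the quantum schemes in the paper generalize this correspondence.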

Lundqvist, Almqvist and Östman describe a teacher's manner of teaching and the possible consequences it may have for students' meaning making. In doing this the article examines a teacher's classroom practice by systematizing the teacher's transactions with the students in terms of certain conceptual schemes, namely the epistemological moves, educational philosophies and the selective traditions of this practice. In connection to their study one may ask how conceptual schemes could change teaching. This article examines how the relationship of the conceptual schemes produced by educational researchers to educational praxis has developed from the middle of the last century to today. The relationship is described as having been transformed in three steps: (1) teacher deficit and social engineering, where conceptual schemes are little acknowledged, (2) reflecting practitioners, where conceptual schemes are mangled through teacher practice to aid the choices of already knowledgeable teachers, and (3) the mangling of the conceptual schemes by researchers through practice with the purpose of revising theory.

A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.

A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.

The proposed Australian Mobilesat system will provide a range of circuit switched voice/data services using the B-series satellites. The reliability of the signalling scheme between the Network Management Station (NMS) and the mobile terminal (MT) is of critical importance to the performance of the overall system. Simulation results of the performance of the signalling scheme under various channel conditions and coding schemes are presented.

Based on entanglement swapping, a quantum authentication scheme with a trusted party is proposed in this paper. With this scheme, two users can perform mutual identity authentication to confirm each other's validity. In addition, the scheme is proved to be secure under circumstances where a malicious attacker is capable of monitoring the classical and quantum channels and has the power to forge all information on the public channel.

A new method to determine localized complex-valued one-electron functions in the occupied space is presented. The approach allows the calculation of localized orbitals regardless of their structure and of the entries in the spinor coefficient matrix, i.e., one-, two-, and four-component Kramers-restricted or unrestricted one-electron functions with real or complex expansion coefficients. The method is applicable to localization schemes that maximize (or minimize) a functional of the occupied spinors and that use a localization operator for which a matrix representation is available. The approach relies on the approximate joint diagonalization (AJD) of several Hermitian (symmetric) matrices which is utilized in electronic signal processing. The use of AJD in this approach has the advantage that it allows a reformulation of the localization criterion on an iterative 2 × 2 pair rotating basis in an analytical closed form which has not yet been described in the literature for multi-component (complex-valued) spinors. For the one-component case, the approach delivers the same Foster-Boys or Pipek-Mezey localized orbitals that one obtains from standard quantum chemical software, whereas in the multi-component case complex-valued spinors satisfying the selected localization criterion are obtained. These localized spinors allow the formulation of local correlation methods in a multi-component relativistic framework, which was not yet available. As an example, several heavy and super-heavy element systems are calculated using a Kramers-restricted self-consistent field and relativistic two-component pseudopotentials in order to investigate the effect of spin-orbit coupling on localization.

We propose a new finite volume renormalization scheme. Our scheme is based on the Gradient Flow applied to both fermion and gauge fields and, much like the Schrödinger functional method, allows for a nonperturbative determination of the scale dependence of operators using a step-scaling approach. We give some preliminary results for the pseudo-scalar density in the quenched approximation.

Key management is one of the most important issues in cryptographic systems. Several important challenges in such a context are represented by secure and efficient key generation, key distribution, as well as key revocation. Addressing such challenges requires a comprehensive solution which is robust, secure and efficient. Compared to traditional key management schemes, key management using biometrics requires the presence of the user, which can reduce fraud and protect the key better. In this paper, we propose a novel key management scheme using iris based biometrics. Our newly proposed scheme outperforms traditional key management schemes as well as some existing key-binding biometric schemes in terms of security, diversity and/or efficiency.

Boundary value problems in thermoelasticity and poroelasticity (filtration consolidation) are solved numerically. The underlying system of equations consists of the Lamé stationary equations for displacements and nonstationary equations for temperature or pressure in the porous medium. The numerical algorithm is based on a finite-element approximation in space. Standard stability conditions are formulated for two-level schemes with weights. Such schemes are numerically implemented by solving a system of coupled equations for displacements and temperature (pressure). Splitting schemes with respect to physical processes are constructed, in which the transition to a new time level is associated with solving separate elliptic problems for the desired displacements and temperature (pressure). Unconditionally stable additive schemes are constructed by choosing a weight of a three-level scheme.

The goal of this work is to determine classes of traveling solitary wave solutions for Lattice Boltzmann schemes by means of a hyperbolic ansatz. It is shown that spurious solitary waves can occur in finite-difference solutions of nonlinear wave equations. The occurrence of such a spurious solitary wave, which exhibits a very long lifetime, results in a non-vanishing numerical error for arbitrary times on an unbounded numerical domain. Such behavior is referred to here as a structural instability of the scheme, since the space of solutions spanned by the numerical scheme encompasses types of solutions (solitary waves in the present case) that are not solutions of the original continuous equations. This paper extends our previous work on classical schemes to Lattice Boltzmann schemes (David and Sagaut 2011; 2009a,b; David et al. 2007).

An overview is presented of some of the promising washout schemes which have been devised. The four schemes presented fall into two basic configurations: crossfeed and crossproduct. Various nonlinear modifications further differentiate the four schemes. One nonlinear scheme is discussed in detail. This washout scheme takes advantage of subliminal motions to speed up simulator cab centering: it exploits so-called perceptual indifference thresholds to center the simulator cab at a faster rate whenever the input to the simulator is below the perceptual indifference level. The effect is to reduce the angular and translational simulation motion compared with the linear washout case. Finally, conclusions and implications for further research in the area of nonlinear washout filters are presented.

In recent years, with the development of quantum cryptography, quantum signatures have also made great achievements. However, the effectiveness of all the quantum signature schemes reported in the literature can only be verified by a designated person, which limits their wide application. To solve this problem, a new quantum proxy signature scheme is presented that uses an EPR entangled state and unitary transformations to generate the proxy signature. The proxy signer announces his public key when he generates the final signature. Owing to the properties of unitary transformations and the quantum one-way function, everyone can verify whether the signature is valid using the public key, so the quantum proxy signature scheme in this paper is publicly verifiable. Quantum key distribution and the one-time pad encryption algorithm guarantee the unconditional security of the scheme. Analysis shows that the new scheme satisfies strong non-counterfeitability and strong non-disavowability.

The effect of collision-partner selection schemes on the accuracy and the efficiency of the Direct Simulation Monte Carlo (DSMC) method of Bird is investigated. Several schemes to reduce the total discretization error as a function of the mean collision separation and the mean collision time are examined. These include the historically first sub-cell scheme, the more recent nearest-neighbor scheme, and various near-neighbor schemes, which are evaluated for their effect on the thermal conductivity for Fourier flow. Their convergence characteristics as a function of spatial and temporal discretization and the number of simulators per cell are compared to the convergence characteristics of the sophisticated and standard DSMC algorithms. Improved performance is obtained if the population from which possible collision partners are selected is an appropriate fraction of the population of the cell.

A program for building level schemes from γ-spectroscopy coincidence data has been developed. The scheme builder was equipped with two different algorithms: a statistical one based on the Metropolis method and a more logical one, called REMP (REcurse, Merge and Permute), developed from scratch. These two methods are compared both on ideal cases and on experimental γ-ray data sets. The REMP algorithm is based on coincidences and transition energies. Using correct and complete coincidence data, it has solved approximately half a million schemes without failures. Also, for incomplete data and data with minor errors, the algorithm produces consistent sub-schemes when it is not possible to obtain a complete scheme from the provided data.

A new numerical method, the Basic Function Method, is proposed. This method can directly discretize differential operators on unstructured grids. By expanding in basic functions to approximate the exact function, central and upwind schemes for the derivatives are constructed. Using second-order polynomials as basic functions, and applying flux splitting together with a combination of central and upwind schemes to suppress non-physical oscillations near shock waves, a second-order polynomial-type basic function scheme for the numerical solution of inviscid compressible flow is constructed in this paper. Numerical results for many typical examples of two-dimensional inviscid compressible transonic and supersonic steady flows illustrate that it is a new scheme with high accuracy and high resolution of shock waves. In particular, combined with an adaptive remeshing technique, satisfactory results can be obtained with these schemes.

Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes are proposed. First, a QHE scheme is constructed for a one-qutrit rotation gate. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme is proposed. Third, based on the one-qutrit scheme, a two-qutrit QHE scheme for the generalized controlled X (GCX(m,n)) gate is constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes is analyzed in two respects. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability of p_k = 1/3^(3n), so the schemes can well protect the privacy of users' data. Moreover, these schemes can be integrated into future quantum remote server architectures, so the computational security of users' private quantum information can be protected in a distributed computing environment.

The effectiveness of most quantum signature schemes reported in the literature can be verified by a designated person; however, those schemes are not true designated verifier signature schemes in the traditional sense, because the designated person lacks the capability to efficiently simulate a signature indistinguishable from the signer's. This fails to satisfy the requirements of some special environments such as e-voting, calls for tenders, and software licensing. To solve this problem, a true quantum designated verifier signature scheme is proposed in this paper. Owing to the properties of unitary transformations and the quantum one-way function, only the verifier designated by the signer can verify the validity of a signature, and the designated verifier cannot prove to a third party that the signature was produced by the signer rather than by himself, thanks to a transcript simulation algorithm. Moreover, quantum key distribution and a quantum encryption algorithm guarantee the unconditional security of this scheme. Analysis shows that the new scheme satisfies the main security requirements of designated verifier signature schemes and resists the major attack strategies.

In the authors' previous studies [1], a time-accurate, upwind finite volume method (ETAU scheme) for computing compressible flows on unstructured grids was proposed. The scheme is second order accurate in space and time and yields high resolution in the presence of discontinuities. The scheme features a multidimensional limiter and multidimensional numerical dissipation, which help to stabilize the numerical process and to overcome the pathological behaviors of upwind schemes. In the present paper, it is further shown that such multidimensional treatments also lead to a nearly all-speed, or Mach number insensitive, upwind scheme. For flows at very high Mach number, e.g., 10, local numerical instabilities and pathological behaviors are suppressed, while for flows at very low Mach number, e.g., 0.02, computation can be carried out directly without invoking preconditioning. For flows in different Mach number regimes, i.e., low, medium, and high Mach numbers, one only needs to adjust one or two parameters in the scheme. Several examples with low and high Mach numbers are demonstrated in this paper. Thus, the ETAU scheme is applicable to a broad spectrum of flow regimes ranging from high supersonic to low subsonic, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics).

In this paper, we construct a second-order nonconservative scheme for the system of isentropic gas dynamics to capture the physical invariant regions for preventing negative density, to treat the vacuum singularity, and to control the local entropy from dramatically increasing near shock waves. The main difference in the construction of the scheme discussed here is that we use piecewise linear functions to approximate the Riemann invariants w and z instead of the physical variables ρ and m. Our scheme is a natural extension of the schemes for scalar conservation laws, and it can be implemented numerically with ease because the system is diagonalized in this coordinate system. Another advantage of using Riemann invariants is that the Hessian matrix of any weak entropy has no singularity in the Riemann invariant plane w-z, whereas the Hessian matrices of the weak entropies have singularities at the vacuum points in the physical plane ρ-m. We prove that this scheme converges to an entropy solution for the Cauchy problem with L∞ initial data. By convergence here we mean that there is a subsequence converging to a generalized solution satisfying the entropy condition. As long as the entropy solution is unique, the whole sequence converges to a physical solution. This shows that this kind of scheme is quite reliable from a theoretical point of view. In addition to being interested in the scheme itself, we wish to provide an approach to rigorously analyze nonconservative finite difference schemes.
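
The change of coordinates to Riemann invariants can be written down explicitly. For isentropic gas dynamics with the pressure law p = κρ^γ, the invariants are w = u + 2c/(γ−1) and z = u − 2c/(γ−1), where c = sqrt(γκρ^(γ−1)) is the sound speed. A hedged sketch of the forward and inverse maps (the values of γ and κ here are illustrative, not taken from the paper):

```python
def to_invariants(rho, m, gamma=1.4, kappa=1.0):
    """Map physical variables (rho, m = rho*u) to Riemann invariants (w, z)
    for isentropic gas dynamics with p = kappa * rho**gamma."""
    u = m / rho
    c = (gamma * kappa * rho**(gamma - 1))**0.5   # sound speed
    return u + 2 * c / (gamma - 1), u - 2 * c / (gamma - 1)

def from_invariants(w, z, gamma=1.4, kappa=1.0):
    """Inverse map: recover (rho, m) from the diagonalizing coordinates."""
    u = (w + z) / 2
    c = (gamma - 1) * (w - z) / 4
    rho = (c**2 / (gamma * kappa))**(1 / (gamma - 1))
    return rho, rho * u
```

Approximating w and z piecewise linearly, as the scheme does, then mapping back gives physical states that stay inside the invariant region w ≥ z, which is how negative densities are avoided.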

Methods of incorporating multi-dimensional ideas into algorithms for the solution of Euler equations are presented. Three schemes are developed and tested: a scheme based on a downwind distribution, a scheme based on a rotated Riemann solver and a scheme based on a generalized Riemann solver. The schemes show an improvement over first-order, grid-aligned upwind schemes, but the higher-order performance is less impressive. An outlook for the future of multi-dimensional upwind schemes is given.

This paper presents a spatial domain quantum watermarking scheme. For a quantum watermarking scheme, a feasible quantum circuit is the key to achieving it, and this paper gives such a circuit for the presented scheme. In order to build the circuit, a new quantum multi-control rotation gate, which can be realized with basic quantum gates, is designed. With this quantum circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides reversing the given quantum circuit, the paper gives another watermark extraction algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Unlike other quantum watermarking schemes, all the given quantum circuits can be implemented with basic quantum gates. Moreover, the scheme is a spatial domain watermarking scheme, not based on any transform algorithm on quantum images; meanwhile, it keeps the watermark secure even if its presence is discovered. With the given quantum circuit, simulation experiments are carried out for the presented scheme. The experimental results show that the scheme does well in visual quality and embedding capacity. Supported by the National Natural Science Foundation of China under Grant Nos. 61272514, 61170272, 61373131, 61121061, 61411146001, the Program for New Century Excellent Talents under Grant No. NCET-13-0681, the National Development Foundation for Cryptological Research (Grant No. MMJJ201401012), the Fok Ying Tung Education Foundation under Grant No. 131067, and the Shandong Provincial Natural Science Foundation of China under Grant No. ZR2013FM025.

Fuzzy vault scheme (FVS) is one of the most popular biometric cryptosystems for biometric template protection. However, the error correcting code (ECC) used in FVS is not appropriate for dealing with real-valued biometric intraclass variances. In this paper, we propose a multidimensional fuzzy vault scheme (MDFVS) in which a general subspace error-tolerant mechanism is designed and embedded into FVS to handle intraclass variances. Palmprint is one of the most important biometrics; to protect palmprint templates, a palmprint-based MDFVS implementation is also presented. Experimental results show that the proposed scheme not only deals with intraclass variances effectively but also maintains accuracy while enhancing security.

Due to their tight memory constraints, small microcontroller based embedded systems have traditionally been implemented using low-level languages. This paper shows that the Scheme programming language can also be used for such applications, with less than 7 kB of total memory. We present PICOBIT, a very compact implementation of Scheme suitable for memory constrained embedded systems. To achieve a compact system we have tackled the space issue in three ways: the design of a Scheme compiler generating compact bytecode, a small virtual machine, and an optimizing C compiler suited to the compilation of the virtual machine.

We present a universal characterization scheme for chimera states applicable to both numerical and experimental data sets. The scheme is based on two correlation measures that enable a meaningful definition of chimera states as well as their classification into three categories: stationary, turbulent, and breathing. In addition, these categories can be further subdivided according to the time-stationarity of these two measures. We demonstrate that this approach is both consistent with previously recognized chimera states and enables us to classify states as chimeras which have not been categorized as such before. Furthermore, the scheme allows for a qualitative and quantitative comparison of experimental chimeras with chimeras obtained through numerical simulations.

In this paper, we find a man-in-the-middle attack on the quantum signature scheme with a weak arbitrator (Luo et al., Int. J. Theor. Phys., 51:2135, 2012). In that scheme, the authors proposed a quantum signature based on a quantum one-way function which contains both a phase for verifying the signer and a phase for verifying the signed message. However, our analysis shows that Eve can adopt different strategies in the respective phases to forge the signature without being detected. We then present an improved scheme to increase the security.

In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.

A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is also analyzed.
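
A back-of-the-envelope view of the throughput calculation: with selective-repeat ARQ, throughput efficiency is roughly the overall code rate times the probability that a block is accepted on a given transmission. The function below is a textbook approximation (independent block errors, error-free feedback channel), not the paper's exact analysis:

```python
def sr_arq_throughput(rate_inner, rate_outer, p_retx):
    """Approximate throughput efficiency of selective-repeat ARQ with a
    concatenated code: overall code rate times the acceptance probability.
    p_retx is the per-block probability that a retransmission is requested
    (inner decoding failure or outer code error detection)."""
    overall_rate = rate_inner * rate_outer
    return overall_rate * (1 - p_retx)

# e.g. a rate-1/2 inner code, a rate-0.9 outer code, 5% retransmissions
eta = sr_arq_throughput(0.5, 0.9, 0.05)
```

The paper's contribution lies in computing p_retx and the undetected-error probability precisely from the code structure; this sketch only shows how those quantities combine.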

GEMPAK, an interactive computer software system developed for assimilating, analyzing, and displaying various conventional and satellite meteorological data types, is discussed. The objective map analysis scheme possesses certain characteristics that allowed it to be adapted to meet the analysis needs of GEMPAK. Those characteristics and the specific adaptation of the scheme to GEMPAK are described. A step-by-step guide for using the GEMPAK Barnes scheme on an interactive computer (in real time) to analyze various types of meteorological datasets is also presented.
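
The core of a Barnes objective analysis is a Gaussian-weighted mean of the observations at each grid point. The sketch below shows a single first pass (GEMPAK's implementation adds successive correction passes with a reduced weight parameter, which this illustration omits):

```python
import math

def barnes_analysis(obs, grid_pts, kappa):
    """First-pass Barnes objective analysis.
    obs: list of (x, y, value) observations; grid_pts: list of (x, y).
    Each grid value is a weighted mean with w = exp(-r**2 / kappa)."""
    out = []
    for gx, gy in grid_pts:
        wsum = vsum = 0.0
        for ox, oy, val in obs:
            w = math.exp(-((gx - ox)**2 + (gy - oy)**2) / kappa)
            wsum += w
            vsum += w * val
        out.append(vsum / wsum)
    return out
```

The weight parameter kappa controls the smoothing length scale: larger kappa spreads each observation's influence farther, yielding a smoother analysis.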

In this paper we construct higher order Godunov schemes for isothermal flow. Isothermal hydrodynamics serves as a good representation for several systems of astrophysical interest. The schemes designed here have second-order accuracy in space and time, and some are third-order accurate for advection. Moreover, several ingredients of these schemes are essential components of even higher-order schemes. The methods designed here have excellent ability to represent smooth flow yet capture shocks with high resolution. Several test problems are presented, and the algorithms are compared with other algorithms having a comparable formal order of accuracy.

A new Diagonally Inverted LU Implicit scheme is developed within the framework of the multigrid method for the 3-D unsteady Euler equations. The matrix systems that are to be inverted in the LU scheme are treated by local diagonalizing transformations that decouple them into systems of scalar equations. Unlike the Diagonalized ADI method, the time accuracy of the LU scheme is not reduced since the diagonalization procedure does not destroy time conservation. Even more importantly, this diagonalization significantly reduces the computational effort required to solve the LU approximation and therefore transforms it into a more efficient method of numerically solving the 3-D Euler equations.

We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

Network lifetime, and hence device lifetime, is one of the fundamental metrics in wireless body area networks (WBANs). To prolong it, especially for implanted sensors, each node must conserve as much energy as possible. While a variety of wake-up/sleep mechanisms have been proposed, the wake-up radio can also serve as a vehicle for introducing vulnerabilities and attacks into a WBAN, eventually resulting in its malfunction. In this paper, we propose a novel secure wake-up scheme in which a wake-up authentication code (WAC) is employed to ensure that a BAN Node (BN) is woken up by the correct BAN Network Controller (BNC) rather than by unintended users or malicious attackers. The scheme is implemented with a two-radio architecture. We show that our scheme provides higher security while consuming less energy than existing schemes.
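
The abstract does not specify how the WAC is constructed; one plausible, purely hypothetical realization is a truncated MAC over a fresh nonce under a key shared between the BNC and the BN:

```python
import hmac, hashlib, os

def make_wac(key: bytes, nonce: bytes) -> bytes:
    """Wake-up authentication code: a truncated HMAC over a fresh nonce.
    (Hypothetical construction; the paper's exact WAC is not given
    in the abstract.)"""
    return hmac.new(key, b"wakeup" + nonce, hashlib.sha256).digest()[:8]

def verify_wac(key: bytes, nonce: bytes, wac: bytes) -> bool:
    """The BAN node recomputes the code and compares in constant time."""
    return hmac.compare_digest(make_wac(key, nonce), wac)

shared_key = os.urandom(16)   # provisioned between BNC and BN
nonce = os.urandom(8)         # fresh per wake-up, to prevent replays
tag = make_wac(shared_key, nonce)
```

A short truncated tag keeps the wake-up frame small enough for a low-power wake-up radio, while the fresh nonce stops an attacker from replaying an old wake-up frame to drain the node's battery.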

In this paper, a fast lossless compression scheme is presented for medical images. The scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, increasing the compressibility of the image. In the second stage, an effective scheme based on Huffman coding is developed to encode the residual image. The newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of lossless JPEG can be obtained, and the method is faster than lossless JPEG2000. In other words, the proposed algorithm provides a good means for lossless medical image compression. PMID:17280962
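
To make the two-stage pipeline concrete, here is a minimal Python sketch (an illustration of the general DPCM-plus-Huffman idea, not the authors' implementation; the table-cost optimization described above is omitted):

```python
import heapq
from collections import Counter

def dpcm_encode(row):
    """First stage: replace each sample by its difference from the
    previous one, concentrating the residual histogram near zero."""
    prev = 0
    out = []
    for x in row:
        out.append(x - prev)
        prev = x
    return out

def huffman_code_lengths(symbols):
    """Second stage: build Huffman code lengths for the residual
    alphabet (code lengths suffice to estimate the compressed size)."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # (count, tiebreak, {symbol: depth}); tiebreak keeps dicts uncompared
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (n1 + n2, i, merged))
        i += 1
    return heap[0][2]

row = [100, 102, 103, 103, 104, 110, 111, 111]   # one image row, 8-bit samples
residuals = dpcm_encode(row)                     # [100, 2, 1, 0, 1, 6, 1, 0]
lengths = huffman_code_lengths(residuals)
bits = sum(lengths[r] for r in residuals)        # compressed size in bits
print(residuals, bits)
```

The residual alphabet after DPCM is dominated by small values, which is what lets the Huffman stage assign them short codes.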

In this paper, we present two definitions of the orthogonality and orthogonal rate of an encryption operator, and we provide a verification process for the former. Then, four improved ternary quantum encryption schemes are constructed. Compared with Scheme 1 (see Section 2.3), these four schemes demonstrate significant improvements in terms of calculation and execution efficiency. In particular, with the orthogonal rate ε as the security parameter, Scheme 3 (see Section 4.1) shows the highest level of security among them. Through custom interpolation functions, the ternary secret-key source, composed of the digits 0, 1, and 2, is constructed. Finally, we discuss the security of both the ternary encryption operator and the secret-key source; both show a high level of security and high execution efficiency.

A PON (Passive Optical Network) achieves FTTH (Fiber To The Home) economically by sharing an optical fiber among multiple subscribers. Recently, global climate change has been recognized as a serious near-term problem, making power-saving techniques for electronic devices important. In PON systems, an ONU (Optical Network Unit) power-saving scheme has been studied and defined for XG-PON. In this paper, we propose an ONU power-saving scheme for EPON. We then present an analysis of the power reduction and the data transmission delay caused by the ONU power-saving scheme. Based on this analysis, we propose an efficient provisioning method for the ONU power-saving scheme that is applicable to both XG-PON and EPON.
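
The delay/power trade-off that such provisioning must balance can be illustrated with a toy sleep-cycle model (our own simplification with made-up power figures, not the paper's analysis): sleeping a larger fraction of the time saves energy, but a downstream frame arriving during sleep waits, on average, half a sleep cycle.

```python
def onu_tradeoff(t_sleep_ms, p_active_w=4.0, p_sleep_w=1.0, duty=0.95):
    """Toy model of a cyclic-sleep ONU: it sleeps a fraction `duty`
    of the time in cycles of t_sleep_ms milliseconds; a downstream
    frame arriving during sleep waits, on average, half a cycle.
    All power figures are hypothetical."""
    avg_power = duty * p_sleep_w + (1 - duty) * p_active_w
    saving = 1 - avg_power / p_active_w          # fractional energy saving
    avg_extra_delay_ms = duty * t_sleep_ms / 2   # mean added frame delay
    return saving, avg_extra_delay_ms

for t in (10, 50):
    # the saving is set by the duty cycle; the delay grows with t
    print(t, onu_tradeoff(t))
```

Provisioning then amounts to picking the longest sleep cycle whose mean added delay still meets the service's delay budget.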

A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables 'energy stable' modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.

Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at corners of 10-dimensional cube. Each node contains two Motorola 68020 processors along with Motorola 68881 floating-point processor and up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled (solicited) and unsolicited messages.
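
The corners-of-a-cube wiring means a node's neighbors are exactly the addresses that differ from it in one bit, which also yields a simple deterministic route (dimension-order, or "e-cube", routing). A small Python sketch of this addressing (illustrative only, not the Mark III software):

```python
DIM = 10  # 2**10 = 1,024 processing elements

def neighbors(node):
    """The 10 channels of a node: flip each address bit in turn."""
    return [node ^ (1 << d) for d in range(DIM)]

def route_hop(src, dst):
    """E-cube routing: forward on the lowest-numbered differing bit."""
    diff = src ^ dst
    if diff == 0:
        return src
    d = (diff & -diff).bit_length() - 1   # index of lowest set bit
    return src ^ (1 << d)

# A message from node 0 to node 5 (binary 0000000101) takes two hops.
path = [0]
while path[-1] != 5:
    path.append(route_hop(path[-1], 5))
print(path)  # [0, 1, 5]
```

The hop count between any two nodes is the Hamming distance of their addresses, at most 10 for this network.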

The oscillations of a centered second-order finite difference scheme and the excessive diffusion of a first-order centered scheme can both be overcome by global composition of the two, that is, by performing cycles consisting of several time steps of the second-order method followed by one step of the diffusive method. The authors show the effectiveness of this approach on test problems in two and three dimensions.
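
A minimal sketch of such a composite cycle for linear advection (our illustration of the composition idea, using Lax-Wendroff as the second-order method and Lax-Friedrichs as the diffusive one; the authors' particular schemes may differ):

```python
import numpy as np

def lax_wendroff(u, c):
    # second-order: accurate but oscillatory near discontinuities
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2 * u + um)

def lax_friedrichs(u, c):
    # first-order: diffusive but monotone
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * c * (up - um)

def composite_cycle(u, c, k=4):
    """One global-composition cycle: k-1 second-order steps followed
    by one diffusive step that damps the oscillations they produce."""
    for _ in range(k - 1):
        u = lax_wendroff(u, c)
    return lax_friedrichs(u, c)

n, c = 200, 0.8                                   # CFL number c = a*dt/dx
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square pulse, periodic
u = u0.copy()
for _ in range(25):                               # 100 time steps in total
    u = composite_cycle(u, c)
```

Both building blocks are conservative, so the composite conserves the discrete integral of u exactly while keeping over- and undershoots far smaller than the second-order method alone would produce.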

In studies of the restarted Davidson method, a dynamic thick-restart scheme was found to be excellent in improving the overall effectiveness of the eigenvalue method. This paper extends the study of the dynamic thick-restart scheme to the Lanczos method for symmetric eigenvalue problems and systematically explores a range of heuristics and strategies. We conduct a series of numerical tests to determine their relative strengths and weaknesses on a class of electronic structure calculation problems.

An identity-based signature scheme is proposed using bilinear pairings. The scheme uses the user's identity information, such as an email address, IP address, or telephone number, as the public key, which eliminates the cost of building and managing a public key infrastructure. By using the CL-PKC framework to generate the user's private key, it also avoids the problem of the private key generating center forging signatures.

In this work, we generalize the quantum-secret-sharing scheme of Hillery, Buzek, and Berthiaume [Phys. Rev. A 59, 1829 (1999)] to arbitrarily many parties. Explicit expressions for the shared secret bit are given. It is shown that in the Hillery-Buzek-Berthiaume quantum-secret-sharing scheme the secret information is shared in the parity of binary strings formed by the measured outcomes of the participants. In addition, we have increased the efficiency of the quantum-secret-sharing scheme by generalizing two techniques from quantum key distribution. The favored-measuring-basis quantum-secret-sharing scheme is developed from the Lo-Chau-Ardehali technique [H. K. Lo, H. F. Chau, and M. Ardehali, e-print quant-ph/0011056], where all the participants choose their measuring basis asymmetrically, and the measuring-basis-encrypted quantum-secret-sharing scheme is developed from the Hwang-Koh-Han technique [W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A 244, 489 (1998)], where all participants choose their measuring basis according to a control key. Both schemes are asymptotically 100% efficient; hence nearly all the Greenberger-Horne-Zeilinger states in a quantum-secret-sharing process are used to generate shared secret information.
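
The parity structure described above, in which the secret lives in the XOR of the participants' bits, can be mimicked classically; the following Python sketch shows only that post-processing structure, not the quantum part of the protocol:

```python
import secrets

def share_secret(bit, n):
    """Split one secret bit among n participants so that the XOR
    (parity) of all shares equals the secret; any proper subset of
    shares is uniformly random and reveals nothing on its own."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    last = bit
    for s in shares:
        last ^= s          # force the overall parity to equal `bit`
    return shares + [last]

def reconstruct(shares):
    """Only all participants together recover the secret."""
    p = 0
    for s in shares:
        p ^= s
    return p

shares = share_secret(1, 5)
print(reconstruct(shares))  # 1
```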

We have investigated adaptive mechanisms for high-volume transform-domain data hiding in MPEG-2 video that can be tuned to sustain varying levels of compression attack. The data is hidden in the uncompressed domain by scalar quantization index modulation (QIM) on a selected set of low-frequency discrete cosine transform (DCT) coefficients. We propose an adaptive hiding scheme in which the embedding rate is varied according to the type of frame and the reference quantization parameter (decided according to the MPEG-2 rate control scheme) for that frame. For a 1.5 Mbps video at 25 frames/sec, we are able to embed almost 7500 bits/sec. The adaptive scheme also hides 20% more data and incurs significantly fewer frame errors (frames for which the embedded data is not fully recovered) than the non-adaptive scheme. Our embedding scheme incurs insertions and deletions at the decoder, which may cause de-synchronization and decoding failure. This problem is solved by the use of powerful turbo-like codes and erasures at the encoder. The channel capacity estimate gives an idea of the minimum code redundancy factor required for reliable decoding of the hidden data. To that end, we have modeled the MPEG-2 video channel using the transition probability matrices given by the data hiding procedure, from which we compute the (hiding-scheme-dependent) channel capacity.
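
A minimal sketch of the scalar QIM embedding step (illustrative only; the step size delta and the coefficient value are made up, and the actual scheme applies this to selected low-frequency DCT coefficients):

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Scalar QIM: quantize a coefficient onto the lattice delta*Z
    (bit 0) or the shifted lattice delta*Z + delta/2 (bit 1)."""
    offset = bit * delta / 2
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    """Decode by choosing the lattice with the nearer point; correct
    as long as the attack perturbs the coefficient by < delta/4."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

c = qim_embed(13.7, 1)       # 12.0: nearest point of the bit-1 lattice
print(qim_extract(c + 1.5))  # 1 (survives perturbation below delta/4)
```

Raising delta buys robustness to stronger requantization at the cost of larger embedding distortion, which is the knob the adaptive scheme effectively tunes per frame.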

The unified gas kinetic scheme (UGKS) is an asymptotic preserving (AP) scheme for kinetic equations. It is superior for transition flow simulation and has been validated over the past years. However, compared to the well-known discrete ordinate method (DOM), a classical numerical method for solving kinetic equations, the UGKS needs more computational resources. In this study, we propose a simplification of the unified gas kinetic scheme that has almost the same computational cost as the DOM but predicts numerical results as accurate as those of the UGKS. In the simplified scheme, the numerical flux for the velocity distribution function and the numerical flux for the macroscopic conservative quantities are evaluated separately. The equilibrium part of the UGKS flux is calculated by an analytical solution instead of numerical quadrature in velocity space. The simplification is equivalent to a flux hybridization of the gas kinetic scheme for the Navier-Stokes (NS) equations and the conventional discrete ordinate method. Several simplification strategies are tested, through which we identify the key ingredient of the Navier-Stokes asymptotic preserving property. Numerical tests show that, as long as the collision effect is built into the macroscopic numerical flux, the numerical scheme is Navier-Stokes asymptotic preserving, regardless of the accuracy of the microscopic numerical flux for the velocity distribution function. PMID:27627418

The lateral and vertical Gaussian plume dispersion parameters are estimated and compared with field tracer data collected at 11 sites. The dispersion parameter schemes used in this analysis include Cramer's scheme, suggested for tall stack dispersion estimates, Draxler's scheme, ...

To protect patients' privacy, such as telephone numbers, medical record numbers, and health information, authentication schemes for telecare medicine information systems (TMIS) have been studied widely. Recently, Wei et al. proposed an efficient authentication scheme for TMIS and claimed it could resist various attacks. However, in this paper, we show that their scheme is vulnerable to an off-line password guessing attack when the user's smart card is lost. To improve the security, we propose a new authentication scheme for TMIS. Our analysis shows that the new scheme overcomes the weaknesses in Wei et al.'s scheme and has better performance. PMID:22527784

An investigation into combining image-processing schemes, specifically an image-enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.

This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. It is a two-dimensional matrix, with three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses, and those arrangements shift gradually as one moves through the table, with each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation at hand. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method'. The terms 'risk assessment' and 'risk management' are close enough in meaning that we do not attempt to distinguish them here. The remainder of this report is organized as follows. In Section 2 we provide context for this report

Basic parameters governing the design of tidal power schemes are identified and converted to dimensionless form by reference to (i) the mean tidal range and (ii) the surface area of the enclosed basin. Optimum values for these dimensionless parameters are derived and comparison made with actual engineering designs. A theoretical framework is thus established which can be used (i) to make a rudimentary design at any specific location or (ii) to compare and evaluate designs for various locations. Both one-way (flood or ebb) and two-way (flood and ebb) schemes are examined and, theoretically, the two-way scheme is shown to be more efficient. However, in practice, two-way schemes suffer disadvantages arising from (i) two-way flow through both turbines and sluices and (ii) lower average turbine heads. An important dimensional aspect of tidal power schemes is that, while energy extracted is proportional to the tidal amplitude squared, the requisite sluicing area is proportional to the square root of the tidal amplitude. In consequence, sites with large tidal amplitudes are best suited to tidal power development whereas for sites with low tidal amplitudes sluicing costs may be prohibitive.
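The dimensional point in the closing sentences can be made concrete with a small calculation (our own illustration of the stated scalings, with arbitrary example amplitudes):

```python
def compare_sites(a_big, a_small):
    """Per the scalings in the text: energy extracted per tide grows
    like amplitude**2, while the requisite sluicing area grows only
    like sqrt(amplitude)."""
    energy_ratio = (a_big / a_small) ** 2
    sluice_ratio = (a_big / a_small) ** 0.5
    # energy obtained per unit of sluicing area that must be built:
    return energy_ratio, sluice_ratio, energy_ratio / sluice_ratio

e, s, per = compare_sites(8.0, 2.0)   # e.g. 8 m vs 2 m tidal amplitude
print(e, s, per)  # 16.0 2.0 8.0
```

A site with four times the amplitude yields sixteen times the energy but needs only twice the sluicing area, i.e. eight times the energy per unit of sluicing cost, which is why low-amplitude sites can be prohibitively expensive to develop.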

We propose a generic conversion from a key encapsulation mechanism (KEM) to an identification (ID) scheme. The conversion derives the security of ID schemes against concurrent man-in-the-middle (cMiM) attacks from the security of KEMs against adaptive chosen ciphertext attacks on one-wayness (one-way-CCA2). Then, regarding the derivation as a design principle for ID schemes, we develop a series of concrete one-way-CCA2 secure KEMs. We start with the ElGamal KEM and prove it secure against non-adaptive chosen ciphertext attacks on one-wayness (one-way-CCA1) in the standard model. Then, we apply a tag framework with the algebraic trick of Boneh and Boyen to make it one-way-CCA2 secure under the Gap-CDH assumption. Next, we apply the CHK transformation or a target collision resistant hash function to exit the tag framework. Finally, since it is preferable to rely on the CDH assumption rather than the stronger Gap-CDH assumption, we apply the Twin DH technique of Cash, Kiltz and Shoup. The application is not "black box", and we carry it out by making the Twin DH technique compatible with the algebraic trick. The ID schemes obtained from our KEMs show the highest performance in both computational cost and message length compared with previously known ID schemes secure against concurrent man-in-the-middle attacks.

This paper describes a new control-volume-based finite difference scheme for petroleum reservoir simulation that can be used with unstructured grids. The numerical scheme for modeling fluid flow is shown to be easily applied to Voronoi grids in 2D, and it can also be used, with certain geometrical limitations, for 3D Voronoi grids. The scheme can be used without significant limitations for triangle- or tetrahedron-based grids in which control volumes are constructed around the vertices, with properties assumed uniform inside each control volume. A full, anisotropic, and asymmetric permeability tensor can easily be handled with the proposed method, and the tensor can vary from block to block. The method will therefore be of great value in modeling fluid flow in reservoirs where the principal directions of permeability vary between beds or within a bed. The paper also presents an analysis of some published flexible gridding schemes that use a control-volume-type algebraic approximation and demonstrates the advantages of the method presented here. The technique for grid construction is also discussed. Test results presented here demonstrate the need for a proper representation of reservoir geometry to predict the correct flow behavior; the gridding scheme described in this paper achieves that purpose.

Background: Partitioning involves estimating independent models of molecular evolution for different subsets of sites in a sequence alignment, and has been shown to improve phylogenetic inference. Current methods for estimating best-fit partitioning schemes, however, are only computationally feasible with datasets of fewer than 100 loci. This is a problem because datasets with thousands of loci are increasingly common in phylogenetics. Methods: We develop two novel methods for estimating best-fit partitioning schemes on large phylogenomic datasets: strict and relaxed hierarchical clustering. These methods use information from the underlying data to cluster together similar subsets of sites in an alignment, and build on clustering approaches that have been proposed elsewhere. Results: We compare the performance of our methods to each other, and to existing methods for selecting partitioning schemes. We demonstrate that while strict hierarchical clustering has the best computational efficiency on very large datasets, relaxed hierarchical clustering provides scalable efficiency and returns dramatically better partitioning schemes as assessed by common criteria such as AICc and BIC scores. Conclusions: These two methods provide the best current approaches to inferring partitioning schemes for very large datasets. We provide free open-source implementations of the methods in the PartitionFinder software. We hope that the use of these methods will help to improve the inferences made from large phylogenomic datasets. PMID:24742000

The development of numerical methods for hyperbolic conservation laws has been a rapidly growing area for the last ten years. Many of the fundamental concepts and state-of-the-art developments can only be found in meeting proceedings or internal reports. This review paper attempts to give an overview and a unified formulation of a class of shock-capturing methods. Special emphasis is on the construction of the basic nonlinear scalar second-order schemes and the methods of extending these nonlinear scalar schemes to nonlinear systems via the exact Riemann solver, approximate Riemann solvers, and flux-vector splitting approaches. Generalization of these methods to efficiently include real gases and large systems of nonequilibrium flows is discussed. The performance of some of these schemes is illustrated by numerical examples for one-, two- and three-dimensional gas dynamics problems.

A compositionally based classification scheme for chondrules is proposed that will help in systematizing the wealth of data available and disentangling the effects of nebular and subsequent processes. The classification is not by texture or the composition of a single phase, or a mixture of these two, but rather is a comprehensive, systematic approach which uses the composition of the two main chondrule components. This scheme is applicable to over 95 percent of the chondrules and is easily applied using an electron microprobe. It stresses the original diversity of the chondrules and the complex yet facile way in which they respond to parent-body metamorphism. Results using this classification scheme suggest that arguments against an important role of chondrules in determining the compositional trends of the chondrites have been premature.

Traditional schemes for multistep resonance photoionization of atoms let all of the laser beams interact with the atoms simultaneously. In this situation, analyses via the time-dependent Schrödinger equation show that high ionization probability requires all of the laser beams to be sufficiently intense. To lower the required laser intensity, we propose a scheme in which the laser beam used to pump the excited atoms (in a higher bound state) into an autoionization state does not interact with the atoms until all of the population has been transferred by the other lasers from the ground state to the bound state. As an example, we examined three-step photoionization of 235U with our scheme, showing that the intensity of two of the laser beams can be lowered by two orders of magnitude without losing high ionization probability.

An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The use of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

A closed-loop cooling scheme for cooling stationary combustion turbine components, such as vanes, ring segments and transitions, is provided. The cooling scheme comprises: (1) an annular coolant inlet chamber, situated between the cylinder and blade ring of a turbine, for housing coolant before being distributed to the turbine components; (2) an annular coolant exhaust chamber, situated between the cylinder and the blade ring and proximate the annular coolant inlet chamber, for collecting coolant exhaust from the turbine components; (3) a coolant inlet conduit for supplying the coolant to said coolant inlet chamber; (4) a coolant exhaust conduit for directing coolant from said coolant exhaust chamber; and (5) a piping arrangement for distributing the coolant to and directing coolant exhaust from the turbine components. In preferred embodiments of the invention, the cooling scheme further comprises static seals for sealing the blade ring to the cylinder and flexible joints for attaching the blade ring to the turbine components.

The Nonlinear Characteristic (NC) scheme for solving the discrete-ordinates form of the transport equation has recently been introduced and used to analyze one-dimensional slab transport problems. The purpose of this paper is to determine the accuracy and positivity of the NC scheme as extended to solve two-dimensional X-Y problems. We compare the results obtained using the NC scheme to those obtained using the Bilinear Discontinuous (BLD) scheme, the Bilinear Nodal (BLN) scheme, the Linear Characteristic scheme, and the Diamond Difference with Fixup (DD/F) scheme. As was found in one-dimensional applications, the NC scheme is strictly positive and as accurate as or more accurate than the other schemes for all meshes examined. The accuracy of the NC scheme on coarse meshes is particularly outstanding compared to that of the other schemes.

There has been wide-ranging discussion of content copyright protection in digital content distribution systems. Fiat and Tassa proposed the framework of dynamic traitor tracing. Their framework requires dynamic computation transactions according to the real-time responses of the pirate, and it presumes real-time observation of content redistribution. Therefore, it cannot simply be applied where such an assumption does not hold. In this paper, we propose a new scheme that provides the advantages of dynamic traitor tracing schemes while overcoming their problems.

A complete scheme for the production, cooling, acceleration, and collider ring of a 1.5 TeV center-of-mass muon collider is presented, together with parameters for two higher-energy machines. The scheme starts with the front end of a proposed neutrino factory that yields bunch trains of both muon signs. Six-dimensional cooling in long-period helical lattices reduces the longitudinal emittance until it becomes possible to merge the trains into single bunches, one of each sign. Further cooling in all dimensions is applied to the single bunches in additional helical lattices. Final transverse cooling to the required parameters is achieved in 50 T solenoids.

Many arbitrated quantum signature schemes implemented with the help of a trusted third party have been developed up to now. In order to guarantee unconditional security, most of them take advantage of the optimal quantum one-time encryption based on Pauli operators. However, in this paper we point out that the previous schemes provide security only against a total break attack and show in fact that there exists an existential forgery attack that can validly modify the transmitted pair of message and signature. In addition, we also provide a simple method to recover security against the proposed attack.

Cloud microphysical processes play an important role in non-hydrostatic high-resolution simulations. Over the past decade, both research and operational numerical weather prediction models have begun using more complex cloud microphysical schemes originally developed for high-resolution cloud-resolving models. An improved bulk microphysical parameterization (adapted from the Goddard microphysics schemes) has recently been implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel), and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of the microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). In addition, this bulk microphysical parameterization is compared with WRF's three other bulk microphysical schemes.

This study makes a comprehensive review of the situation of student loans schemes in Mauritius, and makes recommendations, based on best practices, for setting up a national scheme that attempts to avoid weaknesses identified in some of the loans schemes of other countries. It suggests that such a scheme would be cost-effective and beneficial both…

In most fluid phenomena, advection plays an important role. A numerical scheme capable of making quantitative predictions and simulations must correctly compute the advection terms appearing in the equations governing fluid flow. Here we present a high-order forward semi-Lagrangian numerical scheme specifically tailored to compute material derivatives. The scheme relies on the geometrical interpretation of material derivatives to compute the time evolution of fields on grids that deform with the material fluid domain, an interpolating procedure of arbitrary order that preserves the moments of the interpolated distributions, and a nonlinear mapping strategy to perform interpolations between undeformed and deformed grids. Additionally, a discontinuity criterion was implemented to deal with discontinuous fields and shocks. Tests of pure advection, shock formation, and nonlinear phenomena are presented to show the performance and convergence of the scheme. The high computational cost is considerably reduced when the scheme is implemented on the massively parallel architectures found in graphics cards. The authors acknowledge funding from Fondo Sectorial CONACYT-SENER Grant Number 42536 (DGAJ-SPI-34-170412-217).

With over 100 light water nuclear reactors operating nationwide, representing designs by four primary vendors, and with reload fuel manufactured by these vendors and additional suppliers, a wide variety of fuel assembly types are in existence. At Oak Ridge National Laboratory, both the Systems Integration Program and the Characteristics Data Base project required a classification scheme for these fuels. This scheme can be applied to other areas and is expected to be of value to many Office of Civilian Radioactive Waste Management programs. To develop the classification scheme, extensive information on the fuel assemblies that have been and are being manufactured by the various nuclear fuel vendors was compiled, reviewed, and evaluated. It was determined that it is possible to characterize assemblies in a systematic manner, using a combination of physical factors. A two-stage scheme was developed consisting of 79 assembly types, which are grouped into 22 assembly classes. The assembly classes are determined by the general design of the reactor cores in which the assemblies are, or were, used. The general BWR and PWR classes are divided differently but both are based on reactor core configuration. 2 refs., 15 tabs.

Evidence is emerging from across Europe that contemporary agri-environmental schemes are having only limited, if any, influence on farmers' long-term attitudes towards the environment. In this theoretical paper we argue that these approaches are not "culturally sustainable," i.e. the actions are not becoming embedded within farming cultures as…

Improved configuration-control scheme for robotic manipulator having redundant degrees of freedom suppresses large joint velocities near singularities, at expense of small trajectory errors. Provides means to enforce order of priority of tasks assigned to robot. Basic concept of configuration control of redundant robot described in "Increasing The Dexterity Of Redundant Robots" (NPO-17801).

Lundqvist, Almqvist and Ostman describe a teacher's manner of teaching and the possible consequences it may have for students' meaning making. In doing this the article examines a teacher's classroom practice by systematizing the teacher's transactions with the students in terms of certain conceptual schemes, namely the "epistemological moves",…

We develop an unstaggered central scheme for approximating the solution of general two-dimensional hyperbolic systems. In particular, we are interested in solving applied problems arising in hydrodynamics and astrophysics. In contrast with standard central schemes that evolve the numerical solution on two staggered grids at consecutive time steps, the method we propose evolves the numerical solution on a single grid and avoids resolving the Riemann problems arising at the cell interfaces, thanks to an implicitly used layer of ghost cells. The base scheme is used to solve shallow water equation problems and ideal magnetohydrodynamic problems. To satisfy the divergence-free constraint on the magnetic field in the numerical solution of ideal magnetohydrodynamic problems, we adapt Evans and Hawley's constrained transport method to our unstaggered base scheme and apply it to correct the magnetic field components at the end of each time step. The results obtained are in good agreement with corresponding ones in the recent literature, confirming the efficiency and potential of the proposed method.

Author contended that no broad-ranging study of the way sex-roles are presented in British reading schemes exists. In this article he described a preliminary study on sex-role content in readers in order to remedy this lack of information. (Author/RK)

Describes a pilot mentoring program at the Health Libraries and Information Network (HeLIN) at the University of Oxford that was designed to increase understanding of mentoring for continuing professional and personal development; to investigate existing mentoring schemes; to incorporate a program for accreditation of mentors; and to evaluate the…

In this paper the authors give a simple theoretical description of the basic physics of the single pass high gain free electron laser (FEL), describing in some detail the FEL bunching properties and the harmonic generation technique with a multiple-wiggler scheme or a high gain optical klystron configuration.

The work presented in this paper shows that the mixed-type scheme of Murman and Cole, originally developed for a scalar equation, can be extended to systems of conservation laws. A characteristic scheme for the equations of gas dynamics is introduced that has a close connection to a four-operator scheme for the Burgers-Hopf equation. The results indicate that the scheme performs well on the classical test cases. The scheme has no tuning parameters and can be interpreted as the projection of an L-stable scheme. At steady state, second order accuracy is obtained as a by-product of the box-scheme feature.

Quantification of the similarity between nodes in multiple electronic classification schemes is provided by automatically identifying relationships and similarities between nodes within and across the electronic classification schemes. Quantifying the similarity between a first node in a first electronic classification scheme and a second node in a second electronic classification scheme involves finding a third node in the first electronic classification scheme, wherein a first product value of an inter-scheme similarity value between the second and third nodes and an intra-scheme similarity value between the first and third nodes is a maximum. A fourth node in the second electronic classification scheme can be found, wherein a second product value of an inter-scheme similarity value between the first and fourth nodes and an intra-scheme similarity value between the second and fourth nodes is a maximum. The maximum between the first and second product values represents a measure of similarity between the first and second nodes.
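
The two maximized products described above can be sketched in a few lines of Python. All similarity tables and node names below are invented illustrative data, not from the patent.

```python
# Hypothetical sketch of the similarity measure described above: the
# similarity between node n1 (scheme A) and n2 (scheme B) is the larger
# of two maximized products of inter-scheme and intra-scheme similarity
# values.  Dictionaries stand in for whatever similarity tables a real
# implementation would compute.

def node_similarity(n1, n2, scheme_a, scheme_b, inter, intra_a, intra_b):
    # first product: max over a third node n3 in scheme A of
    # inter(n3, n2) * intra_a(n1, n3)
    p1 = max(inter[(n3, n2)] * intra_a[(n1, n3)] for n3 in scheme_a)
    # second product: max over a fourth node n4 in scheme B of
    # inter(n1, n4) * intra_b(n2, n4)
    p2 = max(inter[(n1, n4)] * intra_b[(n2, n4)] for n4 in scheme_b)
    return max(p1, p2)

scheme_a, scheme_b = ["a1", "a2"], ["b1", "b2"]
inter = {("a1", "b1"): 0.9, ("a1", "b2"): 0.2,
         ("a2", "b1"): 0.4, ("a2", "b2"): 0.7}
intra_a = {(m, n): 1.0 if m == n else 0.5 for m in scheme_a for n in scheme_a}
intra_b = {(m, n): 1.0 if m == n else 0.5 for m in scheme_b for n in scheme_b}

sim = node_similarity("a1", "b1", scheme_a, scheme_b, inter, intra_a, intra_b)
```

With these toy values the third-node maximum is attained at n3 = "a1" itself, giving a similarity of 0.9.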

The telecare medicine information system enables or supports health-care delivery services. A secure authentication scheme will thus be needed to safeguard data integrity, confidentiality, and availability. In this paper, we propose a generic construction of a smart-card-based password authentication protocol and prove its security. The proposed framework is superior to previous schemes in the following three aspects: (1) our scheme is a true two-factor authentication scheme; (2) our scheme can yield a forward-secure two-factor authentication scheme with user anonymity when appropriately instantiated; (3) our scheme utilizes each user's unique identity to accomplish user authentication and does not need to store or verify others' certificates. And yet, our scheme is still reasonably efficient and can yield a concrete scheme that is even more efficient than previous schemes. The end result is therefore more practical for the telecare medicine system. PMID:21594637

This study formulates general guidelines to extend an explicit code with a great variety of implicit and semi-implicit time integration schemes. The discussion is based on their specific implementation in the Versatile Advection Code, which is a general purpose software package for solving systems of non-linear hyperbolic (and/or parabolic) partial differential equations, using standard high resolution shock capturing schemes. For all combinations of explicit high resolution schemes with implicit and semi-implicit treatments, it is shown how second-order spatial and temporal accuracy for the smooth part of the solutions can be maintained. Strategies to obtain steady state and time accurate solutions implicitly are discussed. The implicit and semi-implicit schemes require the solution of large linear systems containing the Jacobian matrix. The Jacobian matrix itself is calculated numerically to ensure the generality of this implementation. Three options are discussed in terms of applicability, storage requirements and computational efficiency. One option is the easily implemented matrix-free approach, but the Jacobian matrix can also be calculated by using a general grid masking algorithm, or by an efficient implementation for a specific Lax-Friedrichs-type total variation diminishing (TVD) spatial discretization. The choice of the linear solver depends on the dimensionality of the problem. In one dimension, a direct block tridiagonal solver can be applied, while in more than one spatial dimension, a conjugate gradient (CG)-type iterative solver is used. For advection-dominated problems, preconditioning is needed to accelerate the convergence of the iterative schemes. The modified block incomplete LU-preconditioner is implemented, which performs very well. Examples from two-dimensional hydrodynamic and magnetohydrodynamic computations are given. They model transonic stellar outflow and recover the complex magnetohydrodynamic bow shock flow in the switch-on regime.

The increasing use of renewable energy technologies for electricity generation, many of which have an unpredictably intermittent nature, will inevitably lead to a greater demand for large-scale electricity storage schemes. For example, the expanding fraction of electricity produced by wind turbines will require either backup or storage capacity to cover extended periods of wind lull. This paper describes a recently proposed storage scheme, referred to here as Pumped Thermal Storage (PTS), and which is based on “sensible heat” storage in large thermal reservoirs. During the charging phase, the system effectively operates as a high temperature-ratio heat pump, extracting heat from a cold reservoir and delivering heat to a hot one. In the discharge phase the processes are reversed and it operates as a heat engine. The round-trip efficiency is limited only by process irreversibilities (as opposed to Second Law limitations on the coefficient of performance and the thermal efficiency of the heat pump and heat engine respectively). PTS is currently being developed in both France and England. In both cases, the schemes operate on the Joule-Brayton (gas turbine) cycle, using argon as the working fluid. However, the French scheme proposes the use of turbomachinery for compression and expansion, whereas for that being developed in England reciprocating devices are proposed. The current paper focuses on the impact of the various process irreversibilities on the thermodynamic round-trip efficiency of the scheme. Consideration is given to compression and expansion losses and pressure losses (in pipe-work, valves and thermal reservoirs); heat transfer related irreversibility in the thermal reservoirs is discussed but not included in the analysis. Results are presented demonstrating how the various loss parameters and operating conditions influence the overall performance.

Two numerical schemes, which simulate the propagation of dispersive non-linear waves, are described. The first is a split-step Fourier scheme for the Korteweg-de Vries (KdV) equation. The second is a finite-difference scheme for the modified KdV equation. The stability and accuracy of both schemes are discussed. These simple schemes can be used to study a wide variety of physical processes that involve dispersive nonlinear waves.
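
The first of these schemes can be illustrated with a minimal periodic split-step Fourier solver for the KdV equation u_t + 6uu_x + u_xxx = 0. This is a generic textbook sketch (with an RK2 pseudospectral substep for the nonlinear term); the authors' implementation and parameters may differ.

```python
import numpy as np

# Split-step Fourier sketch for the KdV equation u_t + 6 u u_x + u_xxx = 0
# on a periodic domain: the dispersive part is advanced exactly in Fourier
# space, the nonlinear part with an RK2 pseudospectral step.

N, L = 256, 50.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

c = 1.0                                            # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2   # exact soliton at t = 0

dt = 1e-3
lin = np.exp(1j * k ** 3 * dt)                     # exact dispersive propagator

def nonlinear(v):
    # u_t = -6 u u_x with a pseudospectral derivative
    return -6.0 * v * np.real(np.fft.ifft(1j * k * np.fft.fft(v)))

for _ in range(1000):                              # integrate to t = 1
    w = u + 0.5 * dt * nonlinear(u)                # RK2 midpoint substep
    u = u + dt * nonlinear(w)
    u = np.real(np.fft.ifft(lin * np.fft.fft(u)))  # exact linear substep
```

A good sanity check is the exact soliton: after t = 1 the peak should sit near x = c·t = 1 with amplitude close to c/2.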

A class of new explicit second order accurate finite difference schemes for the computation of weak solutions of hyperbolic conservation laws is presented. These highly nonlinear schemes are obtained by applying a nonoscillatory first order accurate scheme to an appropriately modified flux function. The second order accurate schemes so derived achieve high resolution while preserving the robustness of the original nonoscillatory first order accurate scheme.

Finite difference schemes for the evaluation of first and second derivatives are presented. These second order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times fourfold, or more, longer than similar previously studied schemes. A similar approach was used to obtain improved integration schemes.

A new multisignature scheme using a re-encryption technique based on the RSA algorithm is suggested as an enhanced version of the Okamoto scheme. The suggested scheme results in bit expansion of the block length of the multisignature, but the size of the expansion is not larger than the number of signers, irrespective of their moduli. In addition, the new scheme has no limitations on signing order and is in fact more efficient than the Okamoto scheme.

An arbitrated quantum signature scheme without using entangled states is proposed. In the scheme, by employing a classical hash function and random numbers, the secret keys of signer and receiver can be reused. It is shown that the proposed scheme is secure against several well-known attacks. Specifically, it can stand against the receiver's disavowal attack. Moreover, compared with previous relevant arbitrated quantum signature schemes, the scheme proposed has the advantage of less transmission complexity.

A constructive scheme has been devised to enable mapping of any quantum computation into a spintronic circuit in which the computation is encoded in a basis that is, in principle, immune to quantum decoherence. The scheme is implemented by an algorithm that utilizes multiple physical spins to encode each logical bit in such a way that collective errors affecting all the physical spins do not disturb the logical bit. The scheme is expected to be of use to experimenters working on spintronic implementations of quantum logic. Spintronic computing devices use quantum-mechanical spins (typically, electron spins) to encode logical bits. Bits thus encoded (denoted qubits) are potentially susceptible to errors caused by noise and decoherence. The traditional model of quantum computation is based partly on the assumption that each qubit is implemented by use of a single two-state quantum system, such as an electron or other spin-1/2 particle. It can be surprisingly difficult to achieve certain gate operations, most notably those of arbitrary 1-qubit gates, in spintronic hardware according to this model. However, ironically, certain 2-qubit interactions (in particular, spin-spin exchange interactions) can be achieved relatively easily in spintronic hardware. Therefore, it would be fortunate if it were possible to implement any 1-qubit gate by use of a spin-spin exchange interaction. While such a direct representation is not possible, it is possible to achieve an arbitrary 1-qubit gate indirectly by means of a sequence of four spin-spin exchange interactions, which could be implemented by use of four exchange gates. Accordingly, the present scheme provides for mapping any 1-qubit gate in the logical basis into an equivalent sequence of at most four spin-spin exchange interactions in the physical (encoded) basis. The complexity of the mathematical derivation of the scheme from basic quantum principles precludes a description within this article; it must suffice to report

The coupling of statistical cloud schemes with mass-flux convection schemes is addressed. Source terms representing the impact of convection are derived within the framework of prognostic equations for the width and asymmetry of the probability distribution function of total water mixing ratio. The accuracy of these source terms is quantified by examining output from a cloud resolving model simulation of deep convection. Practical suggestions for inclusion of these source terms in large-scale models are offered.

It has been shown that for two different multipartite unitary operations U1 and U2, when tr(U1†U2) = 0, they can always be perfectly distinguished by local operations and classical communication in the single-run scenario. However, how to find a detailed scheme that completes the local discrimination is still a fascinating problem. In this paper, aiming at some U1 and U2 acting on bipartite and tripartite spaces respectively, especially for U1†U2 locally unitarily equivalent to a high-dimensional X-type Hermitian unitary matrix V with tr V = 0, we put forward explicit local distinguishing schemes in the single-run scenario.

The recently developed Flexible Local Approximation MEthod (FLAME) produces accurate difference schemes by replacing the usual Taylor expansion with Trefftz functions - local solutions of the underlying differential equation. This paper advances and casts in a general form a significant modification of FLAME proposed recently by Pinheiro and Webb: a least-squares fit instead of the exact match of the approximate solution at the stencil nodes. As a consequence of that, FLAME schemes can now be generated on irregular stencils with the number of nodes substantially greater than the number of approximating functions. The accuracy of the method is preserved but its robustness is improved. For demonstration, the paper presents a number of numerical examples in 2D and 3D: electrostatic (magnetostatic) particle interactions, scattering of electromagnetic (acoustic) waves, and wave propagation in a photonic crystal. The examples explore the role of the grid and stencil size, of the number of approximating functions, and of the irregularity of the stencils.
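
The least-squares modification can be sketched generically: with more stencil nodes than approximating functions, the local coefficients come from an overdetermined fit rather than an exact match at the nodes. The irregular stencil and the basis {1, x} (local solutions of u'' = 0) below are illustrative stand-ins, not FLAME's actual Trefftz bases.

```python
import numpy as np

# Least-squares fit of local approximating functions at an irregular
# stencil with more nodes (5) than basis functions (2), the modification
# described above.  Basis {1, x} solves u'' = 0 exactly; the sampled
# solution u = 2 + 3x lies in its span, so the fit recovers it exactly.

nodes = np.array([-1.0, -0.4, 0.3, 1.0, 1.5])          # 5 irregular nodes
basis = np.column_stack([np.ones_like(nodes), nodes])  # columns: 1 and x
u_samples = 2.0 + 3.0 * nodes                          # data at the nodes
coef, residuals, rank, _ = np.linalg.lstsq(basis, u_samples, rcond=None)
```

When the sampled data does not lie in the span of the basis, the same call returns the best approximation in the least-squares sense, which is what makes the larger stencils robust.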

In this article, we show that for a WENO scheme to improve the numerical resolution of smooth waves, increasing to some extent the contribution of the substencils where the solution is less smooth is much more important than improving the accuracy at critical points. WENO-Z, for instance, achieved less dissipative results than classical WENO through the use of a high-order global smoothness measurement, τ, which increased the weights of less-smooth substencils. This time, we present a way of further increasing the relevance of less-smooth substencils by adding a new term to the WENO-Z weights that uses information which is already available in its formula. The improved scheme attains much better resolution at the smooth parts of the solution, while keeping the same numerical stability of the original WENO-Z at shocks and discontinuities.
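
For context, a sketch of the baseline fifth-order WENO-Z weights (left-biased reconstruction at the i+1/2 interface, following Borges et al.) that the improved scheme modifies; the additional term the authors add is not reproduced here.

```python
import numpy as np

# Baseline WENO-Z nonlinear weights: tau5 = |beta0 - beta2| is the
# high-order global smoothness measurement discussed above, and d holds
# the ideal linear weights of the three substencils.

def wenoz_weights(f, eps=1e-40):
    # f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2})
    b0 = 13/12 * (f[0] - 2*f[1] + f[2])**2 + 1/4 * (f[0] - 4*f[1] + 3*f[2])**2
    b1 = 13/12 * (f[1] - 2*f[2] + f[3])**2 + 1/4 * (f[1] - f[3])**2
    b2 = 13/12 * (f[2] - 2*f[3] + f[4])**2 + 1/4 * (3*f[2] - 4*f[3] + f[4])**2
    tau5 = abs(b0 - b2)                       # global smoothness measurement
    d = np.array([0.1, 0.6, 0.3])             # ideal linear weights
    alpha = d * (1.0 + tau5 / (np.array([b0, b1, b2]) + eps))
    return alpha / alpha.sum()

w_smooth = wenoz_weights(np.array([0.0, 1.0, 2.0, 3.0, 4.0]))  # linear data
w_jump = wenoz_weights(np.array([0.0, 0.0, 0.0, 1.0, 1.0]))    # jump at i+1/2
```

On linear data all three smoothness indicators coincide, tau5 vanishes, and the weights reduce exactly to the ideal values (0.1, 0.6, 0.3); across the jump, almost all of the weight shifts to the smooth substencil.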

We offer an alternative scheme to detect spin polarization of conduction electrons injected into a nonmagnetic metal or degenerately doped semiconductor using transport to two oppositely polarized ferromagnetic metal contacts. We show that, as in the well-known spin injection problem, detection efficiency can be amplified by the addition of spin-selective tunneling barriers. Considering the appropriate geometry and achievable injection rates, we estimate that the differential current can be as high as 1-10 nA for reasonable design parameters. We will also discuss the realization of this detection scheme in laboratory set-ups. ONR No. N000141110637, NSF Nos. ECCS0901941 and ECCS1231855, and DTRA No. HDTRA1-13-1-0013.

The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.

The goal of indirect quantum control is to coherently steer a quantum system solely by acting on a quantum actuator to which it is coupled. This approach to quantum control is convenient in many physical settings, as it allows one to avoid direct addressing of the system--and any associated difficulties--altogether. While it is known in principle that control of the actuator typically yields universal control of the system, the practical details of how such indirect control can be achieved are less clear. This deficiency has led to a number of implementation- and model-specific indirect control schemes, in lieu of a general recipe applicable to any physical setting. Here, we present such a recipe, in the form of an open-loop control scheme which implements arbitrary unitary operations on the system by exploiting open dynamics in the actuator. arXiv:1506.06749.

The paper describes a new upwind conservative numerical scheme for special relativistic resistive magnetohydrodynamics with scalar resistivity. The magnetic field is kept approximately divergence free and the divergence of the electric field is kept consistent with the electric charge distribution via the method of Generalized Lagrange Multiplier. The hyperbolic fluxes are computed using the Harten-Lax-van Leer (HLL) prescription and the source terms are accounted via the time-splitting technique. The results of test simulations show that the scheme can handle equally well both resistive current sheets and shock waves, and thus can be a useful tool for studying phenomena of relativistic astrophysics that involve both colliding supersonic flows and magnetic reconnection.

General difference approximations to the fluid dynamic equations require an artificial viscosity in order to converge to a steady state. This artificial viscosity serves two purposes. One is to suppress high frequency noise which is not damped by the central differences. The second purpose is to introduce an entropy-like condition so that shocks can be captured. These viscosities need a coefficient to measure the amount of viscosity to be added. In the standard scheme, a scalar coefficient is used based on the spectral radius of the Jacobian of the convective flux. However, this can add too much viscosity to the slower waves. Hence, it is suggested that a matrix viscosity be used. This gives an appropriate viscosity for each wave component. With this matrix valued coefficient, the central difference scheme becomes closer to upwind biased methods.

A class of high-resolution schemes established in integration of anelastic equations is extended to fully compressible flows, and documented for unsteady (and steady) problems through a span of Mach numbers from zero to supersonic. The schemes stem from iterated upwind technology of the multidimensional positive definite advection transport algorithm (MPDATA). The derived algorithms employ standard and modified forms of the equations of gas dynamics for conservation of mass, momentum and either total or internal energy as well as potential temperature. Numerical examples from elementary wave propagation, through computational aerodynamics benchmarks, to atmospheric small- and large-amplitude acoustics with intricate wave-flow interactions verify the approach for both structured and unstructured meshes, and demonstrate its flexibility and robustness.

Two years ago, the NASA Coding, Modulation, and Link Protocol (CMLP) study was completed. The study, led by the authors of this paper, recommended codes, modulation schemes, and desired attributes of link protocols for all space communication links in NASA's future space architecture. Portions of the NASA CMLP team were reassembled to resolve one open issue: the use of multiple access (MA) communication from the lunar surface. The CMLP-MA team analyzed and simulated two candidate multiple access schemes that were identified in the original CMLP study: Code Division MA (CDMA) and Frequency Division MA (FDMA) based on a bandwidth-efficient Continuous Phase Modulation (CPM) with a superimposed Pseudo-Noise (PN) ranging signal (CPM/PN). This paper summarizes the results of the analysis and simulation of the CMLP-MA study and describes the final recommendations.

The article puts forward a simple scheme for multivariable control of robot manipulators to achieve trajectory tracking. The scheme is composed of an inner loop stabilizing controller and an outer loop tracking controller. The inner loop utilizes a multivariable PD controller to stabilize the robot by placing the poles of the linearized robot model at some desired locations. The outer loop employs a multivariable PID controller to achieve input-output decoupling and trajectory tracking. The gains of the PD and PID controllers are related directly to the linearized robot model by simple closed-form expressions. The controller gains are updated on-line to cope with variations in the robot model during gross motion and for payload change. Alternatively, the use of high gain controllers for gross motion and payload change is discussed. Computer simulation results are given for illustration.
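
The two-loop structure can be sketched on a toy plant: an inner PD loop stabilizes a unit-inertia joint (a double integrator standing in for the linearized robot model) and an outer PID loop tracks a reference trajectory. All gains, the plant, and the trajectory below are illustrative; the paper gives closed-form gain expressions for the actual linearized robot model.

```python
import math

# Inner PD loop (pole placement) plus outer PID tracking loop on a
# double-integrator "joint".  Explicit Euler integration; all numbers
# are illustrative.

def simulate(t_end=10.0, dt=1e-3):
    q = qd = 0.0                  # joint position and velocity
    integ = 0.0                   # outer-loop integral state
    kp_in, kd_in = 100.0, 20.0    # inner PD: places both plant poles at -10
    kp_out, ki_out, kd_out = 5.0, 2.0, 0.5    # outer PID gains
    t = 0.0
    while t < t_end:
        ref, ref_d = math.sin(t), math.cos(t)  # desired trajectory
        e, e_d = ref - q, ref_d - qd
        integ += e * dt
        # outer PID produces a position command for the inner loop
        cmd = ref + kp_out * e + ki_out * integ + kd_out * e_d
        # inner PD computes the joint torque (unit inertia)
        tau = kp_in * (cmd - q) - kd_in * qd
        qd += tau * dt                         # double-integrator plant
        q += qd * dt
        t += dt
    return q, math.sin(t_end)

q_final, ref_final = simulate()
```

With these gains the closed loop is stable and the tracking error settles to a few percent of the reference amplitude.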

Synchronizing colliding beams at single or multiple collision points is a critical R&D issue in the design of a medium energy electron-ion collider (MEIC) at Jefferson Lab. The path-length variation due to changes in the ion energy, which varies over 20 to 100 GeV, could be more than several times the bunch spacing. The scheme adopted in the present MEIC baseline is centered on varying the number of bunches (i.e., harmonic number) stored in the collider ring. This could provide a set of discrete energies for proton or ions such that the beam synchronization condition is satisfied. To cover the ion energy between these synchronized values, we further propose to vary simultaneously the electron ring circumference and the frequency of the RF systems in both collider rings. We also present in this paper the requirement of frequency tunability of SRF cavities to support the scheme.

The last few decades have witnessed a boom in the development of information and communication technologies. The health sector has also benefited from this advancement. To ensure secure access to healthcare services, some user authentication mechanisms have been proposed. In 2012, Wei et al. proposed a user authentication scheme for the telecare medical information system (TMIS). Recently, Zhu pointed out an offline password guessing attack on Wei et al.'s scheme and proposed an improved scheme. In this article, we analyze both of these schemes for their effectiveness in TMIS. We show that Wei et al.'s scheme and its improvement proposed by Zhu fail to achieve some important characteristics necessary for secure user authentication. We find that the security problems of Wei et al.'s scheme persist in Zhu's scheme: an undetectable online password guessing attack, inefficacy of the password change phase, traceability of a user's stolen/lost smart card, and a denial-of-service threat. We also identify that Wei et al.'s scheme lacks forward secrecy and Zhu's scheme lacks a session key between user and healthcare server. We therefore propose an authentication scheme for TMIS with forward secrecy which preserves the confidentiality of over-the-air messages even if the master secret key of the healthcare server is compromised. Our scheme retains the advantages of Wei et al.'s scheme and Zhu's scheme, and offers additional security. The security analysis and comparison results show the enhanced suitability of our scheme for TMIS. PMID:23828650

Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
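
A toy stop-and-wait ARQ sketch makes the idea concrete: each frame carries a CRC-32 checksum for error detection, the channel occasionally corrupts a byte, and the sender retransmits until the receiver accepts the frame. The protocol details and error rate below are illustrative only, not any particular standard's.

```python
import random
import zlib

# Stop-and-wait ARQ with CRC-32 error detection over a byte-corrupting
# channel.  A frame failing the CRC check is treated as a NAK, so the
# sender retransmits; a clean frame is accepted (ACK).

def send_frame(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def channel(frame: bytes, p_err: float, rng: random.Random) -> bytes:
    if rng.random() < p_err:                 # corrupt a single byte
        i = rng.randrange(len(frame))
        frame = frame[:i] + bytes([frame[i] ^ 0xFF]) + frame[i + 1:]
    return frame

def receive_frame(frame: bytes):
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None   # None -> NAK

def transfer(payload: bytes, p_err: float = 0.3, seed: int = 0):
    rng = random.Random(seed)
    attempts = 0
    while True:                              # ARQ: repeat until accepted
        attempts += 1
        got = receive_frame(channel(send_frame(payload), p_err, rng))
        if got is not None:
            return got, attempts

data, tries = transfer(b"hello")
```

Because CRC-32 detects every error burst of up to 32 bits, a single corrupted byte can never be falsely accepted here; real systems add sequence numbers and timers on top of this loop.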

We present an alternative scheme for an Emergent Universe scenario, developed previously in Phys. Rev. D 86, 083524 (2012), where the universe is initially in a static state supported by a scalar field located in a false vacuum. The universe begins to evolve when, by quantum tunneling, the scalar field decays into a state of true vacuum. The Emergent Universe models are interesting since they provide specific examples of non-singular inflationary universes.

The provision of an efficient and acceptable library system for the dental literature is examined. It is suggested that an index to the dental literature is best provided by a combination of Index Medicus and Medical Subject Headings. The Library of Congress scheme would be best for an autonomous dental school and, where a dental school library is provided by a large medical library, the National Library of Medicine Classification would be suitable for dental student use. PMID:395935

The following work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy for a user-specified network. The algorithm uses a basic genetic algorithm with crossover and mutation techniques to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spreading of the disease.
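
The pipeline can be sketched compactly, assuming a toy graph of two hub-and-spoke clusters: a basic GA with crossover and mutation searches for the k nodes whose immunization minimizes the mean final size of a discrete-time SIR outbreak. The graph, rates, and GA parameters are all illustrative, not from the work described above.

```python
import random

# Fitness: mean number of ever-infected nodes over several seeded SIR
# simulations, given a set of immunized nodes.
def outbreak_size(adj, immune, p=0.5, runs=20, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        start = rng.choice([n for n in adj if n not in immune])
        infected, recovered = {start}, set()
        while infected:
            new = set()
            for u in infected:
                for v in adj[u]:
                    if (v not in immune and v not in recovered
                            and v not in infected and rng.random() < p):
                        new.add(v)
            recovered |= infected
            infected = new
        total += len(recovered)
    return total / runs

def run_ga(adj, k, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    nodes = list(adj)
    pop = [set(rng.sample(nodes, k)) for _ in range(pop_size)]
    fit = lambda ind: outbreak_size(adj, ind)
    for _ in range(generations):
        pop.sort(key=fit)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = set(rng.sample(sorted(a | b), k))   # crossover
            if rng.random() < 0.3:                      # mutation ...
                child.discard(rng.choice(sorted(child)))
            while len(child) < k:                       # ... with repair
                child.add(rng.choice(nodes))
            children.append(child)
        pop = parents + children
    return min(pop, key=fit)

# toy network: two star clusters whose hubs 0 and 6 are linked
adj = {n: set() for n in range(12)}
for hub, leaves in ((0, range(1, 6)), (6, range(7, 12))):
    for leaf in leaves:
        adj[hub].add(leaf); adj[leaf].add(hub)
adj[0].add(6); adj[6].add(0)

best = run_ga(adj, k=2)
```

On this network the GA should converge toward immunizing the two hubs, which confines any outbreak to the initially infected node.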

Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for CFD. This is especially important for solution of complex three-dimensional systems of Navier-Stokes equations which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for both their capability of resolving discontinuities and their sound theoretical basis in characteristic theory for hyperbolic systems. With this in mind, two new flux splitting techniques are presented for upwind differencing.
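
The flux-splitting idea behind upwind differencing can be shown on the scalar advection equation: the flux f = a·u is split into forward- and backward-moving parts, each differenced in its own upwind direction. This is a generic first-order scalar illustration, not the paper's two new splittings for the Navier-Stokes equations.

```python
import numpy as np

# Scalar flux-splitting upwind scheme for u_t + a u_x = 0 on a periodic
# grid: f± = 0.5 (a ± |a|) u, with a backward difference for f+ and a
# forward difference for f-.

def upwind_step(u, a, dx, dt):
    fp = 0.5 * (a + abs(a)) * u            # right-moving flux part
    fm = 0.5 * (a - abs(a)) * u            # left-moving flux part
    dfp = fp - np.roll(fp, 1)              # backward difference for f+
    dfm = np.roll(fm, -1) - fm             # forward difference for f-
    return u - dt / dx * (dfp + dfm)

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)          # Gaussian pulse at x = 0.3
a, dx = 1.0, x[1] - x[0]
dt = 0.5 * dx / abs(a)                     # CFL = 0.5
for _ in range(40):                        # advect for t = 0.2
    u = upwind_step(u, a, dx, dt)
```

After t = 0.2 the pulse has moved to x = 0.5; first-order upwinding diffuses the peak but never creates new extrema, which is the robustness the abstract alludes to.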

In this paper, we study the cryptanalysis of two quantum blind signature schemes and one quantum proxy blind signature protocol. We show that in these protocols the verifier can forge the signature under a known-message attack. The attack strategies are described in detail for each protocol. This kind of problem deserves more attention in future related research. We further point out that the arbitrator should be involved in the procedure of any dispute, and some discussions of these protocols are given.

The problem of protecting or isolating extremely sensitive receive circuitry from high-voltage transmit circuitry is commonly addressed through the use of diode bridges, transformers, or high-voltage switches, which prove to be prohibitively expensive, bulky, and power consuming for use in portable, low-cost, battery-powered systems. These approaches also compound the interconnect difficulties associated with two-dimensional (2-D) transducer arrays. In this paper we present a novel transmit protection scheme that allows compact MOSFET shunting devices to be brought on-chip within each receive channel implemented in a standard CMOS integrated circuit process. During transmit, the high voltage transmit pulse is driven onto the common connection of the transducer array and the on-chip MOSFET devices shunt the current to ground. During receive, these devices are turned off, the common connection of the transducer array is shunted to ground, and the received echo can be detected as usual. The transmit protection scheme was experimentally shown to shunt a 16 mA peak current resulting from the equivalent of a 100-V, 25-ns-risetime transmit pulse through a 4 pF transducer element. The scheme was also incorporated into a prototype 1024-channel, low-cost, ultrasound system successfully used to form pulse echo images. PMID:17225802

The 25,000 b/d fluid catalytic cracking unit (FCCU) at Petroleos Mexicanos' idle Azcapotzalco refinery near Mexico City has been relocated to Pemex's 235,000 b/d Cadereyta refinery. The results of a thermal-integration analysis are being used to revamp the unit and optimize its vapor-recovery scheme. For the case of the Azcapotzalco FCCU, the old unit was designed in the 1950s, so modifications to the reactor/regenerator section incorporate many important changes, including a new riser, feed nozzles, cyclones, air distributor, and other internals. For the new scheme, the analysis was based on the following restrictions: (1) Two cases concerning gas oil feed conditions must be met. In the hot-feed case, feed is introduced from a processing unit outside battery limits (OSBL) at 188 C. For the cold-feed case, feed is introduced from OSBL from storage tanks at 70 C. (2) No new fire heaters are to be installed. (3) Existing equipment must be reused whenever possible. The paper describes and analyzes three alternative schemes.

Improvements in rural health care in China in the 1950s, 1960s and 1970s were largely due to the development of cooperative medical schemes (CMSs) and the establishment of a three-tier rural health network. Since the economic reforms were instituted in the late 1970s, the financing and delivery of rural health services have seen many changes, some positive, others not. Most CMSs have collapsed. In the absence of CMSs, the rural population has to pay for health care out-of-pocket and poor families have greater difficulty in getting access to essential health care. In the meantime, emphases of health services have tended to shift from lower to higher levels, from preventive to curative services, and from planning and management to market forces. This paper outlines the evolution of CMSs, reasons for their collapse, and their likely impact on rural health services. The main focus is on the development of a new generation of rural cooperative health care schemes, given their importance in the process of consolidating the rural three-tier health network after the impact of the economic reforms: the characteristics of some schemes, the apparent conditions for success, and government policy towards the development of cooperative health care financing are presented. PMID:8578334

As anonymity increasingly becomes a necessary and legitimate aim in many applications, a number of anonymous authentication schemes have been suggested over the years. Among the many schemes is Lee and Kwon's password-based authentication scheme for wireless environments. Compared with previous schemes, Lee and Kwon's scheme not only improves anonymity by employing random temporary IDs but also provides user-friendliness by allowing human-memorable passwords. In this letter, we point out that Lee and Kwon's scheme, despite its many merits, is vulnerable to off-line password guessing attacks and a forgery attack. In addition, we show how to eliminate these vulnerabilities.

Microphysics is the framework through which to understand the links between interactive aerosol, cloud and precipitation processes. These processes play a critical role in the water and energy cycle. CRMs with advanced microphysics schemes have been used to study the interaction between aerosol, cloud and precipitation processes at high resolution, but many uncertainties are still associated with these microphysics schemes. This has arisen, in part, from the fact that microphysical processes cannot be measured directly; instead, cloud properties, which can be measured, have been used to validate model results. The use of current and future global high-resolution models is rapidly increasing; these models run at what have traditionally been CRM resolutions and use microphysics schemes that were developed in traditional CRMs. A potential NASA satellite mission called the Cloud and Precipitation Processes Mission (CaPPM) is currently being planned for submission to the NASA Earth Science Decadal Survey. This mission could provide the necessary global estimates of cloud and precipitation properties with which to evaluate and improve dynamical and microphysical parameterizations and their feedbacks. To facilitate the development of this mission, CRM simulations have been conducted to identify the microphysical processes responsible for the greatest uncertainties in CRMs. In this talk, we will present results from numerical simulations conducted using two CRMs (NU-WRF and RAMS) with different dynamics, radiation, land surface and microphysics schemes. Specifically, we will conduct sensitivity tests to examine the uncertainty of some of the key ice processes (i.e., riming, melting, freezing and shedding) in these two microphysics schemes. The idea is to quantify how these two different models respond (surface rainfall and its intensity, strength of cloud drafts, LWP/IWP, convective-stratiform-anvil area distribution) to changes in these key ice processes.

New monotonicity-preserving hybrid schemes are proposed for multidimensional hyperbolic equations. They are convex combinations of high-order accurate central bicompact schemes and upwind schemes of first-order accuracy in time and space. The weighting coefficients in these combinations depend on the local difference between the solutions produced by the high- and low-order accurate schemes at the current space-time point. The bicompact schemes are third-order accurate in time, while having the fourth order of accuracy and the first difference order in space. At every time level, they can be solved by marching in each spatial variable without using spatial splitting. The upwind schemes have minimal dissipation among all monotone schemes constructed on a minimum space-time stencil. The constructed hybrid schemes have been successfully tested on a number of two-dimensional gas dynamics benchmark problems.
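
The convex-combination idea can be sketched in one dimension for linear advection. This is an illustrative assumption-laden sketch, not the paper's bicompact construction: Lax-Wendroff stands in for the high-order member, first-order upwind for the low-order one, and the weight grows with the normalized local discrepancy between the two candidate updates.

```python
import numpy as np

# Hybrid scheme sketch for u_t + a u_x = 0 (a > 0, periodic boundaries).
# The blending rule below (weight proportional to the normalized local
# difference between the two candidate solutions) is an assumption made
# for illustration; the paper's actual weights are defined differently.
c = 0.5                                    # CFL number a*dt/dx
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square pulse

for _ in range(40):
    um, up = np.roll(u, 1), np.roll(u, -1)
    low = u - c * (u - um)                              # 1st-order upwind
    high = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)  # Lax-Wendroff
    # weight -> 1 (upwind) where the candidates disagree most (discontinuities),
    # weight -> 0 (high order) where the solution is smooth
    w = np.abs(high - low) / (np.abs(high - low).max() + 1e-12)
    u = w * low + (1.0 - w) * high                      # convex combination
```

Near the jumps the low-order member dominates and suppresses the Gibbs oscillations that pure Lax-Wendroff would produce; in smooth regions the high-order member is used almost exclusively.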

First- and second-order explicit difference schemes are derived for a three-dimensional hyperbolic system of conservation laws, without recourse to dimensional factorization. All schemes are upwind (backward) biased and optimally stable.
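
The upwind (backward) bias can be sketched in one dimension. A minimal illustration for scalar linear advection with positive wave speed, which is an assumption made here for brevity; the abstract's schemes treat a three-dimensional system:

```python
import numpy as np

# First-order upwind scheme for u_t + a u_x = 0 with a > 0, periodic
# boundaries: the spatial difference is taken on the upwind (left) side.
a, dx, dt = 1.0, 0.1, 0.05          # CFL = a*dt/dx = 0.5 <= 1, so stable
x = np.arange(0.0, 2.0, dx)
u = np.where((x > 0.5) & (x < 1.0), 1.0, 0.0)   # square pulse

for _ in range(10):
    # backward difference in the upwind direction (left, since a > 0)
    u = u - a * dt / dx * (u - np.roll(u, 1))
```

Because each new value is a convex combination of old neighboring values when the CFL number is at most one, the scheme is monotone: the pulse is smeared by numerical dissipation but never over- or undershoots.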

Investigated Piaget's distinction between the roles of scheme and schema in memory. Proposed that schemas may vary within wide limits while the underlying schemes from which the schemas stem remain stable. Subjects were 78, 6-year-old children. (SDH)

Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
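
The global data dependencies mentioned above come from the sequential sweeps of tridiagonal elimination. A scalar Thomas-algorithm sketch is shown below; the schemes in question invert block tri- and penta-diagonal systems, for which the block analogue is not shown:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system. a: sub-diagonal (a[0] unused),
    b: main diagonal, c: super-diagonal (c[-1] unused), d: right-hand side.
    The two sweeps are first-order recurrences: exactly the global data
    dependencies that make implicit schemes hard to parallelize."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination (sequential)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution (sequential)
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each loop iteration depends on the previous one, so the solver cannot be parallelized by simply splitting the index range; this is why the partitioning and scheduling schemes discussed in the abstract are needed.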

The Pharmaceutical Benefits Scheme (PBS) grew by 8% in 2003-04, a slower rate than the 12.0% per annum average growth over the last decade. Nevertheless, the sustainability of the Scheme remained an ongoing concern given an aging population and the continued introduction of useful (but increasingly expensive) new medicines. There was also concern that the Australia-United States Free Trade Agreement could place further pressure on the Scheme. In 2003, as in 2002, the government proposed a 27% increase in PBS patient co-payments and safety-net thresholds in order to transfer more of the cost of the PBS from the government to consumers. While this measure was initially blocked by the Senate, the forthcoming election resulted in the Labor Party eventually supporting this policy. Recommendations of the Pharmaceutical Benefits Advisory Committee to list, not list, or defer a decision to list a medicine on the PBS were made publicly available for the first time, and the full cost of PBS medicines appeared on medicine labels if the price was greater than the co-payment. Pharmaceutical reform in Victorian public hospitals designed to minimise PBS cost-shifting was evaluated and extended to other States and Territories. Programs promoting the quality use of medicines were further developed, coordinated by the National Prescribing Service, Australian Divisions of General Practice and the Pharmacy Guild of Australia. The extensive uptake of computerised prescribing software by GPs produced benefits but also problems. The latter included pharmaceutical promotion occurring at the time of prescribing, failure to incorporate key sources of objective therapeutic information in the software, and gross variation in the ability of various programs to detect important drug-drug interactions. These issues remain to be tackled. PMID:15679896

We propose a nearly perfect optical scheme for the quantum teleportation of entangled coherent states using optical devices such as nonlinear Kerr media, beam splitters, phase shifters, and photon detectors. Unlike previous schemes, ours needs only "yes" or "no" measurements of the photon number of the related modes, i.e., nonzero- and zero-photon measurements, whereas in previous schemes one has to exactly identify the even or odd parity of the photon numbers detected by the detectors.

In this paper we analyze and compare the lattice Boltzmann equation with the beam scheme in detail. We note the similarities and differences between the lattice Boltzmann equation and the beam scheme. We show that the accuracy of the lattice Boltzmann equation is indeed second order in space. We discuss the advantages and limitations of the lattice Boltzmann equation and the beam scheme. Based on our analysis, we propose an improved multi-dimensional beam scheme.

A new signature scheme for MPKC is proposed. It is created by perturbing a traditional encryption scheme in two ways. The proposed perturbation polynomials successfully reinforce the Matsumoto-Imai cryptosystem. This new signature scheme has a structure that is very difficult to cryptanalyze. Along with its security against algebraic attacks, its security against other existing attacks is discussed. The experimental data imply that the scheme can yield a signature system that is both lightweight and secure.

We consider the interaction of a three-level system with phase-modulated resonant fields in the Λ excitation scheme. We treat theoretically the case of a sinusoidal phase modulation, a phase step perturbation, and a stochastic phase modulation. The appearance of a Rabi resonance both in the spectrum of the optical transmitted signal (electromagnetically induced transparency) and in the spectrum of the microwave emission (coherent population trapping maser) is considered in detail. All the theoretical results are compared with the analogous ones reported for the two-level system and with our experimental observations obtained for the case of rubidium in a buffer gas.

A two-dimensional (2D) visual computer code to solve steady state (SS) or transient shock problems including partially ionizing plasma is presented. Since the flows considered are hypersonic and the resulting temperatures are high, the plasma is partially ionized. Hence the plasma constituents are electrons, ions and neutral atoms. It is assumed that all the above species are in thermal equilibrium, namely, that they all have the same temperature. The ionization degree is calculated from the Saha equation as a function of electron density and pressure by means of a nonlinear Newton-type root finding algorithm. The code utilizes a wave model and a numerical fluctuation distribution (FD) scheme that runs on structured or unstructured triangular meshes. This scheme is based on evaluating the mesh-averaged fluctuations arising from a number of waves and distributing them to the nodes of these meshes in an upwind manner. The physical properties (directions, strengths, etc.) of these wave patterns are obtained by a new wave model, ION-A, developed from the eigen-system of the flux Jacobian matrices. Since the equation of state (EOS) which is used to close the conservation laws includes electronic effects, it is a nonlinear function and it must be inverted by iterations to determine the ionization degree as a function of density and temperature. For the time advancement, the scheme utilizes a multi-stage Runge-Kutta (RK) algorithm with time steps carefully evaluated from the maximum possible propagation speed in the solution domain. The code runs interactively with the user and allows the user to create different meshes, to use different initial and boundary conditions, and to see changes of desired physical quantities in the form of color and vector graphics. The details of the visual properties of the code have been published before (see [N. Aslan, A visual fluctuation splitting scheme for magneto-hydrodynamics with a new sonic fix and Euler limit, J. Comput. Phys. 197 (2004) 1
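
The Newton-type Saha inversion can be illustrated with a minimal sketch. The functional form x²/(1−x) = S used here, where x is the ionization degree and S stands for the Saha right-hand side at the given density and temperature, is an assumed simplification; the code's actual equation of state is more involved:

```python
def ionization_degree(S, x=0.5, tol=1e-12):
    """Newton iteration for the ionization degree x in (0, 1) from a
    Saha-type relation x^2/(1-x) = S, i.e. g(x) = x^2 + S*x - S = 0.
    The value of S and the form of the relation are assumptions for
    illustration only."""
    for _ in range(50):
        g = x * x - S * (1.0 - x)   # residual g(x)
        dg = 2.0 * x + S            # derivative g'(x)
        x_new = x - g / dg          # Newton update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

In this toy form the root is available in closed form (a quadratic), which makes it easy to check the iteration; the point of the Newton machinery in the real code is that the full EOS is not invertible analytically.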

We provide detailed calculations for modeling an alternative scheme to detect spin polarization of conduction electrons injected into a nonmagnetic metal or degeneratively doped semiconductor using transport to two oppositely polarized ferromagnetic metal contacts. We show that, as in the well-known spin injection problem, detection efficiency can be amplified by the addition of spin-selective tunneling barriers. Considering the appropriate geometry and achievable injection rates, we estimate that the differential current can be as high as 1-10 nA for reasonable design parameters.

A classification scheme is proposed for amplitude vs. angle (AVA) responses as an aid to the interpretation of seismic reflectivity in glaciological research campaigns. AVA responses are a powerful tool in characterising the material properties of glacier ice and its substrate. However, before interpreting AVA data, careful true amplitude processing is required to constrain basal reflectivity and compensate amplitude decay mechanisms, including anelastic attenuation and spherical divergence. These fundamental processing steps can be difficult to design in cases of noisy data, e.g. where a target reflection is contaminated by surface wave energy (in the case of shallow glaciers) or by energy reflected from out of the survey plane. AVA methods have equally powerful usage in estimating the fluid fill of potential hydrocarbon reservoirs. However, such applications seldom use true amplitude data and instead consider qualitative AVA responses using a well-defined classification scheme. Such schemes are often defined in terms of the characteristics of best-fit responses to the observed reflectivity, e.g. the intercept (I) and gradient (G) of a linear approximation to the AVA data. The position of the response on a cross-plot of I and G then offers a diagnostic attribute for certain fluid types. We investigate the advantages in glaciology of emulating this practice, and develop a cross-plot based on the 3-term Shuey AVA approximation (using I, G, and a curvature term C). Model AVA curves define a clear lithification trend: AVA responses to stiff (lithified) substrates fall discretely into one quadrant of the cross-plot, with positive I and negative G, whereas those to fluid-rich substrates plot diagonally opposite (in the negative I and positive G quadrant). The remaining quadrants are unoccupied by plausible single-layer responses and may therefore be diagnostic of complex thin-layer reflectivity, and the magnitude and polarity of the C term serves as a further indicator

This paper is devoted to the study of numerical approximation schemes for a class of parabolic equations on (0,1) perturbed by a non-linear rough signal. It is the continuation of Deya (Electron. J. Probab. 16:1489-1518, 2011) and Deya et al. (Probab. Theory Relat. Fields, to appear), where the existence and uniqueness of a solution has been established. The approach combines rough paths methods with standard considerations on discretizing stochastic PDEs. The results apply to a geometric 2-rough path, which covers the case of the multidimensional fractional Brownian motion with Hurst index H>1/3.

The development of the shock capturing methodology is reviewed, paying special attention to the increasing nonlinearity in its design and its relation to interpolation. It is well-known that higher-order approximations to a discontinuous function generate spurious oscillations near the discontinuity (Gibbs phenomenon). Unlike standard finite-difference methods which use a fixed stencil, modern shock capturing schemes use an adaptive stencil which is selected according to the local smoothness of the solution. Near discontinuities this technique automatically switches to one-sided approximations, thus avoiding the use of discontinuous data which brings about spurious oscillations.
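
The adaptive-stencil selection described above can be sketched for a minimal second-order reconstruction: at each cell, the candidate stencil with the smaller undivided difference is chosen, so the approximation never reaches across a discontinuity. This is a hypothetical minimal variant; real ENO schemes build higher orders from divided-difference tables:

```python
import numpy as np

def eno2_interface(u):
    """Interface values u_{i+1/2} from point values u, choosing for each
    interior cell the smoother of the two candidate stencils (smaller
    undivided difference). Minimal illustrative sketch only."""
    n = len(u)
    uh = np.empty(n - 2)
    for i in range(1, n - 1):
        left = u[i] - u[i - 1]        # difference on the left stencil
        right = u[i + 1] - u[i]       # difference on the right stencil
        if abs(left) <= abs(right):   # left data smoother: extrapolate
            uh[i - 1] = u[i] + 0.5 * left
        else:                         # right data smoother: interpolate
            uh[i - 1] = u[i] + 0.5 * right
    return uh
```

Applied to a step function, the one-sided choice near the jump keeps the reconstruction within the data bounds, avoiding the Gibbs overshoot that a fixed centered stencil would produce.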

An angular correlation experiment was carried out for 33S at Laboratori Nazionali di Legnaro with the gamma-ray detector array GASP. The reaction used was 24Mg(14N,αp)33S at a beam energy of 40 MeV. An analysis of DCO ratios and triple gamma coincidences was performed. So far, a new level depopulated by 3 γ-ray transitions has been found and its spin was determined. Work on further extension of the level scheme is in progress.

The possibility of monochromatizing SPEAR for the purpose of increasing the hadronic event rate at the narrow resonances was investigated. By using two pairs of electrostatic skew quads in a monochromator scheme, it is found that the event rate can be increased by a factor of 2 for the mini-beta optics, assuming the luminosity is kept unchanged. An attempt to increase this enhancement factor by major rearrangements of the ring magnets encountered serious optical difficulties; although an enhancement factor of 8 seems possible in principle, this alternative is not recommended.

The implementation of e-voting becomes more important with the rapid growth of e-government. Recent advances in communications and cryptographic techniques facilitate the implementation of e-voting. Many countries have introduced e-voting systems; unfortunately, most of these systems are not fully functional. In this paper we present an e-voting scheme that covers most of the e-voting requirements; smart card and biometric recognition technologies were employed to guarantee voters' privacy and authentication.

I review the ideas of holographic space-time (HST), cosmological SUSY breaking (CSB), and the Pyramid Schemes, which are the only known models of Tera-scale physics consistent with CSB, current particle data, and gauge coupling unification. There is considerable uncertainty in the estimate of the masses of supersymmetric partners of the Standard Model particles, but the model predicts that the gluino is probably out of reach of the LHC, squarks may be in reach, and the NLSP is a right-handed slepton, which should be discovered soon.

We propose a new lossless digital image encryption scheme based on the permutation and substitution architecture. Initially, the original image is divided into square sub-images, and the three layers of pixels corresponding to the additive primary colours (RGB) of each sub-image are separated. Each layer of pixels of the square sub-images is scrambled in three different ways in the permutation process, whereas simple arithmetic, mainly sorting and differencing, is performed on each layer of pixels to achieve the substitution. The results of several experiments show that the proposed image cipher provides an efficient way to encrypt images with a high decryption rate.
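
The permutation-then-differencing pipeline, and its lossless inversion, can be sketched on a single colour layer. The 4x4 block size, the use of a seeded generator as key material, and the specific differencing rule are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

# Toy permutation + substitution sketch for one channel of a hypothetical
# 4x4 sub-image. In a real cipher the permutation would be derived from a
# secret key rather than returned alongside the ciphertext.
rng = np.random.default_rng(seed=42)        # key material (assumption)

def encrypt(block):
    flat = block.astype(np.int64).ravel()   # int64 keeps differencing lossless
    perm = rng.permutation(flat.size)       # permutation stage: scramble positions
    scrambled = flat[perm]
    cipher = np.diff(scrambled, prepend=0)  # substitution stage: differencing
    return cipher, perm

def decrypt(cipher, perm):
    scrambled = np.cumsum(cipher)           # invert differencing
    flat = np.empty_like(scrambled)
    flat[perm] = scrambled                  # invert the permutation
    return flat.reshape(4, 4)
```

Because differencing is inverted exactly by a cumulative sum and the permutation by index assignment, the round trip recovers every pixel bit-for-bit, which is the "lossless" property the abstract claims.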

Performance is a critical factor hindering the use of object-oriented databases (OODB). This article proposes a new and uniform indexing scheme for enhancing OODBs with advantages for small range, clustered sets queries. Reviews several other indexing schemes; presents the U-index scheme; discusses its performance; and presents experimental…

A quantum identification scheme including registration and identification phases is proposed. The users' passwords are transmitted by qubit string and recorded as a set of quantum operators. The security of the proposed scheme is guaranteed by the no-cloning theorem. Based on photon polarization modulation, an experimental approach is also designed to implement our proposed scheme.

A recently developed taxonomic scheme for the identification of marine bacteria is presented. The scheme is based on numerous reviews and monographs on marine bacteria, as well as Bergey's Manual of Determinative Bacteriology. While fairly extensive, the scheme is designed to identify marine bacteria using relatively few tests.

A newly developed advection scheme, the Hybrid Eulerian Lagrangian (HEL) scheme, has been tested with a module for atmospheric chemistry comprising 58 chemical species, and compared to two other traditional advection schemes: a classical pseudospectral Eulerian method, the Accurate Space Derivative (ASD) scheme, and the bi-cubic semi-Lagrangian (SL) scheme, using classical rotation tests. The rotation tests have been designed to test and compare the advection schemes at different spatial and temporal resolutions, in different chemical conditions (rural and urban) and for different shapes (cone and slotted cylinder), presenting the advection schemes with different challenges with respect to relatively slow or fast chemistry and smooth or sharp gradients, respectively. In every test, error measures have been calculated and used to rank the advection schemes with respect to performance, i.e. lowest overall errors for all chemical species. Furthermore, the HEL and SL schemes have been compared in a shallow water model, demonstrating their performance in a more realistic non-linear deformation flow. The results in this paper show that the new advection scheme, HEL, by far outperforms both the Eulerian and semi-Lagrangian schemes, with very low error estimates compared to the two other schemes. Although no analytic solution can be obtained for the performance in the non-linear shallow water model flow, the tracer distribution appears realistic compared to LMCSL when a mixing between local parcel concentrations is introduced in HEL.

... 7 Agriculture 10 2010-01-01 2010-01-01 false Scheme and device. 1467.19 Section 1467.19... device. (a) If it is determined by the NRCS that a participant has employed a scheme or device to defeat... determined appropriate by NRCS. (b) A scheme or device includes, but is not limited to, coercion,...

... 7 Agriculture 6 2011-01-01 2011-01-01 false Scheme and device. 633.18 Section 633.18 Agriculture... AGRICULTURE LONG TERM CONTRACTING WATER BANK PROGRAM § 633.18 Scheme and device. (a) If it is determined by the NRCS that a person has employed a scheme or device to defeat the purposes of this part, any...

... 7 Agriculture 7 2011-01-01 2011-01-01 false Scheme or device. 795.17 Section 795.17 Agriculture... PROVISIONS COMMON TO MORE THAN ONE PROGRAM PAYMENT LIMITATION General § 795.17 Scheme or device. All or any... person adopts or participates in adopting any scheme or device designed to evade or which has the...

... 7 Agriculture 10 2010-01-01 2010-01-01 false Scheme or device. 1491.32 Section 1491.32 Agriculture... Administration § 1491.32 Scheme or device. (a) If it is determined by the NRCS that a cooperating entity has employed a scheme or device to defeat the purposes of this part, any part of any program payment...

... 7 Agriculture 6 2011-01-01 2011-01-01 false Scheme and device. 625.20 Section 625.20 Agriculture... AGRICULTURE WATER RESOURCES HEALTHY FORESTS RESERVE PROGRAM § 625.20 Scheme and device. (a) If it is determined by NRCS that a person has employed a scheme or device to defeat the purposes of this part,...

... 7 Agriculture 7 2011-01-01 2011-01-01 false Misrepresentation, scheme, or device. 760.819 Section....819 Misrepresentation, scheme, or device. (a) A person is ineligible to receive assistance under this part if it is determined that such person has: (1) Adopted any scheme or device that tends to...

... 7 Agriculture 7 2010-01-01 2010-01-01 false Scheme or device. 795.17 Section 795.17 Agriculture... PROVISIONS COMMON TO MORE THAN ONE PROGRAM PAYMENT LIMITATION General § 795.17 Scheme or device. All or any... person adopts or participates in adopting any scheme or device designed to evade or which has the...

... 7 Agriculture 7 2010-01-01 2010-01-01 false Misrepresentation, scheme, or device. 760.819 Section....819 Misrepresentation, scheme, or device. (a) A person is ineligible to receive assistance under this part if it is determined that such person has: (1) Adopted any scheme or device that tends to...

... 7 Agriculture 6 2011-01-01 2011-01-01 false Scheme and device. 623.21 Section 623.21 Agriculture... AGRICULTURE WATER RESOURCES EMERGENCY WETLANDS RESERVE PROGRAM § 623.21 Scheme and device. (a) If it is determined by NRCS that a landowner has employed a scheme or device to defeat the purposes of this part,...

Probability reigns in biology, with random molecular events dictating the fate of individual organisms, and propelling populations of species through evolution. In principle, the master probability equation provides the most complete model of probabilistic behavior in biomolecular networks. In practice, master equations describing complex reaction networks have remained unsolved for over 70 years. This practical challenge is a reason why master equations, for all their potential, have not inspired biological discovery. Herein, we present a closure scheme that solves the master probability equation of networks of chemical or biochemical reactions. We cast the master equation in terms of ordinary differential equations that describe the time evolution of probability distribution moments. We postulate that a finite number of moments capture all of the necessary information, and compute the probability distribution and higher-order moments by maximizing the information entropy of the system. An accurate order closure is selected, and the dynamic evolution of molecular populations is simulated. Comparison with kinetic Monte Carlo simulations, which merely sample the probability distribution, demonstrates this closure scheme is accurate for several small reaction networks. The importance of this result notwithstanding, a most striking finding is that the steady state of stochastic reaction networks can now be readily computed in a single-step calculation, without the need to simulate the evolution of the probability distribution in time. PMID:23940327
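
The moment-equation formulation can be illustrated on a linear birth-death network, where the moment hierarchy closes exactly and no entropy-maximizing closure is needed; the paper's contribution addresses precisely the nonlinear case this sketch avoids. The rate constants are arbitrary illustrative values:

```python
# Moment ODEs derived from the master equation of a linear birth-death
# process: births at rate k1, deaths at rate k2*n. For this network
#   d<n>/dt  = k1 - k2*<n>
#   dVar/dt  = k1 + k2*<n> - 2*k2*Var
# are exact (closed), and the steady state is Poisson with mean k1/k2.
k1, k2, dt = 10.0, 1.0, 1e-3
m, v = 0.0, 0.0                     # mean and variance, starting from n = 0
for _ in range(20000):              # explicit Euler integration to t = 20
    dm = k1 - k2 * m
    dv = k1 + k2 * m - 2.0 * k2 * v
    m, v = m + dt * dm, v + dt * dv
```

At steady state both moments converge to k1/k2 = 10, matching the Poisson distribution that the full master equation predicts; a kinetic Monte Carlo run would only sample this distribution, whereas the moment ODEs compute it directly.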

A multilevel computation scheme for time-harmonic fields in three dimensions will be formulated with a new Gaussian translation operator that decays exponentially outside a circular cone centered on the line connecting the source and observation groups. This Gaussian translation operator is directional and diagonal with its sharpness determined by a beam parameter. When the beam parameter is set to zero, the Gaussian translation operator reduces to the standard fast multipole method translation operator. The directionality of the Gaussian translation operator makes it possible to reduce the number of plane waves required to achieve a given accuracy. The sampling rate can be determined straightforwardly to achieve any desired accuracy. The use of the computation scheme will be illustrated through a near-field scanning problem where the far-field pattern of a source is determined from near-field measurements with a known probe. Here the Gaussian translation operator improves the condition number of the matrix equation that determines the far-field pattern. The Gaussian translation operator can also be used when the probe pattern is known only in one hemisphere, as is common in practice. Also, the Gaussian translation operator will be used to solve the scattering problem of the perfectly conducting sphere.

The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media, and treatment of perfect electrical conductors (PECs) are shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.

Classic Cartesian staggered mesh schemes have a number of attractive properties. They do not display spurious pressure modes and they have been shown to locally conserve mass, momentum, kinetic energy, and circulation to machine precision. Recently, a number of generalizations of the staggered mesh approach have been proposed for unstructured (triangular or tetrahedral) meshes. These unstructured staggered mesh methods have been created to retain the attractive pressure aspects and mass conservation properties of the classic Cartesian mesh method. This work addresses the momentum, kinetic energy, and circulation conservation properties of unstructured staggered mesh methods. It is shown that with certain choices of the velocity interpolation, unstructured staggered mesh discretization of the divergence form of the Navier-Stokes equations can conserve kinetic energy and momentum both locally and globally. In addition, it is shown that unstructured staggered mesh discretization of the rotational form of the Navier-Stokes equations can conserve kinetic energy and circulation both locally and globally. The analysis includes viscous terms and a generalization of the concept of conservation in the presence of viscosity to include a negative definite dissipation term in the kinetic energy equation. These novel conserving unstructured staggered mesh schemes have not been previously analyzed. It is shown that they are first-order accurate on nonuniform two-dimensional unstructured meshes and second-order accurate on uniform unstructured meshes. Numerical confirmation of the conservation properties and the order of accuracy of these unstructured staggered mesh methods is presented.

When built, quantum repeaters will allow the distribution of entangled quantum states across large distances, playing a vital part in many proposed quantum technologies. Enabling multiple users to connect through the same network will be key to their real-world deployment. Previous work on repeater technologies has focused only on simple entanglement production, without considering the issues of resource scarcity and competition that necessarily arise in a network setting. In this paper we simulated a thirteen-node network with up to five flows sharing different parts of the network, measuring the total throughput and fairness for each case. Our results suggest that the Internet-like approach of statistical multiplexing use of a congested link gives the highest aggregate throughput. Time division multiplexing and buffer space multiplexing were slightly less effective, but all three schemes allow the sum of multiple flows to substantially exceed that of any one flow, improving over circuit switching by taking advantage of resources that are forced to remain idle in circuit switching. All three schemes proved to have excellent fairness. The high performance, fairness and simplicity of implementation support a recommendation of statistical multiplexing for shared quantum repeater networks.

Given a function u(x) which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. We apply this multi-resolution analysis to Essentially Non-oscillatory (ENO) schemes in order to reduce the number of numerical flux computations needed to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. We present an efficient algorithm for implementing this program in the one-dimensional case; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
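
The locality test can be sketched for one coarsening level in one dimension. The constant (parent-value) prediction used here is an illustrative assumption; the recovery "to a prescribed accuracy" in the paper uses higher-order reconstruction:

```python
import numpy as np

def needs_fine_grid(u_fine, tol=1e-3):
    """Coarsen fine-grid cell averages by pairwise averaging, predict them
    back from the coarse grid, and flag only the cells where the prediction
    error exceeds the tolerance: those localities need fine-grid fluxes.
    Minimal one-level sketch; u_fine must have even length."""
    u_coarse = 0.5 * (u_fine[0::2] + u_fine[1::2])   # pairwise cell-average
    u_pred = np.repeat(u_coarse, 2)                  # constant prediction (assumption)
    return np.abs(u_fine - u_pred) > tol
```

For piecewise-smooth data, only the cells straddling a discontinuity are flagged, so expensive numerical fluxes need to be recomputed in a small fraction of the domain, which is the source of the cost savings claimed above.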

Voice over Internet Protocol (VoIP) applications have received significant interest in the Mobile WiMAX standard for their capability to deliver multimedia services with high bandwidth over long-range transmission. However, one of the main problems of IEEE 802.16 is that it covers multiple BSs with too many profiled layers, which can lead to potential interoperability problems. The multi-BS mode requires multiple BSs to be scanned synchronously before initiating the transmission of broadcast data. In this paper, we first identify the key issues for VoIP over WiMAX. We then present a MAC-layer solution to guarantee the demanded bandwidth and support the highest possible throughput between two WiMAX end points during handover. Moreover, we propose a combined PHY- and MAC-layer scheme to maintain the required communication channel quality for VoIP during handover. Results show that our proposed schemes can improve network throughput by up to 55% and reduce dropped data by up to 70% while satisfying VoIP quality requirements.

With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the grid, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO) and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
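The relative-humidity-PDF idea behind such cloud-fraction diagnoses can be sketched in a few lines. The uniform (top-hat) PDF and the half-width parameter below are illustrative assumptions, not the paper's actual prescribed PDF:

```python
def cloud_fraction(rh_mean, half_width):
    """Diagnose cloud fraction as the probability that subgrid relative
    humidity exceeds saturation (RH = 1), assuming a uniform PDF of
    half-width `half_width` centred on the grid-box mean `rh_mean`."""
    lo, hi = rh_mean - half_width, rh_mean + half_width
    if hi <= 1.0:        # entire grid box subsaturated: no cloud
        return 0.0
    if lo >= 1.0:        # entire grid box saturated: overcast
        return 1.0
    return (hi - 1.0) / (hi - lo)   # fraction of the PDF above saturation

# A grid box at 95% mean RH with +/-10% subgrid variability is partly
# cloudy even though its mean is subsaturated.
frac = cloud_fraction(0.95, 0.10)
```

The same construction with a cloud-water PDF and a collection threshold yields the rain fraction described in the abstract.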

We present a review of the different coupling schemes in a planar array of terahertz metamaterials. The gap-to-gap near-field capacitive coupling between split-ring resonators in a unit cell leads to either a blue shift or a red shift of the fundamental inductive-capacitive (LC) resonance, depending on the position of the split gap. The inductive coupling is enhanced by decreasing the inter-resonator distance, resulting in strong blue shifts of the LC resonance. We observe the LC resonance tuning only when the split-ring resonators are in close proximity to each other; otherwise, they appear to be uncoupled. Conversely, the higher-order resonances are sensitive to the smallest change in the inter-particle distance or split-ring resonator orientation and undergo tremendous resonance line reshaping, giving rise to a sharp subradiant resonance mode which produces hot spots useful for sensing applications. Most of the coupling schemes in a metamaterial are based on a near-field effect, though there also exists a mechanism to couple the resonators through the excitation of the lowest-order lattice mode, which facilitates long-range radiative or diffractive coupling in the split-ring resonator plane, leading to resonance line narrowing of the fundamental as well as the higher-order resonance modes.

A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner-code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1, and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example, but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
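The outer generator polynomial X^16+X^12+X^5+1 is the familiar X.25 frame-check polynomial, and error detection with it amounts to polynomial division over GF(2). A minimal bit-serial sketch (the `crc_remainder` helper is illustrative, not the flight implementation):

```python
def crc_remainder(bits, poly=0b10001000000100001):
    """Divide the message polynomial by the generator over GF(2) and
    return the remainder.  The default `poly` encodes the X.25 generator
    X^16 + X^12 + X^5 + 1; `bits` is a list of 0/1 message bits."""
    degree = poly.bit_length() - 1           # 16 for the X.25 generator
    reg = 0
    for b in list(bits) + [0] * degree:      # append `degree` zero bits
        reg = (reg << 1) | b
        if reg >> degree:                    # leading coefficient set:
            reg ^= poly                      # subtract (XOR) the generator
    return reg

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1]            # arbitrary message bits
rem = crc_remainder(msg)                     # 16-bit check sequence
```

A codeword formed by the message followed by its remainder divides the generator evenly, which is exactly the condition the receiver checks before accepting a frame or requesting retransmission.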

After the 2003 act introducing free licensing in the power sector, many power plants in both the public and private sectors are being commissioned. Load growth in India exceeds 10% per annum. As these plants connect to the power grid, the grid becomes more complicated and problems related to grid stability are exacerbated. There remains the possibility of grid failure, and under such circumstances it is desirable to island at least a single generating unit in the power plant of a specified geographical area. After islanding, the generating unit must survive not only for the restoration of the grid but also to supply power to important consumers. For grid stability and the effective survival of an islanded generating unit, it is mandatory to maintain the power balance equation. This paper focuses on the lacunae observed in the implementation of a special protection scheme to carry out islanding operation at Bhusawal Thermal Power Station (BTPS), supported by case studies. The concepts of islanding, load shedding, and generator tripping, along with the importance of the power balance equation, are discussed. Efforts are made to provide a solution for the survival of the islanding scheme.

The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
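The cancelable-template idea can be sketched in a few lines. This is a toy illustration only: it assumes the template is already a fixed-length byte-valued feature vector, and the keyed permutation-plus-offset transform below is a stand-in, not one of the schemes the review covers:

```python
import hashlib
import random

def cancelable_template(features, app_key):
    """Derive an application-specific, revocable template by applying a
    keyed, repeatable transform (a seeded permutation plus keyed offsets)
    to the raw feature vector.  Revoking a compromised template amounts
    to re-enrolling with a new `app_key`."""
    seed = int.from_bytes(hashlib.sha256(app_key.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    order = list(range(len(features)))
    rng.shuffle(order)                                  # keyed permutation
    return [(features[i] + rng.randrange(256)) % 256 for i in order]

raw = [10, 20, 30, 40]                                  # toy feature vector
protected = cancelable_template(raw, "bank-app")
```

Matching would then be performed entirely in the transformed domain: the same key always maps the same raw template to the same protected template, while different applications (keys) see unlinkable versions.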

A semaphore scheme has been devised to satisfy a requirement to enable ultrahigh-frequency (UHF) radio communication between a spacecraft descending from orbit to a landing on Mars and a spacecraft, in orbit about Mars, that relays communications between Earth and the lander spacecraft. There are also two subsidiary requirements: (1) to use UHF transceivers, built and qualified for operation aboard the spacecraft, that operate with residual-carrier binary phase-shift-keying (BPSK) modulation at a selectable data rate of 8, 32, 128, or 256 kb/s; and (2) to enable low-rate signaling even when received signals become so weak as to prevent communication at the minimum BPSK rate of 8 kb/s. The scheme involves exploitation of Manchester encoding, which is used in conjunction with residual-carrier modulation to aid the carrier-tracking loop. By choosing various sequences of 1s, 0s, or 1s alternating with 0s to be fed to the residual-carrier modulator, one would cause the modulator to generate sidebands at a fundamental frequency of 4 or 8 kHz and harmonics thereof. These sidebands would constitute the desired semaphores. In reception, the semaphores would be detected by a software demodulator.
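The 4 and 8 kHz semaphore tones follow directly from how Manchester encoding maps bits to two-chip symbols. A sketch, assuming the chip mapping 1 -> (1, 0), 0 -> (0, 1) and the 8 kb/s bit rate for illustration (the flight convention may differ):

```python
def manchester(bits):
    """Map each bit to a two-chip Manchester symbol: 1 -> (1,0), 0 -> (0,1)."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def fundamental_hz(chips, chip_rate_hz):
    """Fundamental frequency of a periodic chip pattern: the chip rate
    divided by the length of the shortest repeating period."""
    n = len(chips)
    for p in range(1, n + 1):
        if n % p == 0 and chips == chips[:p] * (n // p):
            return chip_rate_hz / p
    return chip_rate_hz / n

CHIP_RATE = 16_000                        # 8 kb/s Manchester -> 16 kchips/s
all_ones    = manchester([1] * 8)         # 1010... square wave at 8 kHz
alternating = manchester([1, 0] * 4)      # 10011001... square wave at 4 kHz
```

Feeding all-1s (or all-0s) thus produces a square wave at the bit rate, while alternating 1s and 0s halves the fundamental, giving the two distinguishable semaphore tones.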

In fusion plasmas, diffusion tensors are extremely anisotropic due to the high temperature and large magnetic field strength. This causes diffusion, heat conduction, and viscous momentum loss to effectively be aligned with the magnetic field lines. This alignment leads to different values for the respective diffusive coefficients in the magnetic field direction and in the perpendicular direction, to the extent that heat diffusion coefficients can be up to 10^12 times larger in the parallel direction than in the perpendicular direction. This anisotropy puts stringent requirements on the numerical methods used to approximate the MHD equations, since any misalignment of the grid may cause the perpendicular diffusion to be polluted by the numerical error in approximating the parallel diffusion. Currently the common approach is to apply magnetic field-aligned coordinates, an approach that automatically takes care of the directionality of the diffusive coefficients. This approach runs into problems at X-points and at points where there is magnetic reconnection, since this causes local non-alignment. It is therefore useful to consider numerical schemes that are tolerant to the misalignment of the grid with the magnetic field lines, both to improve existing methods and to help open the possibility of applying regular non-aligned grids. To investigate this, in this paper several discretization schemes are developed and applied to the anisotropic heat diffusion equation on a non-aligned grid.
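The anisotropy being discretized can be written D = d_par b b^T + d_perp (I - b b^T) for a unit field direction b. A small 2-D sketch of this tensor construction (pure illustration of the continuous operator, not one of the paper's discretization schemes):

```python
import math

def diffusion_tensor(bx, by, d_par, d_perp):
    """Anisotropic diffusion tensor D = d_par*b b^T + d_perp*(I - b b^T)
    for a magnetic-field direction b = (bx, by), normalized internally."""
    norm = math.hypot(bx, by)
    bx, by = bx / norm, by / norm
    bb = [[bx * bx, bx * by],
          [by * bx, by * by]]                # parallel projector b b^T
    return [[d_par * bb[i][j] + d_perp * ((i == j) - bb[i][j])
             for j in range(2)] for i in range(2)]

# With the field along x and a 10^12 anisotropy ratio, heat flows twelve
# orders of magnitude faster along x than along y.
D = diffusion_tensor(1.0, 0.0, 1e12, 1.0)
```

On a grid not aligned with b, the off-diagonal entries of D are nonzero, which is precisely why parallel truncation error can leak into the perpendicular direction.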

The present finite-difference schemes for the evaluation of first-order, second-order, and higher-order derivatives yield improved representation of a range of scales and may be used on nonuniform meshes. Various boundary conditions may be invoked, and both accurate interpolation and spectral-like filtering can be accomplished by means of schemes for derivatives at mid-cell locations. This family of schemes reduces to the Padé schemes when the maximal formal accuracy constraint is imposed with a specific computational stencil. Attention is given to illustrative applications of these schemes in fluid dynamics.
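The classical member of this family is the fourth-order compact (Padé) first-derivative relation (1/4)f'(x-h) + f'(x) + (1/4)f'(x+h) = (3/2)(f(x+h) - f(x-h))/(2h). A quick consistency sketch of that relation, given exact f and f' (a check of the stencil, not a full tridiagonal compact-derivative solver):

```python
def pade_residual(f, fp, x, h):
    """Residual of the 4th-order compact (Pade) first-derivative relation
        (1/4) f'(x-h) + f'(x) + (1/4) f'(x+h)
            = (3/2) * (f(x+h) - f(x-h)) / (2h)
    evaluated with the exact function f and derivative fp.  The residual
    vanishes (to rounding) for polynomials up to degree four and is
    O(h^4) in general."""
    lhs = 0.25 * fp(x - h) + fp(x) + 0.25 * fp(x + h)
    rhs = 1.5 * (f(x + h) - f(x - h)) / (2.0 * h)
    return lhs - rhs

# Exact for quartics, reflecting the scheme's O(h^4) truncation error.
r = pade_residual(lambda x: x**4, lambda x: 4 * x**3, 0.7, 0.1)
```

In practice the relation couples the unknown derivatives at neighboring points, so evaluating them requires a tridiagonal solve; the check above only verifies the stencil's formal accuracy.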

In this paper, we present a group signature scheme using quantum teleportation. Different from classical group signatures and current quantum signature schemes, which could deliver either group signature or unconditional security but not both, our scheme guarantees both by adopting quantum key preparation, a quantum encryption algorithm, and quantum teleportation. Security analysis shows that our scheme has the characteristics of group signature, non-counterfeit, non-disavowal, blindness, and traceability. Our quantum group signature scheme has foreseeable applications in e-payment systems, e-government, e-business, etc.

We investigate the existing arbitrated quantum signature schemes as well as their cryptanalysis, including the intercept-resend attack and the denial-of-service attack. By exploiting the loopholes of these schemes, a malicious signatory may successfully disavow signed messages, or the receiver may actively negate the signature from the signatory without being detected. By modifying the existing schemes, we develop counter-measures to these attacks using Bell states. The newly proposed scheme strengthens the security of arbitrated quantum signatures. Furthermore, several valuable topics are also presented for further research on the quantum signature scheme.

Due to the potential capability of providing unconditional security, arbitrated quantum signature (AQS) schemes, whose implementation depends on the participation of a trusted third party, received intense attention in the past decade. Recently, some typical AQS schemes were cryptanalyzed and improved. In this paper, we analyze the security property of some AQS schemes and show that all the previous AQS schemes, no matter whether original or improved, are still insecure in the sense that the messages and the corresponding signatures can be exchanged among different receivers, allowing the receivers to deny having accepted the signature of an appointed message. Some further improved methods on the AQS schemes are also discussed.

A newly developed transport scheme, the Hybrid Eulerian Lagrangian (HEL) scheme, has been tested using a module for atmospheric chemistry, including 58 chemical species, and compared to two other traditional advection schemes: a classical pseudospectral Eulerian method, the Accurate Space Derivative (ASD) scheme, and the bi-cubic semi-Lagrangian (SL) scheme, using classical rotation tests. The rotation tests have been designed to test and compare the advection schemes for different spatial and temporal resolutions in different chemical conditions (rural and urban) and for different shapes (cone and slotted cylinder). This gives the advection schemes different challenges with respect to relatively slow or fast chemistry and smooth or sharp gradients. In every test, error measures have been calculated and used for ranking the advection schemes with respect to performance, i.e. lowest overall errors for all chemical species. The results presented show that the new transport scheme, HEL, by far outperforms both the Eulerian and semi-Lagrangian schemes, with very low error estimates compared to the two other schemes.

At present, two radiation schemes are used in RAMS: the Mahrer and Pielke (M-P) scheme and the Chen and Cotton (C-C) scheme. The M-P scheme requires little computational expense, but does not include the radiative effects of liquid water or ice; the C-C scheme accounts for the radiative effects of liquid water and ice but is fairly expensive computationally. For simulations with clouds, the C-C scheme is obviously a better choice, but for clear sky conditions, RAMS users face a decision regarding which radiation scheme to use. It has been noted that the choice of radiation scheme may result in significantly different results for the same case. To examine the differences in the radiative fluxes and the boundary-layer structure corresponding to the two radiation schemes in RAMS, we have carried out a study in which RAMS was used to simulate the same case with the two different radiation schemes. The modeled radiative fluxes of the two schemes were then compared with the field measurements. A description of the observations and the case study, a comparison and discussion of the results, and a summary and conclusions follow.

Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
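The two limiting behaviours cited above can be checked directly from the textbook amplification factors, assuming a Fourier mode exp(i*theta*j) on a uniform grid. These are the classical Crank-Nicolson and Leapfrog factors, shown for reference, not the new schemes themselves:

```python
import cmath
import math

def g_crank_nicolson(mu, theta):
    """Amplification factor of Crank-Nicolson for u_t = nu*u_xx with
    mu = nu*dt/dx^2:  |G| <= 1 for every mu (unconditional stability),
    with mild dissipation of each mode."""
    s = 2.0 * mu * math.sin(theta / 2.0) ** 2
    return (1.0 - s) / (1.0 + s)

def g_leapfrog(c, theta):
    """Principal amplification factor of Leapfrog for u_t + a*u_x = 0 with
    Courant number c = a*dt/dx:  |G| = 1 (zero dissipation) for |c| <= 1,
    matching the inviscid limit described in the abstract."""
    s = c * math.sin(theta)
    return -1j * s + cmath.sqrt(1.0 - s * s)
```

A scheme whose factors reduce to these in the pure-diffusion and pure-convection limits is non-dissipative in the inviscid case while remaining stable when diffusion dominates.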

We propose two novel schemes for probabilistic remote preparation of an arbitrary quantum state with the aid of the introduction of auxiliary particles and appropriate local unitary operations. The first new proposal could be used to improve the total successful probability of the remote preparation of a general quantum state, and the successful probability is twice as much as the one of the preceding schemes. Meanwhile, one can make use of the second proposal to realize the remote state preparation when the information of the partially entangled state is only available for the sender. This is in contrast to the fact that the receiver must know the non-maximally entangled state in previous typical schemes. Hence, our second proposal could enlarge the applied range of probabilistic remote state preparation. Additionally, we will illustrate how to combine these novel proposals in detail, and our results show that the union has the advantages of both schemes. Of course, our protocols are implemented at the cost of the increased complexity of the practical realizations.

Most option pricing problems have nonsmooth payoffs or discontinuous derivatives at the exercise price. Discrete barrier options have not only nonsmooth payoffs but also time-dependent discontinuities. In pricing barrier options, certain aspects are triggered if the asset price becomes too high or too low. Standard smoothing schemes used to solve problems with nonsmooth payoffs do not work well for discrete barrier options because of the discontinuities introduced in the time domain each time a barrier is applied. Moreover, the resulting unwanted oscillations become worse when estimating the hedging parameters, e.g., Delta and Gamma. We present an improved smoothing strategy for the Crank-Nicolson method, which is unique in achieving optimal-order convergence for barrier option problems. Numerical experiments are discussed for one-asset and two-asset problems. Time evolution graphs are obtained for one-asset problems to show how option prices change with respect to time. This smoothing strategy is then extended to higher-order methods using diagonal (m,m)-Padé main schemes, with the (0,2m-1) subdiagonal Padé schemes used as damping schemes.
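For the Crank-Nicolson case (the (1,1) Padé scheme), the (0,1) subdiagonal Padé damping scheme is backward Euler, so damping the startup amounts to the well-known Rannacher device: replace the first CN step with implicit-Euler half-steps to kill the marginally damped high-frequency modes excited by the nonsmooth data. A minimal sketch on the heat equation with a step "payoff" (the model problem, with illustrative parameters, not the barrier-option pricer itself):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def step_heat(u, mu, theta, nsteps):
    """theta-scheme for u_t = u_xx with zero Dirichlet ends:
    theta = 0.5 is Crank-Nicolson, theta = 1.0 is backward Euler."""
    n = len(u)
    for _ in range(nsteps):
        a = [-theta * mu] * n
        b = [1.0 + 2.0 * theta * mu] * n
        c = [-theta * mu] * n
        d = [u[i] + (1.0 - theta) * mu *
             ((u[i - 1] if i else 0.0) - 2.0 * u[i]
              + (u[i + 1] if i < n - 1 else 0.0))
             for i in range(n)]
        u = thomas(a, b, c, d)
    return u

def total_variation(u):
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

# Discontinuous initial data on 101 interior points; mu large so plain
# Crank-Nicolson barely damps the highest-frequency modes.
u0 = [0.0] * 50 + [1.0] * 51
mu = 100.0
plain = step_heat(u0, mu, 0.5, 10)
# Damped startup: two fully implicit half-steps, then Crank-Nicolson.
rann = step_heat(step_heat(u0, mu / 2.0, 1.0, 2), mu, 0.5, 9)
```

The undamped run keeps a persistent oscillation packet around the discontinuity (its total variation stays well above that of the data), while the damped startup suppresses it, which is the effect the smoothing strategy exploits near each barrier application.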

Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
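The locally selected blend of one-sided and central differences can be illustrated with the classic minmod-limited MUSCL update for linear advection; this is a standard TVD construction in the same spirit, assumed here for illustration rather than the paper's specific scheme:

```python
def minmod(a, b):
    """Pick the smaller-magnitude slope when signs agree, else zero."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_step(u, c):
    """One step of slope-limited upwind for u_t + a*u_x = 0 (a > 0),
    Courant number c = a*dt/dx <= 1, periodic boundaries."""
    n = len(u)
    # limited cell slope: a blend of the two one-sided differences
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # second-order reconstruction at the interface right of cell i
    f = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    return [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]

def total_variation(u):
    n = len(u)
    return sum(abs(u[(i + 1) % n] - u[i]) for i in range(n))

# Advect a square wave for many steps: total variation never increases,
# and no Gibbs oscillations appear at the discontinuities.
u = [1.0 if 10 <= i < 30 else 0.0 for i in range(100)]
tv0 = total_variation(u)
for _ in range(200):
    u = muscl_step(u, 0.5)
```

The limiter automatically reduces toward first-order one-sided differencing at discontinuities while retaining second-order accuracy where the solution is smooth, which is the trade-off the abstract's schemes refine.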

Two fully-discrete finite-difference schemes for wave propagation problems are presented, a maximum-order scheme and an optimized (or spectral-like) scheme. Both combine a seven-point spatial operator and an explicit six-stage time-march method. The maximum-order operator is fifth-order in space and is sixth-order in time for a linear problem with periodic boundary conditions. The phase and amplitude errors of the schemes obtained using Fourier analysis are given and compared with a second-order and a fourth-order method. Numerical experiments are presented which demonstrate the usefulness of the schemes for a range of problems. For some problems, the optimized scheme leads to a reduction in global error compared to the maximum-order scheme with no additional computational expense.

The spectral mimetic (SM) properties of operator-difference schemes for solving the Cauchy problem for first-order evolutionary equations concern the time evolution of individual harmonics of the solution. Keeping track of the spectral characteristics makes it possible to select more appropriate approximations with respect to time. Among two-level implicit schemes of improved accuracy based on Padé approximations, SM-stability holds for schemes based on polynomial approximations if the operator in an evolutionary equation is self-adjoint and for symmetric schemes if the operator is skew-symmetric. In this paper, additive schemes (also called splitting schemes) are constructed for evolutionary equations with general operators. These schemes are based on the extraction of the self-adjoint and skew-symmetric components of the corresponding operator.

When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These results indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices, and B-tree indices. In addition, we also verified that the average query response time
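The word-aligned idea can be made concrete with a toy codec. This sketch assumes 32-bit words, a bitmap whose length is a multiple of 31, and omits WAH's bookkeeping for a partially filled last group; function names are illustrative:

```python
GROUP = 31  # payload bits carried per 32-bit word

def wah_encode(bits):
    """Compress a 0/1 list (length a multiple of 31) into WAH-style words:
    a literal word has the top bit clear and carries 31 raw bits; a fill
    word has the top bit set, then the fill bit, then a 30-bit count of
    consecutive all-0 or all-1 groups."""
    groups = [bits[i:i + GROUP] for i in range(0, len(bits), GROUP)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):                 # uniform group: fill
            fill, run = g[0], 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            words.append((1 << 31) | (fill << 30) | run)
            i += run
        else:                                         # mixed group: literal
            val = 0
            for b in g:
                val = (val << 1) | b
            words.append(val)
            i += 1
    return words

def wah_decode(words):
    bits = []
    for w in words:
        if w >> 31:                                   # fill word
            fill, run = (w >> 30) & 1, w & ((1 << 30) - 1)
            bits += [fill] * (GROUP * run)
        else:                                         # literal word
            bits += [(w >> k) & 1 for k in range(GROUP - 1, -1, -1)]
    return bits
```

Because every word is either a raw 31-bit literal or a run counted in whole groups, logical operations can proceed word-at-a-time without bit-level realignment, which is the source of WAH's CPU advantage over byte-aligned codes.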

An improved timing scheme has been conceived for operation of a scanning satellite-borne rain-measuring radar system. The scheme allows a real-time-generated solution, which is required for auto targeting. The current timing scheme used in radar satellites involves pre-computing a solution that allows the instrument to catch all transmitted pulses without transmitting and receiving at the same time. Satellite altitude requires many pulses in flight at any time, and the timing solution to prevent transmit and receive operations from colliding is usually found iteratively. The proposed satellite has a large number of scanning beams each with a different range to target and few pulses per beam. Furthermore, the satellite will be self-targeting, so the selection of which beams are used will change from sweep to sweep. The proposed timing solution guarantees no echo collisions, can be generated using simple FPGA-based hardware in real time, and can be mathematically shown to deliver the maximum number of pulses per second, given the timing constraints. The timing solution is computed every sweep, and consists of three phases: (1) a build-up phase, (2) a feedback phase, and (3) a build-down phase. Before the build-up phase can begin, the beams to be transmitted are sorted in numerical order. The numerical order of the beams is also the order from shortest range to longest range. Sorting the list guarantees no pulse collisions. The build-up phase begins by transmitting the first pulse from the first beam on the list. Transmission of this pulse starts a delay counter, which stores the beam number and the time delay to the beginning of the receive window for that beam. The timing generator waits just long enough to complete the transmit pulse plus one receive window, then sends out the second pulse. The second pulse starts a second delay counter, which stores its beam number and time delay. This process continues until an output from the first timer indicates there is less

Atomic chains, precise structures of atomic scale created on an atomically regulated substrate surface, are candidates for future electronics. A doping scheme for intrinsic semiconducting Mg chains is considered. In order to suppress the unwanted Anderson localization and minimize the deformation of the original band shape, atomic modulation doping is considered, which is to place dopant atoms beside the chain periodically. Group I atoms are donors, and group VI or VII atoms are acceptors. As long as the lattice constant is long so that the s-p band crossing has not occurred, whether dopant atoms behave as donors or acceptors is closely related to the energy level alignment of isolated atomic levels. Band structures are calculated for Br-doped (p-type) and Cs-doped (n-type) Mg chains using the tight-binding theory with universal parameters, and it is shown that the band deformation is minimized and only the Fermi energy position is modified.

The resonance ionization laser ion source (RILIS) of the ISOLDE on-line isotope separation facility is based on the method of laser step-wise resonance ionization of atoms in a hot metal cavity. The atomic selectivity of the RILIS complements the mass selection process of the ISOLDE separator magnets to provide beams of a chosen isotope with greatly reduced isobaric contamination. Using a system of dye lasers pumped by copper vapour lasers, ion beams of 24 elements have been generated at ISOLDE with ionization efficiencies in the range of 0.5-15%. As part of the ongoing RILIS development, off-line resonance ionization spectroscopy studies carried out in 2003 and 2004 have determined the optimal three-step ionization schemes for scandium, antimony, dysprosium and yttrium.

This paper focuses on the evolution of advection upstream splitting method (AUSM) schemes. The main ingredients that have led to the development of modern computational fluid dynamics (CFD) methods, and thus the ideas behind AUSM, are reviewed. First and foremost is the concept of upwinding. Second, the use of the Riemann problem in constructing the numerical flux in the finite-volume setting. Third, the necessity of including all physical processes, as characterised by the linear (convection) and nonlinear (acoustic) fields. Fourth, the realisation of separating the flux into convection and pressure fluxes. The rest of this review briefly outlines the technical evolution of AUSM; more details can be found in the cited references. Keywords: Computational fluid dynamics methods, hyperbolic systems, advection upstream splitting method, conservation laws, upwinding, CFD

We propose a scheme or procedure for performing practical calculations with generalized seniority. It reduces the total computing time by precalculating a set of intermediate quantities. We show that practically the computational (time and space) complexity of the algorithm does not depend on the valence particle number, in sharp contrast to the standard shell model. The method is demonstrated in the semi-magic nuclei ^{46,48,50}Ca, ^{116}Sn, and ^{182}Pb, where the low-lying states could be well reproduced through achieved convergence at high generalized seniority. Odd particle-number systems or possible three-body terms from the Hamiltonian could be treated by the same formalism without complication.

We present the M1 excitation scheme in even-even deformed nuclei from the sum-rule viewpoint based on the Nilsson+BCS approach. The sum-rule states are introduced for the scissors, spin and spin-flip modes. The functional form of the B(M1) sum rule of the scissors mode is obtained, and its actual value is shown to be 4~6 μ_N^2. The spin excitation B(M1) is 10~15 μ_N^2, including the spin-flip transitions. The total B(M1) is 15~20 μ_N^2. The effect of the SD and SDG pair truncation is studied to test IBM-2 for M1 excitations. The SDG truncation reproduces very well the calculation without truncation. The SD truncation reproduces the orbital excitation, but yields some deviations for the spin excitation.

In optical SETI (OSETI) experiments, it is generally assumed that signals will be deliberate, narrowly targeted beacons sent by extraterrestrial societies to large numbers of candidate star systems. If this is so, then it may be unrealistic to expect a high duty cycle for the received signal. Ergo, an advantage accrues to any OSETI scheme that realistically suggests where and when to search. In this paper, we elaborate a proposal (Castellano, Doyle, & McIntosh 2000) for selecting regions of sky for intensive optical SETI monitoring based on characteristics of our solar system that would be visible at great distance. This can enormously lessen the amount of sky that needs to be searched. In addition, this is an attractive approach for the transmitting society because it both increases the chances of reception and provides a large reduction in the energy required. With good astrometric information, the transmitter need be no more powerful than an automobile tail light.

Plastics waste is causing a major headache for Duales System Deutschland (DSD: Bonn), one of Europe's groundbreaking national packaging recycling programs. Five of Germany's states have threatened to withdraw from the plan, mainly because of the lack of plastics recycling capacity, says a DSD spokeswoman. "The pace of establishing recycling capacity does not meet the zeal in collection," she notes. In addition, the organization has been crippled by a lack of funds. It claims that up to half the subscribers to the scheme - who pay a fee to display a green dot on packaging - are either irregular payers or not paying fees in proportion to their use of the green dot. The cost of setting up and paying for plastics recycling - not originally part of DSD's responsibility - is also hurting the organization.

Fluid flows in the transitional and turbulent regimes possess a wide range of length and time scales. The numerical computation of these flows therefore requires numerical methods that can accurately represent all, or at least a significant portion, of this range of scales. The inaccurate representation of small scales is inherent to non-spectral schemes. This can be detrimental to computations in which the energy in the small scales is comparable to that in the larger scales, e.g. large-eddy simulations of high Reynolds number turbulence. The inaccurate numerical representation of the small scales in these large-eddy simulations can result in the numerical error overwhelming the contribution of the subgrid-scale model.

A semidiscrete finite element method of Galerkin type is proposed for the numerical solution of a linear hyperbolic partial differential equation. The question of stability is reduced to the stability of a system of ordinary differential equations, to which Dahlquist theory applies. Results are presented on separating the part of the numerical solution that causes the spurious oscillation in the shock-like response of the semidiscrete scheme to a step-function initial condition. In general, all methods produce such oscillatory overshoots on either side of shocks. This overshoot pathology, which displays a behavior similar to the Gibbs phenomenon of Fourier series, is explained on the basis of the dispersion of the separated Fourier components, an explanation that relies on linearized theory. Expository results are presented.
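
The overshoot pathology described above is easy to reproduce with any dispersive linear scheme; a minimal sketch, using the classical Lax-Wendroff scheme on linear advection of a step (not the paper's Galerkin semidiscretization), is:

```python
import numpy as np

# u_t + a u_x = 0 with a step initial condition; the exact solution
# stays in [0, 1], but the dispersive Lax-Wendroff scheme produces
# oscillatory over- and undershoots near the discontinuities.
N, cfl = 200, 0.5
u = np.where((np.arange(N) + 0.5) / N < 0.3, 1.0, 0.0)

for _ in range(100):
    up = np.roll(u, -1)            # u_{j+1} (periodic boundaries)
    um = np.roll(u, 1)             # u_{j-1}
    u = u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)

print(u.max(), u.min())            # max > 1 and min < 0: Gibbs-like wiggles
```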

Porous electrode theory coupled with transport and reaction mechanisms is a widely used technique to model Li-ion batteries, employing an appropriate discretization or approximation for solid phase diffusion within electrode particles. One of the major difficulties in simulating Li-ion battery models is the need to account for solid phase diffusion in a second, radial dimension r, which greatly increases the computation time/cost. Various methods that reduce the computational cost have been introduced to treat this phenomenon, but most of them do not guarantee mass conservation. The aim of this paper is to introduce an inherently mass conserving yet computationally efficient method for solid phase diffusion based on Lobatto IIIA quadrature. This paper also presents coupling of the new solid phase reformulation scheme with a macro-homogeneous porous electrode theory based pseudo-2D model for a Li-ion battery. (C) The Author(s) 2015. Published by ECS. All rights reserved.
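
The mass-conservation property at issue can be illustrated with a generic conservative finite-volume discretization of spherical solid-phase diffusion (a sketch only, not the paper's Lobatto IIIA reformulation, and with illustrative parameter values): because each interfacial flux leaves one shell and enters the next, total lithium changes only through the surface flux term.

```python
import numpy as np

# Fickian diffusion in a spherical particle, dc/dt = (1/r^2) d/dr (r^2 D dc/dr),
# discretized in flux form on N shells with an applied surface flux j.
# Every interfacial flux leaves one shell and enters its neighbor, so
# total lithium changes only through the surface term: exact mass balance.
R, D, j = 5e-6, 1e-14, 1e-6          # particle radius (m), diffusivity, flux
N = 20
dr = R / N
rf = np.arange(N + 1) * dr           # shell face radii
V = 4/3 * np.pi * (rf[1:]**3 - rf[:-1]**3)   # shell volumes
A = 4 * np.pi * rf**2                # face areas
c = np.full(N, 10000.0)              # initial concentration (mol/m^3)

dt = 0.1 * dr**2 / D                 # stable explicit time step
steps = 1000
for _ in range(steps):
    F = np.zeros(N + 1)              # outward (area-integrated) face fluxes
    F[1:-1] = -D * A[1:-1] * (c[1:] - c[:-1]) / dr
    F[-1] = A[-1] * j                # lithium leaving through the surface
    c = c - dt * (F[1:] - F[:-1]) / V

total = np.sum(V * c)
expected = 4/3 * np.pi * R**3 * 10000.0 - steps * dt * A[-1] * j
print(abs(total - expected) / expected)    # ~ machine precision
```

The point of schemes like the paper's is to retain exactly this balance while replacing the many radial nodes with a few quadrature unknowns.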

Results are presented reflecting the two tasks proposed for the current year, namely a feasibility study of simulating the NASA network and a study of progressive transmission schemes. The view of the NASA network, gleaned from the various technical reports made available to us, is provided, along with a brief overview of how the current simulator could be modified to accomplish the goal of simulating the NASA network. As this material would be the basis for the actual simulation, it is important to make sure that it is an accurate reflection of the requirements on the simulator. Brief descriptions of the set of progressive transmission algorithms selected for the study are also contained. The results available in the literature were obtained under a variety of different assumptions, not all of which are stated; as such, the only way to compare the efficiency and the implementational complexity of the various algorithms is to simulate them.

To assess the advantages of reprocessing and recycling the spent fuel from nuclear power reactors against a once-through policy, a MOX fuel design is proposed to match a generic scenario for twin BWRs and establish a fuel management scheme. Calculations were done for the amount of fuel that the plants will use during 40 years of operation, and the costs of each option were evaluated using the constant-money method with current prices for uranium and services. Finally, a comparison between the options was made, showing that even at the current high prices of uranium, the recycling option is still more expensive than the once-through alternative. Reprocessing could, however, be an alternative to reduce the amount of spent fuel stored in the reactor pools. (authors)

Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, discussing fundamental knowledge about coding, block coding and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced, and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver and the puncturing pattern are examined, and a criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.
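
The role of a puncturing pattern can be sketched as follows: a rate-1/3 turbo encoder emits a systematic stream and two parity streams, and deleting parity bits according to a periodic 0/1 pattern raises the code rate. (Hypothetical pattern and stand-in stream labels, for illustration only; the report's algorithm selects patterns by their effect on the output weight distribution.)

```python
# A rate-1/3 turbo encoder emits, per information bit, one systematic
# bit and one parity bit from each constituent encoder.  Puncturing
# deletes parity bits according to periodic 0/1 patterns (systematic
# bits are always kept), raising the rate.
def puncture(sys_bits, p1, p2, P1, P2):
    out = []
    for i in range(len(sys_bits)):
        out.append(sys_bits[i])
        if P1[i % len(P1)]:
            out.append(p1[i])
        if P2[i % len(P2)]:
            out.append(p2[i])
    return out

k = 12
sys_bits = [f"s{i}" for i in range(k)]
p1 = [f"p1_{i}" for i in range(k)]
p2 = [f"p2_{i}" for i in range(k)]

# keep 1 parity bit per 3 info bits, alternating encoders:
# 6 info bits -> 6 systematic + 2 parity = 8 transmitted, i.e. rate 3/4
tx = puncture(sys_bits, p1, p2, P1=[1, 0, 0, 0, 0, 0], P2=[0, 0, 0, 1, 0, 0])
print(k / len(tx))     # 0.75
```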

Developing shape models is an important aspect of computer vision research. Geometric and differential properties of the surface can be computed from shape models, which also aid the tasks of object representation and recognition. In this paper we present a new approach for shape modeling which, while retaining important features of the existing methods, overcomes most of their limitations. Our technique can be applied to model arbitrarily complex shapes, shapes with protrusions, and situations where no a priori assumption about the object's topology can be made. A single instance of our model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. Our method is based on the level set ideas developed by Osher & Sethian to follow propagating solid/liquid interfaces with curvature-dependent speeds. The interface is a closed, nonintersecting hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. We move the interface by solving a `Hamilton-Jacobi' type equation written for a function in which the interface is a particular level set. A speed function synthesized from the image is used to stop the interface in the vicinity of the object boundaries. The resulting equations of motion are solved by numerical techniques borrowed from the technology of hyperbolic conservation laws. An added advantage of this scheme is that it can easily be extended to any number of space dimensions. The efficacy of the scheme is demonstrated with numerical experiments on synthesized images and noisy medical images.
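
The level set evolution described above can be sketched with a first-order Godunov upwind discretization of phi_t + F|grad phi| = 0; here the front simply expands with constant unit speed (in the shape-recovery application, F would additionally carry the image-based stopping term):

```python
import numpy as np

# First-order Godunov upwind step for phi_t + F |grad phi| = 0, F > 0:
# the front (zero level set of phi) expands at speed F.  Here phi starts
# as the signed distance to a circle of radius 5; after t = 10 the zero
# level set sits near radius 15.
n, h, dt, F = 64, 1.0, 0.5, 1.0
Y, X = np.mgrid[0:n, 0:n]
phi = np.sqrt((X - 32.0)**2 + (Y - 32.0)**2) - 5.0

def step(phi):
    dxm = (phi - np.roll(phi, 1, axis=1)) / h    # backward difference
    dxp = (np.roll(phi, -1, axis=1) - phi) / h   # forward difference
    dym = (phi - np.roll(phi, 1, axis=0)) / h
    dyp = (np.roll(phi, -1, axis=0) - phi) / h
    grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                 + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    return phi - dt * F * grad       # F could be an image-based speed

for _ in range(20):
    phi = step(phi)

# the sign change of phi brackets the front near radius 15
print(phi[32, 45] < 0.0, phi[32, 49] > 0.0)
```

The upwind choice of one-sided differences is the "technology of hyperbolic conservation laws" the abstract refers to; it keeps the front evolution stable and entropy-satisfying.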

The existing certificateless signcryption schemes were designed mainly on the basis of traditional public key cryptography, in which the security relies on hard problems such as integer factorization and the discrete logarithm. However, these problems can be solved efficiently by quantum computing, so the existing certificateless signcryption schemes are vulnerable to quantum attack. Multivariate public key cryptography (MPKC), which can resist quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we propose a new construction of a certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can withstand quantum attack. Multivariate quadratic polynomial operations, which have lower computation complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis also shows that our scheme has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computation complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity such as smart cards. PMID:23967037

This paper proposes a new multi-input multi-output (MIMO) transmit scheme aided by an artificial neural network (ANN). The morphological perceptron with competitive learning (MP/CL) concept is deployed as a decision rule in the MIMO detection stage. The proposed MIMO transmission scheme is able to achieve double spectral efficiency; hence, in each time slot the receiver decodes two symbols at a time instead of one, as in the Alamouti scheme. Another advantage of the proposed transmit scheme with the MP/CL-aided detector is its polynomial complexity in the modulation order, which becomes linear when the data stream length is greater than the modulation order. The performance of the proposed scheme is compared to traditional MIMO schemes, namely the Alamouti scheme and the maximum-likelihood MIMO (ML-MIMO) detector. The proposed scheme is also evaluated in a scenario with variable channel information along the frame. Numerical results show that the diversity gain of the space-time-coded Alamouti scheme is partially lost, which slightly reduces the bit-error-rate (BER) performance of the proposed MP/CL-NN MIMO scheme. PMID:27135805

As a significant part of the Internet of Things (IoT), the Wireless Body Area Network (WBAN) has attracted much attention in recent years. In WBANs, sensors placed in or around the human body collect sensitive body data and transmit it over an open wireless channel in which the messages may be intercepted, modified, etc. Recently, Wang et al. presented a new anonymous authentication scheme for WBANs and claimed that their scheme solves the security problems of previous schemes. Unfortunately, we demonstrate that their scheme cannot withstand an impersonation attack: either an adversary or a malicious legal client could impersonate another legal client to the application provider. In this paper, we first give a detailed weakness analysis of Wang et al.'s scheme. We then present a novel anonymous authentication scheme for WBANs and prove that it is secure in the random oracle model. Finally, we demonstrate that our anonymous authentication scheme is more suitable for practical application than Wang et al.'s scheme due to better security and performance. Compared with Wang et al.'s scheme, the computation cost of our scheme in WBANs is reduced by about 31.58%. PMID:27091755

A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. For signing quantum messages, some arbitrated quantum signature (AQS) schemes have been proposed. It was claimed that these AQS schemes could guarantee unconditional security. However, we show that they can be repudiated by the receiver Bob. To overcome this shortcoming, we construct an AQS scheme using a public board. The scheme not only avoids being disavowed by the receiver but also preserves all merits of the existing schemes. Furthermore, we discover that entanglement is not necessary, although all the existing AQS schemes depend on it. We therefore present another AQS scheme that uses no entangled states in either the signing phase or the verifying phase. This scheme has three advantages: it does not utilize entangled states while preserving all merits of the existing schemes; the signature cannot be disavowed by the receiver; and it provides higher transmission efficiency and lower implementation complexity.

The telecare medicine information system enables or supports health-care delivery services. In order to safeguard patients' privacy, such as telephone number, medical record number, health information, etc., a secure authentication scheme is in demand. Recently, Wu et al. proposed a smart-card-based password authentication scheme for the telecare medicine information system. Later, He et al. pointed out that Wu et al.'s scheme could not resist impersonation attacks and insider attacks, and presented a new scheme. In this paper, we show that both of them fail to achieve the two-factor security that smart-card-based password authentication schemes should provide. We also propose an improved authentication scheme for the telecare medicine information system, and demonstrate that the improved scheme satisfies the security requirements of two-factor authentication and is also efficient. PMID:22374237

Patients can obtain various health-care delivery services via Telecare Medical Information Systems (TMIS). Authentication, security, patient privacy protection and data confidentiality are important when patients or doctors access Electronic Medical Records (EMR). In 2012, Chen et al. showed that Khan et al.'s dynamic ID-based authentication scheme has some weaknesses and proposed an improved scheme, claiming that their scheme is more suitable for TMIS. However, we show that Chen et al.'s scheme also has weaknesses. In particular, it does not provide user privacy protection or perfect forward secrecy, and it is vulnerable to off-line password guessing attack and impersonation attack once the user's smart card is compromised. Further, we propose a secure anonymous authentication scheme that overcomes these weaknesses even if an adversary knows all the information stored in the smart card. PMID:23321972

We propose and analyze two hybrid automatic-repeat-request (ARQ) schemes employing bandwidth-efficient coded modulation and coded sequence combining. In the first scheme, trellis-coded modulation (TCM) is used to control channel noise, while in the second scheme a concatenated coded modulation is employed, formed by cascading a Reed-Solomon (RS) outer code and a block coded modulation (BCM) inner code. In both schemes, the coded modulation decoder, by performing sequence combining and soft-decision maximum-likelihood decoding, makes full use of the information available in all received sequences corresponding to a given information message. It is shown, by means of analysis as well as computer simulations, that both schemes are capable of providing high throughput efficiencies over a wide range of signal-to-noise ratios. The schemes are suitable for large file transfers over satellite communication links where high throughput and high reliability are required.
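
The gain from sequence combining can be illustrated with a toy BPSK sketch (not the paper's TCM or RS/BCM decoders): averaging the soft values of several noisy receptions of the same sequence before making decisions raises the effective SNR.

```python
import numpy as np

# Four noisy receptions of the same BPSK-modulated sequence are combined
# by averaging their soft values before the hard decision; the combined
# decision makes far fewer errors than a decision on any single copy.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 1000)
tx = 1.0 - 2.0 * bits                  # BPSK map: 0 -> +1, 1 -> -1
sigma = 1.2                            # severe channel noise

def n_errors(soft):
    return int(np.sum((soft < 0) != (bits == 1)))

single = tx + sigma * rng.normal(size=tx.shape)
copies = [tx + sigma * rng.normal(size=tx.shape) for _ in range(4)]
combined = np.mean(copies, axis=0)     # soft sequence combining

print(n_errors(single), n_errors(combined))   # combining wins
```

In the schemes above the combining is done on coded sequences inside a maximum-likelihood decoder, but the underlying mechanism is the same noise averaging.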

In this paper, we propose a novel quantum group signature scheme. It allows a signer to sign a message on behalf of the group without the help of a group manager (the arbitrator), which differs from previous schemes. In addition, a signature can be verified again when its signer disavows having generated it. We analyze the validity and the security of the proposed signature scheme, and discuss its advantages and disadvantages relative to the existing ones. The results show that our scheme satisfies all the characteristics of a group signature and has more advantages than the previous ones. Like its classic counterpart, our scheme can be used in many application scenarios, such as e-government and e-business.

Existing arbitrated quantum signature (AQS) schemes are almost all based on the Leung quantum one-time pad (L-QOTP) algorithm. In these schemes, the receiver can achieve an existential forgery of the sender's signatures under a known message attack, and the sender can successfully disavow any of her/his signatures by a simple attack. In this paper, a solution to these problems is given by designing a new QOTP algorithm that relies largely on inserting decoy states into fixed insertion positions. Furthermore, we present an AQS scheme with fast signing and verifying based on the new QOTP algorithm. It uses only single-particle states and is unconditionally secure. To fulfill the functions of AQS schemes, our scheme requires significantly lower computational cost than other AQS schemes based on the L-QOTP algorithm.

Deadtime corrections for passive neutron coincidence counting are traditionally formulated in terms of the Totals counting rate. The deadtime correction is exponential in form, with the effective deadtime being linear in the observed Totals rate. The deadtime coefficient for the Reals rate is traditionally fixed at four times that of the Totals rate parameter. When it comes to multiplicity counting, however, more complex expressions are typically used for the Doubles and Triples rates, based on mathematical actions on the multiplicity histograms, with the Singles (or Trigger) rate being treated rather simplistically. Since Totals and Singles, and Reals and Doubles, respectively, are effectively equivalent measures, the difference in deadtime treatment results in an inconsistency. Furthermore, additional empirical correction factors are often applied in the case of the multiplicity deadtime corrections, and these do not follow from the underlying theoretical framework. The purpose of this paper is to re-examine the semi-empirical deadtime correction expressions from a fresh perspective. We propose a scheme whereby Totals and Singles are treated equivalently, with the correction having the transcendental form of the paralysable model. The impact of correlations on the Totals deadtime correction is shown to be modest. The deadtime correction factors for Reals and Doubles are again treated similarly, also using an exponential form in terms of the corrected Totals event rate but with a deadtime parameter that is not fixed ahead of time at four times that used in the Totals correction. In the case of the Triples correction, which is evaluated from a composite expression, the deadtime corrections for the Singles and Doubles are used as appropriate, but a new empirical correction, again given in terms of the corrected rate, is introduced. The new correction acts only on the part of the Triples expression which does not represent the correlated accidentals. The new
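
The transcendental form of the paralysable model mentioned above means the correction must be inverted numerically: the observed rate m and true rate n are related by m = n exp(-n tau), which a simple fixed-point iteration solves (sketch with illustrative numbers):

```python
import math

# Paralysable (extendable) deadtime: observed rate m and true rate n are
# related by m = n * exp(-n * tau).  The correction inverts this
# transcendental relation; the fixed-point iteration below converges for
# rates below the 1/(e*tau) roll-over.
def correct_paralysable(m, tau, tol=1e-12):
    n = m                              # first guess: ignore deadtime
    for _ in range(200):
        n_next = m * math.exp(n * tau)
        if abs(n_next - n) < tol:
            return n_next
        n = n_next
    raise RuntimeError("no convergence: rate too close to roll-over")

n_true = 2.0e5                         # illustrative true rate (1/s)
tau = 1.0e-6                           # illustrative deadtime (s)
m = n_true * math.exp(-n_true * tau)   # simulated observed rate
print(correct_paralysable(m, tau) / n_true)   # ~ 1.0
```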

In many Quaternary lacustrine and marine settings, fossil magnetotactic bacteria are a major contributor to sedimentary magnetization [1]. Magnetite particles produced by magnetotactic bacteria have traits, shaped by natural selection, that increase the efficiency with which the bacteria utilize iron and also facilitate the recognition of the particles' biological origin. In particular, magnetotactic bacteria generally produce particles with characteristic shapes and narrow size and shape distributions that lie within the single domain stability field. The particles have effective positive magnetic anisotropy, produced by alignment in chains and frequently by particle elongation. In addition, the crystals are often nearly stoichiometric and have few crystallographic defects. Yet, despite these distinctive traits, there are few identified magnetofossils that predate the Quaternary, and many putative identifications are highly controversial. We propose a six-criteria scoring scheme for evaluating identifications based on the quality of the geological, magnetic, and electron microscopic evidence. Our criteria are: (1) whether the geological context is well-constrained stratigraphically, and whether paleomagnetic evidence suggests a primary magnetization; (2) whether magnetic or microscopic evidence support the presence of significant single-domain magnetite; (3) whether magnetic or ferromagnetic resonance evidence indicates narrow size and shape distributions, and whether microscopic evidence reveals single-domain particles with truncated edges, elongate single-domain particles, and/or narrow size and shape distributions; (4) whether ferromagnetic resonance, low-temperature magnetic, or electron microscopic evidence reveals the presence of chains; (5) whether low-temperature magnetometry, energy dispersive X-ray spectroscopy, or other techniques demonstrate the near-stoichiometry of the particles; and (6) whether high-resolution TEM indicates the near-absence of

This scheme is used to clarify the journal's scope and enable authors and readers to more easily locate the appropriate section for their work. For each of the sections listed in the scope statement we suggest some more detailed subject areas which help define that subject area. These lists are by no means exhaustive and are intended only as a guide to the type of papers we envisage appearing in each section. We acknowledge that no classification scheme can be perfect and that there are some papers which might be placed in more than one section. We are happy to provide further advice on paper classification to authors upon request (please email jphysa@iop.org). 1. Statistical physics: numerical and computational methods; statistical mechanics, phase transitions and critical phenomena; quantum condensed matter theory; Bose-Einstein condensation; strongly correlated electron systems; exactly solvable models in statistical mechanics; lattice models, random walks and combinatorics; field-theoretical models in statistical mechanics; disordered systems, spin glasses and neural networks; nonequilibrium systems; network theory. 2. Chaotic and complex systems: nonlinear dynamics and classical chaos; fractals and multifractals; quantum chaos; classical and quantum transport; cellular automata; granular systems and self-organization; pattern formation; biophysical models. 3. Mathematical physics: combinatorics; algebraic structures and number theory; matrix theory; classical and quantum groups, symmetry and representation theory; Lie algebras, special functions and orthogonal polynomials; ordinary and partial differential equations; difference and functional equations; integrable systems; soliton theory; functional analysis and operator theory; inverse problems; geometry, differential geometry and topology; numerical approximation and analysis; geometric integration; computational methods. 4. Quantum mechanics and quantum information theory: coherent states; eigenvalue problems; supersymmetric quantum mechanics

The paper discusses the design of a laser projection microscope with a mirror-based scheme of image formation. It is shown that the laser projection microscope with the mirror-based scheme of image formation is well suited for monitoring distant objects. This scheme made it possible to obtain a field of view of more than 3 cm at a distance of 4 m from the brightness amplifier.

A scheme is presented for the long-distance teleportation of an unknown atomic state between two separated cavities. Our scheme works in the regime where the atom-cavity coupling strength is smaller than the cavity decay rate. Thus the requirement on the quality factor of the cavities is greatly relaxed. Furthermore, the fidelity of our scheme is not affected by the detection inefficiency and atomic decay. These advantages are important in view of experiments.

The goal of this talk is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employed multiresolution wavelets as adaptive numerical dissipation controls to limit the amount and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative form of the MHD equations in curvilinear grids.

The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, combined with a second-order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th-order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th-order accurate Monotonicity Preserving scheme (MP5; Suresh and Huynh, 1997) and the 5th-order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high-order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second-order TVD scheme at resolution changes. For spherical grids the new schemes are only second-order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high-order schemes are less robust than the TVD scheme, and some care and effort are required to make the code work. When the high-order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three-dimensional time-dependent simulations this means that the high-order scheme is almost 10 times faster and requires 8 times less storage than the second-order method.

Conservative, shock capturing methods for the unsteady Euler equations are reviewed and it is shown that the concepts of entropy satisfaction and total variation diminution can be applied to well-known classical schemes. For an associated scheme to be efficient in applications, it is necessary that it be constructed with economy of implementation in mind, and that it be able to capture strong shock waves with high resolution. We describe a scheme which is efficient in both respects.

Conservative shock-capturing methods for the unsteady Euler equations are reviewed, and it is shown that the concepts of entropy satisfaction and total variation diminution can be applied to well known classical schemes. For an associated scheme to be efficient in applications, it is necessary that it be constructed with economy of implementation in mind, and that it be able to capture strong shock waves with high resolution. A scheme which is efficient in both respects is described.

Time-marching dispersion-relation-preserving (DRP) schemes can be constructed by optimizing the finite difference approximations of the space and time derivatives in wave number and frequency space. A set of radiation and outflow boundary conditions compatible with the DRP schemes is constructed, and a sequence of numerical simulations is conducted to test the effectiveness of the DRP schemes and the radiation and outflow boundary conditions. Close agreement with the exact solutions is obtained.
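
The quantity a DRP optimization targets is the effective (numerical) wavenumber of the finite difference stencil. A sketch for the standard 6th-order, 7-point central difference shows how the effective wavenumber falls away from the true one at poorly resolved scales, which is exactly what re-optimizing the coefficients mitigates:

```python
import numpy as np

# Effective (numerical) wavenumber of the standard 6th-order, 7-point
# central first-difference stencil: kbar*dx = 2 * sum_j a_j * sin(j*k*dx).
# A DRP scheme re-optimizes the a_j (trading formal order) so that kbar
# tracks k over a wider band of wavenumbers.
a = np.array([3/4, -3/20, 1/60])       # standard 6th-order coefficients
j = np.arange(1, 4)

def kbar(kdx):
    return 2.0 * np.sum(a * np.sin(j * kdx))

print(kbar(0.5) / 0.5)    # close to 1: long wave, well represented
print(kbar(2.5) / 2.5)    # well below 1: short wave, badly represented
```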

A secure spread spectrum communication scheme using multiplication modulation is proposed. The system multiplies the message by a chaotic signal. The scheme does not require knowledge of the initial conditions of the chaotic signal, and the receiver is based on an extended Kalman filter (EKF). This signal encryption scheme lends itself to cheap implementation and can therefore be used effectively for ensuring security and privacy in commercial consumer electronics products. To illustrate the effectiveness of the proposed scheme, a numerical example based on the Genesio-Tesi system and the Chen dynamical system is presented, and the results are compared.

Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme, which automatically adjusts the time step for each particle according to the local pore velocity so that each particle always travels a constant distance, is shown to be computationally faster, for the same degree of accuracy, than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural-log transmissivity variance of 4 can be 8.6 times faster than with the constant time step scheme.
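
The constant-displacement idea can be sketched in a few lines (hypothetical 1-D velocity field, and with the random dispersion step of a full random-walk model omitted): each particle chooses its own time step from the local velocity so that every step covers the same distance.

```python
import math

def v(x):
    # hypothetical 1-D pore velocity with a fast zone near x = 5
    return 1.0 + 4.0 * math.exp(-(x - 5.0)**2)

def track(x0, ds, n_steps):
    # constant-displacement scheme: per-particle time step dt = ds/|v|,
    # so every step advances the particle by the same distance ds
    x, t = x0, 0.0
    for _ in range(n_steps):
        dt = ds / abs(v(x))
        x += v(x) * dt
        t += dt
    return x, t

x, t = track(0.0, ds=0.1, n_steps=100)
print(x)   # ~10.0: 100 steps of fixed length 0.1, regardless of v
print(t)   # < 10.0: travel time shortened through the fast zone
```

A constant time step scheme would instead be forced to use the smallest dt demanded by the fastest cell everywhere, which is the inefficiency the abstract quantifies.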

An adaptive variable-length coding scheme for compression of a stream of independent and identically distributed source data involves either a Huffman code or an alternating run-length Huffman (ARH) code, depending on the characteristics of the data. It enables efficient compression of the output of a lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for the same purpose. In addition, the scheme is suitable for parallel implementation on hardware with a modular structure, provides for rapid adaptation to a changing data source, is compatible with block orientation to alleviate memory requirements, ensures efficiency over a wide range of entropy, and is easily combined with such other communication schemes as those for containment of errors and for packetization.
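
The Huffman branch of such a scheme can be illustrated with a minimal (non-adaptive) coder sketch; the ARH variant and the adaptation logic are beyond this illustration:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code {symbol: bitstring} for the given data."""
    freq = Counter(data)
    if len(freq) == 1:                        # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [(n, i, [s]) for i, (s, n) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    codes = {s: "" for s in freq}
    i = len(heap)
    while len(heap) > 1:
        n1, _, syms1 = heapq.heappop(heap)    # two least-frequent subtrees
        n2, _, syms2 = heapq.heappop(heap)
        for s in syms1:
            codes[s] = "0" + codes[s]         # extend codewords downward
        for s in syms2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (n1 + n2, i, syms1 + syms2))
        i += 1
    return codes

def encode(data, codes):
    return "".join(codes[s] for s in data)

def decode(bits, codes):
    inv = {v: k for k, v in codes.items()}    # prefix-free, so greedy works
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out

data = list("abracadabra")
codes = huffman_code(data)
bits = encode(data, codes)
```

For "abracadabra" the optimal code costs 23 bits, versus 33 bits for a fixed 3-bit code over the same 5-symbol alphabet.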

In this paper, we propose a new electronic voting scheme using Bell entangled states as quantum channels. This scheme is based on quantum proxy signature. The voter Alice, vote management center Bob, teller Charlie and scrutineer Diana only perform single-particle measurements to realize the electronic voting process. So the scheme reduces the technical difficulty and increases operation efficiency. It can be easily realized. We use quantum key distribution and one-time pad to guarantee its unconditional security. The scheme uses the physical characteristics of quantum mechanics to guarantee its anonymity, verifiability, unforgeability and undeniability.

Motivated by finite element spaces used for representation of temperature in the compatible finite element approach for numerical weather prediction, we introduce locally bounded transport schemes for (partially-)continuous finite element spaces. The underlying high-order transport scheme is constructed by injecting the partially-continuous field into an embedding discontinuous finite element space, applying a stable upwind discontinuous Galerkin (DG) scheme, and projecting back into the partially-continuous space; we call this an embedded DG transport scheme. We prove that this scheme is stable in L2 provided that the underlying upwind DG scheme is. We then provide a framework for applying limiters for embedded DG transport schemes. Standard DG limiters are applied during the underlying DG scheme. We introduce a new localised form of element-based flux-correction which we apply to limiting the projection back into the partially-continuous space, so that the whole transport scheme is bounded. We provide details in the specific case of tensor-product finite element spaces on wedge elements that are discontinuous P1/Q1 in the horizontal and continuous P2 in the vertical. The framework is illustrated with numerical tests.

The pseudopotential lattice Boltzmann (LB) model is a widely used multiphase model in the LB community. In this model, an interaction force, which is usually implemented via a forcing scheme, is employed to mimic the molecular interactions that cause phase segregation. The forcing scheme is therefore expected to play an important role in the pseudopotential LB model. In this paper, we aim to address some key issues about forcing schemes in the pseudopotential LB model. First, theoretical and numerical analyses will be made for Shan-Chen's forcing scheme [Shan and Chen, Phys. Rev. E 47, 1815 (1993)] and the exact-difference-method forcing scheme [Kupershtokh et al., Comput. Math. Appl. 58, 965 (2009)]. The nature of these two schemes and their recovered macroscopic equations will be shown. Second, through a theoretical analysis, we will reveal the physics behind the phenomenon that different forcing schemes exhibit different performances in the pseudopotential LB model. Moreover, based on the analysis, we will present an improved forcing scheme and numerically demonstrate that the improved scheme can be treated as an alternative approach to achieving thermodynamic consistency in the pseudopotential LB model. PMID:23005565
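
As an illustration of what a forcing scheme looks like, the following sketch evaluates the exact-difference-method source term on a standard D2Q9 lattice and checks its zeroth and first moments (mass unchanged, momentum input equal to F dt). The state values are arbitrary; this is not the paper's improved scheme:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (c_s^2 = 1/3)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def f_eq(rho, u):
    """Standard second-order equilibrium distribution."""
    cu = c @ u
    return w * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * (u @ u))

def edm_forcing(rho, u, F, dt=1.0):
    """Exact-difference-method source term:
    delta_f_i = f_eq(rho, u + F*dt/rho) - f_eq(rho, u)."""
    return f_eq(rho, u + F * dt / rho) - f_eq(rho, u)

rho, u = 1.2, np.array([0.05, -0.02])     # arbitrary local state
F = np.array([1e-3, 2e-3])                # interaction force
df = edm_forcing(rho, u, F)
```

Because the standard equilibrium reproduces its density and momentum moments exactly, the source term adds no mass and injects exactly F dt of momentum, which is the defining property of the exact-difference method.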

A novel adsorption-distillation hybrid scheme is proposed for propane/propylene separation. The suggested scheme has potential for saving up to approximately 50% in energy and approximately 15-30% in capital costs as compared with current technology. The key concept of the proposed scheme is to separate olefins from alkanes by adsorption and then separate individual olefins and alkanes by simple distillation, thereby eliminating energy-intensive and expensive olefin-alkane distillation. A conceptual flow schematic for the proposed hybrid scheme and potential savings are outlined.

Five different central difference schemes, based on a conservative differencing form of the Kennedy and Gruber skew-symmetric scheme, were compared with six different upwind schemes based on primitive variable reconstruction and the Roe flux. These eleven schemes were tested on a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem and a turbulent channel flow problem. The central schemes were generally very accurate and stable, provided the grid stretching rate was kept below 10%. At near-DNS grid resolutions, the results were comparable to reference DNS calculations. At coarser grid resolutions, the need for an LES SGS model became apparent. There was a noticeable improvement moving from CD-2 to CD-4, and higher-order schemes appear to yield clear benefits on coarser grids. The UB-7 and CU-5 upwind schemes also performed very well at near-DNS grid resolutions. The UB-5 upwind scheme does not do as well, but does appear to be suitable for well-resolved DNS. The UF-2 and UB-3 upwind schemes, which have significant dissipation over a wide spectral range, appear to be poorly suited for DNS or LES.
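
The CD-2 versus CD-4 improvement can be seen in a one-line accuracy test. This sketch uses plain central differences on a smooth periodic function, not the Kennedy-Gruber skew-symmetric form itself:

```python
import numpy as np

def cd2(u, dx):
    """Second-order central difference, periodic."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def cd4(u, dx):
    """Fourth-order central difference, periodic."""
    return (8 * (np.roll(u, -1) - np.roll(u, 1))
            - (np.roll(u, -2) - np.roll(u, 2))) / (12 * dx)

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
err2 = np.max(np.abs(cd2(u, dx) - np.cos(x)))
err4 = np.max(np.abs(cd4(u, dx) - np.cos(x)))
```

On the same 64-point grid the fourth-order stencil is several orders of magnitude more accurate, which is why higher-order central schemes pay off on coarse grids.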

We present two schemes to perform continuous variable (2, 3) threshold quantum secret sharing (QSS) on the quadrature amplitudes of bright light beams. Both schemes require a pair of entangled light beams. The first scheme utilizes two phase sensitive optical amplifiers, whilst the second uses an electro-optic feedforward loop for the reconstruction of the secret. We examine the efficacy of QSS in terms of fidelity, as well as the signal transfer coefficients and the conditional variances of the reconstructed output state. We show that both schemes in the ideal case yield perfect secret reconstruction.

Fingerprint recognition provides an effective user authentication solution for mobile computing systems. However, as a fingerprint template protection scheme, fingerprint fuzzy vault is subject to cross-matching attacks, since the same finger might be registered for various applications. In this paper, we propose a fingerprint-based biometric security scheme named the cancellable and fuzzy fingerprint scheme, which combines a cancellable non-linear transformation with the client/server version of fuzzy vault, to address the cross-matching attack in a mobile computing system. Experimental results demonstrate that our scheme can provide reliable and secure protection to the mobile computing system while achieving an acceptable matching performance.

The possible nuclei with X(5) symmetry are investigated in the Interacting Boson Model (IBM), in which the traditional scheme and a new alternative scheme from the spherical to the axially deformed limit of the IBM with a schematic Hamiltonian are studied by using the SU(3) quadrupole-quadrupole term and O(6) cubic interaction, respectively. The low-lying energy levels and E2 transition rates from the new scheme are calculated and compared with the experimental data and those of the traditional U(5) - SU(3) description. It is shown that the results from this new scheme seem better than those of the traditional description.

Fully homomorphic encryption enables arbitrary computation on encrypted data without decrypting the data. Here it is studied in the context of quantum information processing. Based on universal quantum circuit, we present a quantum fully homomorphic encryption (QFHE) scheme, which permits arbitrary quantum transformation on any encrypted data. The QFHE scheme is proved to be perfectly secure. In the scheme, the decryption key is different from the encryption key; however, the encryption key cannot be revealed. Moreover, the evaluation algorithm of the scheme is independent of the encryption key, so it is suitable for delegated quantum computing between two parties.
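
The scheme builds on quantum one-time-pad style encryption. As a minimal illustration of that primitive only (not of the QFHE construction itself), the following sketch encrypts a single qubit with random Pauli key bits and decrypts with the inverse:

```python
import numpy as np

# Quantum one-time pad on one qubit: encrypt with X^a Z^b for secret key
# bits (a, b); decrypt with the inverse Z^b X^a (X^2 = Z^2 = I).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def encrypt(psi, a, b):
    return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi

def decrypt(psi, a, b):
    return np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ psi

psi = np.array([0.6, 0.8j])          # an arbitrary normalized qubit state
a, b = 1, 1                          # secret key bits (e.g. shared via QKD)
cipher = encrypt(psi, a, b)
plain = decrypt(cipher, a, b)
```

Averaged over all four keys, the ciphertext is maximally mixed, which is the information-theoretic security property the abstract relies on.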

Composite schemes are formed by global composition of several Lax-Wendroff steps followed by a diffusive Lax-Friedrichs or WENO step, which filters out the oscillations around shocks typical for the Lax-Wendroff scheme. These schemes are applied to the shallow water equations in two dimensions. The Lax-Friedrichs composite is also formulated for a trapezoidal mesh, which is necessary in several example problems. The suitability of the composite schemes for the shallow water equations is demonstrated on several examples, including the circular dam break problem, the shock focusing problem and supercritical channel flow problems.
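
The compositing idea can be sketched on scalar linear advection rather than the shallow water equations: run Lax-Wendroff steps and periodically substitute a diffusive Lax-Friedrichs step, which damps the oscillations the pure Lax-Wendroff scheme develops at a discontinuity. The cycle length (here every fourth step) is illustrative:

```python
import numpy as np

def lw_step(u, nu):
    """Lax-Wendroff step for u_t + a u_x = 0 (nu = a*dt/dx), periodic."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * nu * (up - um) + 0.5 * nu ** 2 * (up - 2 * u + um)

def lf_step(u, nu):
    """Diffusive Lax-Friedrichs step, periodic."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * nu * (up - um)

def tv(u):
    return np.sum(np.abs(np.diff(u)))   # total variation

n, nu, steps = 200, 0.5, 80
u0 = np.where((np.arange(n) > 40) & (np.arange(n) < 80), 1.0, 0.0)

u_lw = u0.copy()
u_comp = u0.copy()
for k in range(steps):
    u_lw = lw_step(u_lw, nu)
    # composite: every 4th step is the diffusive filter step
    u_comp = lf_step(u_comp, nu) if (k + 1) % 4 == 0 else lw_step(u_comp, nu)
```

The pure Lax-Wendroff solution grows spurious oscillations (its total variation rises), while the composite keeps the total variation near that of the initial square wave; both remain conservative.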

Routing and spectrum assignment (RSA) is one of the key problems in flexible optical networks. In a gridless setting, shortest-path RSA algorithms have exponential computational complexity and are thus not suitable for deployment in real networks. On the other hand, most optical components today cannot support fully gridless tunability, which also limits the application of gridless RSA schemes. In this paper, we propose a novel grid-based spectrum-scan routing (SSR) scheme for flexible optical networks. The SSR scheme achieves optimal routing with polynomial computational complexity. Compared with other RSA schemes, SSR has shorter computation time, lower blocking probability, and higher resource utilization.

Even though a method to perfectly sign quantum messages is not known, the arbitrated quantum signature scheme has been considered one of the good candidates. However, its forgery problem has been an obstacle to the scheme becoming a successful method. In this paper, we consider a situation slightly different from the forgery problem: checking whether at least one quantum message with signature can be forged in a given scheme, even though not all messages can be forged. If there are only a finite number of forgeable quantum messages in the scheme, then the scheme can be secured against the forgery attack by not sending forgeable quantum messages, and so our situation does not directly imply that we check whether the scheme is secure against the attack. However, if users run a given scheme without any consideration of forgeable quantum messages, then a sender might transmit such forgeable messages to a receiver, and in that case an attacker who knows them can forge the messages. Thus it is important and necessary to look into forgeable quantum messages. We show here that such a forgeable quantum message-signature pair always exists for every known scheme with quantum encryption and rotation, and numerically show that no forgeable quantum message-signature pairs exist in an arbitrated quantum signature scheme.

In this paper, a decentralized robust approach is proposed for the Automatic Generation Control (AGC) system based on a modified traditional AGC structure. This work addresses a new strategy to adapt the well-tested classical AGC scheme to the changing environment of power system operation under deregulation. The effect of bilateral contracts is considered as a set of new input signals in each control area dynamical model. In practice, AGC systems use simple proportional-integral (PI) controllers. However, since the PI controller parameters are usually tuned based on classical or trial-and-error approaches, they are incapable of obtaining good dynamical performance for a wide range of operating conditions and various scenarios in a deregulated environment. With regard to this problem, the AGC synthesis is formulated in this paper as an H∞ static output control problem and is solved using a developed iterative linear matrix inequalities (ILMI) algorithm to design robust PI controllers in the restructured power system control areas. A three-area power system example with possible contract scenarios and a wide range of load changes is given to illustrate the proposed approach. The resulting controllers are shown to minimize the effect of disturbances and maintain robust performance.
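
The PI loops used in practical AGC can be illustrated with a minimal discrete sketch. The first-order plant, gains, and step sizes below are arbitrary stand-ins, not the paper's H∞/ILMI design:

```python
# Minimal discrete PI controller on a first-order plant (illustration of
# the PI loops used in practical AGC; plant and gains are arbitrary).

def simulate_pi(kp, ki, dt=0.01, steps=5000, setpoint=1.0):
    x = 0.0                 # plant state, dynamics x' = -x + u
    integral = 0.0          # integral of the tracking error
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        x += (-x + u) * dt                  # forward-Euler plant update
    return x

final = simulate_pi(kp=2.0, ki=1.0)
```

The integral term drives the steady-state tracking error to zero, which is exactly the property AGC needs for restoring frequency and tie-line flows; the robust-design machinery in the paper is about choosing such gains systematically.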

Electronic cash (e-cash) is one of the most popular research topics in the e-commerce field. It is very important that e-cash preserve anonymity and accuracy in order to protect the privacy and rights of customers. There are two types of e-cash in general, online e-cash and offline e-cash. Both systems have their own pros and cons and they can be used to construct various applications. In this paper, we propose a provably secure and efficient offline e-cash scheme with date attachability based on the blind signature technique, where an expiration date and a deposit date can be embedded in an e-cash simultaneously. With the help of the expiration date, the bank can manage its huge database much more easily against unlimited growth, and the deposit date cannot be forged, so that users are able to correctly calculate the amount of interest they will receive in the future. Furthermore, we offer security analysis and formal proofs for all essential properties of offline e-cash, which are anonymity control, unforgeability, conditional-traceability, and no-swindling. PMID:24982931
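
The blind signature primitive behind such e-cash schemes can be sketched with the textbook Chaum-style RSA construction. The toy parameters below are deliberately tiny and insecure, purely for illustration:

```python
import math

# Textbook RSA blind signature (toy parameters -- insecure, for illustration).
p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

m = 1234                       # the "coin" (message) to be signed

# Customer blinds: m' = m * r^e mod n, with gcd(r, n) = 1
r = 71
assert math.gcd(r, n) == 1
m_blind = (m * pow(r, e, n)) % n

# Bank signs the blinded message without ever seeing m
s_blind = pow(m_blind, d, n)

# Customer unblinds: s = s' * r^-1 mod n is a valid signature on m
s = (s_blind * pow(r, -1, n)) % n
```

Because (m r^e)^d = m^d r mod n, removing the blinding factor r yields the bank's signature m^d on a message the bank never saw, which is what gives the customer anonymity.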

Maternal, newborn, and child health indices in Nigeria vary widely across geopolitical zones and between urban and rural areas, mostly due to variations in the availability of skilled attendance at birth. To improve these indices, the Midwives Service Scheme (MSS) in Nigeria engaged newly graduated, unemployed, and retired midwives to work temporarily in rural areas. The midwives are posted for 1 year to selected primary care facilities linked through a cluster model in which four such facilities with the capacity to provide basic essential obstetric care are clustered around a secondary care facility with the capacity to provide comprehensive emergency obstetric care. The outcome of the MSS 1 year on has been an uneven improvement in maternal, newborn, and child health indices in the six geopolitical zones of Nigeria. Major challenges include retention, availability, and training of midwives, as well as varying levels of commitment from state and local governments across the country; despite the availability of skilled birth attendants at MSS facilities, women still deliver at home in some parts of the country. PMID:22563303

A number of chemicals known to act on animal systems through the endocrine system have been termed environmental endocrine disruptors. This group includes some of the PCBs and TCDDs, as well as lead, mercury and a large number of pesticides. The common feature is that the chemicals interact with endogenous endocrine systems at the cellular and/or molecular level to alter normal processes that are controlled or regulated by hormones. Although the existence of artificial or environmental estrogens (e.g. chlordecone and DES) has been known for some time, recent data indicate that this phenomenon is widespread. Indeed, anti-androgens have been held responsible for reproductive dysfunction in alligator populations in Florida. But the significance of endocrine disruption was recognized by pesticide manufacturers when insect growth regulators were developed to interfere with hormonal control of growth. Controlling, regulating or managing these chemicals depends in no small part on the ability to identify, screen or otherwise know that a chemical is an endocrine disruptor. Two possible classification schemes are: using the effects caused in an animal or animals as an exposure indicator, and using a known screen for the point of contact with the animal. The former would require extensive knowledge of cause and effect relationships in dozens of animal groups; the latter would require a screening tool comparable to an estrogen binding assay. The authors present a possible classification based on chemicals known to disrupt estrogenic, androgenic and ecdysone-regulated hormonal systems.

The effective use of biasing for the Monte Carlo solution of a void streaming problem is essential to obtaining a reasonable result in a reasonable amount of time. Most general purpose Monte Carlo shielding codes allow the user to select the particular biasing techniques best oriented to the particular problem of interest. The biasing strategy for void streaming problems often differs from that of a deep penetration problem. The key in void streaming is to bias particles into the streaming path, whereas in deep penetration problems the biasing is aimed at forcing particles through the shield. Until recently, the biasing scheme in the SCALE SAS4 shielding module was considered inadequate for void streaming problems due to the assumed one-dimensional nature of the automated bias prescription. A modified approach to the automated biasing in SAS4 has allowed for significant gains to be realised in the use of the code for void streaming problems. This paper applies the modified SAS4 procedures to a spent fuel storage cask model with vent ports. The results of the SAS4 analysis are compared with those of the ADVANTG methodology, which is an accelerated version of MCNP. Various options available for the implementation of the SAS4 methodology are reviewed and recommendations offered. PMID:16604687
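
The principle of biasing particles toward the region of interest can be shown with a toy importance-sampling example; this is a generic illustration, not the SAS4 prescription:

```python
import math
import random

# Toy biasing example: estimate the probability that an exponentially
# distributed path length exceeds L (a stand-in for "reaching the far end
# of a streaming path"). Analog sampling wastes most histories; biasing
# the source into the tail, with a compensating weight, does not.

L = 4.0
true_p = math.exp(-L)
rng = random.Random(42)
n = 20000

# Analog: score 1 if the sampled path length exceeds L
analog = [1.0 if rng.expovariate(1.0) > L else 0.0 for _ in range(n)]

# Biased: sample from the tail directly, x = L + Exp(1); the likelihood
# ratio (weight) is exp(-x) / exp(-(x - L)) = exp(-L), constant here.
biased = [math.exp(-L) for _ in range(n)]   # every history scores w = e^-L

est_analog = sum(analog) / n
est_biased = sum(biased) / n
```

In this contrived case the weight is constant, so the biased estimator has zero variance; real biasing schemes only reduce variance, but the mechanism, sampling where it matters and correcting with weights, is the same.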

In recent years, we have seen several different approaches dealing with multiview compression. First, we can find the H.264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views (n > p > 1) and their associated depth maps. This method is not suitable for multiview compression since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth map and the n-1 residual ones required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency between the views) in order to generate one unified-color RGB texture (where a unique color is devoted to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed integer disparity texture. Next, we extract the non-redundant information and store it into two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT- or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. Then, we discuss the signal deformations generated by our approach.

The cytoarchitecture of the rhesus monkey's auditory cortex was examined using immunocytochemical staining with parvalbumin, calbindin-D28K, and SMI32, as well as staining for cytochrome oxidase (CO). The results suggest that Kaas and Hackett's scheme of the auditory cortices can be extended to include five concentric rings surrounding an inner core. The inner core, containing areas A1 and R, is the most densely stained with parvalbumin and CO and can be separated on the basis of laminar patterns of SMI32 staining into lateral and medial subdivisions. From the inner core to the fifth (outermost) ring, parvalbumin staining gradually decreases and calbindin staining gradually increases. The first ring corresponds to Kaas and Hackett's auditory belt, and the second, to their parabelt. SMI32 staining revealed a clear border between these two. Rings 2 through 5 extend laterally into the dorsal bank of the superior temporal sulcus. The results also suggest that the rostral tip of the outermost ring adjoins the rostroventral part of the insula (area Pro) and the temporal pole, while the caudal tip adjoins the ventral part of area 7a.

Bright sources of entangled photons are of great interest in the quantum information community, and the non-linear optical process of Spontaneous Parametric Downconversion (SPDC) is a well-known means to create entangled photons. Additionally, periodic poling has emerged as a viable choice for quasi-phase matching the downconverted photons, rendering them useful for experimentation. Periodically Poled Lithium Niobate (PPLN) is among the best choices for these materials as it is optically robust, temperature tunable, and commercially available. The addition of waveguide structures to PPLN devices not only increases their viability as a source of entangled photons but can also make them an integral part of the entanglement schemes themselves. Thorough characterization of PPLN devices is essential for the optimization of SPDC and their use to create entangled states. We will report characterization results for wave-guided PPLN devices including: waveguide geometry, fiber coupling efficiency, poling period details, and downconversion efficiency. Of particular interest is our device's ability to be used for novel entanglement states involving one or more waveguides.

In the field of collusion-resistant traitor tracing, Oosterwijk et al. recently determined the optimal suspicion function for simple decoders. Earlier, Moulin also considered another type of decoder: the generic joint decoder that compares all possible coalitions, and showed that usually the generic joint decoder outperforms the simple decoder. Both Amiri and Tardos, and Meerwald and Furon described constructions that assign suspicion levels to c-tuples, where c is the number of colluders. We investigate a novel idea: the tuple decoder, assigning a suspicion level to tuples of a fixed size. In contrast to earlier work, we use this in a novel accusation algorithm to decide for each distinct user whether or not to accuse him. We expect such a scheme to outperform simple decoders while not being as computationally intensive as the generic joint decoder. In this paper we generalize the optimal suspicion functions to tuples, and describe a family of accusation algorithms in this setting that accuses individual users using this tuple-based information.

Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.

We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve more similar results to the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.

This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalized the Crank-Nicholson scheme to fourth order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and efficiency of solution-algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time-level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.
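
As a baseline for the 4-4 family, the ordinary (2-2) Crank-Nicolson scheme already exhibits the two properties emphasized above, nondissipativity and unconditional stability. The sketch below checks this numerically for linear advection with periodic central differences; grid size and Courant number are illustrative:

```python
import numpy as np

# Crank-Nicolson (2-2) for u_t + a u_x = 0 with periodic central differences:
#   (I + (lam/4) D) u^{n+1} = (I - (lam/4) D) u^n,   lam = a*dt/dx,
# where (D u)_i = u_{i+1} - u_{i-1} is skew-symmetric, so the update is a
# Cayley transform: orthogonal, hence nondissipative and stable for any dt.
n = 64
lam = 2.0                       # deliberately larger than an explicit CFL
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0
    D[i, (i - 1) % n] = -1.0

A = np.eye(n) + (lam / 4) * D
B = np.eye(n) - (lam / 4) * D

u = np.sin(2 * np.pi * np.arange(n) / n)
norm0 = np.linalg.norm(u)
for _ in range(50):
    u = np.linalg.solve(A, B @ u)   # one implicit time step
```

The discrete L2 norm is preserved to round-off even at a Courant number of 2; what the 4-4 schemes add is fourth-order accuracy in time and space while keeping this property.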

In the implementation of spectral difference (SD) method, the conserved variables at the flux points are calculated from the solution points using extrapolation or interpolation schemes. The errors incurred in using extrapolation and interpolation would result in instability. On the other hand, the difference between the left and right conserved variables at the edge interface will introduce dissipation to the SD method when applying a Riemann solver to compute the flux at the element interface. In this paper, an optimization of the extrapolation and interpolation schemes for the fourth order SD method on quadrilateral element is carried out in the wavenumber space through minimizing their dispersion error over a selected band of wavenumbers. The optimized coefficients of the extrapolation and interpolation are presented. And the dispersion error of the original and optimized schemes is plotted and compared. An improvement of the dispersion error over the resolvable wavenumber range of SD method is obtained. The stability of the optimized fourth order SD scheme is analyzed. It is found that the stability of the 4th order scheme with Chebyshev-Gauss-Lobatto flux points, which is originally weakly unstable, has been improved through the optimization. The weak instability is eliminated completely if an additional second order filter is applied on selected flux points. One and two dimensional linear wave propagation analyses are carried out for the optimized scheme. It is found that in the resolvable wavenumber range the new SD scheme is less dispersive and less dissipative than the original scheme, and the new scheme is less anisotropic for 2D wave propagation. The optimized SD solver is validated with four computational aeroacoustics (CAA) workshop benchmark problems. The numerical results with optimized schemes agree much better with the analytical data than those with the original schemes.

We design finite volume schemes for the equations of ideal magnetohydrodynamics (MHD) based on splitting these equations into a fluid part and a magnetic induction part. The fluid part leads to an extended Euler system with magnetic forces as source terms. This set of equations is approximated by suitable two- and three-wave HLL solvers. The magnetic part is modeled by the magnetic induction equations, which are approximated using stable upwind schemes devised in a recent paper [F. Fuchs, K.H. Karlsen, S. Mishra, N.H. Risebro, Stable upwind schemes for the Magnetic Induction equation. Math. Model. Num. Anal., available on conservation laws preprint server, submitted for publication]. These two sets of schemes can be combined either component by component, or by using an operator splitting procedure, to obtain a finite volume scheme for the MHD equations. The resulting schemes are simple to design and implement. These schemes are compared with existing HLL type and Roe type schemes for MHD equations in a series of numerical experiments. These tests reveal that the proposed schemes are robust and have a greater numerical resolution than HLL type solvers, particularly in several space dimensions. In fact, the numerical resolution is comparable to that of the Roe scheme on most test problems, with the computational cost being at the level of a HLL type solver. Furthermore, the schemes are remarkably stable even at very fine mesh resolutions and handle the divergence constraint efficiently with low divergence errors.

We present an implementation of smoothed particle hydrodynamics (SPH) with improved accuracy for simulations of galaxies and the large-scale structure. In particular, we implement and test a large set of SPH improvements in the developer version of GADGET-3. We use the Wendland kernel functions, a particle wake-up time-step limiting mechanism and a time-dependent scheme for artificial viscosity including high-order gradient computation and a shear flow limiter. Additionally, we include a novel prescription for time-dependent artificial conduction, which corrects for gravitationally induced pressure gradients and improves the SPH performance in capturing the development of gas-dynamical instabilities. We extensively test our new implementation in a wide range of hydrodynamical standard tests including weak and strong shocks as well as shear flows, turbulent spectra, gas mixing, hydrostatic equilibria and self-gravitating gas clouds. We jointly employ all modifications; however, when necessary we study the performance of individual code modules. We approximate hydrodynamical states more accurately and with significantly less noise than standard GADGET-SPH. Furthermore, the new implementation promotes the mixing of entropy between different fluid phases, also within cosmological simulations. Finally, we study the performance of the hydrodynamical solver in the context of radiative galaxy formation and non-radiative galaxy cluster formation. We find galactic discs to be colder and more extended and galaxy clusters showing entropy cores instead of steadily declining entropy profiles. In summary, we demonstrate that our improved SPH implementation overcomes most of the undesirable limitations of standard GADGET-SPH, thus becoming the core of an efficient code for large cosmological simulations.
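One ingredient named above, the Wendland kernel, can be sketched directly. Below is the Wendland C2 kernel in 3D with a compact support of 2h, a common convention; the normalization and support convention in the authors' GADGET-3 branch may differ.

```python
import numpy as np

def wendland_c2_3d(r, h):
    # Wendland C2 kernel in 3D, compact support radius 2h (assumed
    # convention): W(q) = sigma/h^3 * (1 - q/2)^4 * (1 + 2q), q = r/h < 2,
    # with normalization sigma = 21 / (16*pi) so that W integrates to 1.
    q = np.asarray(r, dtype=float) / h
    w = (21.0 / (16.0 * np.pi * h**3)) * (1.0 - 0.5 * q) ** 4 * (1.0 + 2.0 * q)
    return np.where(q < 2.0, w, 0.0)
```

Unlike the cubic spline, this kernel has a non-negative Fourier transform, which suppresses the pairing instability at large neighbor numbers, the usual motivation for adopting it.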

This doctoral dissertation is concerned with the formulation and application of a high order accurate numerical algorithm suitable for solving complex multidimensional equations and the application of this algorithm to a problem in astrophysics. The algorithm is designed with the aim of resolving solutions of partial differential equations with sharp fronts propagating in time. This high order accurate class of numerical technique is called a Weighted Essentially Non-Oscillatory (WENO) method and is well suited for shock capturing in solving conservation laws. The numerical approximation method in the algorithm is coupled with high order time marching as well as integration techniques designed to reduce computational cost. This numerical algorithm is used in several applications in computational cosmology to help understand questions about certain physical phenomena which occurred during the formation and evolution of first generation stars. The thesis is divided broadly in terms of the algorithm and its application to the different galactic processes. The first chapter deals with the astrophysical problem and offers an introduction to the numerical algorithm. In chapter 2 we outline the mathematical model and the various functions and parameters associated with the model. We also give a brief description of the relevant physical phenomena and the conservation laws associated with them. In chapter 3, we give a detailed description of the higher order algorithm and its formulation. We also highlight the special techniques incorporated in the algorithm in order to make it more suitable for handling cases which are computationally intensive. In the later chapters, 4-7, we explore in detail the physical processes and the different applications of our numerical scheme. We calculate different results such as the time scale of a temperature coupling mechanism, radiation and intensity changes, etc. Different tests are also performed to illustrate the stability and
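The WENO machinery referred to throughout can be illustrated with the standard fifth-order Jiang-Shu reconstruction on uniform cells. This is the generic textbook form, not necessarily the exact variant or time-marching coupling used in the dissertation.

```python
import numpy as np

def weno5_reconstruct(v):
    # Fifth-order WENO (Jiang-Shu) reconstruction of the interface value
    # v_{i+1/2} from five cell averages v = (v_{i-2}, ..., v_{i+2}).
    vm2, vm1, v0, vp1, vp2 = v
    # Three candidate third-order stencil reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Smoothness indicators penalize stencils crossing a discontinuity
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    eps = 1e-6
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

On smooth data the nonlinear weights approach the ideal weights (0.1, 0.6, 0.3) and fifth-order accuracy is recovered; near a jump the weight of any stencil crossing it collapses, which is what makes the method essentially non-oscillatory.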

Cells receive a wide variety of cellular and environmental signals, which are often processed combinatorially to generate specific genetic responses. Here we explore theoretically the potentials and limitations of combinatorial signal integration at the level of cis-regulatory transcription control. Our analysis suggests that many complex transcription-control functions of the type encountered in higher eukaryotes are already implementable within the much simpler bacterial transcription system. Using a quantitative model of bacterial transcription and invoking only specific protein-DNA interaction and weak glue-like interaction between regulatory proteins, we show explicit schemes to implement regulatory logic functions of increasing complexity by appropriately selecting the strengths and arranging the relative positions of the relevant protein-binding DNA sequences in the cis-regulatory region. The architectures that emerge are naturally modular and evolvable. Our results suggest that the transcription regulatory apparatus is a "programmable" computing machine, belonging formally to the class of Boltzmann machines. Crucial to our results is the ability to regulate gene expression at a distance. In bacteria, this can be achieved for isolated genes via DNA looping controlled by the dimerization of DNA-bound proteins. However, if adopted extensively in the genome, long-distance interaction can cause unintentional intergenic cross talk, a detrimental side effect difficult to overcome by the known bacterial transcription-regulation systems. This may be a key factor limiting the genome-wide adoption of complex transcription control in bacteria. Implications of our findings for combinatorial transcription control in eukaryotes are discussed. Abbreviations: TF, transcription factor; RNAP, RNA polymerase; DNF, disjunctive normal form; CNF, conjunctive normal form

We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (sphNG), and a volume-discretised meshless code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the sphNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the sphNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans' length with a greater number of grid cells we achieve more similar results to the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and sphNG/GIZMO. Although more similar, sphNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and timescales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.

A 5-point-stencil optimised nonlinear scheme with spectral-like resolution over the whole wavenumber range for second derivatives is devised. The proposed scheme can compensate for the dissipation deficiency of traditional linear schemes and suppress the spurious energy accumulation that occurs at high wavenumbers, both of which are frequently encountered in large eddy simulation. The new scheme is composed of a linear fourth-order central scheme term and an artificial viscosity term. These two terms are connected by a nonlinear weight. The proposed nonlinear weight is designed based on Fourier analysis, rather than Taylor analysis, to guarantee a spectral-like resolution. Moreover, the accuracy is not affected by the optimisation, and the new scheme reaches fourth-order accuracy. The new scheme is tested numerically using the one-dimensional diffusion problem, the one-dimensional steady viscous Burgers shock, two-dimensional decaying vortices, three-dimensional isotropic decaying turbulence and fully developed turbulent channel flow. All the tests confirm that the new scheme has spectral-like resolution and can improve the accuracy of the energy spectrum, dissipation rate and high-order statistics of turbulent flows.
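The dissipation deficiency mentioned above is visible in the linear baseline: Fourier analysis of standard central second-derivative schemes shows the modified wavenumber falling below the spectral target at high wavenumbers, i.e. too little damping exactly where spurious energy accumulates. This sketch reproduces only that baseline analysis, not the proposed nonlinear weight.

```python
import numpy as np

kh = np.linspace(0.0, np.pi, 401)
exact = kh**2                                # spectral target (k*h)^2

# Modified wavenumbers (k*h)^2 of central second-derivative schemes:
cd2 = 2.0 * (1.0 - np.cos(kh))               # 2nd-order, 3-point stencil
cd4 = (30.0 - 32.0 * np.cos(kh) + 2.0 * np.cos(2.0 * kh)) / 12.0  # 4th-order, 5-point

# Both underestimate (k*h)^2 near the grid cutoff kh = pi, so linear
# schemes under-dissipate high-wavenumber content in LES.
deficit_at_cutoff = exact[-1] - cd4[-1]
```

An optimized nonlinear scheme of the kind described adds an artificial-viscosity term, weighted to activate only where this high-wavenumber deficit matters, so the fourth-order accuracy at low wavenumbers is untouched.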

Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.
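A minimal sketch of a second-order nonoscillatory update in the sense described, here for linear advection with a minmod-limited slope; the general nonlinear-flux construction analyzed in the paper is not reproduced.

```python
import numpy as np

def minmod(a, b):
    # Slope limiter: returns zero at extrema, so no new extrema are created.
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step_muscl(u, c):
    # One update of a second-order TVD scheme for u_t + a*u_x = 0 (a > 0),
    # CFL number c in (0, 1], periodic boundaries.
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
    uface = u + 0.5 * (1.0 - c) * du                     # value at face i+1/2
    return u - c * (uface - np.roll(uface, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)           # square pulse
u = u0.copy()
for _ in range(200):
    u = step_muscl(u, 0.5)

tv = np.abs(np.diff(u, append=u[:1])).sum()              # total variation
```

Because the limited slope vanishes at local extrema, the total variation cannot grow, which is exactly the nonoscillatory property (no increase in the number of extrema) that the paper's schemes guarantee.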

The Fullemploy Training Scheme is an experiment designed to overcome problems encountered by disadvantaged youth in Britain's Manpower Services Commission's job training program. The aim of the scheme is to bring minority disadvantaged young people into a special office skills training course which would combine vocational training with…

Quantum secret sharing schemes encrypting a quantum state into a multipartite entangled state are treated. The lower bound on the dimension of each share given by Gottesman [Phys. Rev. A 61, 042311 (2000)] is revisited based on a relation between the reversibility of quantum operations and the Holevo information. We also propose a threshold ramp quantum secret sharing scheme and evaluate its coding efficiency.

In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
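The modulation step described, mapping coded bits to PPM symbols, is simple enough to sketch. The block below is a generic illustration for M-ary PPM (each symbol places one pulse in one of M slots); the specific mapping and interleaving of the LDPC-PPM design are not reproduced.

```python
import numpy as np

def bits_to_ppm(bits, M):
    # Map groups of log2(M) code bits to M-ary PPM frames: each frame is a
    # one-hot vector with the pulse in the slot indexed by the bit group.
    m = int(np.log2(M))
    assert len(bits) % m == 0, "bit stream must divide into log2(M)-bit groups"
    symbols = [int("".join(map(str, bits[i:i + m])), 2)
               for i in range(0, len(bits), m)]
    frames = np.zeros((len(symbols), M), dtype=int)
    frames[np.arange(len(symbols)), symbols] = 1
    return frames
```

For example, with M = 4 the bit pairs `10` and `01` become pulses in slots 2 and 1 of successive frames.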

It is widely suggested that feedback on assignments is useful to students' learning, however, little research has examined how this feedback may be provided in large classes or the actual effects of such a scheme. We designed and implemented a voluntary "earlybird scheme" that provided detailed feedback to undergraduate Business students on a…

A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.
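The idea can be sketched with a previous-pixel DPCM predictor: residuals are encoded instead of pixels, and the residual entropy approximates the rate a well-designed source code can approach. The predictor and data below are illustrative; the paper's output modeling stage is more elaborate.

```python
import numpy as np

def entropy(vals):
    # Empirical zeroth-order entropy in bits/sample.
    _, counts = np.unique(vals, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def dpcm_encode(row):
    # Previous-pixel predictor: residual e[i] = x[i] - x[i-1], e[0] = x[0].
    return np.diff(row, prepend=0)

def dpcm_decode(residuals):
    # Exact inverse of the predictor: reconstruction is distortionless.
    return np.cumsum(residuals)

rng = np.random.default_rng(0)
row = 100 + np.cumsum(rng.integers(-2, 3, 512))   # smooth synthetic scanline
res = dpcm_encode(row)
```

On smooth data the residuals cluster around zero, so their entropy is far below that of the raw samples; the source coder then only has to close the gap to this difference entropy.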

The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties ...

The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 could be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 × 10^6 and 100.0 × 10^6. Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.

The initial boundary value problem for the neutron transport equation is considered. First- and second-order accurate difference schemes for the approximate solution of this problem are presented. In applications, stability estimates for the solutions of the difference schemes are obtained. Numerical techniques are developed, and the algorithms are tested on an example in MATLAB.

Purpose: The purpose of this paper is to propose and examine the new user support in university network. Design/methodology/approach: The new user support is realized by use of DACS (Destination Addressing Control System) Scheme which manages a whole network system through communication control on a client computer. This DACS Scheme has been…

The Perry scheme of intellectual and ethical development has become widely used in a range of academic disciplines and such areas as career training and faculty consultation. However, current measurement techniques for the scheme, whether interview format or paper and pencil measures, do not adequately address issues related to assessing cognitive…

Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
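The learning update described, correcting the command using the previous cycle's error and error rate, can be sketched on a second-order servo model. All numerical values (plant parameters, gains, trajectory) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def servo_response(u, dt=0.01, wn=5.0, zeta=0.7):
    # Second-order servo model  y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u,
    # integrated with forward Euler from rest (illustrative parameters).
    y = np.empty_like(u)
    yi = v = 0.0
    for i, ui in enumerate(u):
        a = wn * wn * (ui - yi) - 2.0 * zeta * wn * v
        v += dt * a
        yi += dt * v
        y[i] = yi
    return y

dt = 0.01
t = np.arange(0.0, 2.0, dt)
yd = np.sin(np.pi * t)                  # desired trajectory for each cycle
u = yd.copy()                           # initial command = desired output

e0 = np.max(np.abs(yd - servo_response(u)))
for _ in range(20):                     # repeated cycles of the same task
    e = yd - servo_response(u)
    # Off-line learning update using the stored error and error rate:
    u = u + 0.5 * e + 0.05 * np.gradient(e, dt)
e_final = np.max(np.abs(yd - servo_response(u)))
```

Consistent with the paper's observation, dropping the error-rate term slows (and can destabilize) the iteration, while the combined update drives the cycle error down over repetitions.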

Suggests a new ranking scheme especially adapted for hypertext environments in order to produce more effective retrieval results and still use Boolean search strategies. Topics include Boolean ranking schemes; single-term indexing and term weighting; fuzzy set theory extension; and citation indexing. (64 references) (Author/LRW)

A finite-volume calculation procedure for steady, incompressible, elliptic flows in complex geometries is presented. The methodology uses generalized body-fitted coordinates to model the shape of the boundary accurately. All variables are stored at the centroids of the elements, thus achieving simplicity and low cost of computations. Turbulence is modeled by using the standard two-equation k-epsilon model. The purpose of this work is to evaluate the performance and accuracy of flow calculations under different discretization schemes in the light of experimental results. The discretization schemes that are incorporated in the code include the classical hybrid scheme, the third-order QUICK scheme, and a fifth-order upwind scheme. Benchmark tests are performed for laminar and turbulent flows in 90 deg curved ducts of square and circular cross sections. Flow solutions obtained using the classical hybrid scheme are compared with solutions obtained with the higher-order schemes. The results show that accurate solutions can be efficiently obtained on grids of moderate size by using high-order-accuracy schemes. Overall, the potential of the methodology for calculating real-life engineering flows is demonstrated.
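The QUICK scheme mentioned among the options admits a one-line sketch on a uniform grid; the body-fitted, non-uniform forms used in the paper carry additional metric weights.

```python
def quick_face_value(phi_U, phi_C, phi_D):
    # QUICK: quadratic upwind interpolation of the convected face value,
    #   phi_f = (6*phi_C + 3*phi_D - phi_U) / 8,
    # where C is the upwind node, D the downwind node, and U the
    # far-upwind node, on a uniform grid.
    return 0.75 * phi_C + 0.375 * phi_D - 0.125 * phi_U
```

The quadratic fit through U, C, D makes the face value exact for any quadratic field, which is the source of the scheme's third-order interpolation accuracy.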

... 7 Agriculture 1 2013-01-01 2013-01-01 false Scheme or device. 12.10 Section 12.10 Agriculture Office of the Secretary of Agriculture HIGHLY ERODIBLE LAND AND WETLAND CONSERVATION General Provisions § 12.10 Scheme or device. All or any part of the benefits listed in § 12.4 otherwise due a person...

... 7 Agriculture 1 2011-01-01 2011-01-01 false Scheme or device. 12.10 Section 12.10 Agriculture Office of the Secretary of Agriculture HIGHLY ERODIBLE LAND AND WETLAND CONSERVATION General Provisions § 12.10 Scheme or device. All or any part of the benefits listed in § 12.4 otherwise due a person...

... 7 Agriculture 1 2010-01-01 2010-01-01 false Scheme or device. 12.10 Section 12.10 Agriculture Office of the Secretary of Agriculture HIGHLY ERODIBLE LAND AND WETLAND CONSERVATION General Provisions § 12.10 Scheme or device. All or any part of the benefits listed in § 12.4 otherwise due a person...

... 7 Agriculture 1 2014-01-01 2014-01-01 false Scheme or device. 12.10 Section 12.10 Agriculture Office of the Secretary of Agriculture HIGHLY ERODIBLE LAND AND WETLAND CONSERVATION General Provisions § 12.10 Scheme or device. All or any part of the benefits listed in § 12.4 otherwise due a person...

... 7 Agriculture 1 2012-01-01 2012-01-01 false Scheme or device. 12.10 Section 12.10 Agriculture Office of the Secretary of Agriculture HIGHLY ERODIBLE LAND AND WETLAND CONSERVATION General Provisions § 12.10 Scheme or device. All or any part of the benefits listed in § 12.4 otherwise due a person...

Explores the possibility of adding user-oriented class associations to hierarchical library classification schemes. Analyses a log of book circulation records from a university library in Taiwan and shows that classification schemes can be made more adaptable by analyzing circulation patterns of similar users. (Author/LRW)

We present a synthesis of findings from constructivist teaching experiments regarding six schemes children construct for reasoning multiplicatively and tasks to promote them. We provide a task-generating platform game, depictions of each scheme, and supporting tasks. Tasks must be distinguished from children's thinking, and learning situations…

An evaluation examined how the Danish leave schemes, an offer to employed and unemployed persons who qualify for unemployment benefits, were functioning and to what extent the objectives have been achieved. It was found that 60 percent of those taking leave had previously been unemployed; women accounted for two-thirds of those joining the scheme;…

Zhang and Wang proposed an improved signature scheme without using one-way hash functions. In this paper, we analyze the odd-even probability of the signature parameters in the Zhang-Wang signature scheme in combination with Boolean algebra, such as the bitwise exclusive-or (XOR). Furthermore, it is pointed out that these properties can be used to mount attacks.

A total of 3,732 recipients of Assistance for Isolated Children (AIC) allowances during 1979 and 1980 received questionnaires and parents of 313 families were interviewed to determine who benefitted from the AIC Scheme, what use was made of the AIC allowance, what effect the AIC Scheme appeared to have had, and what anomalies existed in relation…

Twenty middle grades students were interviewed to gain insights into their reasoning about problem-solving strategies using a Problem Solving Justification Scheme as our theoretical lens and the basis for our analysis. The scheme was modified from the work of Harel and Sowder (1998) making it more broadly applicable and accounting for research…

This comment explains that the quantum signature scheme proposed by Ming-Xing Luo et al. (in Int. J. Theor. Phys. 51:2134, 2012) cannot satisfy the signature requirements. The comment presents methods of possible attacks by forgers, while also demonstrating that it is difficult to proceed by the normal protocol because of some errors in the formula of the scheme.

A method for constructing explicit finite-difference schemes which can be used to solve Schroedinger-type partial-differential equations is presented. A forward Euler scheme that is conditionally stable is given by the procedure. The results presented are based on the analysis of the simplest Schroedinger type equation.

Several algorithms for introducing artificial dissipation into a central difference approximation to the Euler and Navier-Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical results are also compared with either theoretical solutions or experimental data. For transonic airfoil flows the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme.

An approximate factorization scheme based on the AF2 algorithm is presented for solving the three-dimensional full potential equation for the transonic flow about isolated wings. Two spatial discretization variations are presented, one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The present algorithm utilizes a C-H grid topology to map the flow field about the wing. One version of the AF2 iteration scheme is used on the upper wing surface and another slightly modified version is used on the lower surface. These two algorithm variations are then connected at the wing leading edge using a local iteration technique. The resulting scheme has improved linear stability characteristics and improved time-like damping characteristics relative to previous implementations of the AF2 algorithm. The presentation is highlighted with a grid refinement study and a number of numerical results.

An effective way to increase the luminosity in the Fermilab Tevatron collider program Run2 is to improve the overall antiproton transfer efficiency. During antiproton coalescing in the Main Injector (MI), about 10-15% of the particles are lost. This loss could be avoided in a new antiproton transfer scheme that removes coalescing from the process. Moreover, this scheme would also eliminate emittance dilution due to coalescing. This scheme uses a 2.5 MHz RF system to transfer antiprotons from the Accumulator to the Main Injector. It is then followed by a bunch rotation in the MI to shorten the bunch length so that it can be captured by a 53 MHz RF bucket. Calculations and ESME simulations show that this scheme works. No new hardware is needed to implement this scheme.

An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

Nowadays, a variety of information related to the distance between two wireless devices can be easily obtained. This paper presents a hybrid localization scheme that combines received signal strength (RSS) and round-trip time (RTT) information with the aim of improving the previous localization schemes. The hybrid localization scheme is based on an RSS ranging technique that uses RTT ranging estimates as constraints among other heuristic constraints. Once distances have been well estimated, the position of the mobile station (MS) to be located is estimated using a new robust least-squared multilateration (RLSM) technique that combines the RSS and RTT ranging estimates mitigating the negative effect of outliers. The hybrid localization scheme coupled with simulations and measurements demonstrates that it outperforms the conventional RSS-based and RTT-based localization schemes, without using either a tracking technique or a previous calibration stage of the environment.
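The multilateration step described can be sketched as a plain nonlinear least-squares position fit from range estimates. This is a simplified, unweighted Gauss-Newton stand-in for the paper's RLSM technique, ignoring the RSS/RTT fusion, outlier mitigation, and heuristic constraints.

```python
import numpy as np

def multilaterate(anchors, ranges, iters=50):
    # Gauss-Newton least-squares position estimate from range measurements
    # to known anchor positions (rows of `anchors`).
    x = anchors.mean(axis=0)                  # initial guess: anchor centroid
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        r = d - ranges                        # range residuals
        J = (x - anchors) / d[:, None]        # Jacobian of d_i w.r.t. x
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Noise-free illustration with four corner anchors (assumed geometry)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
est = multilaterate(anchors, ranges)
```

With noisy RSS/RTT ranges, the same fit is typically robustified (e.g. by down-weighting large residuals), which is the role the RLSM step plays in the paper.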

The upwind factorizable schemes for the equations of fluid flow were introduced recently. They facilitate achieving the Textbook Multigrid Efficiency (TME) and are expected also to result in solvers of unparalleled robustness. The approach itself is very general. Therefore, it may well become a general framework for large-scale Computational Fluid Dynamics. In this paper we outline the triangular grid formulation of the factorizable schemes. The derivation is based on the fact that the factorizable schemes can be expressed entirely using vector notation, without explicitly mentioning a particular coordinate frame. We describe the resulting discrete scheme in detail and present some computational results verifying the basic properties of the scheme/solver.

This paper proposes a new electronic voting (e-voting) scheme that fulfills all the security requirements of e-voting, i.e., privacy, accuracy, universal verifiability, fairness, receipt-freeness, incoercibility, dispute-freeness, robustness, practicality and scalability, some of which are usually traded off against one another. Compared with other existing schemes, this scheme requires much simpler computations and weaker assumptions about the trustworthiness of individual election authorities. The key mechanism uses confirmation numbers involved in individual votes to make votes verifiable while preventing all entities, including the voters themselves, from learning the linkages between voters and their votes. Many existing e-voting schemes deploy zero-knowledge proofs (ZKP) extensively to achieve verifiability. However, ZKP is expensive and complicated. The confirmation numbers attain the verifiability requirement in a much simpler and more intuitive way, making the scheme scalable and practical.

The effect of collision-partner selection schemes on the accuracy and the efficiency of the Direct Simulation Monte Carlo (DSMC) method of Bird is investigated. Several schemes to reduce the total discretization error as a function of the mean collision separation and the mean collision time are examined. These include the historically first sub-cell scheme, the more recent nearest-neighbor scheme, and various near-neighbor schemes, which are evaluated for their effect on the thermal conductivity for Fourier flow. Their convergence characteristics as a function of spatial and temporal discretization and the number of simulators per cell are compared to the convergence characteristics of the sophisticated and standard DSMC algorithms. Improved performance is obtained if the population from which possible collision partners are selected is an appropriate fraction of the population of the cell.

The paper presents a secure and efficient threshold group signature scheme addressing two problems of current threshold group signature schemes: conspiracy attacks and inefficiency. The proposed scheme separates a designated clerk, who is responsible for collecting and authenticating the individual signatures, from the group. The designated clerk does not participate in the distribution of the group secret key and holds his own public and private keys; after collecting the individual signatures, the clerk signs part of the threshold group signature information. A verifier must therefore validate the clerk's signature before verifying the signature of the group. The scheme is proved secure against conspiracy attacks and is shown, by comparison with other schemes, to be more efficient.

Dissipation mechanisms of Godunov-type schemes are presented in the framework of unified representation. The causes of inaccuracy at the contact discontinuity and the dissipation mechanism in the numerical mass flux of the HLLEM scheme are examined first. A "vacuum preserving property" is defined and the prominent role of the numerical signal speed involved with the rarefaction waves in the expanding region is analyzed. Through a linear perturbation analysis on the odd-even decoupling problem, necessary conditions for designing a shock stable scheme are discussed. As a result, an improved HLLE(HLLE+) scheme is proposed and its dissipation mechanism is analyzed. The diffusivity of the Godunov-type schemes is examined by two-dimensional hypersonic viscous flow.

Conservative schemes usually produce non-physical oscillations in multi-component flow solutions. Many methods have been proposed to avoid these oscillations. Some of these correction schemes can fix the oscillations in the pressure profile at discontinuities, but the density profile still remains diffused between the two components. In the case of gas-liquid interfaces, density diffusion is not acceptable. In this paper, the interfacial correction scheme proposed by Cocchi et al. is modified to be used in conjunction with the level-set approach. After each time step, the two grid points that bound the interface are recalculated by using an exact Riemann solver, so that the pressure oscillations and the density diffusion at discontinuities are eliminated. The scheme presented here can be applied to any type of conservation law solver. Some examples solved by this scheme are presented and their results compared with the exact solution when available. Good agreement is obtained between the present results and the exact solutions.

The choice of entanglement purification scheme strongly depends on the fidelities of quantum gates and measurements, as well as on the imperfection of the initial entanglement. For instance, the purification scheme that is optimal at low gate fidelities may not be optimal at higher gate fidelities. We employ an evolutionary algorithm that efficiently optimizes the entanglement purification circuit for given system parameters. Such optimized purification schemes will boost the performance of entanglement purification, and consequently enhance the fidelity of teleportation-based non-local coupling gates, an indispensable building block for modular quantum computers. In addition, we study how these optimized purification schemes affect the resource overhead caused by error correction in modular quantum computers.

A controlled field-effect passivation by a well-defined density of fixed charges is crucial for modern solar cell surface passivation schemes. Al2O3 nanolayers grown by atomic layer deposition contain negative fixed charges. Electrical measurements on slant-etched layers reveal that these charges are located within a 1 nm distance of the interface with the Si substrate. When inserting additional interface layers, the fixed charge density can be continuously adjusted from 3.5 × 10(12) cm(-2) (negative polarity) to 0.0 and up to 4.0 × 10(12) cm(-2) (positive polarity). A HfO2 interface layer of one or more monolayers reduces the negative fixed charges in Al2O3 to zero. The role of HfO2 is described as an inert spacer controlling the distance between Al2O3 and the Si substrate. It is suggested that this spacer alters the nonstoichiometric initial Al2O3 growth regime, which is responsible for the charge formation. On the basis of this charge-free HfO2/Al2O3 stack, negative or positive fixed charges can be formed by introducing additional thin Al2O3 or SiO2 layers between the Si substrate and this HfO2/Al2O3 capping layer. All stacks provide very good passivation of the silicon surface. The measured effective carrier lifetimes are between 1 and 30 ms. This charge control in Al2O3 nanolayers allows the construction of zero-fixed-charge passivation layers as well as layers with tailored fixed-charge densities for future solar cell concepts and other field-effect-based devices.

Most of the existing multi-recipient signcryption schemes do not take the anonymity of recipients into consideration because the list of the identities of all recipients must be included in the ciphertext as a necessary element for decryption. Although the signer's anonymity has been taken into account in several alternative schemes, these schemes often suffer from the cross-comparison attack and joint conspiracy attack. That is to say, there are few schemes that can achieve complete anonymity for both the signer and the recipient. However, in many practical applications, such as network conference, both the signer's and the recipient's anonymity should be considered carefully. Motivated by these concerns, we propose a novel multi-recipient signcryption scheme with complete anonymity. The new scheme can achieve both the signer's and the recipient's anonymity at the same time. Each recipient can easily judge whether the received ciphertext is from an authorized source, but cannot determine the real identity of the sender, and at the same time, each participant can easily check decryption permission, but cannot determine the identity of any other recipient. The scheme also provides a public verification method which enables anyone to publicly verify the validity of the ciphertext. Analyses show that the proposed scheme is more efficient in terms of computation complexity and ciphertext length and possesses more advantages than existing schemes, which makes it suitable for practical applications. The proposed scheme could be used for network conferences, paid-TV or DVD broadcasting applications to solve the secure communication problem without violating the privacy of each participant.

This paper presents the operational scheme of the National Satellite Meteorological Center (NSMC) of the China Meteorological Administration (CMA) to derive atmospheric motion vectors. The NSMC scheme is compared with a method developed at the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) in preparation for Meteosat Second Generation. Both schemes employ similar basic principles in terms of feature tracking and height assignment; however, there are also some important differences. Furthermore, the EUMETSAT scheme assigns quality indicators to each wind vector at the end of the processing chain, whereas the NSMC scheme has inbuilt quality checking at different processing steps, allowing for reinstatement of winds rejected by a first quality check. The performance is evaluated over two periods: a week in January and a week in July 1999. European Centre for Medium-Range Weather Forecasts analyses and radiosonde data are used as independent data for the evaluation of the two schemes. It is shown that correlating infrared image data with water vapor data before height adjustment, as performed in the NSMC scheme, has great potential to better distinguish high and low cloud and to provide high-density wind fields. The use of radiative transfer calculations for estimating the height of thin clouds in the EUMETSAT scheme is shown to be imperative for good-quality wind fields. Finally, the EUMETSAT scheme's assignment of quality indicators improves the utility of the wind vectors for use in numerical weather prediction models. It is suggested that a combination of the different features of both schemes could provide highly increased spatial density in the wind field with improved quality.
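
The core feature-tracking step both schemes share — finding the displacement of a tracer between successive satellite images by maximizing a correlation score — can be sketched as follows (illustrative only: operational schemes track many targets, refine to sub-pixel accuracy, and convert displacement to a wind vector using the image time separation and pixel ground resolution):

```python
import numpy as np

def track_feature(img0, img1, max_shift=5):
    """Return the integer (dy, dx) shift that best aligns img1 with img0,
    found by brute-force maximization of the cross-correlation score
    over all candidate shifts up to max_shift pixels."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # shift img1 back by the candidate displacement and correlate
            score = np.sum(img0 * np.roll(img1, (-dy, -dx), axis=(0, 1)))
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift
```
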

This paper describes an optical voltage transducer (OVT) for a 35 kV system based on the Pockels effect in a BGO (Bi4Ge3O12) crystal. OVTs used to measure power-system voltages are superior to conventional electromagnetic voltage transducers in many respects and thus have great application potential. Their advantages are: 1) optics provides total galvanic separation between the measuring point at high-voltage (HV) potential and the measuring equipment at ground potential; 2) transmission of measuring signals over optical fibers is immune to induced electromagnetic noise, even in the EMI environment of switchyards and other high-voltage installations; 3) optics, and especially optical fibers, make the insulation costs independent of voltage level, giving an economic advantage at voltage levels above 100 kV; 4) the use of optics is expected to reduce the weight of the transducers; 5) optical transducers are expected to have a larger bandwidth than conventional transducers; 6) the output signals from an optical transducer are easily interfaced with computers and electronically operated equipment such as digital relays. New techniques developed in electronics and optics, including fiber-optic technology, bring new contributions to the measurement of voltage and electric field. Pockels voltage sensors have been widely introduced into electrical power transmission and distribution systems, and some advantages of the optical voltage measuring techniques have been reported. In this paper, a brief summary of electro-optic effects and the principle of the OVT is presented, and the signal processing schemes and features of different optical paths are analyzed. The basic principle of the OVT is to modulate the irradiance of the light, directed to the OVT by an optical fiber, according to the potential difference between the HV line and ground potential. The modulation of the light is accomplished by placing a material that has an optical property (birefringence), which is
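
The modulation principle stated above reduces to a one-line transfer function. The sketch below assumes a quarter-wave optical bias (a common design choice, not stated in the abstract) so the response is linear in the measured voltage around the operating point; `v_pi` stands for the crystal's half-wave voltage, whose real value depends on the BGO geometry:

```python
import math

def pockels_transmission(v, v_pi):
    """Normalized output irradiance of a Pockels sensor between crossed
    polarizers with a quarter-wave bias: T = sin^2(pi/4 + pi*v/(2*v_pi)).
    At v = 0 this gives T = 0.5, and for |v| << v_pi the response is
    approximately linear: T ~ 0.5 + (pi/(2*v_pi)) * v."""
    theta = math.pi / 4.0 + math.pi * v / (2.0 * v_pi)
    return math.sin(theta) ** 2
```
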

In this work, an ADER-type finite volume numerical scheme is proposed as an extension of a first-order solver based on weak solutions of Riemann problems (RPs) with source terms. The source terms considered here are of a special but relevant type: their spatial integral is discontinuous. The relevant difference from other previously defined ADER schemes is that the presence of the source term is considered in the solutions of the derivative Riemann problem (DRP). Unlike the original ADER schemes, the proposed numerical scheme computes the RPs of the high-order terms of the DRP using time derivatives of the fluxes as initial conditions for these RPs. Weak solutions of the RPs defined for the DRP are computed using an augmented version of the Roe solver that includes an extra wave accounting for the contribution of the source term. The discretization of the source term leads to an energy-balanced numerical scheme that yields the exact solution for steady cases independently of the grid refinement. In unsteady problems, the numerical scheme ensures convergence to the exact solution. The numerical scheme can be constructed with an arbitrary order of accuracy, with no theoretical barrier. Numerical results for Burgers' equation and the shallow water equations are presented in this work and indicate that the proposed numerical scheme converges with the expected order of accuracy.

Hypersonic flow involves complex physical and chemical processes; hence its investigation needs careful analysis of existing schemes and the choice of a suitable scheme or the design of a brand new one. The present study deals with two numerical schemes, Harten-Lax-van Leer with Contact (HLLC) and the advection upstream splitting method (AUSM), to effectively simulate hypersonic flow fields and accurately predict shock waves with minimal diffusion. In the present computations, hypersonic flows have been modeled as a system of hyperbolic equations with one additional equation for the non-equilibrium energy and relaxing source terms. Real gas effects, which appear typically in hypersonic flows, have been simulated through the energy relaxation method. The HLLC and AUSM methods are modified to incorporate the conservation laws for the non-equilibrium energy. Numerical implementation has shown that the non-equilibrium energy convects with the mass and hence has no bearing on the basic numerical scheme. The numerical simulations carried out show good agreement with experimental data available in the literature. Both numerical schemes give identical results at equilibrium. The present study has demonstrated that real gas effects in hypersonic flows can be modeled through the energy relaxation method along with either the AUSM or the HLLC numerical scheme.

The task of assimilating cloud and rainfall information is steadily increasing in importance with a new generation of observing platforms. The inherent non-linearity of many cloud related processes exacerbates the archetypical assimilation dilemma. Overly simplistic schemes may not represent the physical cloud processes adequately for assimilation purposes. At the other extreme, many current cloud schemes used in forecast or climate models are simply too complex for the tangent linear and adjoint codes to be constructed. Even if the latter is achievable, the full schemes in forecast models often contain discrete switches and nonlinear processes to maximize forecast performance, but render the linear approximation invalid for perturbations of the magnitude of typical analysis increments. This may cause convergence problems over typical assimilation windows (currently 12 hours) in 4D variational assimilation (4DVAR). The existing ECMWF cloud scheme used in the assimilating model is deficient since there is no connection between diagnosed cloud cover, water/ice and precipitation generation. We thus present a new non-linear diagnostic cloud scheme for which the tangent linear and adjoint codes have been constructed. The scheme is based on prognostic humidity and temperature variables and a diagnostic assumption concerning subgrid variability is used to derive cloud water, ice and cover characteristics. Results using the new scheme are shown.
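
A diagnostic cloud-cover assumption of the kind described — cloud fraction derived from grid-mean humidity via an assumed subgrid variability — can be illustrated with a Sundqvist-type closure (a common choice in the literature; the actual ECMWF formulation differs in detail):

```python
def cloud_cover(rh, rh_crit=0.8):
    """Diagnostic fractional cloud cover from grid-mean relative humidity.
    No cloud below the critical humidity rh_crit; full cover at
    saturation; smooth monotone increase in between."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - ((1.0 - rh) / (1.0 - rh_crit)) ** 0.5
```

Because the expression is smooth away from the two thresholds (no discrete switches), its tangent linear and adjoint are straightforward to construct — the property the new scheme is designed around.
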

The area of high-speed flow is seeing renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), the Space Shuttle, and future civil transport concepts. Upwind schemes for solving such flows have become increasingly popular in the last decade due to their excellent shock-capturing properties. In the first part of this paper the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on the above scheme are presented to demonstrate the feasibility, accuracy, and efficiency of the proposed scheme. One of the test problems is a Chapman-Jouguet (CJ) detonation problem, for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends both on the upwinding scheme used and on the limiter employed to obtain second-order accuracy. For example, the Osher scheme gives the correct CJ solution when the superbee limiter is used, but gives the spurious solution when the van Leer limiter is used. With the Roe scheme the spurious solution is obtained for all limiters.
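
The limiter dependence reported above is worth making concrete. The superbee and van Leer limiters mentioned are standard one-line functions of the consecutive-gradient ratio r (a sketch of the textbook formulas, not the paper's code):

```python
def superbee(r):
    """Superbee flux limiter: compressive, hugs the upper TVD boundary."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def van_leer(r):
    """Van Leer flux limiter: smooth and less compressive."""
    return (r + abs(r)) / (1.0 + abs(r))
```

Both vanish for r <= 0 (local extrema) and equal 1 at r = 1 (second order on smooth data), but superbee applies steeper anti-diffusion for r > 1 — consistent with the observation that the choice of limiter can decide between the correct CJ solution and a spurious weak detonation.
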

The conditions under which finite difference schemes for the shallow water equations can conserve both total energy and potential enstrophy are considered. A method of deriving such schemes using operator formalism is developed. Several such schemes are derived for the A-, B- and C-grids. The derived schemes include second-order schemes and pseudo-fourth-order schemes. The simplest B-grid pseudo-fourth-order schemes are presented.

Voice traffic variations are characterized by on/off transitions of voice calls, and talkspurt/silence transitions of speakers in conversations. A speaker is known to be in silence for more than half the time during a telephone conversation. In this dissertation, we study some schemes which exploit speaker silences for an efficient utilization of the transmission capacity in integrated voice/data multiplexing and in digital speech interpolation. We study two voice/data multiplexing schemes. In each scheme, any time slots momentarily unutilized by the voice traffic are made available to data. In the first scheme, the multiplexer does not use speech activity detectors (SAD), and hence the voice traffic variations are due to call on/off only. In the second scheme, the multiplexer detects speaker silences using SAD and transmits voice only during talkspurts. The multiplexer with SAD performs digital speech interpolation (DSI) as well as dynamic channel allocation to voice and data. The performance of the two schemes is evaluated using discrete-time modeling and analysis. The data delay performance for the case of English speech is compared with that for the case of Japanese speech. A closed form expression for the mean data message delay is derived for the single-channel single-talker case. In a DSI system, occasional speech losses occur whenever the number of speakers in simultaneous talkspurt exceeds the number of TDM voice channels. In a buffered DSI system, speech loss is further reduced at the cost of delay. We propose a novel fixed-delay buffered DSI scheme. In this scheme, speech fill-in/hangover is not required because there are no variable delays. Hence, all silences that naturally occur in speech are fully utilized. Consequently, a substantial improvement in the DSI performance is made possible. The scheme is modeled and analyzed in discrete time. Its performance is evaluated in terms of the probability of speech clipping, packet rejection ratio, DSI
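
The speech-loss ("freeze-out") behaviour of an unbuffered DSI system can be sketched with a stationary binomial model (an illustrative simplification of the discrete-time analysis in the dissertation; the default talkspurt probability 0.4 reflects the observation that speakers are silent more than half the time):

```python
from math import comb

def freezeout_fraction(n_talkers, n_channels, p_talk=0.4):
    """Expected fraction of speech clipped when n_talkers independent
    speakers (each in talkspurt with probability p_talk) share
    n_channels TDM voice channels: E[max(0, A - C)] / E[A] with
    A ~ Binomial(n_talkers, p_talk)."""
    e_active = n_talkers * p_talk
    e_excess = sum((k - n_channels) * comb(n_talkers, k)
                   * p_talk ** k * (1.0 - p_talk) ** (n_talkers - k)
                   for k in range(n_channels + 1, n_talkers + 1))
    return e_excess / e_active
```
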

The final goal of any numerical method is to give the smallest wall-clock time for a given final-time error or, conversely, the smallest run-time error for a given wall-clock time. Here a comparison is given between adaptive mesh refinement (AMR) schemes and non-adaptive schemes of higher order. It is shown that, in three-dimensional calculations, for AMR schemes to be competitive the finest scale must be restricted to an extremely, and unrealistically, small percentage of the computational domain.

The problem of adaptive coordinated control of multiple robot arms transporting an object is addressed. A stable adaptive control scheme for both trajectory tracking and internal force control is presented. Detailed analyses of the tracking properties of the object position, velocity, and the internal forces exerted on the object are given. It is shown that this control scheme can achieve satisfactory tracking performance without using measurements of the contact forces and their derivatives. The scheme also admits a decentralized implementation that reduces the computational burden. Moreover, efficient adaptive control strategies can be incorporated to reduce the computational complexity.

Recently, many documents or messages from an organization need to be signed by more than one person. For that reason, many threshold signatures based on various number-theoretic problems have been developed. In this paper, a threshold signature scheme based on the two most popular number-theoretic problems, namely factoring and discrete logarithms, is proposed. The advantage of this new scheme lies in the fact that it is very hard to solve both the factoring and discrete logarithm problems simultaneously. The scheme is also shown to be secure against several attacks and requires reasonable time complexity in both the signing and verifying phases.

A triangle-based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell-averaged data, which is accomplished via a nearest-neighbor linear interpolation followed by a slope-limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably simpler than other triangle-based non-oscillatory approximations which, like this scheme, approximate the flux to second-order accuracy. Numerical results for linear advection and Burgers' equation are presented.

There has been great interest in building compact synchrotrons for various applications, for example, inverse Compton scattering X-ray sources. However, beam injection and extraction in compact rings require careful design because of the lack of space. In this paper, we propose a simple combined injection-extraction scheme that exploits the fringe field of the existing dipole magnets instead of additional septum magnets. The scheme is illustrated using the 4.8 m ring proposed for the Tsinghua Thomson scattering X-ray source as an example. Particle tracking is applied to demonstrate its validity.

In order to improve the efficiency of quantum secret sharing, quantum ramp secret sharing schemes were proposed (Ogawa et al., Phys. Rev. A 72, 032318 [2005]), which offer a trade-off between security and coding efficiency. In quantum ramp secret sharing, partial information about the secret is allowed to leak to a set of participants, called an intermediate set, which cannot fully reconstruct the secret. This paper revisits the size of a share in the quantum ramp secret sharing scheme based on a relation between the quantum operations and the coherent information. We also propose an optimal quantum ramp secret sharing scheme.

A class of ENO schemes is presented for the numerical solution of multidimensional hyperbolic systems of conservation laws on structured and unstructured grids. This is a class of shock-capturing schemes which are designed to compute cell averages to high-order accuracy. The ENO scheme is composed of a piecewise-polynomial reconstruction of the solution from its given cell averages, approximate evolution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is based on an adaptive selection of the stencil for each cell so as to avoid spurious oscillations near discontinuities while achieving a high order of accuracy away from them.
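
The adaptive stencil selection at the heart of the reconstruction step can be sketched in one dimension: starting from the cell itself, the stencil grows one cell at a time toward whichever side has the smaller undivided difference, so it avoids crossing a discontinuity whenever it can (a minimal sketch, not the paper's multidimensional algorithm):

```python
def eno_stencil(v, i, k):
    """Choose a k-cell ENO stencil containing cell i from the cell
    averages v.  Returns the (left, right) cell indices, inclusive."""
    def undivided_diff(lo, hi):
        # highest-order undivided difference of v over cells lo..hi
        d = [v[j] for j in range(lo, hi + 1)]
        while len(d) > 1:
            d = [d[j + 1] - d[j] for j in range(len(d) - 1)]
        return abs(d[0])

    left = right = i
    for _ in range(k - 1):
        if left == 0:                         # domain boundary: extend right
            right += 1
        elif right == len(v) - 1:             # domain boundary: extend left
            left -= 1
        elif undivided_diff(left - 1, right) <= undivided_diff(left, right + 1):
            left -= 1                         # smoother on the left
        else:
            right += 1                        # smoother on the right
    return left, right
```

On data with a jump between cells 3 and 4, the stencil for cell 3 grows leftward and the stencil for cell 4 grows rightward — exactly how spurious oscillations near the discontinuity are avoided.
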

An account is given of the construction of efficient implementations of 'essentially nonoscillatory' (ENO) schemes that approximate systems of hyperbolic conservation laws. ENO schemes use a local adaptive stencil to automatically obtain information from regions of smoothness when the solution develops discontinuities. Approximations employing ENOs can thereby obtain uniformly high accuracy to the very onset of discontinuities, while retaining a sharp and essentially nonoscillatory shock transition. For ease of implementation, ENO schemes applying the adaptive stencil concept to the numerical fluxes and employing a TVD Runge-Kutta-type time discretization are constructed.

Cell-centered finite-volume (CCFV) schemes have certain attractive properties for the solution of the equations governing compressible fluid flow. Among others, they provide a natural vehicle for specifying flux conditions at the boundaries of the physical domain. Unfortunately, they can lead to slow convergence in numerical programs utilizing them. In this report, a method for investigating and improving the convergence of CCFV schemes is presented, which focuses on the effect of the numerical boundary conditions. The key to the method is the computation of the spectral radius of the iteration matrix of the entire discretized system of equations, not just of the interior-point scheme or the boundary conditions.
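
The diagnostic at the core of the method — the spectral radius of the iteration matrix governing asymptotic convergence — can be sketched on a toy system (a 1-D Poisson operator with Jacobi iteration standing in for the full interior-plus-boundary discretization analyzed in the report):

```python
import numpy as np

def jacobi_spectral_radius(n):
    """Spectral radius of the Jacobi iteration matrix B = I - D^{-1} A
    for A = tridiag(-1, 2, -1); the iteration x <- B x + c converges
    iff rho(B) < 1, and rho sets the asymptotic convergence rate."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    D_inv = np.diag(1.0 / np.diag(A))
    B = np.eye(n) - D_inv @ A
    return float(np.max(np.abs(np.linalg.eigvals(B))))
```

The known closed form rho = cos(pi/(n+1)) for this model problem shows the convergence deterioration with grid size that such an analysis quantifies.
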

Private Set Intersection allows a client to privately compute the set intersection in collaboration with a server, which is one of the most fundamental problems in privacy-preserving multiparty computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. It is therefore very suitable for big-data services in the Cloud and for large-scale client-server networks.

Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The method of space-time conservation element and solution element, currently under development, is a nontraditional numerical method for solving conservation laws. The method is developed on the basis of local and global flux conservation in a space-time domain, in which space and time are treated in a unified manner. Explicit solvers for model and fluid dynamic conservation laws have previously been investigated. In this paper, we introduce a new concept in the design of implicit schemes, and use it to construct two highly accurate solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case, and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to purely inviscid (convective) equations. The stability and consistency of the schemes are analyzed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme. We also construct an explicit solver
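
The pure-diffusion limit cited above can be checked directly with a standard von Neumann analysis. The sketch below reproduces only the classical Crank-Nicolson amplification factor the abstract refers to, not the CE-SE schemes themselves:

```python
import numpy as np

def cn_amplification(r, theta):
    """Von Neumann amplification factor of the Crank-Nicolson scheme
    for u_t = nu * u_xx, with r = nu*dt/dx**2 and theta = k*dx the
    nondimensional wavenumber.  |G| <= 1 for every r > 0, i.e. the
    scheme is unconditionally stable."""
    s = 2.0 * r * np.sin(theta / 2.0) ** 2
    return (1.0 - s) / (1.0 + s)
```
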

Chrominance used (in addition to luminance) in estimating motion. Variable-rate digital coding scheme for compression of color-video-image data designed to deliver pictures of good quality at moderate compressed-data rate of 1 to 2 bits per pixel, or of fair quality at rate less than 1 bit per pixel. Scheme, in principle, implemented by use of commercially available application-specific integrated circuits. Incorporates elements of some prior coding schemes, including motion compensation (MC) and discrete cosine transform (DCT).

We present a simple diode laser-based photoionization scheme for generating electrons and ions with well-defined spatial and energetic (≲2 eV) structures. This scheme can easily be implemented in ion or electron imaging spectrometers for the purpose of off-line characterization and calibration. The low laser power ∼1 mW needed from a passively stabilized diode laser and the low flux of potassium atoms in an effusive beam make our scheme a versatile source of ions and electrons for applications in research and education.

We propose a novel scheme to probabilistically teleport an unknown two-level quantum state when the information about the partially entangled state is available only to the sender. This is in contrast with previous typical teleportation schemes, in which the receiver must know the non-maximally entangled state. Additionally, we illustrate two potential applications of the novel scheme for probabilistic teleportation from a sender to a receiver with the help of an assistant, who plays distinct roles under different communication conditions, and our results show that the novel proposal could enlarge the range of applicability of probabilistic teleportation.

A unified scheme was developed to define the composition, improve detection and qualitative identification of water soluble organics in heavy oil retort. Elements of the scheme included gas chromatography-mass spectrometry (GC-MS), high resolution mass spectrometry (HRMS), hybrid mass spectrometry-mass spectrometry (EB-TOF) with electron impact (EI) and fast atom bombardment (FAB) ionization and a computerized library search program. As part of the development of the process, each element of the analytical scheme was applied to complex samples of aqueous organic materials extracted from heavy oil retorts. Preliminary investigations have indicated that the heavy oil retort contains hundreds of compounds in ppm/ppb concentrations.

A novel integration scheme for the nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed to imply strict decay of the system's total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.

A 4-point interpolatory subdivision scheme with a tension parameter is analyzed, and the local properties of the scheme as well as sufficient conditions for G1 continuity between surfaces and between curves are discussed. An efficient method of generating natural boundary points of a 4-point interpolatory curve is presented, together with a surface modeling method with an overall fairing property that combines energy optimization with the subdivision scheme. The method has been applied to modeling 3D virtual garment surfaces.
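
The 4-point interpolatory rule with tension parameter w can be sketched in a few lines (one refinement step on a closed polygon; w = 1/16 recovers the classical scheme, and the G1/fairing machinery of the paper is not reproduced here):

```python
def subdivide(points, w=1.0 / 16.0):
    """One step of the 4-point interpolatory subdivision scheme on a
    closed polygon of scalar- or vector-valued points: every old point
    is kept, and a new point is inserted between each adjacent pair."""
    n = len(points)
    out = []
    for i in range(n):
        p0, p1, p2, p3 = (points[(i - 1) % n], points[i],
                          points[(i + 1) % n], points[(i + 2) % n])
        out.append(p1)                                     # interpolation
        out.append((0.5 + w) * (p1 + p2) - w * (p0 + p3))  # new point
    return out
```
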

The development of implicit schemes for obtaining steady-state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. The development of explicit and implicit schemes to compute unsteady flows on unstructured grids is then discussed. The issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion are also outlined. Techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers are discussed. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.

A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. Results for linear advection in one dimension as well as for the Euler equations confirm the schemes' high accuracy, good shock resolution, and computational efficiency.

This paper proposes an efficient IDCT scheme for an H.264 decoder. First, the macro-block motion compensation residuals obtained from the bit-stream are classified into four cases: only the dc coefficient is non-zero, only the first-row coefficients are non-zero, only the first-column coefficients are non-zero, and all others. The inverse transform of the first three cases can obviously be optimized, so, second, we apply different IDCT processing in each case to reduce its complexity. Compared with the traditional IDCT scheme, the proposed scheme achieves an average 51.8% reduction in computational complexity without degradation in visual quality.
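
The case dispatch can be sketched with a separable orthonormal DCT standing in for the H.264 integer transform (an illustrative substitution): when only the DC coefficient, only the first row, or only the first column is non-zero, the 2-D inverse transform collapses to a constant block or a single 1-D transform.

```python
import numpy as np

def dct_matrix(n=4):
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def idct2_fast(X, C):
    """2-D inverse DCT with the three fast paths described above;
    falls back to the full separable transform otherwise."""
    n = X.shape[0]
    if not X[1:, :].any() and not X[0, 1:].any():
        return np.full((n, n), X[0, 0] / n)                      # DC only
    if not X[1:, :].any():
        return np.outer(np.ones(n) / np.sqrt(n), C.T @ X[0])     # first row only
    if not X[:, 1:].any():
        return np.outer(C.T @ X[:, 0], np.ones(n) / np.sqrt(n))  # first column only
    return C.T @ X @ C                                           # general case
```
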

The accuracy of the space discretization for time-dependent problems on a nonuniform mesh is considered. Many schemes reduce to first-order accuracy, while a popular finite volume scheme is even inconsistent for general grids. This accuracy is measured in terms of physical variables. However, when accuracy is measured in computational variables, second-order accuracy can be obtained. This is meaningful only if the mesh accurately reflects the properties of the solution. In addition, the stability properties of some improved accurate schemes are analyzed, and it is shown that they also allow larger time steps when Runge-Kutta-type methods are used to advance in time.

A new class of body-fitted grid system that can keep third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and grid points moving along these lines like an abacus (Soroban in Japanese). The length of each line and the number of grid points in each line can be different. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on locally refined meshes are easily performed. Mesh generation and searching for the upstream departure point are very simple, and an almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.

As part of the continuous development of the space-time conservation element and solution element (CE-SE) method, a set of so-called "Courant number insensitive schemes" has recently been proposed. The key advantage of these new schemes is that the numerical dissipation associated with them generally does not increase as the Courant number decreases. As such, they can be applied to problems with large Courant number disparities (such as what commonly occurs in Navier-Stokes problems) without incurring excessive numerical dissipation.

We reconsider the application of the "optimization" procedure to the problem of factorization scheme dependence in finite-order QCD calculations. The main difficulty encountered in a previous analysis disappears once an algebraic error is corrected.

Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, referred to as "histogram shifting of adjacent pixel difference" (APD), is used to obtain reversibility. The proposed scheme can successfully detect 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.
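A toy version of histogram shifting on adjacent differences conveys the reversibility idea; this operates on a 1-D integer sequence rather than database tuples, and is a sketch of the APD mechanism, not the paper's exact algorithm:

```python
import numpy as np

def apd_embed(seq, bits, peak=0):
    """Reversibly embed bits by histogram shifting of adjacent differences:
    differences equal to `peak` carry one bit; larger ones shift up by 1."""
    d = np.diff(np.asarray(seq, dtype=np.int64))
    it = iter(bits)
    for i in range(len(d)):
        if d[i] > peak:
            d[i] += 1                 # shift to make room next to the peak
        elif d[i] == peak:
            d[i] += next(it, 0)       # embed one bit at the peak bin
    return np.concatenate(([seq[0]], seq[0] + np.cumsum(d)))

def apd_extract(wm, peak=0):
    """Recover the bits and the original sequence exactly."""
    d = np.diff(np.asarray(wm, dtype=np.int64))
    bits, orig = [], []
    for v in d:
        if v == peak:
            bits.append(0); orig.append(peak)
        elif v == peak + 1:
            bits.append(1); orig.append(peak)
        elif v > peak + 1:
            orig.append(v - 1)        # undo the shift
        else:
            orig.append(v)
    return np.concatenate(([wm[0]], wm[0] + np.cumsum(orig))), bits

seq = [50, 50, 51, 51, 53, 52]
wm = apd_embed(seq, [1, 0])
rec, bits = apd_extract(wm)
print(list(map(int, rec)) == seq, bits)  # True [1, 0]
```

The round trip is exact, which is the defining property of a reversible watermark: after extraction the cover data are restored bit-for-bit.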

Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.

Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm uses upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 x 10(exp 6) and Mach numbers up to 25.

We present an upwind high-resolution factorizable (UHF) discrete scheme for the compressible Euler equations. The scheme approximates the equations in their general conservative form and is related to the family of genuinely multidimensional upwind schemes developed previously and demonstrated to have good shock-capturing capabilities. Its unique additional property is that it is factorizable, i.e., it allows one to distinguish between full-potential and advection factors at the discrete level. This property facilitates the construction of optimally efficient multigrid solvers, through a relaxation procedure that exploits the factorizability.

A comparison was made of two contrasting G-seat cueing schemes. The G-seat, an aircraft simulation subsystem, creates aircraft acceleration cues via seat contour changes. Of the two cueing schemes tested, one was designed to create skin pressure cues and the other was designed to create body position cues. Each cueing scheme was tested and evaluated subjectively by five pilots regarding its ability to cue the appropriate accelerations in each of four simple maneuvers: a pullout, a pushover, an S-turn maneuver, and a thrusting maneuver. A divergence of pilot opinion occurred, revealing that the perception and acceptance of G-seat stimuli is a highly individualistic phenomenon. The creation of one acceptable G-seat cueing scheme was, therefore, deemed to be quite difficult.

Powerful γ-ray spectrometers such as the 8π and GAMMASPHERE are capable of rapidly collecting large data sets that incorporate hundreds of transitions. The determination of nuclear level schemes from the resulting experimental data is time consuming and is a substantial obstacle to the rapid development and formulation of new ideas, particularly when examining trends amongst large numbers of nuclei. The development of next-generation spectrometers such as GRETINA, AGATA, or GRIFFIN will vastly increase the complexity of the experimental data sets and increase the need for new methods of level scheme determination. We present a new transition-centric level scheme representation that closely matches the form of the experimental data and facilitates the use of graph-theoretic methods. We then present a derivation of an analytical formula that directly relates level scheme structure to experimental singles and coincidence data.

The purpose of this research is to construct accurate finite difference schemes for incompressible unsteady flow simulations such as LES (large-eddy simulation) or DNS (direct numerical simulation). In this report, conservation properties of the continuity, momentum, and kinetic energy equations for incompressible flow are specified as analytical requirements for a proper set of discretized equations. Existing finite difference schemes in staggered grid systems are checked for satisfaction of the requirements. Proper higher order accurate finite difference schemes in a staggered grid system are then proposed. Plane channel flow is simulated using the proposed fourth order accurate finite difference scheme and the results compared with those of the second order accurate Harlow and Welch algorithm.

... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS BIOENERGY PROGRAM § 1424.13... misrepresentation, scheme, or device, or to any other person as a result of the bioenergy producer's actions,...

An upwind total variation diminishing (TVD) scheme and a predictor-corrector symmetric TVD scheme were used to numerically simulate the blast wave diffraction on a stationary object. The objective is to help design an optimum configuration so that lateral motion is minimized and at the same time vortex shedding and flow separation are reduced during a blast wave encounter. Results are presented for a generic configuration for both a coarse grid and a fine grid to illustrate the global and local diffraction flow fields. Numerical experiments for the shock wave reflection on a wedge are also included to validate the current approach. Numerical study indicated that these TVD schemes are more stable and produced higher shock resolution than classical shock capturing methods such as the explicit MacCormack scheme.

Entanglement is a powerful resource for studying quantum effects in macroscopic objects and for quantum information processing. Here, we show that robust entanglement between cavity modes with distinct frequencies can be generated via a mechanical dark mode in an optomechanical quantum interface. Due to quantum interference, the effect of the mechanical noise is cancelled in a way similar to electromagnetically induced transparency. We derive the entanglement in the strong coupling regime by solving the quantum Langevin equation using a perturbation theory approach. The entanglement in the adiabatic scheme is then compared with the entanglement in the stationary state scheme. Given the robust entanglement schemes and our previous schemes on quantum wavelength conversion, the optomechanical interface thus forms an effective building block for a quantum network. This work is supported by the DARPA-ORCHID program, NSF-DMR-0956064, NSF-CCF-0916303, and NSF-COINS.

This paper proposes an improved image coding scheme based on vector quantization. It is well known that the image quality of a VQ-compressed image is poor when a small-sized codebook is used. In order to solve this problem, the mean value of the image block is taken as an alternative block encoding rule to improve the image quality in the proposed scheme. To cut down the storage cost of compressed codes, a two-stage lossless coding approach including the linear prediction technique and the Huffman coding technique is employed in the proposed scheme. The results show that the proposed scheme achieves better image qualities than vector quantization while keeping low bit rates.
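The alternative block-encoding rule can be sketched as follows; the threshold value, toy codebook, and tuple output format are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def encode_block(block, codebook, thresh=100.0):
    """VQ encode with a mean-value fallback: if even the best codeword is a
    poor match (squared error above `thresh`), transmit the block mean
    instead -- a sketch of the alternative block-encoding rule."""
    errs = ((codebook - block.ravel()) ** 2).sum(axis=1)
    k = int(errs.argmin())
    if errs[k] <= thresh:
        return ("vq", k)                   # codebook index
    return ("mean", float(block.mean()))   # mean-value fallback

codebook = np.array([[0, 0, 0, 0], [10, 10, 10, 10]], float)
close = np.array([[9, 10], [11, 10]], float)   # near codeword 1
far = np.array([[50, 60], [70, 80]], float)    # matches nothing in the codebook
print(encode_block(close, codebook), encode_block(far, codebook))
```

The second-stage lossless coder (linear prediction plus Huffman coding) would then be applied to the resulting stream of indices and mean values to cut the storage cost of the compressed codes.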

An entropy-bounded Discontinuous Galerkin (EBDG) scheme is proposed in which the solution is regularized by constraining the entropy. The resulting scheme is able to stabilize the solution in the vicinity of discontinuities and retains the optimal accuracy for smooth solutions. The properties of the limiting operator according to the entropy-minimum principle are proved, and an optimal CFL criterion is derived. We provide a rigorous description for locally imposing entropy constraints to capture multiple discontinuities. Significant advantages of the EBDG scheme are its general applicability to arbitrary high-order elements and its simple implementation for multi-dimensional configurations. Numerical tests confirm the properties of the scheme, with particular focus on its robustness in treating discontinuities on arbitrary meshes.

DISGUISED IN AN OCEANIC CAMOUFLAGE PAINT SCHEME, EVERGREEN MAKES HER WAY THROUGH THE NORTH ATLANTIC DURING WORLD WAR II. HER 3" GUN IS VISIBLE BEHIND THE STACK - U.S. Coast Guard Cutter EVERGREEN, New London, New London County, CT

Numerical simulations of acoustic waves in a shear layer and in an idealized combustion chamber using high resolution Total Variation Diminishing (TVD) schemes have been carried out to study the effects of inherent scheme dissipation and dispersion errors on this class of problems. The numerical results are compared against available exact solutions to quantify these errors. Several popular TVD limiters widely used in the Computational Fluid Dynamics (CFD) community have been assessed. The Osher-Chakravarthy limiters are modified so that they can be used in explicit schemes. Among all the limiters investigated, the Osher-Chakravarthy third-order limiter performed the best. It is also found that all TVD schemes have exceptionally small dispersive errors.

A recently proposed solution to the renormalization-scheme ambiguity in perturbation theory is critically analyzed and shown to possess another kind of ambiguity closely related to the one it is supposed to cure.

The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a coarser discretization to be used for smoothly varying state variables and a second, finer discretization for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two two-timescale discretization architecture schemes are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement, with differences of less than 0.5 percent. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

Since, in general, non-orthogonal states cannot be cloned, any eavesdropping attempt in a quantum communication scheme using non-orthogonal states as carriers of information introduces some errors in the transmission, leading to the possibility of detecting the spy. Usually, orthogonal states are not used in quantum cryptography schemes since they can be faithfully cloned without altering the transmitted data. Nevertheless, L. Goldberg and L. Vaidman [Phys. Rev. Lett. 75, 1239 (1995)] proposed a protocol in which, even if the data exchange is realized using two orthogonal states, any attempt to eavesdrop is detectable by the legal users. In this scheme the orthogonal states are superpositions of two localized wave packets which travel along separate channels, i.e., two different paths inside a balanced Mach-Zehnder interferometer. Here we present an experiment realizing this scheme.

A novel scheme for dam operation has been developed based on the artificial neural network approach to predict the reservoir management and hydrologic effects in response to climate variation and change. The scheme is built upon the historic management information of operating each dam, including climate, ecology properties and attributes (e.g., storage, surface area) for all relevant reservoirs. The scheme implicitly introduces the relationship between water demand and supply for downstream fluvial ecosystem, agriculture irrigation, and hydropower. This study will first present the fundamental formulation of the predictive scheme along with detailed analysis of the historical management data, and then evaluate the performance for its application in the Colorado River basin. Caveats and merits will also be discussed.

... Preventing Collisions at Sea, 1972. Adjustment may be in the form of a temporary traffic lane shift, a temporary suspension of a section of the scheme, a temporary precautionary area overlaying a lane, or...

In this paper, we propose a novel feature extraction scheme for texture classification, in which the texture features are extracted by a two-level hybrid scheme that integrates two statistical techniques of texture analysis. In the first step, low-level features are extracted by Gabor filters and encoded with feature map indices using Kohonen's SOFM algorithm. In the next step, the encoded feature images are processed by Gabor filter, Gaussian Markov random field (GMRF), and grey level co-occurrence matrix (GLCM) methods to extract high-level features. By integrating two methods of texture analysis in a cascaded manner, we obtain texture features which achieve high accuracy in the classification of texture patterns. The proposed schemes were tested on real microtextures, and the Gabor-GMRF scheme achieved a 10 percent increase in recognition rate compared to the result obtained by simple Gabor filtering.

We observe the performance of three strategy evaluation schemes — the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game — in a stock market where the price is exogenously determined. The price is either directly adopted from real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme's success is quantified by the average wealth accumulated by the traders equipped with that scheme. The wealth game, as it learns from the history, generally shows good performance unless the market is highly unpredictable. The majority game is relatively successful in a trendy market dominated by long periods of sustained price increasing or decreasing. On the other hand, the minority game is suitable for a market with persistent zig-zag price patterns. These observations agree with our intuition and support the viability of the wealth game as a strategy evaluation scheme in typical markets.

We propose an experimentally feasible scheme to teleport an unknown quantum state onto the vibrational degree of freedom of a macroscopic mirror. The quantum channel between the two parties is established by exploiting radiation pressure effects.

We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
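For concreteness, here is a minimal Rice coder: the Golomb code with parameter m = 2**k, which is the special case of the Gallager-van Voorhis construction referred to above (the function names and string-of-bits representation are ours):

```python
def rice_encode(n, k):
    """Rice code for a non-negative integer n: unary quotient terminated
    by '0', then the k-bit binary remainder (Golomb code with m = 2**k)."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    q = bits.index("0")                      # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k] or "0", 2) # k-bit remainder
    return (q << k) | r

print(rice_encode(9, 2))  # quotient 2, remainder 01 → '11001'
```

Varying k selects the different Rice subcodes; the optimality result says that for a geometric source there is always a Gallager-van Voorhis code at least as good as the best Rice choice.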

Packet multiplexing has been proposed as a practical method in optical time-division multiplexing. One reasonable approach is to use a packet compression-expansion scheme at the node to match the transmission rate between the ultrafast backbone optical network and slower (electrical) networks. This scheme is superior to the conventional bit interleave scheme in that it does not require an ultrafast switch at the bit rate; instead, switching at the slower header bit rate and/or packet rate is sufficient. In contrast to the bit interleave, we call this scheme compressed optical packet multiplexing (COPM). Here we present an experimental demonstration of an all-optical COPM with use of a 155-Mbit/s video signal that is optically compressed into a 2.64-Gbit/s optical signal and optically expanded back to the original rate with a reasonable bit error rate.

Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased estimating schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large sample properties for the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets on various sampling schemes including length-bias sampling, the case-cohort design and its variants.

The efficiency and accuracy of several time integration schemes are investigated for the unsteady Navier-Stokes equations. This study focuses on the efficiency of higher-order Runge-Kutta schemes in comparison with the popular Backward Differencing Formulations. For this comparison an unsteady two-dimensional laminar flow problem is chosen, i.e., flow around a circular cylinder at Re = 1200. It is concluded that for realistic error tolerances (smaller than 10(exp -1)) fourth- and fifth-order Runge-Kutta schemes are the most efficient. For reasons of robustness and computer storage, the fourth-order Runge-Kutta method is recommended. The efficiency of the fourth-order Runge-Kutta scheme exceeds that of the second-order Backward Difference Formula by a factor of 2.5 at engineering error tolerance levels (10(exp -1) to 10(exp -2)). Efficiency gains are more dramatic at smaller tolerances.
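The fourth-order convergence that underlies the efficiency argument is easy to check numerically; this sketch applies the classical RK4 scheme to a scalar model problem, not the paper's Navier-Stokes solver:

```python
import numpy as np

def rk4_step(f, t, y, h):
    # classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(f, y0, t1, n):
    y, h = y0, t1 / n
    for i in range(n):
        y = rk4_step(f, i * h, y, h)
    return y

f = lambda t, y: -y                          # y' = -y, exact solution exp(-t)
e1 = abs(solve(f, 1.0, 1.0, 10) - np.exp(-1))
e2 = abs(solve(f, 1.0, 1.0, 20) - np.exp(-1))
print(e1 / e2)  # ≈ 16: halving h cuts the error by 2**4
```

Fourth-order accuracy is what lets RK4 meet a given tolerance with far fewer, larger steps than a second-order formula, which is the source of the factor-of-2.5 efficiency gain quoted above.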

A visual communication scheme based on evolutionary spatial 2×2 games is proposed in this paper. Self-organizing patterns induced by complex interactions between competing individuals are exploited for hiding and transmitting secret visual information. Properties of the proposed communication scheme are discussed in detail. It is shown that the hiding capacity of the system (the minimum size of the detectable primitives and the minimum distance between two primitives) is sufficient for the effective transmission of digital dichotomous images. It is also demonstrated that the proposed communication scheme is resilient to time-backwards and plain-image attacks, and is highly sensitive to perturbations of the private and public keys. Several computational experiments are used to demonstrate the effectiveness of the proposed communication scheme.
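A generic spatial 2×2 game illustrates the kind of self-organizing dynamics such a scheme builds on; this is a standard imitate-the-best prisoner's dilemma on a periodic lattice, an assumption on our part rather than the paper's update rules or key scheme:

```python
import numpy as np

def step(strat, b):
    """One synchronous update of a spatial 2x2 game: each cell plays its four
    von Neumann neighbours, then imitates its best-scoring neighbour.
    strat: 0/1 array (0 = defect, 1 = cooperate); b = temptation payoff."""
    payoff = np.zeros(strat.shape, dtype=float)
    shifts = [(0, 1), (0, -1), (1, 1), (1, -1)]
    for ax, sh in shifts:
        nb = np.roll(strat, sh, axis=ax)
        # payoffs: C vs C -> 1, D vs C -> b, anything vs D -> 0
        payoff += np.where((strat == 1) & (nb == 1), 1.0,
                  np.where((strat == 0) & (nb == 1), b, 0.0))
    best, best_pay = strat.copy(), payoff.copy()
    for ax, sh in shifts:
        nb_pay = np.roll(payoff, sh, axis=ax)
        nb_str = np.roll(strat, sh, axis=ax)
        take = nb_pay > best_pay              # strictly better neighbour wins
        best = np.where(take, nb_str, best)
        best_pay = np.where(take, nb_pay, best_pay)
    return best

grid = np.ones((6, 6), int)
grid[3, 3] = 0                                # a single defector
print(step(grid, b=1.8).sum())                # the defector spreads to its neighbours
```

Iterating such updates produces the self-organizing patterns that the scheme exploits as a carrier for hidden visual information.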

The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.

The relationship between two approaches to the construction of genuinely two-dimensional upwind advection schemes is established. One of these approaches is of the control volume type, applicable on structured Cartesian meshes. It resulted in compact high resolution schemes capable of maintaining second-order accuracy in both homogeneous and inhomogeneous cases. The other is the fluctuation splitting approach, which is well suited for triangular (and possibly unstructured) meshes. Understanding the relationship between these two approaches allows us to formulate here a new fluctuation splitting high resolution scheme (i.e., one permitting the use of artificial compression while maintaining the positivity property). This scheme is shown to be linearity preserving in inhomogeneous as well as homogeneous cases.

A novel Reed-Solomon (RS) block turbo code (BTC) coding scheme of RS(63,58)×RS(63,58) for optical communications is proposed. The simulation results show that the net coding gain (NCG) of this scheme at the sixth iteration exceeds that of other coding schemes at the third iteration for a bit error rate (BER) of 10(exp -12). Furthermore, the novel RS BTC has a shorter component code and faster encoding and decoding. Therefore, the novel RS BTC coding scheme is well suited to high-speed long-haul optical communication systems, and the novel RS BTC can be regarded as a candidate for the super forward error correction (super-FEC) code. Moreover, the encoding/decoding design and implementation of the novel RS BTC are also presented.

Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
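Both ingredients — the univariate Horner form and the greedy most-occurring-variable rule — are small enough to sketch directly (representing multivariate terms as lists of variable names is our simplification):

```python
from collections import Counter

def horner(coeffs, x):
    # evaluate c[0] + c[1]*x + ... + c[n]*x**n with only n multiplications
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def most_occurring(terms):
    """Greedy rule for the multivariate case: factor out the variable that
    occurs in the most terms (each term given as a list of variable names)."""
    cnt = Counter(v for t in terms for v in set(t))
    return cnt.most_common(1)[0][0]

print(horner([1, 2, 3], 2))                                    # 1 + 2*2 + 3*4 = 17
print(most_occurring([["x", "y"], ["x"], ["x", "z"], ["y"]]))  # x
```

Monte Carlo tree search, as described above, replaces the fixed greedy choice with a guided exploration over variable orderings, trading search time for cheaper evaluation schemes.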

Engquist and Osher (1980) have introduced a finite difference scheme for solving the transonic small disturbance equation, taking into account cases in which only compression shocks are admitted. Osher et al. (1983) studied a class of schemes for the full potential equation. It is proved that these schemes satisfy a new discrete 'entropy inequality' which rules out expansion shocks. However, the conducted analysis is restricted to steady two-dimensional flows. The present investigation is concerned with the adoption of a heuristic approach. The full potential equation in conservation form is solved with the aid of a modified artificial density method, based on flux biasing. It is shown that, with the current scheme, expansion shocks are not possible.

The advantages and disadvantages of four new high-resolution difference schemes, namely the von Neumann-Richtmyer, Godunov, MUSCL, and Glimm schemes, for mathematically representing physical conditions in compressible gas flows are compared.

The Millennium Development Goal (MDG) target to reduce the proportion of people without sustainable access to safe drinking water by the year 2015 has been met as of 2010, but huge disparities exist. Some regions, particularly Sub-Saharan Africa, are lagging behind; it is also in this region where up to 30% of the rural schemes are not functional at any given time. There is a need for more studies on the factors affecting sustainability and on the measures which, when implemented, will improve the sustainability of rural water schemes. The main objective of this study was to assess the main factors affecting the sustainability of rural water schemes in Swaziland using a Multi-Criteria Analysis Approach. The main factors considered were: financial, social, technical, environmental and institutional. The study was done in the Lubombo region. Fifteen functional water schemes in 11 communities were studied. Data was collected using questionnaires, a checklist and a focused group discussion guide. A total of 174 heads of households were interviewed. The Statistical Package for Social Sciences (SPSS) was used to analyse the data and to calculate sustainability scores for the water schemes. SPSS was also used to classify sustainability scores according to sustainability categories: sustainable, partially sustainable and non-sustainable. The averages of the ratings for the different sub-factors studied and the sustainability scores for the sustainable, partially sustainable and non-sustainable schemes were then computed and compared to establish the main factors influencing sustainability of the water schemes. The results indicated technical and social factors as most critical, while financial and institutional factors, although important, played a lesser role. Factors which contributed to the sustainability of water schemes were: functionality; design flow; water fetching time; ability to meet additional demand; use by population; equity; participation in decision making on operation and

Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great

The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…

In this paper we discuss the issue of conservation and convergence to weak solutions of several global schemes, including the commonly used compact schemes and spectral collocation schemes, for solving hyperbolic conservation laws. It is shown that such schemes, if convergent boundedly almost everywhere, will converge to weak solutions. The results are extensions of the classical Lax-Wendroff theorem concerning conservative schemes.
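
The conservation property at issue can be made concrete with a minimal flux-difference scheme for Burgers' equation; the Lax-Friedrichs-type flux and the grid parameters are illustrative choices, not taken from the paper.

```python
# Minimal conservative (flux-difference) scheme for Burgers' equation
# u_t + (u^2/2)_x = 0 on a periodic grid, with a Lax-Friedrichs-type flux.
# Writing the update as u_j^{n+1} = u_j^n - (dt/dx) * (F_{j+1/2} - F_{j-1/2})
# makes the scheme conservative: the flux differences telescope, so the
# discrete total mass is preserved.

def f(u):                                  # physical flux for Burgers
    return 0.5 * u * u

def lax_friedrichs_step(u, dt, dx, alpha):
    n = len(u)
    # numerical flux F[j] lives at face j+1/2, between cells j and j+1
    F = [0.5 * (f(u[j]) + f(u[(j + 1) % n]))
         - 0.5 * alpha * (u[(j + 1) % n] - u[j]) for j in range(n)]
    return [u[j] - (dt / dx) * (F[j] - F[j - 1]) for j in range(n)]

n, dx, dt = 50, 1.0 / 50, 0.005
u = [1.0 if 10 <= j < 25 else 0.0 for j in range(n)]   # square pulse
alpha = max(abs(v) for v in u)                         # max wave speed
mass0 = sum(u) * dx
for _ in range(40):
    u = lax_friedrichs_step(u, dt, dx, alpha)
mass1 = sum(u) * dx                                    # equals mass0
```

With the update in flux form, `mass1` matches `mass0` to rounding error regardless of the shocks that develop, which is exactly the discrete conservation that the Lax-Wendroff theorem requires of the scheme.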

The paper presents a method, called CONDIF, which modifies the CDS (central-difference scheme) by introducing a controlled amount of numerical diffusion based on the local gradients. The numerical diffusion can be adjusted to be negligibly low for most problems. CONDIF results are significantly more accurate than those obtained from the hybrid scheme when the Peclet number is very high and the flow is at large angles to the grid.
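
The following is only a schematic illustration of adding a controlled, gradient-dependent amount of numerical diffusion to a central-difference face value; it is not the paper's actual CONDIF formulation, and the blending factor and steepness test are assumptions.

```python
# Schematic only: a convective face value between nodes C (upwind) and D
# (downwind), with U the far-upstream node. The base value is the pure
# central-difference (CDS) average; where the normalized local gradient
# indicates a steep profile, a small controlled fraction of upwind bias
# (i.e. numerical diffusion) is blended in.
def face_value(phi_U, phi_C, phi_D, eps=0.05):
    central = 0.5 * (phi_C + phi_D)            # CDS face value
    denom = phi_D - phi_U
    ratio = (phi_C - phi_U) / denom if abs(denom) > 1e-12 else 1.0
    steep = abs(ratio - 0.5) > 0.25            # far from locally linear
    blend = eps if steep else 0.0              # negligible on smooth data
    return (1.0 - blend) * central + blend * phi_C
```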

Cluster states can be exploited for some tasks such as topological one-way computation, quantum error correction, teleportation and dense coding. In this paper, we investigate and propose an arbitrated quantum signature scheme with cluster states. The cluster states are used for quantum key distribution and quantum signature. The proposed scheme can achieve an efficiency of 100 %. Finally, we also discuss its security against various attacks.

The laser ion source is one of the most powerful heavy ion sources. However, it is difficult to obtain good stability and to control its intense current. To overcome these difficulties, we proposed a new beam injection scheme called the 'direct plasma injection scheme'. Following this, the scheme was established as a way to provide various species with the desired charge state as an intense accelerated beam. Carbon, aluminum and iron beams have been tested.

We consider initial-boundary-value problems for systems of conservation laws and design entropy stable finite difference schemes to approximate them. The schemes are shown to be entropy stable for a large class of systems that are equipped with a symmetric splitting, derived from the entropy formulation. Numerical examples for the Euler equations of gas dynamics are presented to illustrate the robust performance of the proposed method.
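
For the scalar Burgers' equation, the entropy-conservative two-point flux for the square entropy is classical (Tadmor's flux), and subtracting symmetric dissipation yields an entropy stable flux; this scalar sketch only illustrates the flavor of the construction, not the paper's system-level schemes.

```python
# Scalar sketch for Burgers' equation u_t + (u^2/2)_x = 0: the two-point
# flux below is entropy conservative for the square entropy U = u^2/2, and
# subtracting symmetric dissipation makes the resulting scheme entropy stable.
def flux_ec(ul, ur):
    """Entropy-conservative two-point flux; consistent: flux_ec(u, u) = u*u/2."""
    return (ul * ul + ul * ur + ur * ur) / 6.0

def flux_es(ul, ur, alpha):
    """Entropy-stable flux: entropy-conservative part plus dissipation."""
    return flux_ec(ul, ur) - 0.5 * alpha * (ur - ul)
```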

All-electron B3LYP harmonic frequencies of Ge2H5 and Ge2H6 are computed for several choices of grid and using both the Becke and the Stratmann, Scuseria, and Frisch atomic partition functions (weight scheme). For large grids, the results are independent of the weighting scheme. The lowest frequency mode is much more stable with respect to the number of grid points when the Stratmann, Scuseria, and Frisch weights are used.

We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.

In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of image regions and effectively reduces the computational burden in the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.

An expert system based intelligent control scheme is being developed for the effective control and full automation of bioreactor systems in space. The scheme developed will have the capability to capture information from various resources including heuristic information from process researchers and operators. The knowledge base of the expert system should contain enough expertise to perform on-line system identification and thus be able to adapt the controllers accordingly with minimal human supervision.

We describe a general optimization procedure both for maximizing the resolution characteristics of existing finite differencing schemes and for designing finite difference schemes that will meet the error tolerance requirements of numerical solutions. This is a generalization of the compact scheme introduced by Lele, in which the resolution is improved for a single one-dimensional spatial derivative, whereas in the present approach the complete scheme, after spatial and temporal discretizations, is optimized over a range of parameters of the scheme and the governing equations. The approach is to linearize and Fourier analyze the discretized equations to check the resolving power of the scheme for various wave number ranges in the solution, and to optimize the resolution to satisfy the requirements of the problem. This represents a constrained nonlinear optimization problem which can be solved to obtain the nodal weights of the discretization. An objective function is defined in the parametric space of wave numbers, Courant number, Mach number and other quantities of interest. Typical criteria for defining the objective function include the maximization of the resolution of high wave numbers for acoustic and electromagnetic wave propagation and turbulence calculations. The procedure is being tested on off-design conditions of non-uniform meshes, non-periodic boundary conditions, and non-constant wave speeds for scalar equations and systems of equations. This includes the solution of wave equations and Euler equations using a conventional scheme with and without optimization, and the design of an optimum scheme for a specified error tolerance.
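
The Fourier resolving-power check at the heart of such optimization can be sketched for first-derivative stencils via the modified wavenumber; the two stencils below are the standard second- and fourth-order central differences.

```python
# Fourier (von Neumann) resolving-power check for first-derivative stencils:
# the modified wavenumber k'(k) of a scheme should stay close to the exact
# k over the wave number range of interest; an optimizer would tune the
# stencil weights to minimize |k' - k| there.
import cmath, math

def modified_wavenumber(weights, k):
    """k' for d/dx ~ (1/dx) * sum_m w_m * u_{j+m} on a unit-spacing grid."""
    symbol = sum(w * cmath.exp(1j * m * k) for m, w in weights.items())
    return (symbol / 1j).real                  # exact differentiation: k' = k

central2 = {-1: -0.5, 1: 0.5}                  # 2nd-order central difference
central4 = {-2: 1 / 12, -1: -2 / 3, 1: 2 / 3, 2: -1 / 12}   # 4th-order central

k = 1.0
err2 = abs(modified_wavenumber(central2, k) - k)   # k' = sin(k) for central2
err4 = abs(modified_wavenumber(central4, k) - k)   # smaller resolution error
```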

A key idea in finite difference weighted essentially non-oscillatory (WENO) schemes is a combination of lower order fluxes to obtain a higher order approximation. The choice of the weight for each candidate stencil, which is a nonlinear function of the grid values, is crucial to the success of WENO schemes. For the system case, WENO schemes are based on local characteristic decompositions and flux splitting to avoid spurious oscillation. But the cost of computing the nonlinear weights and local characteristic decompositions is very high. In this paper, we investigate hybrid schemes of WENO schemes with high order up-wind linear schemes using different discontinuity indicators, and explore the possibility of avoiding the local characteristic decompositions and the nonlinear weights for part of the procedure, hence reducing the cost while still maintaining non-oscillatory properties for problems with strong shocks. The idea is to identify discontinuities by a discontinuity indicator, then reconstruct the numerical flux by WENO approximation in discontinuous regions and by up-wind linear approximation in smooth regions. These indicators are mainly based on the troubled-cell indicators for the discontinuous Galerkin (DG) method which are listed in the paper by Qiu and Shu (J. Qiu, C.-W. Shu, A comparison of troubled-cell indicators for Runge-Kutta discontinuous Galerkin methods using weighted essentially non-oscillatory limiters, SIAM Journal on Scientific Computing 27 (2005) 995-1013). The emphasis of the paper is on comparing the performance of the hybrid scheme with different indicators, with the objective of obtaining efficient and reliable indicators that give better performance of the hybrid scheme and save computational cost. Detailed numerical studies in one- and two-dimensional cases are performed, addressing the issues of efficiency (less CPU time and more accurate numerical solutions) and the non-oscillatory property.
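
A stripped-down version of the hybrid idea, for linear advection with positive wind: a cheap second-difference indicator flags non-smooth cells, which then fall back to first-order upwind in place of the (more expensive) WENO branch used in the paper. The indicator and its threshold are illustrative assumptions, not one of the compared troubled-cell indicators.

```python
# Stripped-down hybrid reconstruction for linear advection with positive wind:
# a scaled second-difference indicator flags non-smooth cells; flagged cells
# use robust first-order upwind, smooth cells use the undissipative
# third-order linear upwind-biased face value (standing in for WENO).
def indicator(ul, uc, ur, tol=0.5):
    """True if cell `uc` looks discontinuous (toy indicator, assumed form)."""
    return abs(ur - 2.0 * uc + ul) > tol * (abs(ur - uc) + abs(uc - ul) + 1e-12)

def face_state(ul, uc, ur):
    """Left state at the face between uc and ur for positive advection speed."""
    if indicator(ul, uc, ur):
        return uc                                   # 1st-order upwind fallback
    return (-ul + 5.0 * uc + 2.0 * ur) / 6.0        # 3rd-order linear upwind
```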

The variational assimilation theory is generally based on unbiased observations. In practice, however, almost all observations suffer from biases arising from observational instruments, the radiative transfer operator, preconditioning of data, and so on. Therefore, a bias correction scheme is indispensable. The current scheme for radiance bias correction in the GRAPES 3DVar system is an offline scheme. It is actually a static correction of the radiance bias before the process of cost function minimization. Considering its effects on forecast results, this kind of scheme has some shortcomings. Thus, this study provides a variational bias correction (VarBC) scheme for the GRAPES 3DVar system following Dee's idea. In the VarBC scheme, the observation operator is modified and a new control variable is defined by taking the predictor coefficients as the control parameters. According to the features of GRAPES 3DVar, an incremental formulation is applied and the original bias correction scheme is maintained in the actual processing of observations. The VarBC is designed to co-exist with the original scheme, because it is a dynamic revision of the observational operator on the basis of the old method, i.e., it adjusts the model state vector along with the control parameters to an unbiased state in the process of minimization, and the assimilation system automatically remains consistent with the available information. Preliminary experimental results show that the mean departures of background-minus-observation and analysis-minus-observation are reduced as expected. In a case study of the heavy rainfall that happened in South China on 11-13 June 2008, the 500-hPa geopotential height is better simulated using the analyzed field from the VarBC as the initial condition.
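
The modified observation operator of a VarBC scheme has the generic form H~(x, beta) = H(x) + sum_i beta_i * p_i, with the predictor coefficients beta treated as extra control variables; the sketch below uses made-up predictor values purely for illustration.

```python
# Generic VarBC-style observation operator: H(x) is augmented with a linear
# bias model over predictors, and the coefficients beta join the control
# vector of the minimization. The numbers below are made up.
def biased_obs_operator(h_x, beta, predictors):
    """H~(x, beta) = H(x) + sum_i beta_i * p_i."""
    return h_x + sum(b * p for b, p in zip(beta, predictors))

# e.g. h_x = a simulated brightness temperature, predictors = [1 (constant),
# a layer thickness]; beta is updated by the assimilation, not prescribed.
corrected = biased_obs_operator(250.0, [0.5, -0.1], [1.0, 2.0])
```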

Solutions of many physical problems have salient local features that are qualitatively known a priori (for example, singularities at point sources, edge and corners; boundary layers; derivative jumps at material interfaces; strong dipole field components near polarized spherical particles; cusps of electronic wavefunctions at the nuclei; electrostatic double layers around colloidal particles, etc.) The known methods capable of providing flexible local approximation of such features include the generalized finite element - partition of unity method, special variational-difference schemes in broken Sobolev spaces, and a few other specialized techniques. In the proposed new class of Flexible Local Approximation MEthods (FLAME), a desirable set of local approximating functions (such as cylindrical or spherical harmonics, plane waves, harmonic polynomials, etc.) defines a finite difference scheme on a chosen grid stencil. One motivation is to minimize the notorious 'staircase' effect at curved and slanted interface boundaries. However, the new approach has much broader applications. As illustrative examples, the paper presents arbitrarily high order 3-point schemes for the 1D Schroedinger equation and a 1D singular equation, schemes for electrostatic interactions of colloidal particles, electromagnetic wave propagation and scattering, plasmon resonances. Moreover, many classical finite difference schemes, including the Collatz 'Mehrstellen' schemes, are direct particular cases of FLAME.

Security techniques like cryptography and authentication can fail to protect a network once a node is compromised. Hence, trust establishment continuously monitors and evaluates node behavior to detect malicious and compromised nodes. However, just like other security schemes, trust establishment is also vulnerable to attack. Moreover, malicious nodes might misbehave intelligently to trick trust establishment schemes. Unfortunately, attack-resistance and robustness issues with trust establishment schemes have not received much attention from the research community. Considering the vulnerability of trust establishment to different attacks and the unique features of sensor nodes in wireless sensor networks, we propose a lightweight and robust trust establishment scheme. The proposed trust scheme is lightweight thanks to a simple trust estimation method. The comprehensiveness and flexibility of the proposed trust estimation scheme make it robust against different types of attack and misbehavior. Performance evaluation under different types of misbehavior and on-off attacks shows that the detection rate of the proposed trust mechanism is higher and more stable compared to other trust mechanisms. PMID:25806875
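
A hypothetical sketch of a lightweight trust estimate in this spirit (not the paper's actual formulas): trust is a smoothed success indicator in which misbehaviour is weighted more heavily, so that an on-off attacker cannot quickly rebuild trust after an attack phase.

```python
# Hypothetical lightweight trust update (assumed form, not the paper's):
# trust is an exponentially smoothed success indicator, with misbehaviour
# weighted `penalty` times more heavily than good behaviour so that an
# on-off attacker cannot quickly rebuild trust after misbehaving.
def update_trust(trust, success, alpha=0.1, penalty=3.0):
    """trust in [0, 1]; success is True for a well-behaved interaction."""
    target = 1.0 if success else 0.0
    step = alpha if success else alpha * penalty    # asymmetric smoothing
    return trust + step * (target - trust)

t = 0.5
for ok in [True, True, False, True]:                # one on-off misbehaviour
    t = update_trust(t, ok)
```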

Background & objectives: Quality of care is an important determinant for utilizing health services. In India, the quality of care in most health services is poor. The government recognizes this and has been working on both supply and demand aspects. In particular, it is promoting community health insurance (CHI) schemes, so that patients can access quality services. This observational study was undertaken to measure the level of satisfaction among insured and uninsured patients in two CHI schemes in India. Methods: Patient satisfaction was measured, which is an outcome of good quality care. Two CHI schemes, Action for Community Organisation, Rehabilitation and Development (ACCORD) and Kadamalai Kalanjiam Vattara Sangam (KKVS), were chosen. Randomly selected, insured and uninsured households were interviewed. The household where a patient was admitted to a hospital was interviewed in depth about the health seeking behaviour, the cost of treatment and the satisfaction levels. Results: It was found that at both ACCORD and KKVS, there was no significant difference in the levels of satisfaction between the insured and uninsured patients. The main reasons for satisfaction were the availability of doctors and medicines and the recovery by the patient. Interpretation & conclusions: Our study showed that insured hospitalized patients did not have significantly higher levels of satisfaction compared to uninsured hospitalized patients. If CHI schemes want to improve the quality of care for their clients, so that they adhere to the scheme, the scheme managers need to negotiate actively for better quality of care with empanelled providers. PMID:21321418

In a previous paper, Morel and Montry used a Galerkin-based diffusion analysis to define a particular weighted diamond angular discretization for S{sub n} calculations in curvilinear geometries. The weighting factors were chosen to ensure that the Galerkin diffusion approximation was preserved, which eliminated the discrete-ordinates flux dip. It was also shown that the step and diamond angular differencing schemes, which both suffer from the flux dip, do not preserve the diffusion approximation in the Galerkin sense. In this paper we re-derive the Morel and Montry weighted diamond scheme using a formal asymptotic diffusion-limit analysis. The asymptotic analysis yields more information than the Galerkin analysis and demonstrates that the step and diamond schemes do in fact formally preserve the diffusion limit to leading order, while the Morel and Montry weighted diamond scheme preserves it to first order, which is required for full consistency in this limit. Nonetheless, the fact that the step and diamond differencing schemes preserve the diffusion limit to leading order suggests that the flux dip should disappear as the diffusion limit is approached for these schemes. Computational results are presented that confirm this conjecture. We further conjecture that preserving the Galerkin diffusion approximation is equivalent to preserving the asymptotic diffusion limit to first order.

In this paper, some elegant extended finite element method (XFEM) schemes for level set method structural optimization are proposed. Firstly, two-dimensional (2D) and three-dimensional (3D) XFEM schemes with a partition integral method are developed, and numerical examples are employed to evaluate their accuracy, indicating that an accurate analysis result can be obtained on the structural boundary. Furthermore, methods for improving the computational accuracy and efficiency of XFEM are studied, including an XFEM integral scheme without quadrature sub-cells and a higher order element XFEM scheme. Numerical examples show that the XFEM scheme without quadrature sub-cells can yield similar accuracy of structural analysis while prominently reducing the time cost, and that higher order XFEM elements can improve the computational accuracy of structural analysis in the boundary elements, but at increasing time cost. Therefore, the balance of time cost between the FE system scale and the order of the elements needs to be discussed. Finally, the reliability and advantages of the proposed XFEM schemes are illustrated with several 2D and 3D mean compliance minimization examples that are widely used in the recent literature of structural topology optimization. All numerical results demonstrate that the proposed XFEM is a promising structural analysis approach for structural optimization with the level set method.

Blind signature schemes allow users to obtain the signature of a message while the signer learns neither the message nor the resulting signature. Therefore, blind signatures have been used to realize cryptographic protocols providing the anonymity of some participants, such as secure electronic payment systems and electronic voting systems. A fair blind signature is a form of blind signature in which the anonymity can be removed with the help of a trusted entity, when this is required for legal reasons. Recently, a fair quantum blind signature scheme was proposed and thought to be safe. In this paper, we first point out that there exists a new attack on fair quantum blind signature schemes. The attack shows that, if any sender has intercepted any valid signature, he (she) can counterfeit a valid signature for any message and cannot be traced through the counterfeited blind signature. Then, we construct a fair quantum blind signature scheme by improving the existing one. The proposed fair quantum blind signature scheme can resist the preceding attack. Furthermore, we demonstrate the security of the proposed fair quantum blind signature scheme and compare it with the other one.

We present a kinetic numerical scheme for the relativistic Euler equations, which describe the flow of a perfect fluid in terms of the particle density n, the spatial part of the four-velocity u and the pressure p. The kinetic approach is very simple in the ultra-relativistic limit, but may also be applied to more general cases. The basic ingredients of the kinetic scheme are the phase-density in equilibrium and the free flight. The phase-density generalizes the non-relativistic Maxwellian for a gas in local equilibrium. The free flight is given by solutions of a collision free kinetic transport equation. The scheme presented here is an explicit method and unconditionally stable. We establish that the conservation laws of mass, momentum and energy as well as the entropy inequality are everywhere exactly satisfied by the solution of the kinetic scheme. For that reason we obtain weak admissible Euler solutions including arbitrarily complicated shock interactions. In the numerical case studies the results obtained from the kinetic scheme are compared with the first order upwind and centered schemes.

Convective processes profoundly affect the global water and energy balance of our planet but remain a challenge for global climate modeling. Here we develop and investigate the suitability of a unified convection scheme, capable of handling both shallow and deep convection, to simulate cases of tropical oceanic convection, mid-latitude continental convection, and maritime shallow convection. To that aim, we employ large-eddy simulations (LES) as a benchmark to test and refine a unified convection scheme implemented in the Single-Column Community Atmosphere Model (SCAM). Our approach is motivated by previous cloud-resolving modeling studies, which have documented the gradual transition between shallow and deep convection and its possible importance for the simulated precipitation diurnal cycle. Analysis of the LES reveals that differences between shallow and deep convection, regarding cloud-base properties as well as entrainment/detrainment rates, can be related to the evaporation of precipitation. Parameterizing such effects and accordingly modifying the University of Washington shallow convection scheme, it is found that the new unified scheme can represent both shallow and deep convection as well as tropical and continental convection. Compared to the default SCAM version, the new scheme especially improves relative humidity, cloud cover and mass flux profiles. The new unified scheme also removes the well-known too early onset and peak of convective precipitation over mid-latitude continental areas.

Ever since their inception 100 years back, multiple choice items have been widely used as a method of assessment. They have certain inherent limitations, such as the inability to test higher cognitive skills, an element of guesswork while answering, and issues related to marking schemes. Various marking schemes have been proposed in the past, but they are unbalanced, skewed, and complex, being based on mathematical calculations which are typically not within the grasp of medical personnel. Type X questions have many advantages: they are easy to construct, can test multiple concepts/applications/facets of a topic, can test cognitive skills at various levels of the hierarchy, and, unlike Type K items, are free from complicated coding. In spite of these advantages, they are not in common use due to complicated marking schemes. This is the reason we explored methods of evaluating multiple-correct-option multiple choice questions and came up with a simple, practically applicable, non-stringent but logical scoring system. The rationale of the illustrated marking scheme is that it takes into consideration the distracter recognition ability of the examinee rather than relying only on the ability to select the correct response. Thus, the examinee's true knowledge is tested, and he is rewarded accordingly for selecting a correct answer and omitting a distracter. The scheme also penalizes for not recognizing a distracter, thus controlling guessing behavior. It is emphasized that if the illustrated scoring scheme is adopted, then Type X questions would come into common practice. PMID:27127312
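
The scoring rationale described above can be sketched as follows; the exact credit and penalty values are assumptions for illustration.

```python
# Illustrative scoring for a multiple-correct-response (Type X) item; the
# unit credit/penalty values are assumptions. Credit is earned both for
# marking a correct option and for omitting a distracter; marking a
# distracter is penalized, which discourages blind guessing.
def score_item(correct, marked, n_options=4):
    """correct, marked: sets of option indices; returns the fractional mark."""
    pts = 0.0
    for opt in range(n_options):
        is_key, is_marked = opt in correct, opt in marked
        if is_key and is_marked:
            pts += 1.0          # correct response selected
        elif not is_key and not is_marked:
            pts += 1.0          # distracter correctly omitted
        elif not is_key and is_marked:
            pts -= 1.0          # distracter selected: penalty
        # a correct option left unmarked earns nothing but is not penalized
    return pts / n_options
```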

We assess the validity of a single step Godunov scheme for the solution of the magnetohydrodynamics equations in more than one dimension. The scheme is second-order accurate and the temporal discretization is based on the dimensionally unsplit Corner Transport Upwind (CTU) method of Colella. The proposed scheme employs a cell-centered representation of the primary fluid variables (including magnetic field) and conserves mass, momentum, magnetic induction and energy. A variant of the scheme, which breaks momentum and energy conservation, is also considered. Divergence errors are transported out of the domain and damped using the mixed hyperbolic/parabolic divergence cleaning technique by Dedner et al. (2002) [11]. The strength and accuracy of the scheme are verified by a direct comparison with the eight-wave formulation (also employing a cell-centered representation) and with the popular constrained transport method, where magnetic field components retain a staggered collocation inside the computational cell. Results obtained from two- and three-dimensional test problems indicate that the newly proposed scheme is robust, accurate and competitive with recent implementations of the constrained transport method while being considerably easier to implement in existing hydro codes.

Transform coding has been used successfully for radiological image compression in the picture archival and communication system (PACS) and other applications. However, it suffers from the artifact known as the 'blocking effect' due to the division into subblocks, which is very undesirable in the clinical environment. In this paper, we propose a combined-transform coding (CTC) scheme to reduce this effect and achieve better subjective performance. In the combined-transform coding scheme, we first divide the image into two sets that have different correlation properties, namely the upper image set (UIS) and the lower image set (LIS). The UIS contains the most significant information and more correlation, and the LIS contains the less significant information. The UIS is compressed noiselessly without dividing into blocks, and the LIS is coded by conventional block transform coding. Since the correlation in the UIS is largely reduced (without distortion), the inter-block correlation, and hence the 'blocking effect,' is significantly reduced. This paper first describes the proposed CTC scheme and investigates its information-theoretic properties. Then, computer simulation results for a class of AP view chest x-ray images are presented. A comparison between the CTC scheme and the conventional Discrete Cosine Transform (DCT) and Discrete Walsh-Hadamard Transform (DWHT) is made to demonstrate the performance improvement of the proposed scheme. The advantages of the proposed CTC scheme also include (1) no ringing effect, due to no error propagation across the boundary, (2) no additional computation, and (3) the ability to hold distortion below a certain threshold. In addition, we found that the idea of combined coding can also be used in noiseless coding, and a slight improvement in compression performance can be achieved if used properly. Finally, we point out that this scheme has its advantages in medical image transmission over a noisy channel or the packet-switched network in case of

The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability. PMID:24350272

Cloud processes play an important role in all forms of precipitation, and their proper representation is one of the challenging tasks in mesoscale numerical simulation. Studies have revealed that mesoscale features require proper initialization, which is likely to improve convective-system rainfall forecasts. Understanding the precipitation process, the accuracy of the model initial condition, and the representation of resolved/sub-grid-scale precipitation processes are the important areas that need improvement in order to represent mesoscale features properly. Various attempts have been made to improve model performance through grid resolution, physical parameterizations, etc. But it is the physical parameterizations which provide a convective atmosphere for the development and intensification of convective events. Further, physical parameterizations cover cumulus convection; surface fluxes of heat, moisture, and momentum; and vertical mixing in the planetary boundary layer (PBL). How PBL and cumulus schemes capture the evolution of thunderstorms has been analysed by taking thunderstorm cases that occurred over Kolkata, India in the year 2011. PBL and cumulus schemes were customized for WSM-6 microphysics because the WSM series has been widely used in operational forecasting. Results have shown that the KF (cumulus) and WSM-6 (microphysics) schemes reproduce the evolution of surface variables such as CAPE, temperature and rainfall very much like the observations. Further, the KF and WSM-6 schemes also provided increased moisture availability in the lower atmosphere, which was taken to higher levels by strong vertical velocities, providing a platform to initiate a thunderstorm much better. Overestimation of rain in WSM-6 occurs primarily because melting and freezing take place within a deeper layer in the WSM-6 scheme. These schemes have reproduced the spatial pattern and peak rainfall coverage closer to the TRMM observations. It is the combination of WSM-6 and KF schemes

Progress toward a stable and efficient numerical treatment of the compressible Favre-Reynolds-averaged Navier-Stokes equations with a Reynolds-stress model (RSM) is presented. The mean-flow and Reynolds-stress model equations are discretized using finite differences on a curvilinear-coordinate mesh. The convective flux is approximated by a third-order upwind-biased MUSCL scheme; the diffusive flux is approximated using second-order central differencing, based on a full-viscous stencil. The novel time-marching approach relies on decoupled, implicit time integration; that is, the five mean-flow equations are solved separately from the seven Reynolds-stress closure equations. The key idea is the use of the unconditionally positive-convergent (UPC) implicit scheme, originally developed for two-equation turbulence models. The extension of the UPC scheme to RSMs guarantees the positivity of the normal Reynolds-stress components and of the turbulence (specific) dissipation rate for any time step. Thanks to the UPC matrix-free structure and the decoupled approach, the resulting computational scheme is very efficient. Special care is taken to keep the implicit operator compact, involving only nearest-neighbor grid points, while fully supporting the larger discretized residual stencil. Results obtained from two- and three-dimensional numerical simulations demonstrate the significant progress achieved in this work toward an optimally convergent solution of Reynolds-stress models. Furthermore, the scheme is shown to be unconditionally stable and positive.
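The positivity mechanism behind UPC-type schemes can be sketched on a model scalar equation dq/dt = P - D*q with nonnegative production P and destruction coefficient D: treating the destruction term implicitly and the production explicitly keeps q positive for any time step. This is a minimal illustration of the idea only; the paper applies it to the full seven-equation Reynolds-stress closure, and the names below are hypothetical.

```python
def upc_step(q, P, D, dt):
    """One unconditionally positive implicit step for dq/dt = P - D*q.

    Production P is explicit, destruction D*q is implicit, so the update
    q_new = (q + dt*P) / (1 + dt*D) stays positive for any dt whenever
    q, P, D >= 0 -- the core of the UPC idea.
    """
    return (q + dt * P) / (1.0 + dt * D)

# Example: a stiff destruction term with a time step far beyond explicit
# stability. The explicit update q + dt*(P - D*q) would go hugely negative.
q = 1.0
for _ in range(10):
    q = upc_step(q, P=0.5, D=100.0, dt=1e3)
```

The iterates remain positive and settle toward the steady state P/D = 0.005 regardless of the time-step size, which is exactly the unconditional positivity and stability property the abstract describes.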

Several different reactivity control schemes are considered for future space nuclear reactor power systems. Each of these control schemes uses a combination of boron carbide absorbers and/or beryllium oxide reflectors to achieve sufficient reactivity swing to keep the reactor subcritical during launch and to provide sufficient excess reactivity to operate the reactor over its expected 7-15 year lifetime. The size and shape of the control system directly impacts the size and mass of the space reactor's reflector and shadow shield, leading to a tradeoff between reactivity swing and total system mass. This paper presents a trade study of drum, shutter, and petal control schemes based on reactivity swing and mass effects for a representative fast-spectrum, gas-cooled reactor. For each control scheme, the dimensions and composition of the core are constant, and the reflector is sized to provide $5 of cold-clean excess reactivity with each configuration in its most reactive state. The advantages and disadvantages of each configuration are discussed, along with optimization techniques and novel geometric approaches for each scheme.
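For context on the "$5" figure: reactivity expressed in dollars is the reactivity rho = (k_eff - 1)/k_eff divided by the effective delayed-neutron fraction beta_eff. The sketch below computes this conversion; the value beta_eff = 0.0065 is an assumed, typical figure, since the abstract does not state the reactor's actual delayed-neutron fraction.

```python
def reactivity_dollars(k_eff, beta_eff=0.0065):
    """Convert an effective multiplication factor to reactivity in dollars.

    rho = (k_eff - 1) / k_eff; dollars = rho / beta_eff.
    beta_eff = 0.0065 is an assumed, typical delayed-neutron fraction.
    """
    rho = (k_eff - 1.0) / k_eff
    return rho / beta_eff

# With this assumed beta_eff, a cold-clean core holding $5 of excess
# reactivity corresponds to k_eff of roughly 1.034:
dollars = reactivity_dollars(1.0336)
```

A critical core (k_eff = 1) has zero dollars of reactivity by definition, and the $5 target fixes how much worth the drums, shutters, or petals must swing between the shutdown and fully withdrawn states.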

The diffusive characteristics of two upwind schemes, multi-dimensional fluctuation splitting and locally one-dimensional finite volume, are compared for scalar advection-diffusion problems. Algorithms for the two schemes are developed for node-based data representation on median-dual meshes associated with unstructured triangulations in two spatial dimensions. Four model equations are considered: linear advection, non-linear advection, diffusion, and advection-diffusion. Modular coding is employed to isolate the effects of the two approaches for upwind flux evaluation, allowing for head-to-head accuracy and efficiency comparisons. Both the stability of compressive limiters and the amount of artificial diffusion generated by the schemes are found to be grid-orientation dependent, with the fluctuation splitting scheme producing less artificial diffusion than the finite volume scheme. Convergence rates are compared for the combined advection-diffusion problem, with a speedup of 2.5 observed for fluctuation splitting versus finite volume when solved on the same mesh. However, accurate solutions to problems with small diffusion coefficients can be achieved on coarser meshes using fluctuation splitting rather than finite volume, so that when comparing convergence rates to reach a given accuracy, fluctuation splitting shows a speedup of 29 over finite volume.
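The upwind flux evaluation at the heart of the finite-volume approach can be illustrated on the simplest case the abstract lists, 1-D linear advection. The sketch below is a deliberately minimal first-order upwind scheme on a periodic grid, not the paper's node-based median-dual algorithm; it shows both the conservation property and the artificial diffusion that the comparison is about.

```python
import numpy as np

def upwind_advection_step(u, a, dt, dx):
    """One first-order upwind finite-volume step for u_t + a*u_x = 0.

    A minimal 1-D illustration of upwind flux evaluation (the paper's
    schemes operate on 2-D median-dual unstructured meshes). Periodic
    boundaries; stable for CFL = a*dt/dx <= 1 with a > 0 assumed.
    """
    flux = a * u                          # upwind flux, taken from the left
    return u - (dt / dx) * (flux - np.roll(flux, 1))

# Advect a square pulse once around a periodic unit domain at CFL = 0.5.
n, a = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a
u = np.zeros(n)
u[40:60] = 1.0
total0 = u.sum()
for _ in range(int(round(1.0 / (a * dt)))):
    u = upwind_advection_step(u, a, dt, dx)
```

The total of u is preserved exactly (the scheme is conservative), but the pulse edges are smeared and its peak erodes below 1, which is the artificial diffusion that fluctuation splitting is reported to reduce.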